Set up a production-ready Istio
Istio is a service mesh technology that adds an abstraction layer on top of the network to provide connectivity, management, and security features for microservices. This post walks through installing Istio on a Kubernetes platform in a way that is suitable for a production environment.
Istio ships with several built-in configuration profiles. For a production deployment, however, the default profile needs the following adjustments:
- Apply tuned settings on top of the defaults to improve performance
- Enable SDS for the Secure Gateway
- Deploy components such as Prometheus, Grafana, Jaeger, and Kiali separately and integrate them
The installation steps below incorporate all of these changes.
Install Istio with optimized settings
- Download the Istio release.
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.2.5 sh -
- Change into the Istio release directory.
cd istio-1.2.5
- Create the istio-system namespace.
kubectl create namespace istio-system
- Install all of the Istio CRDs with kubectl apply, and wait a few seconds for the CRDs to be committed in the Kubernetes API server.
helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
- Use the following command to verify that all 23 Istio CRDs were committed in the Kubernetes API server.
kubectl get crds | grep 'istio.io\|certmanager.k8s.io' | wc -l
23
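The CRDs are registered by jobs created from the istio-init chart. If the count comes up short, a quick check (assuming the istio-system namespace created above) is to confirm those jobs completed:
kubectl get jobs -n istio-system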
- Write a custom values file (istio-prod-values.yaml below) that enables SDS and applies the optimized settings.
gateways:
  istio-ingressgateway:
    type: LoadBalancer
    sds:
      enabled: true
    replicas: 2
    autoscaleMin: 2
    autoscaleMax: 5
    resources:
      limits:
        cpu: 4000m
        memory: 2048Mi
mixer:
  telemetry:
    replicaCount: 2
    autoscaleMin: 2
    autoscaleMax: 15
nodeagent:
  enabled: true
  image: node-agent-k8s
  env:
    CA_PROVIDER: "Citadel"
    CA_ADDR: "istio-citadel:8060"
    VALID_TOKEN: true
    SECRET_GRACE_DURATION: "10m"
    SECRET_JOB_RUN_INTERVAL: "30s"
    SECRET_TTL: "20m"
pilot:
  replicaCount: 2
  autoscaleMin: 2
  autoscaleMax: 10
  env:
    PILOT_PUSH_THROTTLE: 50
  resources:
    limits:
      cpu: 5800m
      memory: 12G
prometheus:
  enabled: false
global:
  controlPlaneSecurityEnabled: false
  mtls:
    enabled: true
  sds:
    enabled: true
    udsPath: "unix:/var/run/sds/uds_path"
    useNormalJwt: true
  tracer:
    zipkin:
      address: "simplest-query.istio-system:16686"
- Use the helm template command to render the manifests from the values file above and install Istio's components.
helm template install/kubernetes/helm/istio --name istio --namespace istio-system --values install/istio-prod-values.yaml | kubectl apply -f -
Collect metrics with the Prometheus Operator instead of Istio's bundled Prometheus
- Write prometheus-additional.yaml so that Istio metrics are scraped.
- job_name: 'istio-mesh'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - istio-system
  scrape_interval: 15s
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: istio-telemetry;prometheus

# Scrape config for envoy stats
- job_name: 'envoy-stats'
  metrics_path: /stats/prometheus
  kubernetes_sd_configs:
  - role: pod
  scrape_interval: 15s
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_container_port_name]
    action: keep
    regex: '.*-envoy-prom'
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:15090
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: pod_name

- job_name: 'istio-policy'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - istio-system
  scrape_interval: 15s
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: istio-policy;http-monitoring

- job_name: 'istio-telemetry'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - istio-system
  scrape_interval: 15s
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: istio-telemetry;http-monitoring

- job_name: 'pilot'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - istio-system
  scrape_interval: 15s
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: istio-pilot;http-monitoring

- job_name: 'galley'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - istio-system
  scrape_interval: 15s
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: istio-galley;http-monitoring

- job_name: 'citadel'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - istio-system
  scrape_interval: 15s
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: istio-citadel;http-monitoring

# scrape config for API servers
- job_name: 'kubernetes-apiservers'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - default
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  scrape_interval: 15s
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: kubernetes;https

# scrape config for nodes (kubelet)
- job_name: 'kubernetes-nodes'
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
  - role: node
  scrape_interval: 15s
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - target_label: __address__
    replacement: kubernetes.default.svc:443
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics

# Scrape config for Kubelet cAdvisor.
#
# This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
# (those whose names begin with 'container_') have been removed from the
# Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to
# retrieve those metrics.
#
# In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor
# HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics"
# in that case (and ensure cAdvisor's HTTP server hasn't been disabled with
# the --cadvisor-port=0 Kubelet flag).
#
# This job is not necessary and should be removed in Kubernetes 1.6 and
# earlier versions, or it will cause the metrics to be scraped twice.
- job_name: 'kubernetes-cadvisor'
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
  - role: node
  scrape_interval: 15s
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)
  - target_label: __address__
    replacement: kubernetes.default.svc:443
  - source_labels: [__meta_kubernetes_node_name]
    regex: (.+)
    target_label: __metrics_path__
    replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

# scrape config for service endpoints.
- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
  - role: endpoints
  scrape_interval: 15s
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
    action: replace
    target_label: __scheme__
    regex: (https?)
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
    action: replace
    target_label: __address__
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_service_name]
    action: replace
    target_label: kubernetes_name

- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod
  scrape_interval: 15s
  relabel_configs: # If first two labels are present, pod should be scraped by the istio-secure job.
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  # Keep target if there's no sidecar or if prometheus.io/scheme is explicitly set to "http"
  - source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status, __meta_kubernetes_pod_annotation_prometheus_io_scheme]
    action: keep
    regex: ((;.*)|(.*;http))
  - source_labels: [__meta_kubernetes_pod_annotation_istio_mtls]
    action: drop
    regex: (true)
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: pod_name

- job_name: 'kubernetes-pods-istio-secure'
  scheme: https
  tls_config:
    ca_file: /etc/prometheus/secrets/istio.default/root-cert.pem
    cert_file: /etc/prometheus/secrets/istio.default/cert-chain.pem
    key_file: /etc/prometheus/secrets/istio.default/key.pem
    insecure_skip_verify: true # prometheus does not support secure naming.
  kubernetes_sd_configs:
  - role: pod
  scrape_interval: 15s
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  # sidecar status annotation is added by sidecar injector and
  # istio_workload_mtls_ability can be specifically placed on a pod to indicate its ability to receive mtls traffic.
  - source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status, __meta_kubernetes_pod_annotation_istio_mtls]
    action: keep
    regex: (([^;]+);([^;]*))|(([^;]*);(true))
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
    action: drop
    regex: (http)
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  - source_labels: [__address__] # Only keep address that is host:port
    action: keep # otherwise an extra target with ':443' is added for https scheme
    regex: ([^:]+):(\d+)
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__
  - action: labelmap
    regex: __meta_kubernetes_pod_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: namespace
  - source_labels: [__meta_kubernetes_pod_name]
    action: replace
    target_label: pod_name
- Create a Secret from the prometheus-additional.yaml file.
kubectl --namespace=monitoring create secret generic additional-scrape-configs --from-file=setup-manifests/prometheus-additional.yaml
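To double-check, you can list the Secret; the key inside it (prometheus-additional.yaml) must match the key referenced in the next step:
kubectl --namespace=monitoring get secret additional-scrape-configs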
- Edit the Prometheus custom resource so that it references prometheus-additional.yaml.
kubectl --namespace=monitoring edit prometheus k8s
...
spec:
  ...
  additionalScrapeConfigs:
    name: additional-scrape-configs
    key: prometheus-additional.yaml
  ...
  secrets:
  - istio.default
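To confirm that the Prometheus Operator picked up the additional scrape configs, you can port-forward the Prometheus service (prometheus-k8s is the kube-prometheus default name, the same one used in the Kiali configuration below) and look for the istio-* jobs on the targets page:
kubectl --namespace=monitoring port-forward svc/prometheus-k8s 9090:9090
# then open http://localhost:9090/targets in a browser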
Add the Istio dashboards to Grafana
- Create a configmap containing the Istio dashboards.
cd install/kubernetes/helm/istio/charts/grafana/dashboards
# In the dashboard json files here, make the following changes:
# "style": "dark",\n "tags": \[\] ==> "style": "dark",\n "tags": \["istio"\]
# "datasource": "Prometheus", ==> "datasource": "prometheus",
kubectl --namespace=monitoring create configmap istio-dashboards --from-file=galley-dashboard.json --from-file=istio-mesh-dashboard.json --from-file=istio-performance-dashboard.json --from-file=istio-service-dashboard.json --from-file=istio-workload-dashboard.json --from-file=mixer-dashboard.json --from-file=pilot-dashboard.json
- Edit the Grafana deployment to mount the dashboards configmap just created.
kubectl --namespace=monitoring edit deployment grafana
...
        volumeMounts:
        - mountPath: /grafana-dashboard-definitions/0/istio
          name: istio-dashboards
          readOnly: false
...
      volumes:
      - configMap:
          name: istio-dashboards
        name: istio-dashboards
- Verify that the Istio dashboards show up in Grafana.
Configure Jaeger for tracing
You can use the tracing setup bundled with the Istio helm package, but for a production environment it is recommended to configure tracing with the Jaeger operator: reference an existing Jaeger instance, e.g. one created with the operator, and point Istio at it via the global.tracer.zipkin.address Helm option, as the tracer section of the values file above already does (see https://istio.io/docs/tasks/telemetry/distributed-tracing/jaeger/#before-you-begin).
- Install the Jaeger operator.
kubectl create namespace observability
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing_v1_jaeger_crd.yaml
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/service_account.yaml
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role.yaml
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role_binding.yaml
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/operator.yaml
- Create an all-in-one Jaeger instance.
kubectl -n istio-system apply -f - <<EOF
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simplest
EOF
$ kubectl -n istio-system get pods -l app.kubernetes.io/instance=simplest
NAME READY STATUS RESTARTS AGE
simplest-6d5bd6dbbd-fb589 1/1 Running 0 58s
- Create an Ingress rule.
cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jaeger-example-com
  namespace: istio-system
  annotations:
    kubernetes.io/ingress.class: "nginx-ingress"
spec:
  tls:
  - hosts:
    - jaeger.example.com
    secretName: wild-example-com-ssl
  rules:
  - host: jaeger.example.com
    http:
      paths:
      - backend:
          serviceName: simplest-query
          servicePort: 16686
EOF
- You can now access the Jaeger UI in a web browser.
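If DNS for jaeger.example.com is not in place yet, a port-forward to the query service of the instance created above (simplest-query) is a quick alternative:
kubectl -n istio-system port-forward svc/simplest-query 16686:16686
# then open http://localhost:16686 in a browser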
Set up Kiali
Kiali is an observability console for Istio with service mesh configuration capabilities. It helps you understand the structure of the service mesh by inferring the topology, and it also reports the health of the mesh. Kiali provides detailed metrics, and basic Grafana integration is available for advanced queries. Distributed tracing is provided through integration with Jaeger.
- Write the kiali-k8s.yaml file.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kiali-service-account
  namespace: istio-system
  labels:
    app: kiali
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kiali
  labels:
    app: kiali
rules:
- apiGroups: [""]
  resources:
  - configmaps
  - endpoints
  - namespaces
  - nodes
  - pods
  - pods/log
  - replicationcontrollers
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups: ["extensions", "apps"]
  resources:
  - deployments
  - replicasets
  - statefulsets
  verbs:
  - get
  - list
  - watch
- apiGroups: ["autoscaling"]
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - list
  - watch
- apiGroups: ["batch"]
  resources:
  - cronjobs
  - jobs
  verbs:
  - get
  - list
  - watch
- apiGroups: ["config.istio.io"]
  resources:
  - adapters
  - apikeys
  - bypasses
  - authorizations
  - checknothings
  - circonuses
  - cloudwatches
  - deniers
  - dogstatsds
  - edges
  - fluentds
  - handlers
  - instances
  - kubernetesenvs
  - kuberneteses
  - listcheckers
  - listentries
  - logentries
  - memquotas
  - metrics
  - noops
  - opas
  - prometheuses
  - quotas
  - quotaspecbindings
  - quotaspecs
  - rbacs
  - redisquotas
  - reportnothings
  - rules
  - signalfxs
  - solarwindses
  - stackdrivers
  - statsds
  - stdios
  - templates
  - tracespans
  - zipkins
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups: ["networking.istio.io"]
  resources:
  - destinationrules
  - gateways
  - serviceentries
  - virtualservices
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups: ["authentication.istio.io"]
  resources:
  - meshpolicies
  - policies
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups: ["rbac.istio.io"]
  resources:
  - clusterrbacconfigs
  - rbacconfigs
  - servicerolebindings
  - serviceroles
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - watch
- apiGroups: ["monitoring.kiali.io"]
  resources:
  - monitoringdashboards
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kiali-viewer
  labels:
    app: kiali
rules:
- apiGroups: [""]
  resources:
  - configmaps
  - endpoints
  - namespaces
  - nodes
  - pods
  - pods/log
  - replicationcontrollers
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups: ["extensions", "apps"]
  resources:
  - deployments
  - replicasets
  - statefulsets
  verbs:
  - get
  - list
  - watch
- apiGroups: ["autoscaling"]
  resources:
  - horizontalpodautoscalers
  verbs:
  - get
  - list
  - watch
- apiGroups: ["batch"]
  resources:
  - cronjobs
  - jobs
  verbs:
  - get
  - list
  - watch
- apiGroups: ["config.istio.io"]
  resources:
  - adapters
  - apikeys
  - bypasses
  - authorizations
  - checknothings
  - circonuses
  - cloudwatches
  - deniers
  - dogstatsds
  - edges
  - fluentds
  - handlers
  - instances
  - kubernetesenvs
  - kuberneteses
  - listcheckers
  - listentries
  - logentries
  - memquotas
  - metrics
  - noops
  - opas
  - prometheuses
  - quotas
  - quotaspecbindings
  - quotaspecs
  - rbacs
  - redisquotas
  - reportnothings
  - rules
  - signalfxs
  - solarwindses
  - stackdrivers
  - statsds
  - stdios
  - templates
  - tracespans
  - zipkins
  verbs:
  - get
  - list
  - watch
- apiGroups: ["networking.istio.io"]
  resources:
  - destinationrules
  - gateways
  - serviceentries
  - virtualservices
  verbs:
  - get
  - list
  - watch
- apiGroups: ["authentication.istio.io"]
  resources:
  - meshpolicies
  - policies
  verbs:
  - get
  - list
  - watch
- apiGroups: ["rbac.istio.io"]
  resources:
  - clusterrbacconfigs
  - rbacconfigs
  - servicerolebindings
  - serviceroles
  verbs:
  - get
  - list
  - watch
- apiGroups: ["monitoring.kiali.io"]
  resources:
  - monitoringdashboards
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: istio-kiali-admin-role-binding-istio-system
  labels:
    app: kiali
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kiali
subjects:
- kind: ServiceAccount
  name: kiali-service-account
  namespace: istio-system
---
apiVersion: v1
data:
  config.yaml: |
    api:
      namespaces:
        exclude:
        - istio-operator
        - kube.*
        - openshift.*
        - ibm.*
        - kiali-operator
        label_selector: kiali.io/member-of=istio-system
    apidocs:
      annotations:
        api_spec_annotation_name: kiali.io/api-spec
        api_type_annotation_name: kiali.io/api-type
    auth:
      strategy: anonymous
    deployment:
      accessible_namespaces:
      - istio-system
      - monitoring
      - nginx-ingress
      - observability
      affinity:
        node: {}
        pod: {}
        pod_anti: {}
      image_name: kiali/kiali
      image_pull_policy: IfNotPresent
      image_pull_secrets: []
      image_version: v1.3.0
      ingress_enabled: true
      namespace: istio-system
      secret_name: kiali
      tolerations: []
      verbose_mode: '3'
      version_label: v1.3.0
      view_only_mode: false
    external_services:
      grafana:
        auth:
          ca_file: ''
          insecure_skip_verify: false
          password: ''
          token: ''
          type: none
          use_kiali_token: false
          username: ''
        enabled: true
        in_cluster_url: http://grafana.monitoring:3000
        url: https://grafana.example.com:32443
      istio:
        istio_identity_domain: svc.cluster.local
        istio_sidecar_annotation: sidecar.istio.io/status
        url_service_version: http://istio-pilot.istio-system:8080/version
      jaeger:
        in_cluster_url: http://simplest-query.istio-system:16686
        url: https://jaeger.example.com:32443
      prometheus:
        auth:
          ca_file: ''
          insecure_skip_verify: false
          password: ''
          token: ''
          type: none
          use_kiali_token: false
          username: ''
        custom_metrics_url: http://prometheus-k8s.monitoring:9090
        url: http://prometheus-k8s.monitoring:9090
      tracing:
        auth:
          ca_file: ''
          insecure_skip_verify: true
          password: ''
          token: ''
          type: none
          use_kiali_token: false
          username: ''
        enabled: true
        namespace: istio-system
        port: 16686
        service: ''
        url: https://jaeger.example.com:32443
    installation_tag: ''
    istio_labels:
      app_label_name: app
      version_label_name: version
    istio_namespace: istio-system
    kubernetes_config:
      burst: 200
      cache_duration: 300000000
      cache_enabled: false
      qps: 175
    login_token:
      expiration_seconds: 86400
      signing_key: kiali
    server:
      address: ''
      audit_log: true
      cors_allow_all: false
      metrics_enabled: true
      metrics_port: 9090
      port: 20001
      web_root: /kiali
kind: ConfigMap
metadata:
  labels:
    app: kiali
    version: v1.3.0
  name: kiali
  namespace: istio-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: kiali
    version: v1.3.0
  name: kiali
  namespace: istio-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kiali
      version: v1.3.0
  template:
    metadata:
      annotations:
        kiali.io/runtimes: go,kiali
        prometheus.io/port: "9090"
        prometheus.io/scrape: "true"
      labels:
        app: kiali
        version: v1.3.0
      name: kiali
    spec:
      containers:
      - command:
        - /opt/kiali/kiali
        - -config
        - /kiali-configuration/config.yaml
        - -v
        - "3"
        env:
        - name: ACTIVE_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: kiali/kiali:v1.3.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /kiali/healthz
            port: api-port
            scheme: HTTP
        name: kiali
        ports:
        - containerPort: 20001
          name: api-port
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /kiali/healthz
            port: api-port
            scheme: HTTP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /kiali-configuration
          name: kiali-configuration
        # - mountPath: /kiali-secret
        #   name: kiali-secret
      serviceAccount: kiali-service-account
      serviceAccountName: kiali-service-account
      volumes:
      - configMap:
          defaultMode: 420
          name: kiali
        name: kiali-configuration
      # - name: kiali-secret
      #   secret:
      #     defaultMode: 420
      #     optional: true
      #     secretName: kiali
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kiali
    version: v1.3.0
  name: kiali
  namespace: istio-system
spec:
  ports:
  - name: tcp
    port: 20001
    protocol: TCP
    targetPort: 20001
  selector:
    app: kiali
    version: v1.3.0
  type: ClusterIP
- Install Kiali on the Kubernetes cluster.
kubectl apply -f setup-manifests/kiali-k8s.yaml
serviceaccount/kiali-service-account created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrole.rbac.authorization.k8s.io/kiali-viewer created
clusterrolebinding.rbac.authorization.k8s.io/istio-kiali-admin-role-binding-istio-system created
configmap/kiali created
deployment.extensions/kiali created
service/kiali created
- After port-forwarding the service, you can access the Kiali web UI at http://localhost:20001/kiali in a web browser.
kubectl port-forward svc/kiali 20001:20001 -n istio-system
Verify the installation
- Referring to the table in the configuration profiles documentation, verify that the Kubernetes services corresponding to the selected profile have been deployed.
kubectl get svc -n istio-system
- Verify that the corresponding Kubernetes pods have been deployed and that their STATUS is Running.
kubectl get pods -n istio-system
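As a final smoke test, you can enable automatic sidecar injection on a namespace (default is just an example here) and confirm that newly created pods come up with an extra istio-proxy container:
kubectl label namespace default istio-injection=enabled
kubectl get namespace -L istio-injection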