Set up a production-ready Istio
Istio is an open source service mesh that adds an abstraction layer to the network and provides capabilities to connect, manage, and secure microservices.
Istio ships with built-in configuration profiles, but for a production environment we need to:
- improve performance by using tuned settings.
- enable SDS to secure gateways.
- integrate with the Prometheus operator, Grafana, Jaeger, and Kiali.
Now we’ll set up Istio on Kubernetes for a production environment.
Install Istio using tuned settings
- Download the Istio release:
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.2.5 sh -
- Move to the Istio release directory:
cd istio-1.2.5
- Create the istio-system namespace:
kubectl create namespace istio-system
- Install all the Istio CRDs using kubectl apply, and wait a few seconds for the CRDs to be committed to the Kubernetes API server:
helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
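Rather than guessing at the delay, you can wait for the istio-init jobs to finish before checking the CRDs (a small convenience step, not required by the original flow):
kubectl -n istio-system wait --for=condition=complete job --all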
- Verify that all 23 Istio CRDs were committed to the Kubernetes API server using the following command:
kubectl get crds | grep 'istio.io\|certmanager.k8s.io' | wc -l
23
- Write istio-prod-values.yaml to enable SDS and apply the tuned settings:
gateways:
istio-ingressgateway:
type: NodePort
sds:
enabled: true
replicas: 2
autoscaleMin: 2
autoscaleMax: 5
resources:
limits:
cpu: 4000m
memory: 2048Mi
mixer:
telemetry:
replicaCount: 2
autoscaleMin: 2
autoscaleMax: 15
nodeagent:
enabled: true
image: node-agent-k8s
env:
CA_PROVIDER: "Citadel"
CA_ADDR: "istio-citadel:8060"
VALID_TOKEN: true
SECRET_GRACE_DURATION: "10m"
SECRET_JOB_RUN_INTERVAL: "30s"
SECRET_TTL: "20m"
pilot:
replicaCount: 2
autoscaleMin: 2
autoscaleMax: 10
env:
PILOT_PUSH_THROTTLE: 50
resources:
limits:
cpu: 5800m
memory: 12G
prometheus:
enabled: false
global:
controlPlaneSecurityEnabled: false
mtls:
enabled: true
sds:
enabled: true
udsPath: "unix:/var/run/sds/uds_path"
useNormalJwt: true
  tracer:
    zipkin:
      address: "simplest-query.istio-system:16686"
- Run the helm template command, which generates the manifests using istio-prod-values.yaml, and install Istio:
helm template install/kubernetes/helm/istio --name istio --namespace istio-system --values install/istio-prod-values.yaml | kubectl apply -f -
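With SDS enabled on the ingress gateway, TLS keys and certificates are fetched at runtime from Kubernetes secrets instead of being mounted into the gateway pod at deploy time, so certificates can be rotated without restarting the gateway. A minimal sketch of how a gateway consumes such a secret; the secret name, host, and certificate files below are placeholders, not part of this setup:
# the secret must live in istio-system, next to the ingress gateway
kubectl create -n istio-system secret generic gateway-credential \
  --from-file=key=example.com.key \
  --from-file=cert=example.com.crt
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: example-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: gateway-credential # SDS delivers this secret to the gateway
    hosts:
    - example.com
EOF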
Collect metrics using the Prometheus operator instead of Istio’s Prometheus
- Write prometheus-additional.yaml to collect Istio’s metrics:
- job_name: 'istio-mesh'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
scrape_interval: 15s
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-telemetry;prometheus
# Scrape config for envoy stats
- job_name: 'envoy-stats'
metrics_path: /stats/prometheus
kubernetes_sd_configs:
- role: pod
scrape_interval: 15s
relabel_configs:
- source_labels: [__meta_kubernetes_pod_container_port_name]
action: keep
regex: '.*-envoy-prom'
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:15090
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod_name
- job_name: 'istio-policy'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
scrape_interval: 15s
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-policy;http-monitoring
- job_name: 'istio-telemetry'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
scrape_interval: 15s
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-telemetry;http-monitoring
- job_name: 'pilot'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
scrape_interval: 15s
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-pilot;http-monitoring
- job_name: 'galley'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
scrape_interval: 15s
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-galley;http-monitoring
- job_name: 'citadel'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- istio-system
scrape_interval: 15s
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: istio-citadel;http-monitoring
# scrape config for API servers
- job_name: 'kubernetes-apiservers'
kubernetes_sd_configs:
- role: endpoints
namespaces:
names:
- default
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
scrape_interval: 15s
relabel_configs:
- source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
action: keep
regex: kubernetes;https
# scrape config for nodes (kubelet)
- job_name: 'kubernetes-nodes'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
scrape_interval: 15s
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics
# Scrape config for Kubelet cAdvisor.
#
# This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics
# (those whose names begin with 'container_') have been removed from the
# Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to
# retrieve those metrics.
#
# In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor
# HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics"
# in that case (and ensure cAdvisor's HTTP server hasn't been disabled with
# the --cadvisor-port=0 Kubelet flag).
#
# This job is not necessary and should be removed in Kubernetes 1.6 and
# earlier versions, or it will cause the metrics to be scraped twice.
- job_name: 'kubernetes-cadvisor'
scheme: https
tls_config:
ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
kubernetes_sd_configs:
- role: node
scrape_interval: 15s
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_node_label_(.+)
- target_label: __address__
replacement: kubernetes.default.svc:443
- source_labels: [__meta_kubernetes_node_name]
regex: (.+)
target_label: __metrics_path__
replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
# scrape config for service endpoints.
- job_name: 'kubernetes-service-endpoints'
kubernetes_sd_configs:
- role: endpoints
scrape_interval: 15s
relabel_configs:
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
action: keep
regex: true
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
action: replace
target_label: __scheme__
regex: (https?)
- source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
action: replace
target_label: __address__
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
- action: labelmap
regex: __meta_kubernetes_service_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: kubernetes_namespace
- source_labels: [__meta_kubernetes_service_name]
action: replace
target_label: kubernetes_name
- job_name: 'kubernetes-pods'
kubernetes_sd_configs:
- role: pod
scrape_interval: 15s
relabel_configs: # If first two labels are present, pod should be scraped by the istio-secure job.
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
# Keep target if there's no sidecar or if prometheus.io/scheme is explicitly set to "http"
- source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status, __meta_kubernetes_pod_annotation_prometheus_io_scheme]
action: keep
regex: ((;.*)|(.*;http))
- source_labels: [__meta_kubernetes_pod_annotation_istio_mtls]
action: drop
regex: (true)
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod_name
- job_name: 'kubernetes-pods-istio-secure'
scheme: https
tls_config:
ca_file: /etc/prometheus/secrets/istio.default/root-cert.pem
cert_file: /etc/prometheus/secrets/istio.default/cert-chain.pem
key_file: /etc/prometheus/secrets/istio.default/key.pem
insecure_skip_verify: true # prometheus does not support secure naming.
kubernetes_sd_configs:
- role: pod
scrape_interval: 15s
relabel_configs:
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
action: keep
regex: true
# sidecar status annotation is added by sidecar injector and
# istio_workload_mtls_ability can be specifically placed on a pod to indicate its ability to receive mtls traffic.
- source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status, __meta_kubernetes_pod_annotation_istio_mtls]
action: keep
regex: (([^;]+);([^;]*))|(([^;]*);(true))
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
action: drop
regex: (http)
- source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
action: replace
target_label: __metrics_path__
regex: (.+)
- source_labels: [__address__] # Only keep address that is host:port
action: keep # otherwise an extra target with ':443' is added for https scheme
regex: ([^:]+):(\d+)
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
action: replace
regex: ([^:]+)(?::\d+)?;(\d+)
replacement: $1:$2
target_label: __address__
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- source_labels: [__meta_kubernetes_namespace]
action: replace
target_label: namespace
- source_labels: [__meta_kubernetes_pod_name]
action: replace
target_label: pod_name
- Create a Secret from the prometheus-additional.yaml file:
kubectl --namespace=monitoring create secret generic additional-scrape-configs --from-file=setup-manifests/prometheus-additional.yaml
- Edit the Prometheus custom resource to add prometheus-additional.yaml to the Prometheus config:
kubectl --namespace=monitoring edit prometheus k8s
...
spec:
...
additionalScrapeConfigs:
name: additional-scrape-configs
key: prometheus-additional.yaml
...
secrets:
- istio.default
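To confirm that the operator picked up the additional scrape configs, port-forward the Prometheus service (prometheus-k8s is the kube-prometheus default, and matches the URLs used later in this guide) and check the targets page:
kubectl --namespace=monitoring port-forward svc/prometheus-k8s 9090
# http://localhost:9090/targets should now list istio-mesh, pilot, galley, citadel, etc.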
Add Istio’s dashboards to Grafana
- Create a ConfigMap containing the Istio dashboards:
cd install/kubernetes/helm/istio/charts/grafana/dashboards
# Change contents of dashboard json files.
# "style": "dark",\n "tags": \[\] ==> "style": "dark",\n "tags": \["istio"\]
# "datasource": "Prometheus", ==> "datasource": "prometheus",
kubectl --namespace=monitoring create configmap istio-dashboards --from-file=galley-dashboard.json --from-file=istio-mesh-dashboard.json --from-file=istio-performance-dashboard.json --from-file=istio-service-dashboard.json --from-file=istio-workload-dashboard.json --from-file=mixer-dashboard.json --from-file=pilot-dashboard.json
- Edit the Grafana deployment to mount the Istio dashboards ConfigMap:
kubectl --namespace=monitoring edit deployment grafana
...
volumeMounts:
- mountPath: /grafana-dashboard-definitions/0/istio
name: istio-dashboards
readOnly: false
...
volumes:
- configMap:
name: istio-dashboards
name: istio-dashboards
- Access the Grafana dashboard using a web browser.
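If Grafana is not yet exposed through an ingress, a port-forward is enough (the service name grafana and port 3000 are the kube-prometheus defaults, matching the in_cluster_url used later in this guide):
kubectl --namespace=monitoring port-forward svc/grafana 3000
# http://localhost:3000 -> the imported dashboards appear under the "istio" tag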
Set up Jaeger for tracing
As the Istio documentation notes (https://istio.io/docs/tasks/telemetry/distributed-tracing/jaeger/#before-you-begin), you can set up Jaeger for a production environment by referencing an existing Jaeger instance, e.g. one created with the operator, and then setting the --set global.tracer.zipkin.address=<jaeger-service>.<namespace>:16686 Helm install option. The tracer section of istio-prod-values.yaml above already does this, pointing at simplest-query.istio-system:16686.
- Install the Jaeger operator:
kubectl create namespace observability
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing_v1_jaeger_crd.yaml
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/service_account.yaml
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role.yaml
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role_binding.yaml
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/operator.yaml
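Before creating an instance, it is worth confirming that the operator is up; the deployment name jaeger-operator comes from operator.yaml:
kubectl -n observability get deployment jaeger-operator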
- Create an all-in-one Jaeger instance (the operator’s default strategy):
kubectl -n istio-system apply -f - <<EOF
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
name: simplest
EOF
kubectl -n istio-system get pods -l app.kubernetes.io/instance=simplest
NAME READY STATUS RESTARTS AGE
simplest-6d5bd6dbbd-fb589 1/1 Running 0 58s
- Create an Ingress rule:
cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: jaeger-example-com
namespace: istio-system
annotations:
kubernetes.io/ingress.class: "nginx-ingress-internal"
spec:
tls:
- hosts:
- jaeger.example.com
secretName: wild-example-com-ssl
rules:
- host: jaeger.example.com
http:
paths:
- backend:
serviceName: simplest-query
servicePort: 16686
EOF
- Access the Jaeger UI using a web browser.
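If the wild-example-com-ssl secret or the internal ingress class is not available in your cluster, a port-forward against the query service reaches the same UI:
kubectl -n istio-system port-forward svc/simplest-query 16686
# http://localhost:16686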
Set up Kiali
Kiali is an observability console for Istio with service mesh configuration capabilities. It helps you to understand the structure of your service mesh by inferring the topology, and also provides the health of your mesh. Kiali provides detailed metrics, and a basic Grafana integration is available for advanced queries. Distributed tracing is provided by integrating Jaeger.
- Write the kiali-k8s.yaml file:
apiVersion: v1
kind: ServiceAccount
metadata:
name: kiali-service-account
namespace: istio-system
labels:
app: kiali
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kiali
labels:
app: kiali
rules:
- apiGroups: [""]
resources:
- configmaps
- endpoints
- namespaces
- nodes
- pods
- pods/log
- replicationcontrollers
- services
verbs:
- get
- list
- watch
- apiGroups: ["extensions", "apps"]
resources:
- deployments
- replicasets
- statefulsets
verbs:
- get
- list
- watch
- apiGroups: ["autoscaling"]
resources:
- horizontalpodautoscalers
verbs:
- get
- list
- watch
- apiGroups: ["batch"]
resources:
- cronjobs
- jobs
verbs:
- get
- list
- watch
- apiGroups: ["config.istio.io"]
resources:
- adapters
- apikeys
- bypasses
- authorizations
- checknothings
- circonuses
- cloudwatches
- deniers
- dogstatsds
- edges
- fluentds
- handlers
- instances
- kubernetesenvs
- kuberneteses
- listcheckers
- listentries
- logentries
- memquotas
- metrics
- noops
- opas
- prometheuses
- quotas
- quotaspecbindings
- quotaspecs
- rbacs
- redisquotas
- reportnothings
- rules
- signalfxs
- solarwindses
- stackdrivers
- statsds
- stdios
- templates
- tracespans
- zipkins
verbs:
- create
- delete
- get
- list
- patch
- watch
- apiGroups: ["networking.istio.io"]
resources:
- destinationrules
- gateways
- serviceentries
- virtualservices
verbs:
- create
- delete
- get
- list
- patch
- watch
- apiGroups: ["authentication.istio.io"]
resources:
- meshpolicies
- policies
verbs:
- create
- delete
- get
- list
- patch
- watch
- apiGroups: ["rbac.istio.io"]
resources:
- clusterrbacconfigs
- rbacconfigs
- servicerolebindings
- serviceroles
verbs:
- create
- delete
- get
- list
- patch
- watch
- apiGroups: ["monitoring.kiali.io"]
resources:
- monitoringdashboards
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: kiali-viewer
labels:
app: kiali
rules:
- apiGroups: [""]
resources:
- configmaps
- endpoints
- namespaces
- nodes
- pods
- pods/log
- replicationcontrollers
- services
verbs:
- get
- list
- watch
- apiGroups: ["extensions", "apps"]
resources:
- deployments
- replicasets
- statefulsets
verbs:
- get
- list
- watch
- apiGroups: ["autoscaling"]
resources:
- horizontalpodautoscalers
verbs:
- get
- list
- watch
- apiGroups: ["batch"]
resources:
- cronjobs
- jobs
verbs:
- get
- list
- watch
- apiGroups: ["config.istio.io"]
resources:
- adapters
- apikeys
- bypasses
- authorizations
- checknothings
- circonuses
- cloudwatches
- deniers
- dogstatsds
- edges
- fluentds
- handlers
- instances
- kubernetesenvs
- kuberneteses
- listcheckers
- listentries
- logentries
- memquotas
- metrics
- noops
- opas
- prometheuses
- quotas
- quotaspecbindings
- quotaspecs
- rbacs
- redisquotas
- reportnothings
- rules
- signalfxs
- solarwindses
- stackdrivers
- statsds
- stdios
- templates
- tracespans
- zipkins
verbs:
- get
- list
- watch
- apiGroups: ["networking.istio.io"]
resources:
- destinationrules
- gateways
- serviceentries
- virtualservices
verbs:
- get
- list
- watch
- apiGroups: ["authentication.istio.io"]
resources:
- meshpolicies
- policies
verbs:
- get
- list
- watch
- apiGroups: ["rbac.istio.io"]
resources:
- clusterrbacconfigs
- rbacconfigs
- servicerolebindings
- serviceroles
verbs:
- get
- list
- watch
- apiGroups: ["monitoring.kiali.io"]
resources:
- monitoringdashboards
verbs:
- get
- list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: istio-kiali-admin-role-binding-istio-system
labels:
app: kiali
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kiali
subjects:
- kind: ServiceAccount
name: kiali-service-account
namespace: istio-system
---
apiVersion: v1
data:
config.yaml: |
api:
namespaces:
exclude:
- istio-operator
- kube.*
- openshift.*
- ibm.*
- kiali-operator
label_selector: kiali.io/member-of=istio-system
apidocs:
annotations:
api_spec_annotation_name: kiali.io/api-spec
api_type_annotation_name: kiali.io/api-type
auth:
strategy: anonymous
deployment:
accessible_namespaces:
- istio-system
- monitoring
- nginx-ingress
- observability
affinity:
node: {}
pod: {}
pod_anti: {}
image_name: kiali/kiali
image_pull_policy: IfNotPresent
image_pull_secrets: []
image_version: v1.3.0
ingress_enabled: true
namespace: istio-system
secret_name: kiali
tolerations: []
verbose_mode: '3'
version_label: v1.3.0
view_only_mode: false
external_services:
grafana:
auth:
ca_file: ''
insecure_skip_verify: false
password: ''
token: ''
type: none
use_kiali_token: false
username: ''
enabled: true
in_cluster_url: http://grafana.monitoring:3000
url: https://grafana.example.com:32443
istio:
istio_identity_domain: svc.cluster.local
istio_sidecar_annotation: sidecar.istio.io/status
url_service_version: http://istio-pilot.istio-system:8080/version
jaeger:
in_cluster_url: http://simplest-query.istio-system:16686
url: https://jaeger.example.com:32443
prometheus:
auth:
ca_file: ''
insecure_skip_verify: false
password: ''
token: ''
type: none
use_kiali_token: false
username: ''
custom_metrics_url: http://prometheus-k8s.monitoring:9090
url: http://prometheus-k8s.monitoring:9090
tracing:
auth:
ca_file: ''
insecure_skip_verify: true
password: ''
token: ''
type: none
use_kiali_token: false
username: ''
enabled: true
namespace: istio-system
port: 16686
service: ''
url: https://jaeger.example.com:32443
installation_tag: ''
istio_labels:
app_label_name: app
version_label_name: version
istio_namespace: istio-system
kubernetes_config:
burst: 200
cache_duration: 300000000
cache_enabled: false
qps: 175
login_token:
expiration_seconds: 86400
signing_key: kiali
server:
address: ''
audit_log: true
cors_allow_all: false
metrics_enabled: true
metrics_port: 9090
port: 20001
web_root: /kiali
kind: ConfigMap
metadata:
labels:
app: kiali
version: v1.3.0
name: kiali
namespace: istio-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: kiali
version: v1.3.0
name: kiali
namespace: istio-system
spec:
replicas: 1
selector:
matchLabels:
app: kiali
version: v1.3.0
template:
metadata:
annotations:
kiali.io/runtimes: go,kiali
prometheus.io/port: "9090"
prometheus.io/scrape: "true"
labels:
app: kiali
version: v1.3.0
name: kiali
spec:
containers:
- command:
- /opt/kiali/kiali
- -config
- /kiali-configuration/config.yaml
- -v
- "3"
env:
- name: ACTIVE_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: kiali/kiali:v1.3.0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /kiali/healthz
port: api-port
scheme: HTTP
name: kiali
ports:
- containerPort: 20001
name: api-port
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /kiali/healthz
port: api-port
scheme: HTTP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /kiali-configuration
name: kiali-configuration
# - mountPath: /kiali-secret
# name: kiali-secret
serviceAccount: kiali-service-account
serviceAccountName: kiali-service-account
volumes:
- configMap:
defaultMode: 420
name: kiali
name: kiali-configuration
# - name: kiali-secret
# secret:
# defaultMode: 420
# optional: true
# secretName: kiali
---
apiVersion: v1
kind: Service
metadata:
labels:
app: kiali
version: v1.3.0
name: kiali
namespace: istio-system
spec:
ports:
- name: tcp
port: 20001
protocol: TCP
targetPort: 20001
selector:
app: kiali
version: v1.3.0
type: ClusterIP
- Install Kiali into the Kubernetes cluster:
kubectl apply -f setup-manifests/kiali-k8s.yaml
serviceaccount/kiali-service-account created
clusterrole.rbac.authorization.k8s.io/kiali created
clusterrole.rbac.authorization.k8s.io/kiali-viewer created
clusterrolebinding.rbac.authorization.k8s.io/istio-kiali-admin-role-binding-istio-system created
configmap/kiali created
deployment.extensions/kiali created
service/kiali created
- Access the Kiali web UI at http://localhost:20001/kiali using a web browser, after port-forwarding the service:
kubectl port-forward svc/kiali 20001:20001 -n istio-system
Verifying the installation
- Referring to the components table in the configuration profiles documentation, verify that the Kubernetes services corresponding to your selected profile have been deployed:
kubectl get svc -n istio-system
- Ensure the corresponding Kubernetes pods are deployed and have a STATUS of Running:
kubectl get pods -n istio-system
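As a final smoke test, you can enable sidecar injection on a namespace and check that a freshly created pod comes up with two containers. This assumes automatic injection is enabled (the chart default); the namespace and image below are only examples:
kubectl label namespace default istio-injection=enabled
kubectl -n default create deployment httpbin --image=kennethreitz/httpbin   # hypothetical test workload
kubectl -n default get pods   # READY should show 2/2 once the sidecar is injected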