Kube: Installing Istio with Helm
Posted UpInTheVir
Installing Istio with Helm
[~/K8s/istio/istio-1.0.2]$ kubectl apply -f install/kubernetes/helm/helm-service-account.yaml
serviceaccount/tiller unchanged
clusterrolebinding.rbac.authorization.k8s.io/tiller configured
[~/K8s/istio/istio-1.0.2]$
[~/K8s/istio/istio-1.0.2]$ helm init --service-account tiller --tiller-image 192.168.0.61/helm/tiller:v2.10.0 --upgrade
$HELM_HOME has been configured at /Users/shenxg13/.helm.
Tiller (the Helm server-side component) has been upgraded to the current version.
Happy Helming!
[~/K8s/istio/istio-1.0.2]$
Create the service account and RBAC binding for the Helm Tiller, then install (or upgrade) Tiller itself, pointing it at the private image 192.168.0.61/helm/tiller:v2.10.0.
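For reference, a minimal sketch of what such a helm-service-account.yaml typically contains, assuming the conventional kube-system namespace for Tiller and a cluster-admin binding (tighten the ClusterRole if your cluster policy requires it):

# Sketch of a Tiller service-account manifest matching the
# serviceaccount/tiller and clusterrolebinding/tiller objects above.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system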
# Common settings.
global:
# Default hub for Istio images.
# Releases are published to docker hub under 'istio' project.
# Daily builds from prow are on gcr.io, and nightly builds from circle on docker.io/istionightly
hub: 192.168.0.62/istio
# Default tag for Istio images.
tag: 1.0.2
# Gateway used for legacy k8s Ingress resources. By default it is
# using 'istio:ingress', to match 0.8 config. It requires that
# ingress.enabled is set to true. You can also set it
# to ingressgateway, or any other gateway you define in the 'gateway'
# section.
k8sIngressSelector: ingress
# k8sIngressHttps will add port 443 on the ingress and ingressgateway.
# It REQUIRES that the certificates are installed in the
# expected secrets - enabling this option without certificates
# will result in LDS rejection and the ingress will not work.
k8sIngressHttps: false
proxy:
image: proxyv2
# Resources for the sidecar.
resources:
requests:
cpu: 10m
# memory: 128Mi
# limits:
# cpu: 100m
# memory: 128Mi
# Controls number of Proxy worker threads.
# If set to 0 (default), then start worker thread for each CPU thread/core.
concurrency: 0
# Configures the access log for each sidecar. Setting it to an empty string will
# disable access log for sidecar.
accessLogFile: "/dev/stdout"
#If set to true, istio-proxy container will have privileged securityContext
privileged: false
# If set, newly injected sidecars will have core dumps enabled.
enableCoreDump: false
# istio egress capture whitelist
# https://istio.io/docs/tasks/traffic-management/egress.html#calling-external-services-directly
# example: includeIPRanges: "172.30.0.0/16,172.20.0.0/16"
# would only capture egress traffic on those two IP Ranges, all other outbound traffic would
# be allowed by the sidecar
includeIPRanges: "*"
excludeIPRanges: ""
# istio ingress capture whitelist
# examples:
# Redirect no inbound traffic to Envoy: --includeInboundPorts=""
# Redirect all inbound traffic to Envoy: --includeInboundPorts="*"
# Redirect only selected ports: --includeInboundPorts="80,8080"
includeInboundPorts: "*"
excludeInboundPorts: ""
# This controls the 'policy' in the sidecar injector.
autoInject: enabled
# Sets the destination Statsd in envoy (the value of the "--statsdUdpAddress" proxy argument
# would be <host>:<port>).
# Can also be disabled (e.g. when Mixer is not installed).
envoyStatsd:
enabled: true
host: istio-statsd-prom-bridge
port: 9125
proxy_init:
# Base name for the proxy_init container, used to configure iptables.
image: proxy_init
# imagePullPolicy is applied to istio control plane components.
# local tests require IfNotPresent, to avoid uploading to dockerhub.
# TODO: Switch to Always as default, and override in the local tests.
imagePullPolicy: IfNotPresent
# controlPlaneMtls enabled. Will result in delays starting the pods while secrets are
# propagated, not recommended for tests.
controlPlaneSecurityEnabled: false
# disablePolicyChecks disables mixer policy checks.
# Will set the value with same name in istio config map - pilot needs to be restarted to take effect.
disablePolicyChecks: false
# EnableTracing sets the value with same name in istio config map, requires pilot restart to take effect.
enableTracing: true
# Default mtls policy. If true, mtls between services will be enabled by default.
mtls:
# Default setting for service-to-service mtls. Can be set explicitly using
# destination rules or service annotations.
enabled: false
# ImagePullSecrets for all ServiceAccount, list of secrets in the same namespace
# to use for pulling any images in pods that reference this ServiceAccount.
# Must be set for any cluster configured with a private docker registry.
imagePullSecrets:
- sec-harbor02-istio
# - private-registry-key
# Specify pod scheduling arch(amd64, ppc64le, s390x) and weight as follows:
# 0 - Never scheduled
# 1 - Least preferred
# 2 - No preference
# 3 - Most preferred
arch:
amd64: 2
s390x: 2
ppc64le: 2
# Whether to restrict the applications namespace the controller manages;
# If not set, controller watches all namespaces
oneNamespace: false
# Whether to perform server-side validation of configuration.
configValidation: true
# If set to true, the pilot and citadel mtls will be exposed on the
# ingress gateway
meshExpansion: false
# If set to true, the pilot and citadel mtls and the plain text pilot ports
# will be exposed on an internal gateway
meshExpansionILB: false
# A minimal set of requested resources applied to all deployments so that
# Horizontal Pod Autoscaler will be able to function (if set).
# Each component can overwrite these default values by adding its own resources
# block in the relevant section below and setting the desired resources values.
defaultResources:
requests:
cpu: 10m
# memory: 128Mi
# limits:
# cpu: 100m
# memory: 128Mi
# Not recommended for user to configure this. Hyperkube image to use when creating custom resources
hyperkube:
hub: 192.168.0.62/istio
tag: v1.7.6_coreos.0
# Kubernetes >=v1.11.0 will create two PriorityClass, including system-cluster-critical and
# system-node-critical, it is better to configure this in order to make sure your Istio pods
# will not be killed because of low priority class.
# Refer to https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
# for more detail.
priorityClassName: ""
# Include the crd definition when generating the template.
# For 'helm template' and helm install > 2.10 it should be true.
# For helm < 2.9, crds must be installed ahead of time with
# 'kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml'
# and this option must be set to false.
crds: true
#
# ingress configuration
#
ingress:
enabled: false
replicaCount: 1
autoscaleMin: 1
autoscaleMax: 5
service:
annotations: {}
loadBalancerIP: ""
type: LoadBalancer #change to NodePort, ClusterIP or LoadBalancer if need be
ports:
- port: 80
name: http
nodePort: 32000
- port: 443
name: https
selector:
istio: ingress
#
# Gateways Configuration
# By default (if enabled) a pair of Ingress and Egress Gateways will be created for the mesh.
# You can add more gateways in addition to the defaults but make sure those are uniquely named
# and that NodePorts are not conflicting.
# Disable a specific gateway by setting its `enabled` flag to false.
#
gateways:
enabled: true
istio-ingressgateway:
enabled: true
labels:
app: istio-ingressgateway
istio: ingressgateway
replicaCount: 1
autoscaleMin: 1
autoscaleMax: 5
resources: {}
# limits:
# cpu: 100m
# memory: 128Mi
#requests:
# cpu: 1800m
# memory: 256Mi
cpu:
targetAverageUtilization: 80
loadBalancerIP: ""
serviceAnnotations: {}
type: NodePort #change to NodePort, ClusterIP or LoadBalancer if need be
ports:
## You can add custom gateway ports
- port: 80
targetPort: 80
name: http2
nodePort: 31380
- port: 443
name: https
nodePort: 31390
- port: 31400
name: tcp
nodePort: 31400
# Pilot and Citadel MTLS ports are enabled in gateway - but will only redirect
# to pilot/citadel if global.meshExpansion settings are enabled.
- port: 15011
targetPort: 15011
name: tcp-pilot-grpc-tls
- port: 8060
targetPort: 8060
name: tcp-citadel-grpc-tls
- port: 853
targetPort: 853
name: tcp-dns-tls
- port: 15030
targetPort: 15030
name: http2-prometheus
- port: 15031
targetPort: 15031
name: http2-grafana
secretVolumes:
- name: ingressgateway-certs
secretName: istio-ingressgateway-certs
mountPath: /etc/istio/ingressgateway-certs
- name: ingressgateway-ca-certs
secretName: istio-ingressgateway-ca-certs
mountPath: /etc/istio/ingressgateway-ca-certs
istio-egressgateway:
enabled: true
labels:
app: istio-egressgateway
istio: egressgateway
replicaCount: 1
autoscaleMin: 1
autoscaleMax: 5
cpu:
targetAverageUtilization: 80
serviceAnnotations: {}
type: ClusterIP #change to NodePort or LoadBalancer if need be
ports:
- port: 80
name: http2
- port: 443
name: https
secretVolumes:
- name: egressgateway-certs
secretName: istio-egressgateway-certs
mountPath: /etc/istio/egressgateway-certs
- name: egressgateway-ca-certs
secretName: istio-egressgateway-ca-certs
mountPath: /etc/istio/egressgateway-ca-certs
# Mesh ILB gateway creates a gateway of type InternalLoadBalancer,
# for mesh expansion. It exposes the mtls ports for Pilot,CA as well
# as non-mtls ports to support upgrades and gradual transition.
istio-ilbgateway:
enabled: false
labels:
app: istio-ilbgateway
istio: ilbgateway
replicaCount: 1
autoscaleMin: 1
autoscaleMax: 5
resources:
requests:
cpu: 800m
memory: 512Mi
#limits:
# cpu: 1800m
# memory: 256Mi
cpu:
targetAverageUtilization: 80
loadBalancerIP: ""
serviceAnnotations:
cloud.google.com/load-balancer-type: "internal"
type: LoadBalancer
ports:
## You can add custom gateway ports - google ILB default quota is 5 ports,
- port: 15011
name: grpc-pilot-mtls
# Insecure port - only for migration from 0.8. Will be removed in 1.1
- port: 15010
name: grpc-pilot
- port: 8060
targetPort: 8060
name: tcp-citadel-grpc-tls
# Port 853 is reserved for the kube-dns gateway
- port: 853
name: tcp-dns
secretVolumes:
- name: ilbgateway-certs
secretName: istio-ilbgateway-certs
mountPath: /etc/istio/ilbgateway-certs
- name: ilbgateway-ca-certs
secretName: istio-ilbgateway-ca-certs
mountPath: /etc/istio/ilbgateway-ca-certs
#
# sidecar-injector webhook configuration
#
sidecarInjectorWebhook:
enabled: true
replicaCount: 1
image: sidecar_injector
enableNamespacesByDefault: false
#
# galley configuration
#
galley:
enabled: true
replicaCount: 1
image: galley
#
# mixer configuration
#
mixer:
enabled: true
replicaCount: 1
autoscaleMin: 1
autoscaleMax: 5
image: mixer
istio-policy:
autoscaleEnabled: true
autoscaleMin: 1
autoscaleMax: 5
cpu:
targetAverageUtilization: 80
istio-telemetry:
autoscaleEnabled: true
autoscaleMin: 1
autoscaleMax: 5
cpu:
targetAverageUtilization: 80
prometheusStatsdExporter:
hub: 192.168.0.62/istio
tag: v0.6.0
#
# pilot configuration
#
pilot:
enabled: true
replicaCount: 1
autoscaleMin: 1
autoscaleMax: 5
image: pilot
sidecar: true
traceSampling: 100.0
# Resources for a small pilot install
resources:
requests:
cpu: 500m
memory: 2048Mi
env:
PILOT_PUSH_THROTTLE_COUNT: 100
GODEBUG: gctrace=2
cpu:
targetAverageUtilization: 80
#
# security configuration
#
security:
enabled: true
replicaCount: 1
image: citadel
selfSigned: true # indicate if self-signed CA is used.
#
# addons configuration
#
telemetry-gateway:
gatewayName: ingressgateway
grafanaEnabled: true
prometheusEnabled: true
grafana:
enabled: true
replicaCount: 1
image: grafana
persist: false
storageClassName: ""
security:
enabled: false
adminUser: admin
adminPassword: admin
service:
annotations: {}
name: http
type: ClusterIP
externalPort: 3000
internalPort: 3000
prometheus:
enabled: true
replicaCount: 1
hub: 192.168.0.62/istio
tag: v2.3.1
service:
annotations: {}
nodePort:
enabled: false
port: 32090
servicegraph:
enabled: true
replicaCount: 1
image: servicegraph
service:
annotations: {}
name: http
type: ClusterIP
externalPort: 8088
internalPort: 8088
ingress:
enabled: false
# Used to create an Ingress record.
hosts:
- servicegraph.local
annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
tls:
# Secrets must be manually created in the namespace.
# - secretName: servicegraph-tls
# hosts:
# - servicegraph.local
# prometheus address
prometheusAddr: http://prometheus:9090
tracing:
enabled: true
provider: jaeger
jaeger:
hub: 192.168.0.62/istio
tag: 1.5
memory:
max_traces: 50000
ui:
port: 16686
ingress:
enabled: false
# Used to create an Ingress record.
hosts:
- jaeger.local
annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
tls:
# Secrets must be manually created in the namespace.
# - secretName: jaeger-tls
# hosts:
# - jaeger.local
replicaCount: 1
service:
annotations: {}
name: http
type: ClusterIP
externalPort: 9411
internalPort: 9411
ingress:
enabled: false
# Used to create an Ingress record.
hosts:
- tracing.local
annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
tls:
# Secrets must be manually created in the namespace.
# - secretName: tracing-tls
# hosts:
# - tracing.local
kiali:
enabled: false
replicaCount: 1
hub: docker.io/kiali
tag: istio-release-1.0
ingress:
enabled: false
## Used to create an Ingress record.
# hosts:
# - kiali.local
annotations:
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
tls:
# Secrets must be manually created in the namespace.
# - secretName: kiali-tls
# hosts:
# - kiali.local
dashboard:
username: admin
# Default admin passphrase for kiali. Must be set during setup, and
# changed by overriding the secret
passphrase: admin
# Override the automatically detected Grafana URL, useful when the Grafana service has no ExternalIPs
# grafanaURL:
# Override the automatically detected Jaeger URL, useful when the Jaeger service has no ExternalIPs
# jaegerURL:
# Certmanager uses ACME to sign certificates. Since Istio gateways are
# mounting the TLS secrets the Certificate CRDs must be created in the
# istio-system namespace. Once the certificate has been created, the
# gateway must be updated by adding 'secretVolumes'. After the gateway
# restart, DestinationRules can be created using the ACME-signed certificates.
certmanager:
enabled: false
hub: quay.io/jetstack
tag: v0.3.1
resources: {}
The changes relative to the default values.yaml:
Point all images at the secured private registry (192.168.0.62/istio).
Set imagePullSecrets to sec-harbor02-istio.
Change the ingressgateway service type to NodePort.
Enable Grafana.
Enable Servicegraph.
Enable tracing.
Note: in 1.0.2, when a secured private registry is used, imagePullSecrets must also be added manually to the Grafana, Servicegraph, and tracing Deployments, and to the service account used by the Grafana post-install job; see the kubectl patch sketch below.
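A minimal sketch of those manual additions, applied after the chart is installed; the Deployment names match the pod list further below, while the Grafana post-install service-account name is an assumption that should be verified first.

# Add the pull secret to the add-on Deployments (names taken from the pod list below).
for d in grafana servicegraph istio-tracing; do
  kubectl -n istio-system patch deployment "$d" \
    -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"sec-harbor02-istio"}]}}}}'
done

# Add the pull secret to the Grafana post-install service account.
# The name below is assumed; check it with:
#   kubectl -n istio-system get serviceaccounts | grep grafana
kubectl -n istio-system patch serviceaccount istio-grafana-post-install-account \
  -p '{"imagePullSecrets":[{"name":"sec-harbor02-istio"}]}'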
[~/K8s/istio/istio-1.0.2]$ helm install install/kubernetes/helm/istio --name istio --namespace istio-system
NAME: istio
LAST DEPLOYED: Tue Sep 18 18:37:24 2018
NAMESPACE: istio-system
STATUS: DEPLOYED
...
Install Istio into the istio-system namespace with Helm.
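The command above installs the chart with the edited values.yaml in place. If you prefer to keep the overrides separate, a sketch of two equivalent approaches follows; my-values.yaml is a hypothetical file containing only the overrides listed earlier.

# Install with an external override file instead of editing the bundled values.yaml.
helm install install/kubernetes/helm/istio \
  --name istio --namespace istio-system \
  -f my-values.yaml

# Or render the manifests locally (no Tiller needed) and apply them directly.
helm template install/kubernetes/helm/istio \
  --name istio --namespace istio-system \
  -f my-values.yaml | kubectl apply -f -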
[~/K8s/istio/istio-1.0.2]$ kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-658bcffcf7-vfq6g 1/1 Running 0 4m
istio-citadel-6b655998dd-kwns6 1/1 Running 0 4m
istio-egressgateway-7466ccd8f7-td7p6 1/1 Running 0 4m
istio-galley-7d4c777685-qpph8 1/1 Running 0 4m
istio-ingressgateway-548d5bf58-dhq97 1/1 Running 0 4m
istio-pilot-56c7fcfcb4-v4bld 2/2 Running 0 4m
istio-policy-74cff89874-q2kz4 2/2 Running 0 4m
istio-sidecar-injector-74877fb885-w4jpj 1/1 Running 0 4m
istio-statsd-prom-bridge-66fc8c8f65-xj6rp 1/1 Running 0 4m
istio-telemetry-58b6dbbf8c-crbrh 2/2 Running 0 4m
istio-tracing-5c4f9f98d9-kp9b6 1/1 Running 0 4m
prometheus-7d7d67b4f7-9l6xq 1/1 Running 0 4m
servicegraph-85fdc45d75-9s4nz 1/1 Running 0 4m
[~/K8s/istio/istio-1.0.2]$
The installation is complete; all control-plane and add-on pods are Running.
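Since autoInject is enabled and the ingress gateway is exposed as a NodePort (31380 for HTTP in the values above), the usual next steps look roughly like the sketch below; the default namespace is only an example target.

# Label a namespace so the sidecar injector adds istio-proxy to newly created pods
# (the 'default' namespace is just an example).
kubectl label namespace default istio-injection=enabled

# Confirm the NodePorts configured for the ingress gateway (31380/31390/31400).
kubectl -n istio-system get service istio-ingressgateway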