chmod +x helm.sh
./helm.sh

# Output
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.13.1-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.

# Verify
helm help
Note: the script may fail with curl: (7) Failed connect to kubernetes-helm.storage.googleapis.com:443; network unreachable. Simply re-run it a few times until the download succeeds.
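If the script never manages to download the tarball, the client can also be installed by hand. A minimal sketch, assuming the same v2.13.1 release URL shown in the script output above (adjust version and paths to your environment):

# Manual client install, assuming the v2.13.1 tarball from the script output above
curl -LO https://kubernetes-helm.storage.googleapis.com/helm-v2.13.1-linux-amd64.tar.gz
tar -zxvf helm-v2.13.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
mv linux-amd64/tiller /usr/local/bin/tiller
helm version --client   # verify the client binary works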
2. Server side: Tiller
Running helm init is enough to install Tiller into the K8S cluster (in the kube-system namespace). The command reports success, but checking the pod status in K8S reveals an error like Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.13.1".... Inspecting the Tiller Deployment's YAML shows the container image is gcr.io/kubernetes-helm/tiller:v2.13.1, which cannot be pulled. On Docker Hub, the mirrorgooglecontainers namespace (mirrors of Google's images) does not carry it, so look for a user-published copy with docker search tiller:v2.13.1, pull one, retag it, and remove the old tag (preferably on every node, since the Deployment may not specify a node selector):
docker pull hekai/gcr.io_kubernetes-helm_tiller_v2.13.1
docker tag hekai/gcr.io_kubernetes-helm_tiller_v2.13.1 gcr.io/kubernetes-helm/tiller:v2.13.1
docker rmi hekai/gcr.io_kubernetes-helm_tiller_v2.13.1
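As an alternative to retagging on every node, helm init can be pointed straight at a reachable mirror with --tiller-image. A sketch, assuming a hypothetical mirror path for tiller:v2.13.1 (substitute whichever mirror you actually trust):

# --tiller-image overrides the default gcr.io image; the mirror path below is an assumption
helm init --tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.13.1
# Confirm that the Tiller pod reaches Running
kubectl -n kube-system get pods -l app=helm,name=tiller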
# Default values for testapi-chart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: nginx
  tag: stable
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}
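For orientation, this is roughly how the scaffolded templates consume those values; a trimmed, illustrative excerpt in the spirit of templates/deployment.yaml, not the full generated file:

# templates/deployment.yaml (illustrative excerpt)
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}

Any of these values can be overridden per release at install time with --set key=value or a separate -f values file, without editing values.yaml.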
You can cd into the directory containing Chart.yaml and lint the chart:
cd testapi-chart
# validate (lint) the chart
helm lint
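Besides linting, the rendered manifests can be inspected before anything is deployed. A sketch, run from the chart's parent directory:

# Render the templates locally without contacting the cluster
helm template testapi-chart
# Or ask Tiller for a dry run, showing computed values and manifests
helm install --dry-run --debug testapi-chart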
If everything is OK, the chart can be packaged (run this from outside the chart directory, i.e. one level above Chart.yaml):
# The --debug flag is optional; it prints extra output. testapi-chart is the chart directory
# to package, and the resulting archive is written to the current directory.
helm package testapi-chart --debug

# Output
Successfully packaged chart and saved it to: /root/k8s/helm/testapi/testapi-chart-0.1.0.tgz
[debug] Successfully saved /root/k8s/helm/testapi/testapi-chart-0.1.0.tgz to /root/.helm/repository/local
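The packaged archive can then be installed directly. The auto-generated release name in the output below (lumbering-zebu) suggests it was installed without an explicit --name, roughly like this:

# Install from the packaged chart; without --name, Tiller picks a random release name
helm install ./testapi-chart-0.1.0.tgz
# A fixed name could be used instead:
# helm install --name testapi ./testapi-chart-0.1.0.tgz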
NAME:   lumbering-zebu
LAST DEPLOYED: Fri Apr 26 18:54:26 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Deployment
NAME                          READY  UP-TO-DATE  AVAILABLE  AGE
lumbering-zebu-testapi-chart  0/1    1           0          0s

==> v1/Pod(related)
NAME                                           READY  STATUS             RESTARTS  AGE
lumbering-zebu-testapi-chart-7fb48fc7b6-n6824  0/1    ContainerCreating  0         0s

==> v1/Service
NAME                          TYPE       CLUSTER-IP  EXTERNAL-IP  PORT(S)  AGE
lumbering-zebu-testapi-chart  ClusterIP  10.97.1.55  <none>       80/TCP   0s

NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=testapi-chart,app.kubernetes.io/instance=lumbering-zebu" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80
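Once deployed, the release can be checked and, when no longer needed, removed with the usual Helm 2 commands:

# List releases and inspect this one
helm list
helm status lumbering-zebu
# Delete the release and free its name for reuse
helm delete --purge lumbering-zebu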