Kubernetes Learning Part 2: Kubernetes Cluster Setup - Deploying the kubernetes server Components

Posted by JAIR_FOREVER

Contents

1. Extract the files

2. Deploy the kube-apiserver component: create the TLS Bootstrapping token

3. Create the apiserver configuration file

4. Create the apiserver systemd unit file

5. Start the service

6. Deploy the kube-scheduler component: create the kube-scheduler configuration file

7. Deploy the kube-controller-manager component: create the kube-controller-manager configuration file

8. Verify the master services


1. Extract the files

tar -zxvf kubernetes-server-linux-amd64.tar.gz 
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/
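
As a quick, optional sanity check (not part of the original steps), confirm that the copied binaries run and report the expected version:

/k8s/kubernetes/bin/kube-apiserver --version
/k8s/kubernetes/bin/kubectl version --client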

2. Deploy the kube-apiserver component: create the TLS Bootstrapping token

[root@elasticsearch01 bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
f2c50331f07be89278acdaf341ff1ecc
 
vim /k8s/kubernetes/cfg/token.csv
f2c50331f07be89278acdaf341ff1ecc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
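
If you prefer to generate the token and write token.csv in one step, a small shell snippet like the following works (a sketch; the token value differs on every run, and each line has the format token,user,uid,"group"):

BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
# write the single bootstrap entry: token,user,uid,"group"
echo "${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /k8s/kubernetes/cfg/token.csv
cat /k8s/kubernetes/cfg/token.csv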

3. Create the apiserver configuration file

vim /k8s/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.10.200:2379,https://192.168.10.201:2379,https://192.168.10.202:2379 \
--bind-address=192.168.10.200 \
--secure-port=6443 \
--advertise-address=192.168.10.200 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
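
Before starting the service it is worth confirming that every certificate and key referenced above actually exists; a minimal sketch, using the paths from the config:

for f in /k8s/kubernetes/ssl/server.pem /k8s/kubernetes/ssl/server-key.pem \
         /k8s/kubernetes/ssl/ca.pem /k8s/kubernetes/ssl/ca-key.pem \
         /k8s/etcd/ssl/ca.pem /k8s/etcd/ssl/server.pem /k8s/etcd/ssl/server-key.pem; do
    # print OK for files that exist, MISSING for any that do not
    [ -f "$f" ] && echo "OK       $f" || echo "MISSING  $f"
done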

4. Create the apiserver systemd unit file

vim /usr/lib/systemd/system/kube-apiserver.service 

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

5. Start the service

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
[root@k8s-master1 ~]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2019-03-13 14:24:39 CST; 17min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 2351 (kube-apiserver)
   CGroup: /system.slice/kube-apiserver.service
           └─2351 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.10.200:2379,https://192.168.10.201:2379,https:/...

3月 13 14:41:49 k8s-master1 kube-apiserver[2351]: I0313 14:41:49.849422    2351 wrap.go:47] GET /api/v1/namespaces/kube-system/endpoints/kube-contr...1:55958]
3月 13 14:41:49 k8s-master1 kube-apiserver[2351]: I0313 14:41:49.861752    2351 wrap.go:47] PUT /api/v1/namespaces/kube-system/endpoints/kube-contr...1:55958]
3月 13 14:41:49 k8s-master1 kube-apiserver[2351]: I0313 14:41:49.882887    2351 wrap.go:47] GET /api/v1/namespaces/kube-system/endpoints/kube-sched...1:51944]
3月 13 14:41:49 k8s-master1 kube-apiserver[2351]: I0313 14:41:49.993119    2351 wrap.go:47] PUT /api/v1/namespaces/kube-system/endpoints/kube-sched...1:51944]
3月 13 14:41:51 k8s-master1 kube-apiserver[2351]: I0313 14:41:51.606388    2351 wrap.go:47] GET /apis/batch/v1/jobs: (9.175259ms) 200 [kube-control...1:55958]
3月 13 14:41:51 k8s-master1 kube-apiserver[2351]: I0313 14:41:51.621644    2351 wrap.go:47] GET /apis/batch/v1beta1/cronjobs: (8.903237ms) 200 [kub...1:55958]
3月 13 14:41:51 k8s-master1 kube-apiserver[2351]: I0313 14:41:51.868728    2351 wrap.go:47] GET /api/v1/namespaces/kube-system/endpoints/kube-contr...1:55958]
3月 13 14:41:51 k8s-master1 kube-apiserver[2351]: I0313 14:41:51.880853    2351 wrap.go:47] PUT /api/v1/namespaces/kube-system/endpoints/kube-contr...1:55958]
3月 13 14:41:52 k8s-master1 kube-apiserver[2351]: I0313 14:41:52.002439    2351 wrap.go:47] GET /api/v1/namespaces/kube-system/endpoints/kube-sched...1:51944]
3月 13 14:41:52 k8s-master1 kube-apiserver[2351]: I0313 14:41:52.015891    2351 wrap.go:47] PUT /api/v1/namespaces/kube-system/endpoints/kube-sched...1:51944]
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-master1 ~]# ps -ef |grep kube-apiserver
root      2351     1  7 14:24 ?        00:01:16 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.10.200:2379,https://
192.168.10.201:2379,https://192.168.10.202:2379 --bind-address=192.168.10.200 --secure-port=6443 --advertise-address=192.168.10.200 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/k8s/etcd/ssl/ca.pem --etcd-certfile=/k8s/etcd/ssl/server.pem --etcd-keyfile=/k8s/etcd/ssl/server-key.pemroot      2571  2221  0 14:42 pts/0    00:00:00 grep --color=auto kube-apiserver
[root@k8s-master1 ~]# netstat -tulpn |grep kube-apiserve
tcp        0      0 192.168.10.200:6443     0.0.0.0:*               LISTEN      2351/kube-apiserver 
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      2351/kube-apiserver 
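
Beyond ps and netstat, the API server can be queried directly over the local insecure port 8080 shown above (the secure port 6443 would require client certificates); a quick check:

curl -s http://127.0.0.1:8080/healthz
# expected output: ok
curl -s http://127.0.0.1:8080/version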

6. Deploy the kube-scheduler component: create the kube-scheduler configuration file

vim  /k8s/kubernetes/cfg/kube-scheduler 
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"

Parameter notes:
--address: listen on 127.0.0.1:10251 for plain-http /metrics requests; kube-scheduler does not yet support serving https.
--kubeconfig: path to the kubeconfig file that kube-scheduler uses to connect to and authenticate against kube-apiserver.
--leader-elect=true: enable leader election for clustered operation; the instance elected as leader does the work while the other instances block on standby.

Create the kube-scheduler systemd unit file

vim /usr/lib/systemd/system/kube-scheduler.service 
 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-scheduler.service 
systemctl start kube-scheduler.service
[root@k8s-master1 ~]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2019-03-13 14:27:32 CST; 17min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 2409 (kube-scheduler)
   CGroup: /system.slice/kube-scheduler.service
           └─2409 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

3月 13 14:37:17 k8s-master1 kube-scheduler[2409]: I0313 14:37:17.866295    2409 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch ...received
3月 13 14:38:01 k8s-master1 kube-scheduler[2409]: I0313 14:38:01.868272    2409 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch ...received
3月 13 14:39:02 k8s-master1 kube-scheduler[2409]: I0313 14:39:02.882379    2409 reflector.go:357] k8s.io/kubernetes/cmd/kube-scheduler/app/server.g...received
3月 13 14:40:43 k8s-master1 kube-scheduler[2409]: I0313 14:40:43.864520    2409 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch ...received
3月 13 14:41:08 k8s-master1 kube-scheduler[2409]: I0313 14:41:08.867776    2409 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch ...received
3月 13 14:41:40 k8s-master1 kube-scheduler[2409]: I0313 14:41:40.866866    2409 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch ...received
3月 13 14:42:15 k8s-master1 kube-scheduler[2409]: I0313 14:42:15.866286    2409 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch ...received
3月 13 14:44:02 k8s-master1 kube-scheduler[2409]: I0313 14:44:02.868629    2409 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch ...received
3月 13 14:44:22 k8s-master1 kube-scheduler[2409]: I0313 14:44:22.865937    2409 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch ...received
3月 13 14:44:58 k8s-master1 kube-scheduler[2409]: I0313 14:44:58.869754    2409 reflector.go:357] k8s.io/client-go/informers/factory.go:132: Watch ...received
Hint: Some lines were ellipsized, use -l to show in full.
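
As with the API server, the scheduler exposes a plain-http health endpoint, which gives a quick functional check; a sketch, assuming the default insecure port 10251 mentioned in the parameter notes above:

curl -s http://127.0.0.1:10251/healthz
# expected output: ok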

7. Deploy the kube-controller-manager component: create the kube-controller-manager configuration file

vim /k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"

Create the kube-controller-manager systemd unit file

vim /usr/lib/systemd/system/kube-controller-manager.service 
 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

Start the service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
[root@k8s-master1 ~]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since 三 2019-03-13 14:28:54 CST; 19min ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 2461 (kube-controller)
   CGroup: /system.slice/kube-controller-manager.service
           └─2461 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --ser...

3月 13 14:48:02 k8s-master1 kube-controller-manager[2461]: I0313 14:48:02.808241    2461 cronjob_controller.go:122] Found 0 groups
3月 13 14:48:04 k8s-master1 kube-controller-manager[2461]: I0313 14:48:04.028191    2461 resource_quota_controller.go:422] no resource updates from ...ta sync
3月 13 14:48:06 k8s-master1 kube-controller-manager[2461]: I0313 14:48:06.114487    2461 reflector.go:357] k8s.io/client-go/informers/factory.go:132...eceived
3月 13 14:48:12 k8s-master1 kube-controller-manager[2461]: I0313 14:48:12.815889    2461 cronjob_controller.go:111] Found 0 jobs
3月 13 14:48:12 k8s-master1 kube-controller-manager[2461]: I0313 14:48:12.824930    2461 cronjob_controller.go:119] Found 0 cronjobs
3月 13 14:48:12 k8s-master1 kube-controller-manager[2461]: I0313 14:48:12.824966    2461 cronjob_controller.go:122] Found 0 groups
3月 13 14:48:14 k8s-master1 kube-controller-manager[2461]: I0313 14:48:14.880255    2461 reflector.go:215] k8s.io/client-go/informers/factory.go:132... resync
3月 13 14:48:15 k8s-master1 kube-controller-manager[2461]: I0313 14:48:15.183844    2461 pv_controller_base.go:408] resyncing PV controller
3月 13 14:48:20 k8s-master1 kube-controller-manager[2461]: I0313 14:48:20.599495    2461 gc_controller.go:144] GC'ing orphaned
3月 13 14:48:20 k8s-master1 kube-controller-manager[2461]: I0313 14:48:20.605533    2461 gc_controller.go:173] GC'ing unscheduled pods which are terminating.
Hint: Some lines were ellipsized, use -l to show in full.
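
A similar health check works for the controller manager; a sketch, assuming the default insecure port 10252 (the config above only pins the listen address to 127.0.0.1):

curl -s http://127.0.0.1:10252/healthz
# expected output: ok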

8. Verify the master services

Set the environment variables

vim /etc/profile
PATH=/k8s/kubernetes/bin:$PATH
source /etc/profile
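
A quick, optional check that kubectl is now resolved from the new PATH entry:

which kubectl
kubectl version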

Check the status of the master components

[root@k8s-master1 ~]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok                  
componentstatus/controller-manager   Healthy   ok                  
componentstatus/etcd-0               Healthy   {"health":"true"}   
componentstatus/etcd-2               Healthy   {"health":"true"}   
componentstatus/etcd-1               Healthy   {"health":"true"}   
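
If the scheduler, controller-manager, and all etcd members report Healthy, the control plane is up. Two further optional checks (not part of the original walkthrough):

kubectl cluster-info
kubectl get all --all-namespaces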

Reference: https://www.kubernetes.org.cn/5025.html
