Tutorial: installing k8s with yum on CentOS 7.5 on an ARM server (Phytium platform)

1 Installation environment

[root@k8s-master ~]# uname -a
Linux k8s-master 4.14.0-49.12.ts7.aarch64 #1 SMP Tue Nov 12 19:06:54 CST 2019 aarch64 aarch64 aarch64 GNU/Linux
[root@k8s-master ~]# cat /etc/redhat-release
TongyuanOS release 7.5.1810
Host        IP              Role
k8s-master  192.168.0.239   Master
k8s-node1   192.168.0.244   node

2 Edit the hosts file on the master and the node

[root@k8s-master ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.0.239 k8s-master
192.168.0.244 k8s-node1

3 Install ntp to keep time synchronized across all servers

[root@k8s-master ~]# yum install ntp -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * epel: mirrors.tuna.tsinghua.edu.cn
file:///mnt/repodata/repomd.xml: [Errno 14] curl#37 - "Couldn't open file /mnt/repodata/repomd.xml"
Trying other mirror.
Package ntp-4.2.6p5-28.ts7.aarch64 already installed and latest version
Nothing to do
[root@k8s-master ~]# vim /etc/ntp.conf

# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).

driftfile /var/lib/ntp/drift

# Permit time synchronization with our time source, but do not 
# permit the source to query or modify the service on this system.
restrict default nomodify notrap nopeer noquery

# Permit all access over the loopback interface.  This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1 
restrict ::1 

# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 192.168.0.244 iburst
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

#broadcast 192.168.1.255 autokey        # broadcast server
#broadcastclient                        # broadcast client
#broadcast 224.0.1.1 autokey            # multicast server
#multicastclient 224.0.1.1              # multicast client
#manycastserver 239.255.254.254         # manycast server
#manycastclient 239.255.254.254 autokey # manycast client

# Enable public key cryptography.
#crypto

includefile /etc/ntp/crypto/pw

# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography. 
keys /etc/ntp/keys

# Specify the key identifiers which are trusted.
#trustedkey 4 8 42

# Specify the key identifier to use with the ntpdc utility.
#requestkey 8

# Specify the key identifier to use with the ntpq utility.
#controlkey 8

# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats

# Disable the monitoring facility to prevent amplification attacks using ntpdc
# monlist command when default restrict does not include the noquery flag. See
# CVE-2013-5211 for more details.
# Note: Monitoring will not be disabled with the limited restriction flag.
disable monitor
[root@k8s-master ~]# 
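
The config above points the master at 192.168.0.244 as its time source; the original does not show starting the service, so as a minimal follow-up (standard ntp commands, nothing platform-specific assumed), start and enable ntpd and verify that the peer is reachable:

# start ntpd now and enable it at boot
systemctl start ntpd
systemctl enable ntpd
# list peers; the configured server should show a non-zero "reach" value once synchronized
ntpq -p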

4 Disable the firewall and SELinux on the master and node

[root@k8s-master ~]# systemctl stop firewalld
[root@k8s-master ~]# systemctl disable firewalld
[root@k8s-master ~]# vim /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three two values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 

[root@k8s-master ~]# reboot  

Make sure SELINUX=disabled; if it is still set to enforcing (or permissive), change it to disabled and reboot.
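
To avoid waiting for the reboot, a small sketch (plain CentOS tooling, not part of the original steps) that turns SELinux off for the current session and persists the change:

# put SELinux into permissive mode immediately (no reboot needed)
setenforce 0
# persist the change across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config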

5 Install docker on the master and node

Install the dependency packages:
yum install -y yum-utils device-mapper-persistent-data lvm2
Add the yum repository for the docker packages:
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Disable the edge and test repositories so that only stable releases are listed:
yum-config-manager --disable docker-ce-edge
yum-config-manager --disable docker-ce-test
Refresh the yum package index:
yum makecache fast
Install docker
Install the latest Docker CE directly:
yum install docker-ce
Or install a specific version of Docker CE:
yum list docker-ce --showduplicates|sort -r      # find the version you need
yum install docker-ce-18.06.0.ce -y              # install that version
Start docker and enable it at boot:
systemctl start docker && systemctl enable docker
Error
Transaction check error:
  file /usr/bin/docker from install of docker-ce-18.06.0.ce-3.el7.centos.aarch64 conflicts with file from package docker-ce-cli-1:18.09.7-3.el7.aarch64
  file /usr/share/bash-completion/completions/docker from install of docker-ce-18.06.0.ce-3.el7.centos.aarch64 conflicts with file from package docker-ce-cli-1:18.09.7-3.el7.aarch64
  file /usr/share/man/man1/docker-attach.1.gz from install of docker-ce-18.06.0.ce-3.el7.centos.aarch64 conflicts with file from package docker-ce-cli-1:18.09.7-3.el7.aarch64

Remove the conflicting docker-ce-cli package:

yum erase docker-ce-cli-1:18.09.7-3.el7.aarch64

Reinstall docker
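
With the conflicting package removed, the install command from above can simply be rerun; a sketch (docker 18.06 ships as a single package, so no separate cli package is pulled in):

yum install -y docker-ce-18.06.0.ce
systemctl start docker && systemctl enable docker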

Error

After installing docker, running "docker version" reports:

Cannot connect to the Docker daemon at tcp://0.0.0.0:2375. Is the docker daemon running?

Fix: configure DOCKER_HOST

vim /etc/profile.d/docker.sh
# add the following line
export DOCKER_HOST=tcp://localhost:2375
# apply it after saving
source /etc/profile
# edit the systemd unit file
vim /lib/systemd/system/docker.service
# change
ExecStart=/usr/bin/dockerd
# to
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock -H tcp://0.0.0.0:7654
# note: 2375 is the management port, 7654 is a spare port
# reload the configuration and restart docker
systemctl daemon-reload
systemctl restart docker.service

Run "docker version" again to confirm docker is working.
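
A quick sanity check, sketched here as an extra step rather than part of the original tutorial: the /version endpoint belongs to the Docker Engine API, so the daemon should answer both on the tcp port configured above and through the regular client:

# confirm dockerd is listening on the management port
ss -lntp | grep 2375
# query the engine API over the tcp endpoint used by DOCKER_HOST
curl -s http://localhost:2375/version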

Install k8s on the master and node
Switch the yum source to the Aliyun mirror:
vim   /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
Install k8s with yum:
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# or install a specific version
yum install -y kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2  --disableexcludes=kubernetes
Enable and start the kubelet service:
systemctl enable kubelet && systemctl start kubelet
Check the version number:
kubeadm version
# the output should report the version that was just installed (v1.15.2 in this setup)
Configure the iptables bridge parameters:
vim  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
# apply after saving
sysctl --system
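
If sysctl reports that the net.bridge.* keys do not exist, the bridge netfilter module is probably not loaded; a sketch (standard CentOS 7 module handling, not shown in the original):

# load the module now and on every boot, then re-apply the sysctl settings
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system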
Turn off swap:
swapoff -a
# to disable the swap partition permanently, open the following file and comment out the swap line
vi /etc/fstab
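
Equivalently, a one-line sketch that comments out any active swap entry in /etc/fstab:

# prefix every uncommented line that mounts swap with a '#'
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab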
Install etcd and flannel (etcd + flannel on the master, flannel only on the node):
yum  -y  install  etcd
systemctl start etcd;systemctl enable etcd
yum  -y  install  flannel
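
A quick health check, sketched with the etcdctl v2 syntax shipped in the CentOS etcd package (not part of the original steps):

# the local etcd member should report "cluster is healthy"
etcdctl cluster-health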
Initialize the control plane on the master:
kubeadm init --kubernetes-version=v1.15.2 --pod-network-cidr=10.2.0.0/16 --apiserver-advertise-address=192.168.0.239
[init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.239 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.239 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.239]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 32.003346 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: r0y84o.kcmv4dumghku67cj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.239:6443 --token r0y84o.kcmv4dumghku67cj     --discovery-token-ca-cert-hash sha256:e09ecb1421e7370c473b8ac56d1f6a993afafdb9c27729e4889422781d3c51d3 
[root@k8s-master ~]# 

If the required images cannot be pulled during init, pull the arm64 versions of the images manually:

docker pull mirrorgcrio/kube-apiserver-arm64:v1.15.2
docker pull mirrorgcrio/kube-controller-manager-arm64:v1.15.2
docker pull mirrorgcrio/kube-scheduler-arm64:v1.15.2
docker pull mirrorgcrio/kube-proxy-arm64:v1.15.2
docker pull mirrorgcrio/etcd-arm64:3.3.10
docker pull mirrorgcrio/pause-arm64:3.1
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm64
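
kubeadm looks for these images under their default k8s.gcr.io names, so after pulling from the mirror they likely need to be re-tagged; a sketch, assuming the kubeadm 1.15 default image names (kubeadm 1.15 also needs k8s.gcr.io/coredns:1.3.1, which is not in the pull list above), plus the quay.io name that kube-flannel.yml references:

docker tag mirrorgcrio/kube-apiserver-arm64:v1.15.2 k8s.gcr.io/kube-apiserver:v1.15.2
docker tag mirrorgcrio/kube-controller-manager-arm64:v1.15.2 k8s.gcr.io/kube-controller-manager:v1.15.2
docker tag mirrorgcrio/kube-scheduler-arm64:v1.15.2 k8s.gcr.io/kube-scheduler:v1.15.2
docker tag mirrorgcrio/kube-proxy-arm64:v1.15.2 k8s.gcr.io/kube-proxy:v1.15.2
docker tag mirrorgcrio/etcd-arm64:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag mirrorgcrio/pause-arm64:3.1 k8s.gcr.io/pause:3.1
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-arm64 quay.io/coreos/flannel:v0.11.0-arm64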

After the init completes, run the commands from its output:

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

The control-plane node is now installed. Remember to save the join command below so that each node can join the cluster later:
You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.0.239:6443 --token r0y84o.kcmv4dumghku67cj     --discovery-token-ca-cert-hash sha256:e09ecb1421e7370c473b8ac56d1f6a993afafdb9c27729e4889422781d3c51d3 
Configure kubectl authentication:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
# or, for the current shell only:
export KUBECONFIG=/etc/kubernetes/admin.conf

Check the cluster status and confirm that each component is Healthy:

kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}
Configure the flannel network:
mkdir -p ~/k8s/
cd ~/k8s
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f  kube-flannel.yml
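
Note on the pod network CIDR (an assumption based on the upstream kube-flannel.yml, which defaults its net-conf.json to "Network": "10.244.0.0/16"): the init above used --pod-network-cidr=10.2.0.0/16, so edit the kube-flannel-cfg ConfigMap in the manifest to match before applying it, for example:

# make the flannel network match the kubeadm pod network CIDR
sed -i 's#10.244.0.0/16#10.2.0.0/16#' kube-flannel.yml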

If https://raw.githubusercontent.com is not reachable, go to
https://site.ip138.com/raw.Githubusercontent.com/ and look up the IP address of raw.githubusercontent.com,
then add it to the hosts file. On Ubuntu, CentOS, and macOS, run in a terminal:

sudo vi /etc/hosts
151.101.76.133 raw.githubusercontent.com

The command produces the following output:

kubectl apply -f  kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
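
Finally, run the kubeadm join command saved earlier on each worker node, then confirm from the master that the node registers and the flannel and coredns pods start; a minimal check (the join line is the one printed by kubeadm init above):

# on k8s-node1
kubeadm join 192.168.0.239:6443 --token r0y84o.kcmv4dumghku67cj --discovery-token-ca-cert-hash sha256:e09ecb1421e7370c473b8ac56d1f6a993afafdb9c27729e4889422781d3c51d3
# on the master
kubectl get nodes
kubectl get pods -n kube-system -o wide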
