Deploying a Multi-Node Eureka Cluster on a Single Machine

The goal is to have several local Eureka instances register with one another, forming a highly available cluster. The multi-document `application.yml` below defines one profile per server (`server1` through `server3`); each instance points its `defaultZone` at the other two peers.

```
---
spring:
  application:
    name: ad-eureka
  profiles: server1
server:
  port: 8000
eureka:
  instance:
    hostname: server1
    prefer-ip-address: false
  client:
    service-url:
      defaultZone: http://server2:8001/eureka/,http://server3:8002/eureka/
---
spring:
  application:
    name: ad-eureka
  profiles: server2
server:
  port: 8001
eureka:
  instance:
    hostname: server2
    prefer-ip-address: false
  client:
    service-url:
      defaultZone: http://server1:8000/eureka/,http://server3:8002/eureka/
---
spring:
  application:
    name: ad-eureka
  profiles: server3
server:
  port: 8002
eureka:
  instance:
    hostname: server3
    prefer-ip-address: false
  client:
    service-url:
      defaultZone: http://server2:8001/eureka/,http://server1:8000/eureka/
```

Note that if all the instances register with the same IP address, peer registration fails. So we map three distinct hostnames to the same IP in C:\Windows\System32\drivers\etc\hosts. Append at the end of the file:

127.0.0.1 server1

127.0.0.1 server2

127.0.0.1 server3
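
To confirm the aliases resolve, a quick optional check:

```
ping server1
ping server2
ping server3
```

Each name should answer from 127.0.0.1.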

Package the project, skipping tests:

```
mvn clean package -Dmaven.test.skip=true -U
```

Then start each node with its profile; for example, to bring up node server1:

```
java -jar ***.jar --spring.profiles.active=server1
```
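
Putting it together, a minimal sketch for bringing up all three peers, assuming the build produced a jar named `ad-eureka.jar` (a hypothetical name; substitute the actual artifact):

```
# Launch the three Eureka peers in the background, one profile each.
# The jar name is hypothetical -- use whatever `mvn package` produced.
for p in server1 server2 server3; do
  java -jar target/ad-eureka.jar --spring.profiles.active=$p &
done

# Once they are up, each dashboard should list the other two replicas,
# e.g. http://server1:8000/ in a browser, or via the registry endpoint:
curl -s http://server1:8000/eureka/apps
```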

K8S: Binary Deployment of a Single Master Node, Then Extending It to Two Masters (a local experiment; to keep the VMs from bogging down, the multi-master setup stops at two masters rather than three)

I. Preparation

| Role | OS / software | IP |
| --- | --- | --- |
| k8s-master | centos7:1708 | 192.168.184.140 |
| k8s-master2 | centos7:1708 | 192.168.184.145 |
| k8s-node01 | centos7:1708 | 192.168.184.141 |
| k8s-node02 | centos7:1708 | 192.168.184.142 |
| nginx_lbm (primary LB) | nginx | 192.168.184.146 |
| nginx_lbb (backup LB) | nginx | 192.168.184.147 |
| VIP | - | 192.168.184.200 |

II. etcd cluster

1. Master node

#Create the k8s directory
mkdir k8s
cd k8s

#Create the certificate-generation script
#(the inline "#" annotations below are explanatory only; JSON does not allow comments, so strip them before actually running the script)
vim etcd-cert.sh
cat > ca-config.json <<EOF			#CA certificate config file
{
  "signing": {					#signing settings
    "default": {
      "expiry": "87600h"			#certificate lifetime (10 years)
    },
    "profiles": {				#signing profiles
      "www": {					#profile name
         "expiry": "87600h",
         "usages": [				#permitted uses
            "signing",				#signing
            "key encipherment",			#key encipherment (set on the CA certificate)
            "server auth",			#server-side authentication
            "client auth"			#client-side authentication
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json <<EOF				#CA signing request
{
    "CN": "etcd CA",				#this CA is dedicated to etcd (shared by all three nodes)
    "key": {
        "algo": "rsa",				#RSA asymmetric key
        "size": 2048				#2048-bit key length
    },
    "names": [					#identity fields in the certificate (standard format)
        {
            "C": "CN",				#country
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cat > server-csr.json <<EOF			#server-side signing request
{
    "CN": "etcd",
    "hosts": [					#the three node IP addresses
    "192.168.184.140",
    "192.168.184.141",
    "192.168.184.142"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server     #cfssl is the certificate-generation tool

#Create the startup script
#(again, the trailing "#" annotations are explanatory; a comment after a "\\" line continuation would break the generated unit file, so strip them before running)
vim etcd.sh
#!/bin/bash
#Usage: etcd node name, this node's IP, then the remaining members as name=peer-url pairs
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380
ETCD_NAME=$1						#positional arg 1: etcd node name
ETCD_IP=$2						#positional arg 2: node address
ETCD_CLUSTER=$3						#positional arg 3: rest of the cluster
WORK_DIR=/opt/etcd					#working directory
cat <<EOF >$WORK_DIR/cfg/etcd				#write the etcd config file into the working directory
#[Member]
ETCD_NAME="$ETCD_NAME"				#etcd node name
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://$ETCD_IP:2380"		#node IP, port 2380: peer traffic inside the cluster
ETCD_LISTEN_CLIENT_URLS="https://$ETCD_IP:2379"	#node IP, port 2379: exposed to external clients
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://$ETCD_IP:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://$ETCD_IP:2379"	#clients reach the advertised URL over https
ETCD_INITIAL_CLUSTER="$ETCD_NAME=https://$ETCD_IP:2380,$ETCD_CLUSTER"		#full member list
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"		#cluster token name: etcd-cluster
ETCD_INITIAL_CLUSTER_STATE="new"			#state: bootstrapping a new cluster
EOF
cat <<EOF >/usr/lib/systemd/system/etcd.service		#write the etcd systemd unit
[Unit]								#basic section
Description=Etcd Server
After=network.target					#start only after the network is up
After=network-online.target
Wants=network-online.target
[Service]						#service section
Type=notify
EnvironmentFile=$WORK_DIR/cfg/etcd	#location of the environment file written above
ExecStart=$WORK_DIR/bin/etcd \\			#launch parameters follow
--name=\\$ETCD_NAME \\
--data-dir=\\$ETCD_DATA_DIR \\
--listen-peer-urls=\\$ETCD_LISTEN_PEER_URLS \\
--listen-client-urls=\\$ETCD_LISTEN_CLIENT_URLS,http://127.0.0.1:2379 \\
--advertise-client-urls=\\$ETCD_ADVERTISE_CLIENT_URLS \\	#the following are intra-cluster settings
--initial-advertise-peer-urls=\\$ETCD_INITIAL_ADVERTISE_PEER_URLS \\
--initial-cluster=\\$ETCD_INITIAL_CLUSTER \\
--initial-cluster-token=\\$ETCD_INITIAL_CLUSTER_TOKEN \\	#intra-cluster traffic also carries the token, guarding against man-in-the-middle tampering
--initial-cluster-state=new \\
--cert-file=$WORK_DIR/ssl/server.pem \\		#certificate parameters
--key-file=$WORK_DIR/ssl/server-key.pem \\
--peer-cert-file=$WORK_DIR/ssl/server.pem \\
--peer-key-file=$WORK_DIR/ssl/server-key.pem \\
--trusted-ca-file=$WORK_DIR/ssl/ca.pem \\
--peer-trusted-ca-file=$WORK_DIR/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536					#maximum number of open file descriptors
[Install]
WantedBy=multi-user.target				#enable for multi-user boot
EOF
systemctl daemon-reload					#reload unit definitions
systemctl enable etcd
systemctl restart etcd

#Create the certificate directory and move in the generation script from the k8s directory
mkdir etcd-cert
cd etcd-cert/
mv ../etcd-cert.sh ./

#Download the certificate tooling from the official source
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo

#Make the tools executable, then run the certificate script (inside etcd-cert/)
chmod +x /usr/local/bin/cfssl
chmod +x /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssljson
bash etcd-cert.sh
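
Optionally, dump the issued server certificate and confirm the SANs cover all three node IPs (cfssl-certinfo was installed above):

```
# The "sans" field should list 192.168.184.140, .141 and .142.
cfssl-certinfo -cert server.pem
```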

#Deploy etcd
#(download these packages into the k8s directory first: etcd-v3.3.10-linux-amd64.tar.gz, flannel-v0.10.0-linux-amd64.tar.gz, kubernetes-server-linux-amd64.tar.gz)

#Unpack etcd-v3.3.10-linux-amd64.tar.gz
cd ~/k8s
tar zxvf etcd-v3.3.10-linux-amd64.tar.gz

#Create the etcd working directory (cfg: config files, bin: binaries, ssl: certificates)
mkdir /opt/etcd/{cfg,bin,ssl} -p

#Move the binaries into place
mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin

#Copy the certificates
cp etcd-cert/*.pem /opt/etcd/ssl

#This command blocks, waiting for the other members to join
bash etcd.sh etcd01 192.168.184.140 etcd02=https://192.168.184.141:2380,etcd03=https://192.168.184.142:2380
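
The command above blocks until a quorum forms; from a second terminal its progress can be followed in the journal (an optional check):

```
# etcd01 keeps retrying its peers until etcd02/etcd03 come up.
journalctl -u etcd -f
```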


#From another terminal, inspect the generated config file
cd /opt/etcd/cfg
cat etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.184.140:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.184.140:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.184.140:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.184.140:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.184.140:2380,etcd02=https://192.168.184.141:2380,etcd03=https://192.168.184.142:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#Check the etcd process
ps -ef | grep etcd

#Push the certificates and the systemd unit to the two node machines
scp -r /opt/etcd/ root@192.168.184.141:/opt
scp -r /opt/etcd/ root@192.168.184.142:/opt
scp -r /usr/lib/systemd/system/etcd.service root@192.168.184.141:/usr/lib/systemd/system/
scp -r /usr/lib/systemd/system/etcd.service root@192.168.184.142:/usr/lib/systemd/system/


2. Node machines

#Inspect and edit the config file
ls /usr/lib/systemd/system/ | grep etcd
vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"					#change the node name
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.184.141:2380"	#change the IP in the :2380 URL to 141 (this node's IP)
ETCD_LISTEN_CLIENT_URLS="https://192.168.184.141:2379"		#change the IP in the :2379 URL to 141 (this node's IP)
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.184.141:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.184.141:2379"
#the two advertise URLs above also get this node's IP
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.184.140:2380,etcd02=https://192.168.184.141:2380,etcd03=https://192.168.184.142:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
#edit the node2 config the same way (etcd03, IP 192.168.184.142)

#Start the services (run the script on the master first so it waits for members to join, then start etcd on both node machines)
[root@k8s-master ~/k8s]# bash etcd.sh etcd01 192.168.184.140 etcd02=https://192.168.184.141:2380,etcd03=https://192.168.184.142:2380
[root@k8s-node01 /opt/etcd/cfg]# systemctl start etcd 
[root@k8s-node02 /opt/etcd/cfg]# systemctl start etcd 

#Check cluster health (run on the master)
[root@k8s-master ~/k8s]# cd etcd-cert/
[root@k8s-master ~/k8s/etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.184.140:2379,https://192.168.184.141:2379,https://192.168.184.142:2379" cluster-health
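
As an alternative to cluster-health, each member also serves a /health endpoint on the client port, which can be queried with curl and the same certificates (run from the etcd-cert directory):

```
# A healthy member answers with a JSON body like {"health":"true"}.
curl --cacert ca.pem --cert server.pem --key server-key.pem https://192.168.184.140:2379/health
```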

III. Deploying the Flannel network

#Both node machines need the Docker engine installed first; see the separate write-up on Docker basics and installation

#Write the allocated network range into etcd for flannel to use (on the master)
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.184.140:2379,https://192.168.184.141:2379,https://192.168.184.142:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
#Command breakdown--------------------------------------------------
#etcdctl authenticates with the CA-issued certificates; the endpoints are the three etcd members on port 2379
#set /coreos.com/network/config writes the network settings
#"Network": "172.17.0.0/16" is the aggregate range; every Pod subnet flannel leases must be a smaller subnet inside it
#"Backend": {"Type": "vxlan"} selects VXLAN for cross-host traffic
#----------------------------------------------------------

#Read back what was written (on the master)
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.184.140:2379,https://192.168.184.141:2379,https://192.168.184.142:2379" get /coreos.com/network/config

#Upload the flannel package to every node machine and unpack it (all node machines)
tar zxvf flannel-v0.10.0-linux-amd64.tar.gz

#Create the k8s working directory (all node machines)
mkdir /opt/kubernetes/{cfg,bin,ssl} -p
mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/

#Create the startup script (both node machines)
vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld 		#write the config file
FLANNEL_OPTIONS="--etcd-endpoints=$ETCD_ENDPOINTS \\	#flannel authenticates to etcd with the CA-issued certificates
-etcd-cafile=/opt/etcd/ssl/ca.pem \\
-etcd-certfile=/opt/etcd/ssl/server.pem \\
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service	#write the systemd unit
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \\$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env	#Docker will consume the network options flannel writes out
Restart=on-failure
[Install]
WantedBy=multi-user.target				#multi-user target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld

#Enable the flannel network (both node machines)
bash flannel.sh https://192.168.184.140:2379,https://192.168.184.141:2379,https://192.168.184.142:2379
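
Once flanneld is running on both nodes, each should have registered its leased subnet under /coreos.com/network/subnets; listing that key from the master is a quick sanity check (same flags as the earlier etcdctl commands):

```
# Expect one entry per node, each a /24 carved out of 172.17.0.0/16.
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.184.140:2379,https://192.168.184.141:2379,https://192.168.184.142:2379" ls /coreos.com/network/subnets
```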

#Hook Docker up to flannel (both node machines)
vim /usr/lib/systemd/system/docker.service
#in the [Service] section (around line 12), add:
EnvironmentFile=/run/flannel/subnet.env
#and extend ExecStart (around line 13) with the $DOCKER_NETWORK_OPTIONS parameter:
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock

#Check the subnet flannel leased
cat /run/flannel/subnet.env 

#Reload systemd and restart Docker
systemctl daemon-reload 
systemctl restart docker
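
To confirm Docker picked up the flannel options, compare the docker0 bridge address with the leased subnet (on either node):

```
# FLANNEL_SUBNET in this file...
cat /run/flannel/subnet.env
# ...should contain the address shown on the docker0 bridge.
ip addr show docker0
```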

IV. Testing connectivity between containers

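A minimal sketch of this test, assuming the busybox image can be pulled on both nodes (the target IP is a placeholder read from the first node's output):

```
# On node01: start a container and print its flannel-assigned IP.
docker run -itd --name net-test busybox
docker inspect -f '{{.NetworkSettings.IPAddress}}' net-test

# On node02: ping node01's container across the VXLAN overlay.
docker run -it --rm busybox ping -c 3 <ip-from-node01>
```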

V. Single-master deployment

1. Deploy the master components

#Create the k8s working directory and the apiserver certificate directory
cd ~/k8s
mkdir /opt/kubernetes/{cfg,bin,ssl} -p
mkdir k8s-cert

#Generate the certificates
cd k8s-cert
vim k8s-cert.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.184.140",			#master1
      "192.168.184.145",			#master2 (prepared for the multi-master setup later)
      "192.168.184.200",			#VIP (floating address)
      "192.168.184.146",			#nginx load balancer 1 (primary)
      "192.168.184.147",			#nginx load balancer 2 (backup)
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
