GreenPlum 5.10.0 Cluster Deployment

Part 1: Initial System Configuration

1.1 Deployment Environment

No.  IP address      Hostname     Memory  OS version       Kernel version
1    192.168.61.61   gpmaster61   16 GB   CentOS 7.5.1804  3.10.0-862.9.1.el7.x86_64
2    192.168.61.62   gpsegment62  16 GB   CentOS 7.5.1804  3.10.0-862.9.1.el7.x86_64
3    192.168.61.63   gpsegment63  16 GB   CentOS 7.5.1804  3.10.0-862.9.1.el7.x86_64
4    192.168.61.64   gpsegment64  16 GB   CentOS 7.5.1804  3.10.0-862.9.1.el7.x86_64
5    192.168.61.65   gpstandby65  16 GB   CentOS 7.5.1804  3.10.0-862.9.1.el7.x86_64

1.2 Set Hostnames and Synchronize Time

# 192.168.61.61
hostnamectl set-hostname gpmaster61
ntpdate -u ntp1.aliyun.com

# 192.168.61.62
hostnamectl set-hostname gpsegment62
ntpdate -u ntp1.aliyun.com

# 192.168.61.63
hostnamectl set-hostname gpsegment63
ntpdate -u ntp1.aliyun.com

# 192.168.61.64
hostnamectl set-hostname gpsegment64
ntpdate -u ntp1.aliyun.com

# 192.168.61.65
hostnamectl set-hostname gpstandby65
ntpdate -u ntp1.aliyun.com
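
A one-shot ntpdate only corrects the clock at that moment. To keep the five hosts loosely in sync afterwards, a periodic job can be added on every node; a minimal sketch using cron (the file name and 30-minute interval are my own choices, not part of the original setup):

# optional: re-sync the clock every 30 minutes (assumes crond is running)
cat > /etc/cron.d/ntpdate-sync << EOF
*/30 * * * * root /usr/sbin/ntpdate -u ntp1.aliyun.com > /dev/null 2>&1
EOF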

1.3 Add /etc/hosts Entries

cat > /etc/hosts << EOF
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.61.61 gpmaster61
192.168.61.62 gpsegment62
192.168.61.63 gpsegment63
192.168.61.64 gpsegment64
192.168.61.65 gpstandby65
EOF
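
Before continuing, it is worth confirming that every host can resolve and reach the others. A quick check, run from any node (a sketch):

# any FAILED line points to a bad /etc/hosts entry or a network problem
for h in gpmaster61 gpsegment62 gpsegment63 gpsegment64 gpstandby65; do
  ping -c 1 -W 1 "$h" > /dev/null && echo "$h ok" || echo "$h FAILED"
done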

1.4 Kernel Parameter Tuning

cat > /etc/sysctl.conf << EOF
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).

kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 500 1024000 200 4096
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 1025 65535
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
vm.overcommit_memory = 2
vm.swappiness = 1
kernel.pid_max = 655350
EOF
sysctl -p
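
The shmmax/shmall values above are fixed sample values. If you prefer to size shared memory from the host's physical RAM (roughly half of it, a common rule of thumb rather than anything required by this guide), the numbers can be derived like this (illustrative sketch):

# print RAM-derived shared memory settings for comparison with sysctl.conf
PAGES=$(getconf _PHYS_PAGES)
PAGE_SIZE=$(getconf PAGE_SIZE)
echo "kernel.shmall = $(( PAGES / 2 ))"             # half of RAM, in pages
echo "kernel.shmmax = $(( PAGES / 2 * PAGE_SIZE ))" # half of RAM, in bytes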

1.5 Raise Linux Resource Limits

cat > /etc/security/limits.conf << EOF
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
EOF
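
limits.conf is read at login, so the new limits only apply to fresh sessions; on CentOS 7, /etc/security/limits.d/20-nproc.conf may also cap nproc for non-root users, so adjust that file too if the value does not take effect. A quick check from a new shell:

ulimit -n   # expect 65536
ulimit -u   # expect 131072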

1.6 Disable SELinux and the Firewall

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl stop firewalld && systemctl disable firewalld

1.7 Create an XFS Filesystem and Mount It

# Use a dedicated disk for data: format it as XFS and mount it with the recommended options
mkfs.xfs /dev/sdb1
mkdir /greenplum
mount /dev/sdb1 /greenplum

cat >> /etc/fstab << EOF
/dev/sdb1 /greenplum xfs nodev,noatime,inode64,allocsize=16m 0 0
EOF
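
To confirm the filesystem and mount options are in effect (after a remount or the reboot in 1.9), something like the following can be used (a sketch):

# the mount line should show xfs with nodev,noatime,inode64,allocsize=16m
mount | grep ' /greenplum '
xfs_info /greenplum | head -n 3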

1.8 Disable THP and Adjust the Disk Read-Ahead

# Disable THP
cat /sys/kernel/mm/transparent_hugepage/enabled # check the current THP setting
grubby --update-kernel=ALL --args="transparent_hugepage=never" # set THP to never at boot

# Create an init.d script
echo '#!/bin/sh
case $1 in
  start)
    if [ -d /sys/kernel/mm/transparent_hugepage ]; then
      thp_path=/sys/kernel/mm/transparent_hugepage
    elif [ -d /sys/kernel/mm/redhat_transparent_hugepage ]; then
      thp_path=/sys/kernel/mm/redhat_transparent_hugepage
    else
      exit 0
    fi

    echo never > ${thp_path}/enabled
    echo never > ${thp_path}/defrag

    unset thp_path
    ;;
esac' > /etc/init.d/disable-transparent-hugepages

# Register a systemd unit
echo '[Unit]
Description=Disable Transparent Hugepages
After=multi-user.target

[Service]
ExecStart=/etc/init.d/disable-transparent-hugepages start
Type=simple

[Install]
WantedBy=multi-user.target' > /etc/systemd/system/disable-thp.service

# Disk read-ahead
/sbin/blockdev --getra /dev/sdb1 # check the current read-ahead value
/sbin/blockdev --setra 65535 /dev/sdb1 # set the read-ahead value

# Create an init.d script
echo '#!/bin/sh
device_name=/dev/sdb1
case $1 in
  start)
    if mount | grep -q "^${device_name}"; then
      /sbin/blockdev --setra 65535 ${device_name}
    else
      exit 0
    fi

    unset device_name
    ;;
esac' > /etc/init.d/blockdev-setra-sdb

# Register a systemd unit
echo '[Unit]
Description=Blockdev --setra 65535 on /dev/sdb1
After=multi-user.target

[Service]
ExecStart=/etc/init.d/blockdev-setra-sdb start
Type=simple

[Install]
WantedBy=multi-user.target' > /etc/systemd/system/blockdev-setra-sdb.service

# Make the scripts executable and enable both services at boot
chmod 755 /etc/init.d/disable-transparent-hugepages
chmod 755 /etc/init.d/blockdev-setra-sdb
chmod 644 /etc/systemd/system/disable-thp.service
chmod 644 /etc/systemd/system/blockdev-setra-sdb.service
systemctl enable disable-thp blockdev-setra-sdb
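
After the reboot in 1.9, both settings should persist; a quick post-reboot check (a sketch):

cat /sys/kernel/mm/transparent_hugepage/enabled   # expect: always madvise [never]
/sbin/blockdev --getra /dev/sdb1                  # expect the value set above (65535, possibly rounded by the kernel)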

1.9 Configuration Complete: Reboot the Servers

reboot

Part 2: Install GreenPlum

2.1 Install Dependency Packages on All Nodes

yum -y install epel-release
yum -y install wget cmake3 git gcc gcc-c++ bison flex libedit-devel zlib zlib-devel perl-devel perl-ExtUtils-Embed python-devel libevent libevent-devel libxml2 libxml2-devel libcurl libcurl-devel bzip2 bzip2-devel net-tools libffi-devel openssl-devel

2.2 Create the Installation Directory

mkdir /greenplum/soft

2.3 Install the Software on the Master Node

./greenplum-db-5.10.0-rhel7-x86_64.bin

... (license agreement text) ...

*****************************************************************************
Do you accept the Pivotal Database license agreement? [yes|no]
*****************************************************************************

yes # accept the license agreement

*****************************************************************************
Provide the installation path for Greenplum Database or press ENTER to 
accept the default installation path: /usr/local/greenplum-db-5.10.0
*****************************************************************************

/greenplum/soft/greenplum-db-5.10.0 # specify the installation directory

*****************************************************************************
Install Greenplum Database into /greenplum/soft/greenplum-db-5.10.0? [yes|no]
*****************************************************************************

yes # confirm the installation

*****************************************************************************
/greenplum/soft/greenplum-db-5.10.0 does not exist.
Create /greenplum/soft/greenplum-db-5.10.0 ? [yes|no]
(Selecting no will exit the installer)
*****************************************************************************

yes # create the installation directory

Extracting product to /greenplum/soft/greenplum-db-5.10.0

*****************************************************************************
Installation complete.
Greenplum Database is installed in /greenplum/soft/greenplum-db-5.10.0

Pivotal Greenplum documentation is available
for download at http://gpdb.docs.pivotal.io
*****************************************************************************

2.4 Create a Host List File for All Nodes

cat > all_nodes << EOF
gpmaster61
gpsegment62
gpsegment63
gpsegment64
gpstandby65
EOF

2.5 Set Up Passwordless SSH Between Hosts

source  /greenplum/soft/greenplum-db/greenplum_path.sh
gpssh-exkeys -f /root/all_nodes
[STEP 1 of 5] create local ID and authorize on local host

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] authorize current user on remote hosts
  ... send to gpsegment62
  ***
  *** Enter password for gpsegment62: # enter the host password
  ... send to gpsegment63
  ... send to gpsegment64
  ... send to gpstandby65

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts
  ... finished key exchange with gpsegment62
  ... finished key exchange with gpsegment63
  ... finished key exchange with gpsegment64
  ... finished key exchange with gpstandby65

[INFO] completed successfully

2.6 Verify Host Connectivity

gpssh -f /root/all_nodes -e "ls -l"

2.7 Create the Installation User on All Hosts

gpssh -f /root/all_nodes
=> groupadd -g 3000 gpadmin
[gpsegment64]
[ gpmaster61]
[gpstandby65]
[gpsegment62]
[gpsegment63]
=> useradd -u 3000 -g gpadmin -m -s /bin/bash gpadmin
[gpsegment64]
[ gpmaster61]
[gpstandby65]
[gpsegment62]
[gpsegment63]
=> echo gpadmin | passwd  gpadmin --stdin
[gpsegment64] Changing password for user gpadmin.
[gpsegment64] passwd: all authentication tokens updated successfully.
[ gpmaster61] Changing password for user gpadmin.
[ gpmaster61] passwd: all authentication tokens updated successfully.
[gpstandby65] Changing password for user gpadmin.
[gpstandby65] passwd: all authentication tokens updated successfully.
[gpsegment62] Changing password for user gpadmin.
[gpsegment62] passwd: all authentication tokens updated successfully.
[gpsegment63] Changing password for user gpadmin.
[gpsegment63] passwd: all authentication tokens updated successfully.
=> chown -R gpadmin.gpadmin /greenplum
[gpsegment64]
[ gpmaster61]
[gpstandby65]
[gpsegment62]
[gpsegment63]
=> exit

Part 3: Distribute the GreenPlum Software to All Nodes

3.1 Switch to gpadmin and Initialize Environment Variables

su - gpadmin
cat >> .bashrc << EOF
export MASTER_DATA_DIRECTORY=/greenplum/data/gpmaster/gpseg-1
source /greenplum/soft/greenplum-db/greenplum_path.sh
EOF
source .bashrc

3.2 Create the Host List File

cat > all_nodes << EOF
gpmaster61
gpsegment62
gpsegment63
gpsegment64
gpstandby65
EOF

3.3 Set Up Passwordless SSH for gpadmin

gpssh-exkeys -f all_nodes
[STEP 1 of 5] create local ID and authorize on local host

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] authorize current user on remote hosts
  ... send to gpsegment62
  ***
  *** Enter password for gpsegment62: 
  ... send to gpsegment63
  ... send to gpsegment64
  ... send to gpstandby65

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts
  ... finished key exchange with gpsegment62
  ... finished key exchange with gpsegment63
  ... finished key exchange with gpsegment64
  ... finished key exchange with gpstandby65

[INFO] completed successfully

3.4 Distribute the GreenPlum Package

gpseginstall -f all_nodes -u gpadmin -p gpadmin

20180801:16:39:20:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-Installation Info:
link_name greenplum-db
binary_path /greenplum/soft/greenplum-db-5.10.0
binary_dir_location /greenplum/soft
binary_dir_name greenplum-db-5.10.0
20180801:16:39:20:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-check cluster password access
20180801:16:39:21:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-de-duplicate hostnames
20180801:16:39:21:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-master hostname: gpmaster61
20180801:16:39:21:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-rm -f /greenplum/soft/greenplum-db-5.10.0.tar; rm -f /greenplum/soft/greenplum-db-5.10.0.tar.gz
20180801:16:39:21:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-cd /greenplum/soft; tar cf greenplum-db-5.10.0.tar greenplum-db-5.10.0
20180801:16:39:26:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-gzip /greenplum/soft/greenplum-db-5.10.0.tar
20180801:16:40:18:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-remote command: mkdir -p /greenplum/soft
20180801:16:40:19:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-remote command: rm -rf /greenplum/soft/greenplum-db-5.10.0
20180801:16:40:20:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-scp software to remote location
20180801:16:40:34:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-remote command: gzip -f -d /greenplum/soft/greenplum-db-5.10.0.tar.gz
20180801:16:40:43:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-md5 check on remote location
20180801:16:40:46:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-remote command: cd /greenplum/soft; tar xf greenplum-db-5.10.0.tar
20180801:16:40:49:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-remote command: rm -f /greenplum/soft/greenplum-db-5.10.0.tar
20180801:16:40:49:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-remote command: cd /greenplum/soft; rm -f greenplum-db; ln -fs greenplum-db-5.10.0 greenplum-db
20180801:16:40:50:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-rm -f /greenplum/soft/greenplum-db-5.10.0.tar.gz
20180801:16:40:50:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-version string on master: gpssh version 5.10.0 build commit:a075db4267fa1ca9e11c2c3813e3e058da4608ce
20180801:16:40:50:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-remote command: . /greenplum/soft/greenplum-db/./greenplum_path.sh; /greenplum/soft/greenplum-db/./bin/gpssh --version
20180801:16:40:51:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-remote command: . /greenplum/soft/greenplum-db-5.10.0/greenplum_path.sh; /greenplum/soft/greenplum-db-5.10.0/bin/gpssh --version
20180801:16:40:52:025448 gpseginstall:gpmaster61:gpadmin-[INFO]:-SUCCESS -- Requested commands completed

3.5 Create the Master and Segment Data Directories as gpadmin

cat > seg_nodes <<EOF
gpsegment62
gpsegment63
gpsegment64
EOF
mkdir -p /greenplum/data/gpmaster
gpssh -h gpstandby65 -e 'mkdir -p /greenplum/data/gpmaster'
gpssh -f seg_nodes -e 'mkdir -p /greenplum/data/gpdatap{1..4}'
gpssh -f seg_nodes -e 'mkdir -p /greenplum/data/gpdatam{1..4}'

3.6 Create the Initialization File

cat > gpinitsystem_config << EOF
ARRAY_NAME="ChinaDaas Data Platform"
SEG_PREFIX=gpseg
PORT_BASE=40000
MASTER_MAX_CONNECT=1000
declare -a DATA_DIRECTORY=(/greenplum/data/gpdatap1 /greenplum/data/gpdatap2 /greenplum/data/gpdatap3 /greenplum/data/gpdatap4)
MASTER_HOSTNAME=gpmaster61
MASTER_DIRECTORY=/greenplum/data/gpmaster
MASTER_PORT=5432
TRUSTED_SHELL=ssh
CHECK_POINT_SEGMENTS=8
ENCODING=UNICODE
MIRROR_PORT_BASE=50000
REPLICATION_PORT_BASE=41000
MIRROR_REPLICATION_PORT_BASE=51000
declare -a MIRROR_DATA_DIRECTORY=(/greenplum/data/gpdatam1 /greenplum/data/gpdatam2 /greenplum/data/gpdatam3 /greenplum/data/gpdatam4)
DATABASE_NAME=testdb
MACHINE_LIST_FILE=/home/gpadmin/seg_nodes
EOF

3.7 Initialize the Cluster (Master and Segments) and Deploy the Standby Master


-a: do not prompt for confirmation
-c: specify the cluster initialization file
-h: specify the segment host file
-s: specify the standby master host, creating the standby (see the illustrative invocation below)
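
Because MACHINE_LIST_FILE in gpinitsystem_config already points at seg_nodes, the run below only needs -a and -c; with three hosts in seg_nodes and four directories each in DATA_DIRECTORY and MIRROR_DATA_DIRECTORY it creates 3 x 4 = 12 primaries plus 12 mirrors, matching the "24 of 24 segment instances" reported in the log further down. Purely as a sketch, an equivalent invocation that passes the segment host file and the standby host explicitly (instead of adding the standby later in 3.9) would be:

# illustrative only -- this deployment uses -a -c and adds the standby master in step 3.9
gpinitsystem -a -c gpinitsystem_config -h seg_nodes -s gpstandby65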

gpinitsystem -a -c gpinitsystem_config

20180801:16:42:41:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Checking configuration parameters, please wait...
20180801:16:42:41:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Reading Greenplum configuration file gpinitsystem_config
20180801:16:42:41:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Locale has not been set in gpinitsystem_config, will set to default value
20180801:16:42:41:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Locale set to en_US.utf8
20180801:16:42:41:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Checking configuration parameters, Completed
20180801:16:42:41:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Commencing multi-home checks, please wait...
20180801:16:42:42:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Configuring build for standard array
20180801:16:42:42:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Commencing multi-home checks, Completed
20180801:16:42:42:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Building primary segment instance array, please wait...
20180801:16:42:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Building group mirror array type , please wait...
20180801:16:42:54:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Checking Master host
20180801:16:42:54:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Checking new segment hosts, please wait...
20180801:16:43:15:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Checking new segment hosts, Completed
20180801:16:43:15:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Building the Master instance database, please wait...
20180801:16:43:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Starting the Master in admin mode
20180801:16:43:45:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Commencing parallel build of primary segment instances
20180801:16:43:45:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Spawning parallel processes    batch [1], please wait...
20180801:16:43:46:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Parallel process exit status
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Total processes marked as completed        = 12
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Total processes marked as killed           = 0
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Total processes marked as failed           = 0
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Commencing parallel build of mirror segment instances
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Spawning parallel processes    batch [1], please wait...
20180801:16:44:31:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Parallel process exit status
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Total processes marked as completed        = 12
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Total processes marked as killed           = 0
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Total processes marked as failed           = 0
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Deleting distributed backout files
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Removing back out file
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-No errors generated from parallel processes
20180801:16:44:48:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Restarting the Greenplum instance in production mode
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Starting gpstop with args: -a -l /home/gpadmin/gpAdminLogs -i -m -d /greenplum/data/gpmaster/gpseg-1
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Gathering information and validating the environment...
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Obtaining Segment details from master...
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 5.10.0 build commit:a075db4267fa1ca9e11c2c3813e3e058da4608ce'
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-There are 0 connections to the database
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='immediate'
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Master host=gpmaster61
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=immediate
20180801:16:44:48:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Master segment instance directory=/greenplum/data/gpmaster/gpseg-1
20180801:16:44:50:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20180801:16:44:50:038105 gpstop:gpmaster61:gpadmin-[INFO]:-Terminating processes for segment /greenplum/data/gpmaster/gpseg-1
20180801:16:44:50:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Starting gpstart with args: -a -l /home/gpadmin/gpAdminLogs -d /greenplum/data/gpmaster/gpseg-1
20180801:16:44:50:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Gathering information and validating the environment...
20180801:16:44:50:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 5.10.0 build commit:a075db4267fa1ca9e11c2c3813e3e058da4608ce'
20180801:16:44:50:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Greenplum Catalog Version: '301705051'
20180801:16:44:50:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Starting Master instance in admin mode
20180801:16:44:52:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20180801:16:44:52:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Obtaining Segment details from master...
20180801:16:44:52:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Setting new master era
20180801:16:44:52:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Master Started...
20180801:16:44:52:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Shutting down master
20180801:16:44:54:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Commencing parallel primary and mirror segment instance startup, please wait...
20180801:16:45:35:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Process results...
20180801:16:45:35:038131 gpstart:gpmaster61:gpadmin-[INFO]:-----------------------------------------------------
20180801:16:45:35:038131 gpstart:gpmaster61:gpadmin-[INFO]:-   Successful segment starts                  = 24
20180801:16:45:35:038131 gpstart:gpmaster61:gpadmin-[INFO]:-   Failed segment starts                      = 0
20180801:16:45:35:038131 gpstart:gpmaster61:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)   = 0
20180801:16:45:35:038131 gpstart:gpmaster61:gpadmin-[INFO]:-----------------------------------------------------
20180801:16:45:35:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Successfully started 24 of 24 segment instances
20180801:16:45:35:038131 gpstart:gpmaster61:gpadmin-[INFO]:-----------------------------------------------------
20180801:16:45:35:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Starting Master instance gpmaster61 directory /greenplum/data/gpmaster/gpseg-1 
20180801:16:45:36:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Command pg_ctl reports Master gpmaster61 instance active
20180801:16:45:36:038131 gpstart:gpmaster61:gpadmin-[INFO]:-No standby master configured.  skipping...
20180801:16:45:36:038131 gpstart:gpmaster61:gpadmin-[INFO]:-Database successfully started
20180801:16:45:37:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Completed restart of Greenplum instance in production mode
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Scanning utility log file for any warning messages
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Log file scan check passed
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Greenplum Database instance successfully created
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:----------------------------------------------
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-To complete the environment configuration, please
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-update gpadmin .bashrc file with the following
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-1. Ensure that the greenplum_path.sh file is sourced
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-2. Add "export MASTER_DATA_DIRECTORY=/greenplum/data/gpmaster/gpseg-1"
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-   to access the Greenplum scripts for this instance:
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-   or, use -d /greenplum/data/gpmaster/gpseg-1 option for the Greenplum scripts
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-   Example gpstate -d /greenplum/data/gpmaster/gpseg-1
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Script log file = /home/gpadmin/gpAdminLogs/gpinitsystem_20180801.log
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-To remove instance, run gpdeletesystem utility
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-To initialize a Standby Master Segment for this Greenplum instance
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Review options for gpinitstandby
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-----------------------------------------------
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-The Master /greenplum/data/gpmaster/gpseg-1/pg_hba.conf post gpinitsystem
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-has been configured to allow all hosts within this new
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-array to intercommunicate. Any hosts external to this
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-new array must be explicitly added to this file
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-Refer to the Greenplum Admin support guide which is
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-located in the /greenplum/soft/greenplum-db/./docs directory
20180801:16:46:14:025963 gpinitsystem:gpmaster61:gpadmin-[INFO]:-------------------------------------------------

# view segment and mirror placement
psql -d testdb -c 'select a.dbid,a.content,a.role,a.port,a.hostname,b.fsname,c.fselocation from gp_segment_configuration a,pg_filespace b,pg_filespace_entry c where a.dbid=c.fsedbid and b.oid=c.fsefsoid order by content;'

If initialization fails, the backout script below rolls back the installation by removing the data directories, postgres processes, and log files the utility created.

cd ~/gpAdminLogs/
bash backout_gpinitsystem_gpadmin_<creation_date>

3.8 Add Client Access Rules

echo "host     all         gpadmin         0.0.0.0/0       md5" >> $MASTER_DATA_DIRECTORY/pg_hba.conf
gpstop -u
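
This host line lets gpadmin connect from any address using md5 (password) authentication, and gpstop -u reloads pg_hba.conf without restarting the cluster. A remote login will only work once the gpadmin password is set in 3.11; a minimal connection test from another machine (a sketch, assuming psql is installed there) would be:

# run from a remote client after completing 3.11
PGPASSWORD=gpadmin psql -h 192.168.61.61 -p 5432 -U gpadmin -d testdb -c 'select version();'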

3.9 Add a Standby Master Host (Optional)

gpinitstandby -a -F pg_system:/greenplum/data/gpmaster/gpseg-1/ -s gpstandby65

20180801:16:50:44:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Validating environment and parameters for standby initialization...
20180801:16:50:44:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Checking for filespace directory /greenplum/data/gpmaster/gpseg-1 on gpstandby65
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Greenplum standby master initialization parameters
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Greenplum master hostname               = gpmaster61
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Greenplum master data directory         = /greenplum/data/gpmaster/gpseg-1
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Greenplum master port                   = 5432
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Greenplum standby master hostname       = gpstandby65
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Greenplum standby master port           = 5432
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Greenplum standby master data directory = /greenplum/data/gpmaster/gpseg-1
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Greenplum update system catalog         = On
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:- Filespace locations
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:------------------------------------------------
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-pg_system -> /greenplum/data/gpmaster/gpseg-1
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Syncing Greenplum Database extensions to standby
20180801:16:50:45:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-The packages on gpstandby65 are consistent.
20180801:16:50:46:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Adding standby master to catalog...
20180801:16:50:46:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Database catalog updated successfully.
20180801:16:50:46:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Updating pg_hba.conf file...
20180801:16:50:48:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-pg_hba.conf files updated successfully.
20180801:16:50:51:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Updating filespace flat files...
20180801:16:50:51:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Filespace flat file updated successfully.
20180801:16:50:51:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Starting standby master
20180801:16:50:51:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Checking if standby master is running on host: gpstandby65  in directory: /greenplum/data/gpmaster/gpseg-1
20180801:16:50:53:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Cleaning up pg_hba.conf backup files...
20180801:16:50:54:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Backup files of pg_hba.conf cleaned up successfully.
20180801:16:50:54:038511 gpinitstandby:gpmaster61:gpadmin-[INFO]:-Successfully created standby master on gpstandby65

# view the master and standby master entries
psql -d testdb -c "select * from gp_segment_configuration where content='-1';"

# copy pg_hba.conf to the gpstandby65 standby node, then reload the GPDB configuration
gpscp -h gpstandby65 -v $MASTER_DATA_DIRECTORY/pg_hba.conf =:$MASTER_DATA_DIRECTORY/
gpstop -u

3.10 Check the GPDB Cluster Status

gpstate -e
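
gpstate -e reports segments with mirroring or status issues. A few other standard gpstate views that are handy at this point:

gpstate -s   # detailed status for every segment instance
gpstate -m   # mirror segment status
gpstate -f   # standby master status and sync state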

3.11 Set the gpadmin Password for Remote Access

psql postgres gpadmin
alter user gpadmin encrypted password 'gpadmin';
\q

3.12 Test Queries

psql -h gpmaster61 -p 5432 -d postgres -U gpadmin -c 'select dfhostname, dfspace, dfdevice from gp_toolkit.gp_disk_free order by dfhostname;'
psql -h gpmaster61 -p 5432 -d postgres -U gpadmin -c '\l+'

Part 4: Deploy GreenPlum Command Center (greenplum-cc-web) 4.3

4.1 Create the gpperfmon Database as gpadmin (Default User: gpmon)

gpperfmon_install --enable --password gpmon --port 5432

20180801:17:04:00:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-createdb gpperfmon >& /dev/null
20180801:17:04:54:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-PGPORT=5432 psql -f /greenplum/soft/greenplum-db/./lib/gpperfmon/gpperfmon.sql gpperfmon >& /dev/null
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-PGPORT=5432 psql template1 -c "DROP ROLE IF EXISTS gpmon"  >& /dev/null
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-PGPORT=5432 psql template1 -c "CREATE ROLE gpmon WITH SUPERUSER CREATEDB LOGIN ENCRYPTED PASSWORD 'gpmon'"  >& /dev/null
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-echo "local    gpperfmon         gpmon         md5" >> /greenplum/data/gpmaster/gpseg-1/pg_hba.conf
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-echo "host     all         gpmon         127.0.0.1/28    md5" >> /greenplum/data/gpmaster/gpseg-1/pg_hba.conf
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-echo "host     all         gpmon         ::1/128    md5" >> /greenplum/data/gpmaster/gpseg-1/pg_hba.conf
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-touch /home/gpadmin/.pgpass >& /dev/null
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-mv -f /home/gpadmin/.pgpass /home/gpadmin/.pgpass.1533114240 >& /dev/null
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-echo "*:5432:gpperfmon:gpmon:gpmon" >> /home/gpadmin/.pgpass
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-cat /home/gpadmin/.pgpass.1533114240 >> /home/gpadmin/.pgpass
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-chmod 0600 /home/gpadmin/.pgpass >& /dev/null
20180801:17:04:59:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gp_enable_gpperfmon -v on >& /dev/null
20180801:17:05:02:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gpperfmon_port -v 8888 >& /dev/null
20180801:17:05:05:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gp_external_enable_exec -v on --masteronly >& /dev/null
20180801:17:05:06:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-PGPORT=5432 gpconfig -c gpperfmon_log_alert_level -v warning >& /dev/null
20180801:17:05:09:039490 gpperfmon_install:gpmaster61:gpadmin-[INFO]:-gpperfmon will be enabled after a full restart of GPDB

4.2 Change the gpmon Password

psql -d gpperfmon -c "alter user gpmon encrypted password 'gpmon';"

4.3 Grant Monitoring Access

echo "host     all         gpmon         0.0.0.0/0    md5" >> $MASTER_DATA_DIRECTORY/pg_hba.conf
gpstop -afr

# verify that data is being written into the gpperfmon database
psql -d gpperfmon -c 'select * from system_now'

# copy the master configuration files to the standby (only needed if the gpstandby standby master was installed)
gpscp -h gpstandby65 -v $MASTER_DATA_DIRECTORY/pg_hba.conf =:$MASTER_DATA_DIRECTORY/
gpscp -h gpstandby65 -v ~/.pgpass =:~/
gpstop -afr

4.5 Install greenplum-cc-web

./gpccinstall-4.3.0

Do you agree to the Pivotal Greenplum Command Center End User License Agreement? Yy/Nn (Default=Y)
y

Where would you like to install Greenplum Command Center? (Default=/usr/local)
/greenplum/soft/

Path not exist, create it? Yy/Nn (Default=Y)
y

What would you like to name this installation of Greenplum Command Center? (Default=gpcc)
<Enter>

What port would you like gpcc webserver to use? (Default=28080)
<Enter>

Would you like to enable kerberos? Yy/Nn (Default=N)
<Enter>

Would you like enable SSL? Yy/Nn (Default=N)
<Enter>

Installation in progress...
2018/08/01 17:54:49 
Successfully installed Greenplum Command Center.

We recommend ssh to standby master before starting GPCC webserver

To start the GPCC webserver on the current host, run gpcc start

To manage Command Center, use the gpcc utility.
Usage:
  gpcc [OPTIONS] <command>

Application Options:
  -v, --version   Show Greenplum Command Center version
      --settings  Print the current configuration settings

Help Options:
  -h, --help      Show this help message

Available commands:
  help        Print list of commands
  krbdisable  Disables kerberos authentication
  krbenable   Enables kerberos authentication
  start       Starts Greenplum Command Center webserver and metrics collection agents [-W]  option to force password prompt for GPDB user gpmon [optional]
  status      Print agent status  with  [-W]  option to force password prompt for GPDB user gpmon [optional]
  stop        Stops Greenplum Command Center webserver and metrics collection agents [-W]  option to force password prompt for GPDB user gpmon [optional]

4.6 Add Environment Variables

ln -s /greenplum/soft/greenplum-cc-web-4.3.0 /greenplum/soft/greenplum-cc-web
source /greenplum/soft/greenplum-cc-web/gpcc_path.sh
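
Sourcing gpcc_path.sh only affects the current shell. To make the GPCC environment persistent for gpadmin, it can be appended to .bashrc the same way greenplum_path.sh was in 3.1 (a small optional addition, not part of the original steps):

# persist the GPCC environment for future gpadmin logins
echo 'source /greenplum/soft/greenplum-cc-web/gpcc_path.sh' >> /home/gpadmin/.bashrc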

4.7 Start the GPCC Web Service

gpcc start

4.8 Query the Collected Data

# check the database timezone
psql -d gpperfmon -c 'show timezone'

# check that monitoring data is being collected
psql -d gpperfmon -c 'select * from system_now'

4.9 Access the Web UI

Open http://192.168.61.61:28080 in a browser to access the web monitoring console.

Part 5: Cluster Performance Checks

5.1 Verify Network Performance

gpcheckperf -f all_nodes -r N -d /tmp/

5.2 Verify Disk I/O and Memory Bandwidth

gpcheckperf -f all_nodes -r ds -d /tmp

5.3 Verify Memory Bandwidth

gpcheckperf -f all_nodes -r s -d /tmp
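
These runs write their test files under /tmp. To measure the disks the segments will actually use, -d can point at the XFS data mount and -f can be limited to the segment hosts; an illustrative variation (not part of the original procedure):

# disk I/O and memory bandwidth test against the real data filesystem on the segment hosts
gpcheckperf -f seg_nodes -r ds -d /greenplum/data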
