Clusters and Storage
LVS-NAT cluster. Environment: RHEL 6
Client
192.168.2.253
Director
eth0: 192.168.4.50
eth1: 192.168.2.50
Web servers
192.168.4.51
192.168.4.52
# service iptables stop     // stop the firewall
# chkconfig iptables off    // keep it off at boot
# setenforce 0
# yum -y install httpd
# echo "192.168.4.51" > /var/www/html/test.html
# service httpd start
# chkconfig httpd on
# yum -y install elinks
# elinks --dump http://localhost/test.html
Host 52 is set up the same way as 51, but with different page content so the two can be told apart during testing.
++++++++++++++++++++++++++++++++++++++++++++++++
Web servers 51/52
1 Set the default gateway:
# route -n
# route add default gw 192.168.4.50    // temporary change
# route -n
# vim /etc/sysconfig/network-scripts/ifcfg-eth0    // persistent gateway setting
GATEWAY=192.168.4.50
# service network restart    // apply the new gateway
The client 192.168.2.253 gets its gateway set the same way.
Configure the director, host 50:
# sed -i '7s/0/1/' /etc/sysctl.conf    // enable kernel IP forwarding
# sed -n '7p' /etc/sysctl.conf
# sysctl -p    // apply the change
Install the ipvsadm package.
Add the virtual service:
# ipvsadm -L
# ipvsadm -A -t 192.168.2.50:80 -s rr
Add the real servers:
# ipvsadm -a -t 192.168.2.50:80 -r 192.168.4.51:80 -m
# ipvsadm -a -t 192.168.2.50:80 -r 192.168.4.52:80 -m
Save the rules and enable the service at boot:
# service ipvsadm save
# chkconfig ipvsadm on
# ipvsadm -Ln --stats
++++++++++++++++++++++++++++++++++++++++++++++++++
Client access:
# elinks --dump http://192.168.2.50/test.html    // repeat the request: responses alternate between 51's and 52's test pages
# ipvsadm -Ln --stats
Rule management:
ipvsadm -C    // clear all rules; run "service ipvsadm save" afterwards so the deletion survives a restart
ipvsadm -d -t 192.168.2.50:80 -r 192.168.4.51:80    // remove real server 51
ipvsadm -e -t 192.168.2.50:80 -r 192.168.4.51:80 -w 3 -m    // change 51's weight to 3
ipvsadm -E -t 192.168.2.50:80 -s wrr    // switch the algorithm to wrr (weighted round robin)
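The `-s rr` scheduler hands each new connection to the next real server in turn. A minimal shell sketch of that rotation (a hypothetical helper for illustration, not part of ipvsadm):

```shell
# rr_pick: print the real server chosen for the i-th connection (0-based)
# under plain round robin.  Usage: rr_pick <i> <server>...
rr_pick() {
    i=$1; shift
    n=$#
    idx=$(( i % n + 1 ))      # rotate through the server list
    eval echo "\${$idx}"
}

rr_pick 0 192.168.4.51 192.168.4.52   # first connection  -> 192.168.4.51
rr_pick 1 192.168.4.51 192.168.4.52   # second connection -> 192.168.4.52
rr_pick 2 192.168.4.51 192.168.4.52   # third wraps back  -> 192.168.4.51
```

With `wrr`, the same rotation would be biased by the `-w` weight, so a server with weight 3 receives three connections per cycle instead of one.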
++++++++++++++++++++++++++++++++++++++++++++++++++
##################################################
LVS-DR cluster
Environment:
Client        192.168.4.253
Director      eth0: 192.168.4.50
              eth1: 192.168.2.50
Web servers   192.168.4.51
              192.168.4.52
# service iptables stop     // stop the firewall
# chkconfig iptables off    // keep it off at boot
# setenforce 0
# yum -y install httpd
# echo "192.168.4.51" > /var/www/html/test.html
# service httpd start
# chkconfig httpd on
# yum -y install elinks
# elinks --dump http://localhost/test.html
Host 52 is set up the same way as 51, with different page content for later testing.
+++++++++++++++++++++++++++++++++++++++++++++++
Set up the VIP on web51 and web52:
ifconfig lo
ifconfig lo:1 192.168.4.252/32
ifconfig lo
cd /proc/sys/net/ipv4/conf
echo 1 > lo/arp_ignore    // only answer ARP requests arriving on the interface that owns the address
echo 2 > lo/arp_announce    // announce with the most appropriate local address
echo 1 > all/arp_ignore
echo 2 > all/arp_announce
vim /etc/rc.local    // add the commands above here to make them permanent
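Collected into /etc/rc.local, the persistent version of the steps above might look like this (a sketch; the interface name and VIP are taken from this lab's values):

```shell
# /etc/rc.local additions on web51/web52 (LVS-DR real servers)

# Bind the VIP on a loopback alias so the host accepts traffic addressed to it
ifconfig lo:1 192.168.4.252/32

# Keep the real servers from answering ARP for the VIP, so only the
# director's MAC is ever associated with 192.168.4.252 on the LAN
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
```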
+++++++++++++++++++++++++++++++++++++++++++++
Director, host50:
# sed -i '7s/0/1/' /etc/sysctl.conf    // enable kernel IP forwarding
sed -n '7p' /etc/sysctl.conf
Install the ipvsadm package.
ipvsadm -C
ipvsadm -Ln
service ipvsadm save
ipvsadm -A -t 192.168.4.252:80 -s rr
ipvsadm -a -t 192.168.4.252:80 -r 192.168.4.51:80 -g
ipvsadm -a -t 192.168.4.252:80 -r 192.168.4.52:80 -g
service ipvsadm save
+++++++++++++++++++++++++++++++++++++++++++++++
Client, 192.168.4.253:
Bring eth1 down and configure 192.168.4.253 on eth0.
elinks --dump http://192.168.4.252/test.html
elinks --dump http://192.168.4.252/test.html
Unbinding the VIP from a real server:
Method 1: ifdown lo; ifup lo; then ifconfig lo:1 to confirm the alias is gone
Method 2: service network restart
+++++++++++++++++++++++++++++++++++++++++++++++++
##############################################################
haproxy (LB)
Director
1. Deploy the LAMP stack on hosts 53 and 54:
yum -y install mysql mysql-server httpd
yum -y install php php-mysql
vim /var/www/html/test.php
<?php
$x=mysql_connect("localhost","root","123456");
if($x){ echo "ok"; }else{ echo "err"; };
?>
service iptables stop
chkconfig iptables off
setenforce 0
service mysqld start
service httpd start
chkconfig httpd on
chkconfig mysqld on
Configure the director, host 50:
yum -y install haproxy
chkconfig haproxy on
vim /etc/haproxy/haproxy.cfg
stats uri /admin
listen websrv-rewrite 0.0.0.0:80
cookie SERVERID rewrite
balance roundrobin
server web1 192.168.4.53:80 cookie app1inst1 check inter 2000 rise 2 fall 5
server web2 192.168.4.54:80 cookie app1inst2 check inter 2000 rise 2 fall 5
service haproxy start
Client test:
Open http://192.168.4.50/admin in Firefox (the haproxy stats page).
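The `cookie SERVERID rewrite` line makes the first response pin a client to one backend; later requests carrying the cookie bypass the round robin. A toy shell model of that decision (hypothetical, not haproxy code):

```shell
# route_req <cookie> <rr_choice>: a request that already carries a SERVERID
# cookie goes back to that server; a request without one takes whatever
# server the round robin picked.
route_req() {
    cookie=$1
    rr_choice=$2
    if [ -n "$cookie" ]; then
        echo "$cookie"       # sticky: honour the cookie
    else
        echo "$rr_choice"    # first visit: balance roundrobin decides
    fi
}

route_req ""   web1    # new client       -> web1
route_req web2 web1    # returning client -> web2
```

This is why a browser hitting the listen block above keeps landing on the same LAMP host, while a cookie-less client like `elinks` alternates between 53 and 54.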
++++++++++++++++++++++++++++++++++++++++++++++++++
Backend grouping
A four-node web cluster:
html: 51, 52
php: 53, 54
On host50:
vim /etc/haproxy/haproxy.cfg
59 stats uri /admin
60 frontend weblb *:80
61 acl urlhtml path_end -i .html
62 acl urlphp path_end -i .php
63
64 use_backend htmlgrp if urlhtml
65 use_backend phpgrp if urlphp
66
67 default_backend htmlgrp
68 backend htmlgrp
69 balance roundrobin
70 server web51 192.168.4.51:80 check
71 server web52 192.168.4.52:80 check
72 backend phpgrp
73 balance roundrobin
74 server web53 192.168.4.53:80 check
75 server web54 192.168.4.54:80 check
service haproxy start
Open http://192.168.4.50/admin in Firefox.
Client tests:
elinks --dump http://192.168.4.50/
elinks --dump http://192.168.4.50/test.php
elinks --dump http://192.168.4.50/test.html
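The two `path_end` ACLs route purely on the URL suffix; the same decision, mirrored as a shell function for illustration (not haproxy internals):

```shell
# pick_backend <url_path>: mirror of the frontend's ACL logic --
# .php -> phpgrp, .html -> htmlgrp, anything else -> default_backend htmlgrp
pick_backend() {
    case "$1" in
        *.php)  echo phpgrp ;;
        *.html) echo htmlgrp ;;
        *)      echo htmlgrp ;;   # default_backend htmlgrp
    esac
}

pick_backend /test.php    # -> phpgrp
pick_backend /test.html   # -> htmlgrp
pick_backend /            # -> htmlgrp (default)
```

This matches the test above: `/` and `/test.html` land on 51/52, `/test.php` on 53/54.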
####################################################
keepalived (HA)
A high-availability cluster that survives any single node failure.
Master: web53
Backup: web54
VIP (virtual IP): 192.168.4.251
1 Install keepalived on the cluster hosts:
yum -y install keepalived
rpm -qc keepalived    // locate the package's main configuration file
2 Edit the main configuration file on each host:
vim /etc/keepalived/keepalived.conf
Delete everything after line 31.
192.168.4.53 (master):
15 vrrp_instance webha {
16 state MASTER
17 interface eth0
18 virtual_router_id 51
19 priority 150 //priority of this node
20 advert_int 1
21 authentication {
22 auth_type PASS //cluster authentication method
23 auth_pass 654321
24 }
25 virtual_ipaddress {
26 192.168.4.251 //virtual IP; make sure no host in the environment already uses this address
27 }
28 }
192.168.4.54 (backup):
15 vrrp_instance VI_1 {
16 state BACKUP
17 interface eth0
18 virtual_router_id 51
19 priority 100
20 advert_int 1
21 authentication {
22 auth_type PASS
23 auth_pass 654321
24 }
25 virtual_ipaddress {
26 192.168.4.251
27 }
28 }
3 Start the keepalived service.
Start the host with the higher priority first:
service keepalived start
4 Check whether the cluster hosts acquired the VIP:
ip addr show | grep 192.168.4.    // if everything is right, web53 shows an extra virtual IP while web54 only has its own address
5 Verify:
Open http://192.168.4.251/test.php in Firefox;
it shows web53's content.
Stop keepalived on web53 and reload the page;
it now shows web54's content.
After repairing web53 and restarting its keepalived, reload again; because of caching the page may still show the backup's content, so run "ip addr show | grep 192" to confirm that web53 has taken the VIP back.
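The failover behaviour follows from VRRP's election rule: among routers that are still advertising, the highest priority holds the VIP. A small illustrative helper (not keepalived itself):

```shell
# elect <name:priority>...: print the name with the highest priority,
# i.e. the node that should hold the VIP among the routers still alive.
elect() {
    printf '%s\n' "$@" | sort -t: -k2,2 -rn | head -1 | cut -d: -f1
}

elect web53:150 web54:100   # both alive -> web53
elect web54:100             # web53 down -> web54
```

Restarting web53 (priority 150) wins the election again, which is exactly the VIP "preemption" observed in step 5.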
################################################################
keepalived + LVS/DR
1 Configure the real servers, 51/52:
# echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
# echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
# ifconfig lo:1 192.168.4.252/32
Host 52 is configured the same as 51.
2 Configure the directors, 50 (master) and 55 (backup):
yum -y install ipvsadm
yum -y install keepalived
vim /etc/keepalived/keepalived.conf
Delete everything after line 55.
15 vrrp_instance VI_1 {
16 state MASTER
17 interface eth0
18 virtual_router_id 51
19 priority 150
20 advert_int 1
21 authentication {
22 auth_type PASS
23 auth_pass 123456
24 }
25 virtual_ipaddress {
26 192.168.4.252
27 }
28 }
29
30 virtual_server 192.168.4.252 80 {
31 delay_loop 6
32 lb_algo rr
33 lb_kind DR
34 nat_mask 255.255.255.0
35 # persistence_timeout 50 //commented out so requests round-robin; enable it for session persistence
36 protocol TCP
37 connect_timeout 3
38 nb_get_retry 3
39 delay_before_retry 3
40
41 real_server 192.168.4.51 80 {
42 weight 1
43 }
44 real_server 192.168.4.52 80 {
45 weight 1
46 }
47 }
Host 55 follows the same steps as the backup director, changing only:
16 state BACKUP
19 priority 100
3 Start the service:
service keepalived start
ip addr show | grep 252    // check that the 252 VIP is present
ipvsadm -Ln
4 Client test: http://192.168.4.252/test.html
########################################################
Building a storage server
Storage media: memory, disk
What is stored: data
Where data lives: cloud storage, local, shared storage, distributed storage
Storage technologies: DAS, NAS, SAN (FC-SAN/IP-SAN), SCSI, NFS/CIFS, iSCSI
51 52    LB
53 54    HA
50 55    [LVS/DR directors, HA]
56       storage server (with three extra 3 GB disks)
Host 56 shares storage space (/dev/vdb, /dev/vdc) over the SAN to the front-end application servers 53 and 54.
1. Shared storage, host 56
1.1 Install the package:
yum -y install scsi-target-utils
rpm -qc scsi-target-utils
service tgtd status
1.2 Edit the configuration file:
vim /etc/tgt/targets.conf
62 <target iqn.2018-01.cn.tedu:host56.diskb>
63 backing-store /dev/vdb
64 write-cache off //disable the write cache on the share
65 # initiator-address 192.168.4.53 //client IPs allowed to use the share
66 # initiator-address 192.168.4.54
67 vendor_id tarena //vendor string
68 product_id disktwo //model string
69 </target>
72 <target iqn.2018-01.cn.tedu:host56.diskc>
73 backing-store /dev/vdc
74 write-cache off
75 # initiator-address 192.168.4.53
76 # initiator-address 192.168.4.54
77 vendor_id tarena
78 product_id disktwo2
79 </target>
1.3 Start the service:
service tgtd start
chkconfig tgtd on
tgt-admin --show    // show the shared targets
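The target names follow the iqn convention: `iqn.<year-month>.<reversed domain>:<label>`. A quick format check of the names used here (an illustrative regex, not part of tgtd):

```shell
# is_iqn <name>: succeed when the name matches iqn.YYYY-MM.reversed.domain:label
is_iqn() {
    echo "$1" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.-]+:[A-Za-z0-9._-]+$'
}

is_iqn iqn.2018-01.cn.tedu:host56.diskb && echo valid    # -> valid
is_iqn host56.diskb || echo invalid                      # -> invalid
```

Initiators later log in by this exact name, so keeping the naming consistent (date, reversed domain, host, disk label) avoids typo-induced login failures.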
2. Configure the front-end application servers (53, 54)
2.1 Install the package:
yum -y install iscsi-initiator-utils
rpm -qc iscsi-initiator-utils
2.2 Discover the targets:
iscsiadm --mode discoverydb --type sendtargets --portal 192.168.4.56 --discover
2.3 Log in to the targets:
iscsiadm --mode node --targetname iqn.2018-01.cn.tedu:host56.diskb --portal 192.168.4.56:3260 -l
iscsiadm --mode node --targetname iqn.2018-01.cn.tedu:host56.diskc --portal 192.168.4.56:3260 -l    // -l (--login) logs in, -u logs out
ls /dev/sd*
/dev/sda /dev/sdb    // two new disks appear
2.4 Check the login results:
fdisk -l
ls /dev/vd*
ls /dev/sd*
The login order of the shared disks determines their local device names: if diskc is logged in first, diskc becomes /dev/sda.
Configure udev rules (53, 54)
1 Query the devices' hardware attributes:
# ls /etc/udev/rules.d/
# udevadm info --query=path --name=/dev/sda
/devices/platform/host2/session1/target2:0:0/2:0:0:1/block/sda
# udevadm info --path=/devices/platform/host2/session1/target2:0:0/2:0:0:1/block/sda --attribute-walk
# udevadm info --query=path --name=/dev/sdb
/devices/platform/host3/session2/target3:0:0/3:0:0:1/block/sdb
# udevadm info --path=/devices/platform/host3/session2/target3:0:0/3:0:0:1/block/sdb --attribute-walk
2 Write a udev rules file matching those attributes:
vim /etc/udev/rules.d/70-iscsidisk.rules
SUBSYSTEM=="block", ATTR{size}=="6291456", ATTRS{vendor}=="tarena ", ATTRS{model}=="disktwo ", SYMLINK+="iscsi/sdb"
SUBSYSTEM=="block", ATTR{size}=="6291456", ATTRS{vendor}=="tarena ", ATTRS{model}=="disktwo2 ", SYMLINK+="iscsi/sdc"
3 Reload udev:
start_udev
4 Check the resulting names:
# ls -l /dev/iscsi/
total 0
lrwxrwxrwx. 1 root root 6 Jan 13 06:08 sdb -> ../sda
lrwxrwxrwx. 1 root root 6 Jan 13 06:08 sdc -> ../sdb
# ls /dev/iscsi/
sdb sdc
Using the shared disks to store data:
fdisk /dev/iscsi/sdb
Partitions created through /dev/iscsi/sdb are the same partitions as those on /dev/sda, since both names point at one disk.
Format:
mkfs.ext4 /dev/sda1    // mkfs cannot resolve /dev/iscsi/sdb1, so format /dev/sda1; the udev step above shows both paths name the same disk
Only one of 53/54 needs to run the format; afterwards the other host sees the formatted partition as well (it must log out of and back into the target to see the new partition).
Mount:
blkid /dev/sda1    // get the UUID
vim /etc/fstab
UUID=7a8cc741-9500-401d-8c2b-add3e08f2e74 /var/www/html ext4 defaults 0 0
mount -a
mount | grep var
###############################################################
Multipath
The storage server has two network addresses:
eth0: 192.168.4.56/24
eth1: 192.168.2.56/24
Application server:
eth0: 192.168.4.53/24
eth1: 192.168.2.53/24
Share one more disk, diskd, on the storage server:
# vim /etc/tgt/targets.conf
<target iqn.2018-01.cn.tedu:host56.diskd>
backing-store /dev/vdd
write-cache off
</target>
# /etc/init.d/tgtd stop
# /etc/init.d/tgtd start
# tgt-admin --show
On the application server:
# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.2.56 --discover
192.168.4.56:3260,1 iqn.2018-01.cn.tedu:host56.diskd
# iscsiadm --mode discoverydb --type sendtargets --portal 192.168.4.56 --discover
192.168.2.56:3260,1 iqn.2018-01.cn.tedu:host56.diskd
Both portals lead to the same storage device, which is what multipathing builds on.
After logging in through both, the one device shows up as two paths:
# iscsiadm --mode node --targetname iqn.2018-01.cn.tedu:host56.diskd --portal 192.168.2.56:3260 -l
# iscsiadm --mode node --targetname iqn.2018-01.cn.tedu:host56.diskd --portal 192.168.4.56:3260 -l
# ls /dev/sd*
/dev/sda /dev/sdb
# scsi_id --whitelisted --device=/dev/sda
1IET 00030001
# scsi_id --whitelisted --device=/dev/sdb
1IET 00030001
The two distinct paths report the same WWID.
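That shared WWID is exactly what device-mapper-multipath keys on: every path reporting the same id is folded into one map. The grouping step, sketched with awk (sample values only; the blank inside the real WWID is shown as "_" so the fields split cleanly):

```shell
# group_by_wwid: read "device wwid" lines and print each wwid with its
# path count; any wwid seen on more than one device is a multipath candidate.
group_by_wwid() {
    awk '{ paths[$2]++ } END { for (w in paths) print w, paths[w] }'
}

printf '%s\n' 'sda 1IET_00030001' 'sdb 1IET_00030001' | group_by_wwid
# -> 1IET_00030001 2
```

Here sda and sdb collapse into a single map with two paths, which is what the `multipath -ll` output below shows as one `mpatha` device.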
# yum -y install device-mapper-multipath
# mpathconf --user_friendly_names n    // generates the configuration file
# ls /etc/multipath.conf
# vim /etc/multipath.conf
25 defaults {
26 user_friendly_names no
27 getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
28 }
101 multipaths {
102
103 multipath {
104 wwid "1IET 00030001"
105 alias mpatha
106 }
107 }
# /etc/init.d/multipathd start
# chkconfig multipathd on
# ls /dev/mapper/mpatha    // the alias gives the fixed WWID a stable path
/dev/mapper/mpatha
Partition -> format -> mount -> store:
# mkfs.ext4 /dev/mapper/mpatha
# blkid /dev/mapper/mpatha
/dev/mapper/mpatha: UUID="8846c46b-2ac8-4207-858b-69dec3439ae5" TYPE="ext4"
# vim /etc/fstab
# multipath -ll    // show the multipath topology
mpatha (1IET 00030001) dm-2 IET,VIRTUAL-DISK
size=3.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 4:0:0:1 sda 8:0  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:1 sdb 8:16 active ready running
# multipath -r    // reload the multipath maps
Failover can be tested by taking eth0 or eth1 down.
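When one NIC goes down, its path drops to faulty and I/O continues on the remaining good path. The selection rule, reduced to a toy filter (illustrative only, not multipathd logic):

```shell
# first_usable: read "device state" lines and print the first device whose
# state is "ready" -- the path that I/O keeps flowing over after a failure.
first_usable() {
    awk '$2 == "ready" { print $1; exit }'
}

printf '%s\n' 'sda faulty' 'sdb ready' | first_usable   # -> sdb
```

So after `ifdown eth0`, `multipath -ll` should show the sda path as faulty while reads and writes to /dev/mapper/mpatha carry on through sdb.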
#############################################################
  lvs50    lvs55     directors (VIP 192.168.4.252)
    |        |
    +--------+
    |        |
  web51    web52
     \      /
   storage57  /dev/sde (3G)
NAS -----> (NFS/Samba)
nfs/cifs
This runs in the keepalived+lvs/DR environment built above: real servers 51/52 carry the VIP 192.168.4.252 on lo:1 with the arp_ignore/arp_announce settings, and directors 50 (master) and 55 (backup) run keepalived with the same virtual_server configuration as in the previous keepalived+lvs/DR section.
Client test: http://192.168.4.252/test.html
On host 50, watch the counters once per second: watch -n 1 ipvsadm -Ln --stats
NFS storage
Host 57:
1 Partition and format:
fdisk /dev/vde
mkfs.ext4 /dev/vde1
blkid /dev/vde1
/dev/vde1: UUID="b06de208-c031-4ad0-be55-9273938ac2b4" TYPE="ext4"
Mount at boot:
mkdir /sharespace
vim /etc/fstab
mount /dev/vde1 /sharespace
2 Share the mounted directory over NFS.
Install the packages:
nfs-utils
rpcbind
/etc/init.d/rpcbind start
chkconfig rpcbind on
vim /etc/exports
/sharespace *(rw)    // "*" means any client; "(rw)" sets the access mode
ls -ld /sharespace/
chmod o+x /sharespace/
/etc/init.d/nfs start
chkconfig nfs on
showmount -e localhost    // list this host's exports
3 Client access (web51, web52):
yum -y install nfs-utils
showmount -e 192.168.4.57    // list host 57's exports
mount -t nfs 192.168.4.57:/sharespace /var/www/html
########################################################
Distributed storage
1. Theory
1.1 What is a distributed file system
A file system whose managed physical storage is not necessarily attached to the local node, but is reached over a computer network.
Distributed file systems are designed around the client/server model.
1.2 Characteristics
They extend a file system fixed at one location to any number of locations/file systems.
Many nodes form one file system network; nodes can sit at different locations and communicate and transfer data over the network.
1.3 What makes a distributed file system good or bad
How data is stored
How fast data can be read
How data is protected (redundancy, backups, mirroring, and so on)
1.4 Server roles
Tracker (master) server
-- The master manages the data servers: it collects their status, knows which of them are alive, and assigns them work.
-- It holds the full file directory tree; any file lookup must go through it first.
Data servers
-- The servers that hold the data
-- Designed for redundancy
-- Their main job is to report their status to the master periodically, then wait for and execute commands, storing data quickly and safely.
1.5 Data distribution
Block-based storage
-- Split file data into blocks
-- Store the blocks on the data servers
Whole-file storage
-- Each data server stores complete files
-- Several data servers store the same file, giving redundancy and load balancing
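Block-based distribution boils down to two small calculations: how many blocks a file needs, and which data server owns each block. A hedged sketch (round-robin placement assumed purely for illustration):

```shell
# chunks <file_size> <block_size>: number of blocks (ceiling division)
chunks() {
    echo $(( ($1 + $2 - 1) / $2 ))
}

# owner <chunk_index> <server>...: round-robin assignment of one block
owner() {
    i=$1; shift
    n=$#
    idx=$(( i % n + 1 ))
    eval echo "\${$idx}"
}

chunks 10 4                 # 10 bytes in 4-byte blocks -> 3 blocks
owner 0 server71 server72   # block 0 -> server71
owner 1 server71 server72   # block 1 -> server72
```

Real systems replace the round robin with placement that accounts for replica count and server load, but the chunk/owner split is the core idea.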
++++++++++++++++++++++++++++++++++++++++++++++
Setting up FastDFS
1 Configure the tracker server, host 70:
yum -y install gcc gcc-c++
yum -y install libevent
libevent-devel-1.4.13-4.el6.x86_64.rpm
libevent-doc-1.4.13-4.el6.noarch.rpm
libevent-headers-1.4.13-4.el6.noarch.rpm
tar -zxf FastDFS_v4.06.tar.gz
# cd FastDFS
# ls
# vim INSTALL
# ./make.sh
# ./make.sh install
# mkdir -pv /data/fastdfs
# vim /etc/fdfs/tracker.conf
22 base_path=/data/fastdfs
36 store_lookup=0 //0 selects round-robin group lookup
182 use_storage_id = true //identify storage servers by server id rather than IP
# ls conf/
anti-steal.jpg http.conf storage.conf tracker.conf
client.conf mime.types storage_ids.conf
# cp conf/storage_ids.conf /etc/fdfs/
# ls /etc/fdfs/
client.conf mime.types storage_ids.conf
http.conf storage.conf tracker.conf
# vim /etc/fdfs/storage_ids.conf
100001 group1 192.168.4.71
100002 group1 192.168.4.72
+++++++++++++++++++++++++++++++++++++++++++++++
2 Configure the storage nodes, 71 and 72.
Partition and format a disk on 71/72, then:
# mkdir -p /data/fastdfs
# mount /dev/vdb1 /data/fastdfs/
# df -h
# blkid /dev/vdb1
/dev/vdb1: UUID="b73856c6-93ca-4621-839d-ee35eb7b3c14" TYPE="ext4"
# vim /etc/fstab
UUID="b73856c6-93ca-4621-839d-ee35eb7b3c14" /data/fastdfs/ ext4 defaults 0 0
# yum -y install gcc gcc-c++ libevent
libevent-devel-1.4.13-4.el6.x86_64.rpm
libevent-doc-1.4.13-4.el6.noarch.rpm
libevent-headers-1.4.13-4.el6.noarch.rpm
tar -zxf FastDFS_v4.06.tar.gz
# cd FastDFS
# ./make.sh && ./make.sh install
# vim /etc/fdfs/storage.conf
37 base_path=/data/fastdfs
100 store_path0=/data/fastdfs
109 tracker_server=192.168.4.70:22122
++++++++++++++++++++++++++++++++++++++++++++++
3 Start the services.
On 70:
# cp init.d/fdfs_trackerd /etc/init.d/
# chmod +x /etc/init.d/fdfs_trackerd
# chkconfig --add fdfs_trackerd
# chkconfig fdfs_trackerd on
# service fdfs_trackerd status
fdfs_trackerd is stopped
# service fdfs_trackerd start
Starting FastDFS tracker server:
# netstat -utnapl |grep :22122
tcp 0 0 0.0.0.0:22122 0.0.0.0:* LISTEN 2453/fdfs_trackerd
On 71/72:
# ls
client COPYING-3_0.txt INSTALL README storage
common HISTORY make.sh restart.sh test
conf init.d php_client stop.sh tracker
# cp init.d/fdfs_storaged /etc/init.d/
# chmod +x /etc/init.d/fdfs_storaged
# chkconfig --add fdfs_storaged
# chkconfig fdfs_storaged on
# service fdfs_storaged start
# netstat -untlap |grep :23000
tcp 0 0 0.0.0.0:23000 0.0.0.0:* LISTEN 2479/fdfs_storaged
# ls /data/fastdfs/data/
+++++++++++++++++++++++++++++++++++++++++++
4 Verify the setup from a client, host 57.
# mkdir bin
# mkdir /etc/fdfs/
Copy over the access commands (upload and download tools) from host 70:
# ls /usr/local/bin/
# scp /usr/local/bin/fdfs_test 192.168.4.57:/root/bin/
# scp /usr/local/bin/fdfs_upload_file 192.168.4.57:/root/bin/
# scp /usr/local/bin/fdfs_download_file 192.168.4.57:/root/bin/
# scp /usr/local/bin/fdfs_delete_file 192.168.4.57:/root/bin/
# scp /etc/fdfs/client.conf 192.168.4.57:/etc/fdfs
Edit the client configuration that points at the tracker node (on 57):
# vim /etc/fdfs/client.conf
base_path=/data/fastdfs
tracker_server=192.168.4.70:22122
# mkdir -p /data/fastdfs
Upload a file:
# head -3 /etc/passwd > user.txt
# fdfs_test /etc/fdfs/client.conf upload user.txt
group_name=group1, remote_filename=M00/00/00/wKgER1pdKxGAEmwmAAAAaejzMko741.txt
source ip address: 192.168.4.71
The uploaded file appears on 71 and 72 under /data/fastdfs/data/00/00/.
# fdfs_upload_file /etc/fdfs/client.conf tedu.jpg
group1/M00/00/00/wKgESFpdLkyAczsGAACwEV-ILDc772.jpg
Uploading with fdfs_upload_file instead of fdfs_test skips the verbose access details.
Download a file (downloading renames it):
fdfs_download_file /etc/fdfs/client.conf group1/M00/00/00/wKgESFpdLkyAczsGAACwEV-ILDc772.jpg ttt.jpg
Delete a file:
# fdfs_delete_file /etc/fdfs/client.conf group1/M00/00/00/wKgESFpdLkyAczsGAACwEV-ILDc772.jpg
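A FastDFS file id such as `group1/M00/...` is `<group name>/<remote filename>`: the group picks the storage group, and the remote filename encodes the store path (M00) plus two directory levels. Splitting it in shell (hypothetical helpers, just string handling):

```shell
# fid_group / fid_remote: split a file id "<group>/<remote_filename>"
fid_group()  { echo "${1%%/*}"; }   # everything before the first "/"
fid_remote() { echo "${1#*/}"; }    # everything after the first "/"

fid_group  group1/M00/00/00/wKgESFpdLkyAczsGAACwEV-ILDc772.jpg
# -> group1
fid_remote group1/M00/00/00/wKgESFpdLkyAczsGAACwEV-ILDc772.jpg
# -> M00/00/00/wKgESFpdLkyAczsGAACwEV-ILDc772.jpg
```

This is the same split the download and delete commands above rely on, and it is why `url_have_group_name = true` is needed in the nginx module configuration of the next step: the URL keeps the leading group name.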
+++++++++++++++++++++++++++++++++++++++++
5 Configure web access (71, 72).
Install nginx:
# tar -zxf fastdfs-nginx-module*.tar.gz
# yum -y install zlib-devel pcre-devel
# tar -zxf nginx-1.7.10.tar.gz
# useradd nginx
# cd nginx-1.7.10
# ./configure --prefix=/usr/local/nginx --user=nginx --group=nginx --add-module=../fastdfs-nginx-module/src/
# make && make install
Edit the module configuration:
# cp fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs/
# vim /etc/fdfs/mod_fastdfs.conf
40 tracker_server=192.168.4.70:22122
53 url_have_group_name = true
62 store_path0=/data/fastdfs
Edit the nginx configuration:
# vim /usr/local/nginx/conf/nginx.conf
43 # location / {
44 # root html;
45 # index index.html index.htm;
46 # }
47 location / {
48 ngx_fastdfs_module;
49 }
Start the services:
# /etc/init.d/fdfs_storaged stop
# /etc/init.d/fdfs_storaged start
# netstat -untlpa |grep :23000
# /usr/local/nginx/sbin/nginx
# netstat -antulp |grep nginx
+++++++++++++++++++++++++++++++++++++++++++
Client access:
# fdfs_upload_file /etc/fdfs/client.conf tedu.jpg
group1/M00/00/00/wKgER1pdQv6AAC8VAACwEV-ILDc211.jpg
# firefox http://192.168.4.72/group1/M00/00/00/wKgER1pdQv6AAC8VAACwEV-ILDc211.jpg