Operating Procedure for Adding a Tablespace in an Oracle Database Cluster
1. First, check that the disk groups (DGs) have enough free space
SQL> select name,total_mb,free_mb,USABLE_FILE_MB from v$asm_diskgroup;
NAME                             TOTAL_MB    FREE_MB USABLE_FILE_MB
------------------------------ ---------- ---------- --------------
DATADG                            4198790     230531         230531
OCRDG                               15360      14434           4657
RECODG                             512078     497578         497578
REDODG                             204800      42117          42117
Notes:
The first numeric column (TOTAL_MB) is the total size of the disk group.
The second (FREE_MB) is the raw free space; because of the disk group's redundancy it can be roughly one, two or three times what is actually usable, so treat it as a nominal figure.
The third (USABLE_FILE_MB) is also free space, but it already accounts for redundancy and shows what can really be used.
So focus on the third column: that is the space actually available. A variant of this query that also shows the redundancy type is sketched below.
Important: run this check on both the primary and the disaster-recovery (DR) side, otherwise serious problems can follow.
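As a sketch (not part of the original procedure), the same check can include the redundancy type, which explains why FREE_MB may be a multiple of the truly usable figure. It assumes the grid user's environment points at the local +ASM instance:

# Sketch: DG space check including the redundancy type (EXTERN/NORMAL/HIGH).
# Assumes ORACLE_SID/ORACLE_HOME are set for the local +ASM instance.
sqlplus -S / as sysasm <<'EOF'
set lines 150 pages 100
col name for a12
col type for a8
select name, type, total_mb, free_mb, usable_file_mb,
       round(usable_file_mb / total_mb * 100, 1) as pct_usable
  from v$asm_diskgroup;
EOF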
2. Plan the tablespace addition
--First, decide how much to add based on the tablespace's current size.
Normally, if the tablespace's total size is within about 100 GB, adding 20 GB at a time is enough; for smaller tablespaces, adding 10 GB at a time also works.
If the tablespace is fairly large overall, add about 30 GB at a time.
Judge each case on its merits; the aim is to keep overall tablespace usage at around 80%, and size the new datafile accordingly (a usage query is sketched below).
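As a rough sketch (not part of the original procedure), per-tablespace usage can be checked from the database instance like this:

# Sketch: per-tablespace usage, to decide how much space to add.
# Run against the database instance (not the ASM instance).
sqlplus -S / as sysdba <<'EOF'
set lines 150 pages 100
col tablespace_name for a25
select d.tablespace_name,
       round(d.total_mb) total_mb,
       round(d.total_mb - nvl(f.free_mb, 0)) used_mb,
       round((d.total_mb - nvl(f.free_mb, 0)) / d.total_mb * 100, 1) pct_used
  from (select tablespace_name, sum(bytes)/1024/1024 total_mb
          from dba_data_files group by tablespace_name) d
  left join (select tablespace_name, sum(bytes)/1024/1024 free_mb
               from dba_free_space group by tablespace_name) f
    on d.tablespace_name = f.tablespace_name
 order by pct_used desc;
EOF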
Important: each time you add space, make sure the DG's usable free space is more than twice the total amount being added, on both the primary and the DR side!
If disks need to be added to the DG first, continue with the next step.
3. Add disks to the DG
--First, ask the storage administrator to provision the required LUNs to the designated hosts, making clear that they must be shared (visible to both nodes).
--Scan for the new disks (run on both nodes)
[[email protected] ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
Note that some machines have more than one fibre channel port, in which case the scan has to be issued once per host entry, for example:
[[email protected] scsi_host]# ls -a
. .. host0 host1 host10 host2 host3 host4 host5 host6 host7 host8 host9
With this many host entries, the command has to be run once for each, so it is worth scripting it in advance (see the loop sketched after these commands).
[[email protected] ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
[[email protected] ~]# echo "- - -" > /sys/class/scsi_host/host3/scan
Once finished, run the same scan on the other node.
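A minimal loop such as the following (a sketch, not from the original post) rescans every SCSI host so the command does not have to be typed once per port:

# Sketch: rescan all SCSI hosts on this node; run as root on both RAC nodes.
for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"
done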
--After the scan completes, identify the newly added disk
[[email protected] scsi_host]# for i in `cat /proc/partitions | awk '{print $4}' | grep sd`; do echo "### $i: `scsi_id --whitelist /dev/$i`"; done
### sda: 361866da04f1063001e9e8c2811e75cc8
### sda1: 361866da04f1063001e9e8c2811e75cc8
### sda2: 361866da04f1063001e9e8c2811e75cc8
### sdb: 3600a098038303742665d49316b78327a
### sdc: 3600a098038303742665d49316b78327a
### sde: 3600a098038303742665d49316b783279
### sdd: 3600a098038303742665d49316b783279
### sdf: 3600a098038303742665d49316b783278
### sdh: 3600a098038303742665d49316b783330
### sdg: 3600a098038303742665d49316b783278
### sdj: 3600a098038303742665d49316b783331
.......................
### sdbv: 3600a098038303742695d4933306e7a51
### sdbw: 3600a098038303742695d4933306e7a51
### sdbx: 3600a098038303742695d4933306e7a51
### sdby: 3600a098038303742695d4933306e7a51
### sdbz: 3600a098038303742695d4933306e7a51
### sdca: 3600a098038303742695d4933306e7a51
### sdcb: 3600a098038303742695d4933306e7a51
### sdcc: 3600a098038303742695d4933306e7a51
Looking through this output, the last WWID is the newly added disk.
--Edit the multipath configuration
[[email protected] scsi_host]# vi /etc/multipath.conf
.........................
multipath {
        wwid  3600a098038303742665d49316b783278
        alias ocrdisk1
}
multipath {
        wwid  3600a098038303742665d49316b783279
        alias ocrdisk2
}
multipath {
        wwid  3600a098038303742665d49316b783333
        alias data4
}
multipath {
        wwid  3600a098038303742695d4933306e7a51
        alias data5
}
The entry added this time is data5.
Note: make the same change on both nodes.
--Reconfigure multipathd
[[email protected] scsi_host]# multipathd -k
multipathd> reconfigure
ok
multipathd> quit
[[email protected] scsi_host]# multipath -l
data5 (3600a098038303742695d4933306e7a51) dm-11 NETAPP,LUN C-Mode
size=500G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| |- 3:0:2:9 sdcb 68:240 active undef running
| |- 3:0:3:9 sdcc 69:0 active undef running
| |- 1:0:2:9 sdbx 68:176 active undef running
| `- 1:0:3:9 sdby 68:192 active undef running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 3:0:1:9 sdca 68:224 active undef running
|- 1:0:0:9 sdbv 68:144 active undef running
|- 1:0:1:9 sdbw 68:160 active undef running
`- 3:0:0:9 sdbz 68:208 active undef running
data4 (3600a098038303742665d49316b783333) dm-8 NETAPP,LUN C-Mode
size=500G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| |- 3:0:0:6 sdn 8:208 active undef running
| |- 1:0:0:6 sdo 8:224 active undef running
| |- 3:0:1:6 sdaf 65:240 active undef running
| `- 1:0:1:6 sdag 66:0 active undef running
`-+- policy='round-robin 0' prio=0 status=enabled
|- 1:0:3:6 sdba 67:64 active undef running
|- 1:0:2:6 sdar 66:176 active undef running
|- 3:0:2:6 sdbj 67:208 active undef running
`- 3:0:3:6 sdbs 68:96 active undef running
The newly added data5 device is shown here in a normal state.
--Edit the udev rules
[[email protected] rules.d]# pwd
/etc/udev/rules.d
[[email protected] rules.d]# ls -a
. 60-pcmcia.rules 90-hal.rules 99-fuse.rules
.. 60-raw.rules 97-bluetooth-serial.rules
60-fprint-autosuspend.rules 70-persistent-cd.rules 98-kexec.rules
60-openct.rules 90-alsa.rules 99-asm-multipath.rules
The file used here is 99-asm-multipath.rules.
[[email protected] rules.d]# vi 99-asm-multipath.rules
.........................
ENV{DM_NAME}=="data5", OWNER:="grid", GROUP:="oinstall", MODE:="660", SYMLINK+="iscsi/oraasm-$env{DM_NAME}"
ENV{DM_NAME}=="data4", OWNER:="grid", GROUP:="oinstall", MODE:="660", SYMLINK+="iscsi/oraasm-$env{DM_NAME}"
Add the data5 line as shown above; these steps must be performed on both nodes.
Restart udev:
[[email protected] etc]# start_udev
Starting udev: [ OK ]
Run this on both nodes as well.
--Check the disk permissions
[[email protected] mapper]# pwd
/dev/mapper
[[email protected] mapper]# ls -lrt
total 0
crw-rw---- 1 root root 10, 236 Jan 11 17:34 control
lrwxrwxrwx 1 root root 8 Jan 11 17:34 reco1 -> ../dm-10
lrwxrwxrwx 1 root root 7 Jan 11 17:34 redo1 -> ../dm-9
lrwxrwxrwx 1 root root 7 Jan 11 17:34 ocrdisk3 -> ../dm-2
lrwxrwxrwx 1 root root 7 Jan 11 17:34 ocrdisk1 -> ../dm-4
lrwxrwxrwx 1 root root 7 Jan 11 17:34 data1 -> ../dm-5
lrwxrwxrwx 1 root root 7 Jan 11 17:34 VolGroup-lv_swap -> ../dm-1
lrwxrwxrwx 1 root root 7 Jan 11 17:34 VolGroup-lv_root -> ../dm-0
lrwxrwxrwx 1 root root 7 Jan 11 17:34 ocrdisk2 -> ../dm-3
lrwxrwxrwx 1 root root 7 Jan 11 17:34 data2 -> ../dm-6
lrwxrwxrwx 1 root root 7 Jan 11 17:35 data3 -> ../dm-7
lrwxrwxrwx 1 root root 8 Jan 11 17:35 data5 -> ../dm-11
lrwxrwxrwx 1 root root 7 Jan 11 17:35 data4 -> ../dm-8
data5 maps to dm-11; now check its permissions.
[[email protected] dev]# ls -lrt | grep dm
brw-rw---- 1 root disk 8, 192 Jan 11 17:34 sdm
crw-rw---- 1 root root 1, 12 Jan 11 17:34 oldmem
crw-rw---- 1 root root 10, 62 Jan 11 17:34 cpu_dma_latency
brw-rw---- 1 root disk 252, 1 Jan 11 17:34 dm-1
lrwxrwxrwx 1 root root 4 Jan 11 17:34 root -> dm-0
brw-rw---- 1 root disk 252, 0 Jan 11 17:34 dm-0
brw-rw---- 1 grid oinstall 252, 5 Jan 11 17:36 dm-5
brw-rw---- 1 grid oinstall 252, 11 Jan 11 17:36 dm-11
brw-rw---- 1 grid oinstall 252, 4 Jan 11 17:36 dm-4
brw-rw---- 1 grid oinstall 252, 3 Jan 11 17:36 dm-3
brw-rw---- 1 grid oinstall 252, 2 Jan 11 17:36 dm-2
brw-rw---- 1 grid oinstall 252, 10 Jan 11 17:36 dm-10
brw-rw---- 1 grid oinstall 252, 8 Jan 11 17:36 dm-8
brw-rw---- 1 grid oinstall 252, 7 Jan 11 17:36 dm-7
brw-rw---- 1 grid oinstall 252, 6 Jan 11 17:36 dm-6
brw-rw---- 1 grid oinstall 252, 9 Jan 11 17:36 dm-9
As shown, dm-11 is now owned by grid:oinstall.
Note: check this on both nodes and make sure the permissions are correct, otherwise adding the disk to the DG will fail (a quick check is sketched below).
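A quick way to confirm the ownership on each node is sketched below; data5 is the multipath alias used in this example:

# Sketch: resolve the multipath alias to its dm-* device and show its owner,
# group and mode. Expect grid:oinstall and mode 660 on both nodes.
dev=$(readlink -f /dev/mapper/data5)
stat -c '%n %U:%G %a' "$dev"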
--Extend the DG
SQL> select name,path from v$asm_disk;
NAME                           PATH
------------------------------ ------------------------------
DATADG_0000                    /dev/mapper/data3
DATADG_0011                    /dev/mapper/data2
REDODG_0000                    /dev/mapper/data1
SQL> alter diskgroup datadg add disk '/dev/mapper/data5' rebalance power 8;
Diskgroup altered.
Once the command completes, verify the result by checking the disk group space again:
SQL> select name,total_mb,free_mb,USABLE_FILE_MB from v$asm_diskgroup;
NAME                             TOTAL_MB    FREE_MB USABLE_FILE_MB
------------------------------ ---------- ---------- --------------
DATADG                            4198790     230530         230530
OCRDG                               15360      14434           4657
RECODG                             512078     507865         507865
REDODG                             204800      42117          42117
At this point, adding the disk to the DG is complete (the background rebalance can be monitored as sketched below).
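The ADD DISK statement returns immediately while ASM rebalances in the background. As a sketch (not part of the original procedure), the progress can be watched on the ASM instance; the rebalance has finished when the query returns no rows:

# Sketch: monitor the ASM rebalance started by the ADD DISK.
# Run on the +ASM instance; no rows means the rebalance is done.
sqlplus -S / as sysasm <<'EOF'
set lines 150
select group_number, operation, state, power, sofar, est_work, est_minutes
  from v$asm_operation;
EOF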
4. Add a datafile to the tablespace
Connect to the database that needs the new datafile:
[[email protected] ~]$ export ORACLE_SID=testdb1
[[email protected] ~]$ sqlplus "/as sysdba"
SQL*Plus: Release 11.2.0.4.0 Production on Wed Jan 11 17:55:28 2017
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> alter tablespace testdb_blob add datafile '+datadg' size 30720M;
Tablespace altered.
Be absolutely sure to use the "+" prefix (an ASM disk group) here; otherwise the datafile is created as a local file and one of the nodes will not be able to access it. A verification query is sketched below.
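To double-check where the new file actually went, a query like the following (a sketch, using the tablespace from this example) should show a +DATADG path rather than a local filesystem path:

# Sketch: confirm the new datafile lives in ASM (file_name starts with +DATADG).
sqlplus -S / as sysdba <<'EOF'
set lines 200 pages 100
col file_name for a70
select file_name, round(bytes/1024/1024) size_mb
  from dba_data_files
 where tablespace_name = 'TESTDB_BLOB';
EOF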