Installing Oracle 12c RAC on Red Hat 6.6

Posted by 运维·拖拉斯基


Software environment: VMware, Red Hat 6.6, Oracle 12c (linuxx64_12201_database.zip), 12c Grid (linuxx64_12201_grid_home.zip)

1. Preparation

Configure only one node in the VM first; the second node is cloned from it and the node-specific settings (the SID in the environment variables, the network, and so on) are then adjusted.

1.1 Basic server configuration (operating system, packages, network, users, environment variables)

1.1.1 Install the operating system

  A minimal installation is fine. Disk: 35 GB, memory: 4 GB (2 GB is probably the bare minimum), swap: 8 GB

  Disable the firewall and SELinux

  Disable ntpd (mv /etc/ntp.conf /etc/ntp.conf_bak); the commands for these disable steps are sketched after this list

  Add four NICs: one for the public network (host-only), one for the private interconnect (vlan1), and two for the storage links (vlan2)
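
For reference, a minimal sketch of the disable steps on RHEL 6 (assuming iptables is the firewall in use; setenforce handles the running system and the config edit makes the change permanent):

service iptables stop && chkconfig iptables off
setenforce 0                                                    # SELinux off for the running system (non-persistent)
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # persistent after a reboot
service ntpd stop && chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf_bak    # with no ntp.conf, the installer lets ctssd handle cluster time sync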

1.1.2 Check and install the RPM packages Oracle 12c needs

  Check whether they are installed:

rpm -q binutils compat-libcap1 compat-libstdc++-33 e2fsprogs e2fsprogs-libs glibc glibc-devel ksh libaio-devel libaio libgcc libstdc++ libstdc++-devel libxcb libX11 libXau libXi libXtst make net-tools nfs-utils smartmontools sysstat

  Install whichever of them are reported as missing (attach the install ISO in VMware and configure a local yum repository):

[root@jydb1 ~]# mount /dev/cdrom /mnt
[root@jydb1 ~]# cat /etc/yum.repos.d/rhel-source.repo 
[ISO]
name=iso
baseurl=file:///mnt
enabled=1
gpgcheck=0

   Install with yum install:

yum install binutils compat-libcap1 compat-libstdc++-33 e2fsprogs e2fsprogs-libs glibc glibc-devel ksh libaio-devel libaio libgcc libstdc++ libstdc++-devel libxcb libX11 libXau libXi libXtst make net-tools nfs-utils smartmontools sysstat

  Also install the cvuqdisk package (needed by the RAC/Grid prerequisite checks):

rpm -qi cvuqdisk
CVUQDISK_GRP=oinstall; export CVUQDISK_GRP        # the oinstall group must exist before installing; it is created later in this guide, so defer this step until then
rpm -iv cvuqdisk-1.0.10-1.rpm

1.1.3 Configure /etc/hosts on each node

[root@jydb1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 jydb1.rac
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6 jydb1.rac

#eth0 public
192.168.137.11  jydb1
192.168.137.12  jydb2

#eth0 vip                                              
192.168.137.21  jydb1-vip 
192.168.137.22  jydb2-vip 

#eth1 private                                             
10.0.0.1   jydb1-priv
10.0.0.2   jydb2-priv
10.0.0.11  jydb1-priv2
10.0.0.22  jydb2-priv2

#scan ip
192.168.137.137 jydb-cluster-scan

 

1.1.4 Create the required users and groups on each node

Create the groups and users:

groupadd -g 54321 oinstall  
groupadd -g 54322 dba  
groupadd -g 54323 oper  
groupadd -g 54324 backupdba  
groupadd -g 54325 dgdba  
groupadd -g 54326 kmdba  
groupadd -g 54327 asmdba  
groupadd -g 54328 asmoper  
groupadd -g 54329 asmadmin  
groupadd -g 54330 racdba  
  
useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,racdba,oper oracle  
useradd -u 54322 -g oinstall -G asmadmin,asmdba,asmoper,dba grid 

Set the oracle and grid passwords yourself.
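
For example, interactively as root:

passwd oracle
passwd grid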

1.1.5 Create the installation directories on each node (as root)

mkdir -p /u01/app/12.2.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/

 

1.1.6 Modify configuration files on each node

Kernel parameters: vi /etc/sysctl.conf

# Append the following to /etc/sysctl.conf:
fs.file-max = 6815744  
kernel.sem = 250 32000 100 128  
kernel.shmmni = 4096  
kernel.shmall = 1073741824  
kernel.shmmax = 6597069766656
kernel.panic_on_oops = 1  
net.core.rmem_default = 262144  
net.core.rmem_max = 4194304  
net.core.wmem_default = 262144  
net.core.wmem_max = 1048576  
net.ipv4.conf.eth3.rp_filter = 2
net.ipv4.conf.eth2.rp_filter = 2
net.ipv4.conf.eth0.rp_filter = 1  
fs.aio-max-nr = 1048576  
net.ipv4.ip_local_port_range = 9000 65500 

 

Apply the changes: sysctl -p

User shell limits: vi /etc/security/limits.conf

# Add the following to /etc/security/limits.conf:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240

Load the pam_limits.so pluggable authentication module: vi /etc/pam.d/login

Add the following to /etc/pam.d/login:
session required pam_limits.so

 

1.1.7 Configure user environment variables on each node

[root@jydb1 ~]# cat /home/grid/.bash_profile

export ORACLE_SID=+ASM1;
export ORACLE_HOME=/u01/app/12.2.0/grid; 
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; 
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib 
export DISPLAY=192.168.88.121:0.0

 

[root@jydb1 ~]# cat /home/oracle/.bash_profile

export ORACLE_SID=racdb1; 
export ORACLE_HOME=/u01/app/oracle/product/12.2.0/db_1;       
export ORACLE_HOSTNAME=jydb1;
export PATH=$ORACLE_HOME/bin:$PATH; 
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; 
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;
export DISPLAY=192.168.88.121:0.0

 

Once the steps above are complete, node 2 can be cloned.

1.1.8 Configure SSH equivalence between the nodes

After cloning the second node and confirming its network settings:

Using the grid user as an example (the oracle user needs the same trust setup):

① First generate the grid public key on node 1
[grid@jydb1 ~]$ ssh-keygen -t rsa -P ''    
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa): 
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
b6:07:65:3f:a2:e8:75:14:33:26:c0:de:47:73:5b:95 grid@jydb1
The key's randomart image is:
+--[ RSA 2048]----+
|     ..        .o|
|      ..  o . .E |
|     . ...Bo o   |
|      . .=.=.    |
|        S.o o    |
|       o = . .   |
|      . + o      |
|     . . o       |
|      .          |
+-----------------+
Copy it to node 2:
[grid@jydb1 ~]$ ssh-copy-id -i .ssh/id_rsa.pub grid@10.0.0.2
grid@10.0.0.2's password: 
Now try logging into the machine, with "ssh 'grid@10.0.0.2'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

② Generate a key pair on node 2 as well, and append it to authorized_keys
[grid@jydb2 .ssh]$ ssh-keygen -t rsa -P ''
......
[grid@jydb2 .ssh]$ cat id_rsa.pub >> authorized_keys
[grid@jydb2 .ssh]$ scp authorized_keys grid@10.0.0.1:.ssh/
The authenticity of host '10.0.0.1 (10.0.0.1)' can't be established.
RSA key fingerprint is d1:21:03:35:9d:f2:a2:81:e7:e1:7b:d0:79:f4:d3:be.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.1' (RSA) to the list of known hosts.
grid@10.0.0.1's password: 
authorized_keys                                                                                                            100%  792     0.8KB/s   00:00

③ Verify
[grid@jydb1 .ssh]$ ssh jydb1 date
Fri Mar 30 08:01:20 CST 2018
[grid@jydb1 .ssh]$ ssh jydb2 date
Fri Mar 30 08:01:20 CST 2018
[grid@jydb1 .ssh]$ ssh jydb1-priv date
Fri Mar 30 08:01:20 CST 2018
[grid@jydb1 .ssh]$ ssh jydb2-priv date
Fri Mar 30 08:01:20 CST 2018

On jydb2, only the node-specific values in the two .bash_profile files above need to change, as sketched below.
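
Assuming the usual RAC convention of numbering instances per node, the values that differ on jydb2 would be:

export ORACLE_SID=+ASM2       # in /home/grid/.bash_profile
export ORACLE_SID=racdb2      # in /home/oracle/.bash_profile
export ORACLE_HOSTNAME=jydb2  # in /home/oracle/.bash_profile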

1.2 Shared storage configuration

  Add one more server to simulate a storage array: give it two private addresses, connect the RAC clients over multipath, and carve up and configure the disks.

  Goal: present shared LUNs that both hosts can see, six in all: three 1 GB disks for OCR and the Voting Disk, one 40 GB disk for the GIMR, and the rest for data and the FRA.

 Add a 63 GB disk to the storage server.

// LV layout (created in step 1.2.3 below)
asmdisk1         1G
asmdisk2         1G
asmdisk3         1G
asmdisk4         40G
asmdisk5         10G
asmdisk6         10G

 

1.2.1 The RAC nodes are storage clients; add the NICs that connect to storage (two each)

  Create vlan2 in VMware and add two NICs, placed in vlan2, to each RAC node and to the storage server, so the hosts can reach the storage over multiple paths.

  Storage (server): 10.0.1.99, 10.0.2.99

  rac-jydb1 (client): 10.0.1.150, 10.0.2.150

  rac-jydb2 (client): 10.0.1.151, 10.0.2.151

  Finally, confirm network connectivity before moving on.
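
For example, from each RAC node:

ping -c 3 10.0.1.99
ping -c 3 10.0.2.99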

1.2.2 Install the iSCSI packages

  --Server side
  Install scsi-target-utils with yum:

yum install scsi-target-utils

  --Client side
  Install iscsi-initiator-utils with yum:

yum install iscsi-initiator-utils

1.2.3 Add the disk on the simulated storage server

  --On the server

Add a 63 GB disk; this simulates adding a real disk to the storage array.
Here the new disk shows up as /dev/sdb; I put it under LVM:

# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created

# vgcreate vg_storage /dev/sdb
  Volume group "vg_storage" successfully created

# lvcreate -L 1g -n lv_lun1 vg_storage     // size each LV according to the layout above (asmdisk1 is 1 GB)
  Logical volume "lv_lun1" created
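
Only lv_lun1 is shown above; the other five LVs follow the same pattern, sized to match the plan (a sketch):

# lvcreate -L 1g  -n lv_lun2 vg_storage    // asmdisk2, 1 GB
# lvcreate -L 1g  -n lv_lun3 vg_storage    // asmdisk3, 1 GB
# lvcreate -L 40g -n lv_lun4 vg_storage    // asmdisk4, 40 GB (GIMR)
# lvcreate -L 10g -n lv_lun5 vg_storage    // asmdisk5, 10 GB
# lvcreate -L 10g -n lv_lun6 vg_storage    // asmdisk6, 10 GB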

1.2.4 Configure the iSCSI server

  The main iSCSI server configuration file is /etc/tgt/targets.conf.

  Following that convention for the target name, add the following configuration:

<target iqn.2018-03.com.cnblogs.test:alfreddisk>
    backing-store /dev/vg_storage/lv_lun1 # Becomes LUN 1
    backing-store /dev/vg_storage/lv_lun2 # Becomes LUN 2
    backing-store /dev/vg_storage/lv_lun3 # Becomes LUN 3
    backing-store /dev/vg_storage/lv_lun4 # Becomes LUN 4
    backing-store /dev/vg_storage/lv_lun5 # Becomes LUN 5
    backing-store /dev/vg_storage/lv_lun6 # Becomes LUN 6
</target>

 

  Once configured, start the service and enable it at boot:

[root@storage ~]# service tgtd start
Starting SCSI target daemon: [  OK  ]
[root@storage ~]# chkconfig tgtd on
[root@storage ~]# chkconfig --list|grep tgtd
tgtd            0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@storage ~]# service tgtd status
tgtd (pid 1763 1760) is running...

  Then check details such as the listening port and the LUN information (Type: disk):

[root@storage ~]# netstat -tlunp |grep tgt
tcp        0      0 0.0.0.0:3260                0.0.0.0:*                   LISTEN      1760/tgtd           
tcp        0      0 :::3260                     :::*                        LISTEN      1760/tgtd           

[root@storage ~]# tgt-admin --show
Target 1: iqn.2018-03.com.cnblogs.test:alfreddisk
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET     00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags: 
        LUN: 1
            Type: disk
            SCSI ID: IET     00010001
            SCSI SN: beaf11
            Size: 10737 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/vg_storage/lv_lun1
            Backing store flags: 
    Account information:
    ACL information:
        ALL

1.2.5 Configure the iSCSI clients

Confirm the services start at boot:

#  chkconfig --list|grep scsi
iscsi           0:off   1:off   2:off   3:on    4:on    5:on    6:off
iscsid          0:off   1:off   2:off   3:on    4:on    5:on    6:off

Use iscsiadm to scan the server's LUNs (discover the iSCSI target):

  iscsiadm -m discovery -t sendtargets -p 10.0.1.99

[root@jydb1 ~]# iscsiadm -m discovery -t sendtargets -p 10.0.1.99
10.0.1.99:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk
[root@jydb1 ~]# iscsiadm -m discovery -t sendtargets -p 10.0.2.99
10.0.2.99:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk

Check the recorded nodes with iscsiadm -m node:

[root@jydb1 ~]# iscsiadm -m node
10.0.1.99:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk
10.0.2.99:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk

   Look at the files under /var/lib/iscsi/nodes/:

[root@jydb1 ~]# ll -R /var/lib/iscsi/nodes/
/var/lib/iscsi/nodes/:
total 4
drw------- 4 root root 4096 Mar 29 00:59 iqn.2018-03.com.cnblogs.test:alfreddisk

/var/lib/iscsi/nodes/iqn.2018-03.com.cnblogs.test:alfreddisk:
total 8
drw------- 2 root root 4096 Mar 29 00:59 10.0.1.99,3260,1
drw------- 2 root root 4096 Mar 29 00:59 10.0.2.99,3260,1

/var/lib/iscsi/nodes/iqn.2018-03.com.cnblogs.test:alfreddisk/10.0.1.99,3260,1:
total 4
-rw------- 1 root root 2049 Mar 29 00:59 default

/var/lib/iscsi/nodes/iqn.2018-03.com.cnblogs.test:alfreddisk/10.0.2.99,3260,1:
total 4
-rw------- 1 root root 2049 Mar 29 00:59 default

Log in to the iSCSI disks

  Based on the discovery results above, run the following to attach the shared disks:

iscsiadm -m node -T iqn.2018-03.com.cnblogs.test:alfreddisk --login

[root@jydb1 ~]# iscsiadm -m node  -T iqn.2018-03.com.cnblogs.test:alfreddisk --login
Logging in to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.2.99,3260] (multiple)
Logging in to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.1.99,3260] (multiple)
Login to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.2.99,3260] successful.
Login to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.1.99,3260] successful.
Both logins succeeded.

 

Use fdisk -l or lsblk to view the attached iSCSI disks:

[root@jydb1 ~]# lsblk 
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   35G  0 disk 
├─sda1   8:1    0  200M  0 part /boot
├─sda2   8:2    0  7.8G  0 part [SWAP]
└─sda3   8:3    0   27G  0 part /
sr0     11:0    1  3.5G  0 rom  /mnt
sdb      8:16   0    1G  0 disk 
sdc      8:32   0    1G  0 disk 
sdd      8:48   0    1G  0 disk 
sde      8:64   0    1G  0 disk 
sdf      8:80   0    1G  0 disk 
sdg      8:96   0    1G  0 disk 
sdi      8:128  0   40G  0 disk 
sdk      8:160  0   10G  0 disk 
sdm      8:192  0   10G  0 disk 
sdj      8:144  0   10G  0 disk 
sdh      8:112  0   40G  0 disk 
sdl      8:176  0   10G  0 disk 

1.2.6 Configure multipath

Install the multipath packages:

rpm -qa |grep device-mapper-multipath
# if not installed, install with yum:
yum install -y device-mapper-multipath
# or download and install these two RPMs:
device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm
device-mapper-multipath-0.4.9-72.el6.x86_64.rpm

 

Enable it at boot:

chkconfig multipathd on

 

Generate the multipath configuration

-- generate the multipath configuration file
/sbin/mpathconf --enable

-- show the multipath topology
multipath -ll

-- rescan the paths
multipath -v2      (or -v3)

-- flush all multipath maps
multipath -F

 

The output of these operations is shown below for reference.

 

[root@jydb1 ~]# multipath -v3    (verbose output omitted)

 

[root@jydb1 ~]# multipath -ll
asmdisk6 (1IET     00010006) dm-5 IET,VIRTUAL-DISK           // the value in parentheses is the WWID
size=10.0G features=0 hwhandler=0 wp=rw
|-+- policy=round-robin 0 prio=1 status=active
| `- 33:0:0:6 sdj 8:144 active ready running
`-+- policy=round-robin 0 prio=1 status=enabled
  `- 34:0:0:6 sdm 8:192 active ready running
asmdisk5 (1IET     00010005) dm-2 IET,VIRTUAL-DISK
size=10G features=0 hwhandler=0 wp=rw
|-+- policy=round-robin 0 prio=1 status=active
| `- 33:0:0:5 sdh 8:112 active ready running
`-+- policy=round-robin 0 prio=1 status=enabled
  `- 34:0:0:5 sdl 8:176 active ready running
asmdisk4 (1IET     00010004) dm-4 IET,VIRTUAL-DISK
size=40G features=0 hwhandler=0 wp=rw
|-+- policy=round-robin 0 prio=1 status=active
| `- 33:0:0:4 sdf 8:80  active ready running
`-+- policy=round-robin 0 prio=1 status=enabled
  `- 34:0:0:4 sdk 8:160 active ready running
asmdisk3 (1IET     00010003) dm-3 IET,VIRTUAL-DISK
size=1.0G features=0 hwhandler=0 wp=rw
|-+- policy=round-robin 0 prio=1 status=active
| `- 33:0:0:3 sdd 8:48  active ready running
`-+- policy=round-robin 0 prio=1 status=enabled
  `- 34:0:0:3 sdi 8:128 active ready running
asmdisk2 (1IET     00010002) dm-1 IET,VIRTUAL-DISK
size=1.0G features=0 hwhandler=0 wp=rw
|-+- policy=round-robin 0 prio=1 status=active
| `- 33:0:0:2 sdc 8:32  active ready running
`-+- policy=round-robin 0 prio=1 status=enabled
  `- 34:0:0:2 sdg 8:96  active ready running
asmdisk1 (1IET     00010001) dm-0 IET,VIRTUAL-DISK
size=1.0G features=0 hwhandler=0 wp=rw
|-+- policy=round-robin 0 prio=1 status=active
| `- 33:0:0:1 sdb 8:16  active ready running
`-+- policy=round-robin 0 prio=1 status=enabled
  `- 34:0:0:1 sde 8:64  active ready running

 

Start the multipathd service:

# service multipathd start

 

Configure /etc/multipath.conf

First change:
# user_friendly_names should be no. With no, the WWID is used as the alias for each multipath device. With yes, the system assigns mpathn names from the /etc/multipath/ bindings file.

# With user_friendly_names yes, a multipath device's name is unique to a node, but it is not guaranteed to be the same on every node that uses the device. That is,
# mpath1 on node 1 and mpath1 on node 2 may not be the same LUN, while the WWID each server sees for a given LUN is always identical. So do not set yes; set no and identify devices by WWID.

defaults {
        user_friendly_names no
        path_grouping_policy failover                // failover = active/standby; multibus would be active/active
}
 
Second change: bind each WWID (as reported by multipath -ll above) to an alias:
multipaths {
       multipath {
               wwid                      "1IET     00010001"
               alias                     asmdisk1
       }
       multipath {
               wwid                      "1IET     00010002"
               alias                     asmdisk2
       }
       multipath {
               wwid                      "1IET     00010003"
               alias                     asmdisk3
       }
       multipath {
               wwid                      "1IET     00010004"
               alias                     asmdisk4
       }
       multipath {
               wwid                      "1IET     00010005"
               alias                     asmdisk5
       }
       multipath {
               wwid                      "1IET     00010006"
               alias                     asmdisk6
       }
}

 

  Restart multipathd for the configuration to take effect.
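
For example:

service multipathd restart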

After binding, check the multipath aliases:

[root@jydb1 ~]# cd /dev/mapper/
[root@jydb1 mapper]# ls
asmdisk1  asmdisk2  asmdisk3  asmdisk4  asmdisk5  asmdisk6  control

 

Bind device permissions with udev

Do the UDEV permission binding first; with the wrong permissions the installer will not be able to see the shared disks.

  Before the change:

[root@jydb1 ~]# ls -lh /dev/dm*
brw-rw---- 1 root disk  253, 0 Apr  2 16:18 /dev/dm-0
brw-rw---- 1 root disk  253, 1 Apr  2 16:18 /dev/dm-1
brw-rw---- 1 root disk  253, 2 Apr  2 16:18 /dev/dm-2
brw-rw---- 1 root disk  253, 3 Apr  2 16:18 /dev/dm-3
brw-rw---- 1 root disk  253, 4 Apr  2 16:18 /dev/dm-4
brw-rw---- 1 root disk  253, 5 Apr  2 16:18 /dev/dm-5
crw-rw---- 1 root audio  14, 9 Apr  2 16:18 /dev/dmmidi

 

  This system is RHEL 6.6; multipath device permissions changed by hand revert to root after a few seconds, so udev must be used to pin the ownership.
  Find the matching template rules file:

[root@jydb1 ~]# find / -name 12-*
/usr/share/doc/device-mapper-1.02.79/12-dm-permissions.rules

 

  Based on the template, create a 12-dm-permissions.rules file under /etc/udev/rules.d/:

vi /etc/udev/rules.d/12-dm-permissions.rules
# MULTIPATH DEVICES
#
# Set permissions for all multipath devices
ENV{DM_UUID}=="mpath-?*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"          // this is the line to change

# Set permissions for first two partitions created on a multipath device (and detected by kpartx)
# ENV{DM_UUID}=="part[1-2]-mpath-?*", OWNER:="root", GROUP:="root", MODE:="660"

 

  When done, run start_udev; if the permissions are still correct after 30 seconds, all is well.

[root@jydb1 ~]# start_udev 
Starting udev: [  OK  ]
[root@jydb1 ~]# ls -lh /dev/dm*
brw-rw---- 1 grid asmadmin 253, 0 Apr  2 16:25 /dev/dm-0
brw-rw---- 1 grid asmadmin 253, 1 Apr  2 16:25 /dev/dm-1
brw-rw---- 1 grid asmadmin 253, 2 Apr  2 16:25 /dev/dm-2
brw-rw---- 1 grid asmadmin 253, 3 Apr  2 16:25 /dev/dm-3
brw-rw---- 1 grid asmadmin 253, 4 Apr  2 16:25 /dev/dm-4
brw-rw---- 1 grid asmadmin 253, 5 Apr  2 16:25 /dev/dm-5
crw-rw---- 1 root audio     14, 9 Apr  2 16:24 /dev/dmmidi

 

Raw device binding

  Find the major and minor numbers of the devices:

[root@jydb1 ~]# ls -lt /dev/dm-*
brw-rw---- 1 grid asmadmin 253, 5 Mar 29 04:00 /dev/dm-5
brw-rw---- 1 grid asmadmin 253, 3 Mar 29 04:00 /dev/dm-3
brw-rw---- 1 grid asmadmin 253, 2 Mar 29 04:00 /dev/dm-2
brw-rw---- 1 grid asmadmin 253, 4 Mar 29 04:00 /dev/dm-4
brw-rw---- 1 grid asmadmin 253, 1 Mar 29 04:00 /dev/dm-1
brw-rw---- 1 grid asmadmin 253, 0 Mar 29 04:00 /dev/dm-0


[root@jydb1 ~]# dmsetup ls|sort
asmdisk1        (253:0)
asmdisk2        (253:1)
asmdisk3        (253:3)
asmdisk4        (253:4)
asmdisk5        (253:2)
asmdisk6        (253:5)

Bind the raw devices according to this mapping:
vi  /etc/udev/rules.d/60-raw.rules
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="0", RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="2", RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="3", RUN+="/bin/raw /dev/raw/raw4 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="4", RUN+="/bin/raw /dev/raw/raw5 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="5", RUN+="/bin/raw /dev/raw/raw6 %M %m"


ACTION=="add", KERNEL=="raw1", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw2", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw3", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw4", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw5", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw6", OWNER="grid", GROUP="asmadmin", MODE="660"

 Check the result:

[root@jydb1 ~]# ll /dev/mapper/*
lrwxrwxrwx 1 root root      7 Apr  2 16:25 /dev/mapper/asmdisk1 -> ../dm-2
lrwxrwxrwx 1 root root      7 Apr  2 16:25 /dev/mapper/asmdisk2 -> ../dm-1
lrwxrwxrwx 1 root root      7 Apr  2 16:25 /dev/mapper/asmdisk3 -> ../dm-5
lrwxrwxrwx 1 root root      7 Apr  2 16:25 /dev/mapper/asmdisk4 -> ../dm-3
lrwxrwxrwx 1 root root      7 Apr  2 16:25 /dev/mapper/asmdisk5 -> ../dm-4
lrwxrwxrwx 1 root root      7 Apr  2 16:25 /dev/mapper/asmdisk6 -> ../dm-0
crw-rw---- 1 root root 10, 58 Apr  2 16:25 /dev/mapper/control
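
As a final sanity check, the raw bindings themselves can be listed (a sketch; raw -qa queries all bound raw devices):

raw -qa
ls -l /dev/raw/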
