Oracle LiveLabs Lab: Oracle RAC Fundamentals
Posted by dingdingfish
Overview
The sign-up page for this lab is here; the lab takes 4 hours.
The lab help is here.
Introduction
Oracle RAC is a clustered database with a shared-cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches, providing a highly scalable and available database solution for all business applications. Oracle RAC is a key component of the Oracle private cloud architecture.
Oracle Real Application Clusters provides customers with the highest database availability by removing individual database servers as a single point of failure. In a clustered server environment, the database itself is shared across a pool of servers, which means that if any server in the pool fails, the database continues to run on the surviving servers. Oracle RAC not only enables customers to continue processing database workloads in the event of a server failure, it also helps to further reduce the cost of downtime by reducing the amount of time databases are taken offline for planned maintenance operations.
Oracle Real Application Clusters supports the transparent deployment of Oracle Databases across a pool of clustered servers. This enables customers to easily redeploy their single-server Oracle Database onto a cluster of database servers and thereby take full advantage of the combined memory capacity and processing power the clustered database servers provide.
Oracle Real Application Clusters provides all the software components required to easily deploy Oracle Databases on a pool of servers and take full advantage of the performance, scalability, and availability that clustering provides. Oracle RAC uses Oracle Grid Infrastructure as the foundation for Oracle RAC database systems. Oracle Grid Infrastructure includes Oracle Clusterware and Oracle Automatic Storage Management (ASM), which enable efficient sharing of server and storage resources in a highly available and scalable database cloud environment.
Oracle RAC provides:
- High availability
- Scalability
- Database as a Service
Oracle Database with the Oracle Real Application Clusters (RAC) option allows multiple instances running on different servers to access the same physical database stored on shared storage. The database spans multiple hardware systems yet appears to the application as a single unified database. This enables the use of commodity hardware to reduce total cost of ownership and provides a scalable computing environment that supports a variety of application workloads. If additional computing power is needed, customers can add nodes instead of replacing their existing servers. The only requirement is that the servers in the cluster run the same operating system and the same version of Oracle; they do not have to be the same model or capacity. This saves capital expenditure, allowing customers to buy servers with the latest hardware configuration and use them alongside existing servers. This architecture also delivers high availability, because RAC instances running on different nodes protect against server failure. Note that (almost) all applications, such as Oracle Applications, PeopleSoft, Siebel, and SAP, run on Oracle RAC without any changes.
The official introduction to RAC is here.
Labs 1 and 2: Prepare the RAC Environment
Here I chose to build a two-node RAC cluster myself on OCI.
These labs require the following environment variable settings to be added to the .bash_profile of the oracle user on both RAC nodes:
export DBNAME=$(srvctl config database)
export INST1=$(srvctl status database -d $DBNAME|awk 'NR==1{print $2}')
export INST2=$(srvctl status database -d $DBNAME|awk 'NR==2{print $2}')
export NODE1=$(srvctl status database -d $DBNAME|awk 'NR==1{print $7}')
export NODE2=$(srvctl status database -d $DBNAME|awk 'NR==2{print $7}')
export SYSPWD='W3lc0m3#W3lc0m3#'
# field 3 of the "SCAN name:" line carries a trailing comma; strip the last character
dummy_var=$(srvctl config scan|grep "SCAN name"|awk '{print $3}')
export SCAN_NAME=${dummy_var::-1}
export SERVICE_DOMAIN=$(echo $SCAN_NAME|cut -d '.' -f2-)
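To sanity-check these settings, source the profile and echo the variables; the sample values in the comments below are this cluster's and will differ in yours:
. ~/.bash_profile
echo $DBNAME          # e.g. racMJMWZ_icn189
echo $INST1 $NODE1    # e.g. racMJMWZ1 lvracdb-s01-2022-01-14-1230121
echo $INST2 $NODE2    # e.g. racMJMWZ2 lvracdb-s01-2022-01-14-1230122
echo $SCAN_NAME       # e.g. lvracdb-s01-2022-01-14-123012-scan.pub.racdblab.oraclevcn.com
echo $SERVICE_DOMAIN  # e.g. pub.racdblab.oraclevcn.com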
Lab 3: Clusterware and Fencing
An introduction to Clusterware:
Oracle Clusterware is the technology used in the RAC architecture that transforms a collection of servers into a highly available unified system. Oracle Clusterware provides failure detection, node membership, node fencing and optimal resource placement. It provides cluster-wide component inter-dependency management for RAC and other applications in the cluster. Clusterware uses resource models and policies to provide high availability responses to planned and unplanned component downtime.
The official introduction to Clusterware is here.
Task 1: Disable the Private Interconnect Interface
Run the following commands on both RAC nodes:
sudo su - oracle
ps -ef | grep pmon
ps -ef | grep lsnr
The output is:
$ ps -ef | grep pmon
oracle 52227 1 0 Jan15 ? 00:00:04 ora_pmon_racMJMWZ1
grid 59515 1 0 Jan14 ? 00:00:09 asm_pmon_+ASM1
grid 66889 1 0 Jan14 ? 00:00:08 apx_pmon_+APX1
$ ps -ef | grep lsnr
grid 23673 58835 1 02:01 ? 00:00:00 [lsnrctl] <defunct>
grid 58936 1 0 Jan14 ? 00:00:05 /u01/app/19.0.0.0/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
grid 58996 1 0 Jan14 ? 00:00:05 /u01/app/19.0.0.0/grid/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid 59034 1 0 Jan14 ? 00:00:51 /u01/app/19.0.0.0/grid/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
Monitor crsd.trc on each node as the oracle user. The crsd.trc file is located in the $ADR_BASE/diag/crs/<nodename>/crs/trace directory. In earlier Grid Infrastructure releases, the log files were located under CRS_HOME/log/<nodename>/crs (those directory structures still exist in the installation).
# the -s option of hostname prints the short host name
tail -f /u01/app/grid/diag/crs/`hostname -s`/crs/trace/crsd.trc
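If you are unsure of the trace directory, adrci can list the ADR homes (a quick sketch; run as the grid user, with paths shown relative to $ADR_BASE):
adrci exec="show homes"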
Check the network information. The ifconfig command displays all configured and running network interfaces. The flags entry shows the interface state, such as UP, BROADCAST, and MULTICAST, and the inet entry shows the IP address:
-- node 1
# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 02:00:17:02:8e:3e brd ff:ff:ff:ff:ff:ff
3: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 02:00:17:00:ab:91 brd ff:ff:ff:ff:ff:ff
# ifconfig -a
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.0.0.199 netmask 255.255.255.0 broadcast 10.0.0.255
ether 02:00:17:02:8e:3e txqueuelen 1000 (Ethernet)
RX packets 40585381 bytes 56300704544 (52.4 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 63197327 bytes 96635430396 (89.9 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens3:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.0.0.116 netmask 255.255.255.0 broadcast 10.0.0.255
ether 02:00:17:02:8e:3e txqueuelen 1000 (Ethernet)
ens3:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.0.0.213 netmask 255.255.255.0 broadcast 10.0.0.255
ether 02:00:17:02:8e:3e txqueuelen 1000 (Ethernet)
ens4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 192.168.16.18 netmask 255.255.255.0 broadcast 192.168.16.255
ether 02:00:17:00:ab:91 txqueuelen 1000 (Ethernet)
RX packets 11213750 bytes 15607164363 (14.5 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 10388496 bytes 9695653030 (9.0 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 0 (Local Loopback)
RX packets 2951715 bytes 5602553215 (5.2 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2951715 bytes 5602553215 (5.2 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
-- node 2
$ ifconfig -a
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.0.0.113 netmask 255.255.255.0 broadcast 10.0.0.255
ether 02:00:17:01:14:77 txqueuelen 1000 (Ethernet)
RX packets 30846709 bytes 50904305075 (47.4 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 53483010 bytes 75783050324 (70.5 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens3:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.0.0.192 netmask 255.255.255.0 broadcast 10.0.0.255
ether 02:00:17:01:14:77 txqueuelen 1000 (Ethernet)
ens3:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.0.0.229 netmask 255.255.255.0 broadcast 10.0.0.255
ether 02:00:17:01:14:77 txqueuelen 1000 (Ethernet)
ens3:4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.0.0.74 netmask 255.255.255.0 broadcast 10.0.0.255
ether 02:00:17:01:14:77 txqueuelen 1000 (Ethernet)
ens4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 192.168.16.19 netmask 255.255.255.0 broadcast 192.168.16.255
ether 02:00:17:02:34:91 txqueuelen 1000 (Ethernet)
RX packets 10389236 bytes 9697474573 (9.0 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 11221056 bytes 15622091494 (14.5 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 0 (Local Loopback)
RX packets 26019016 bytes 7777053790 (7.2 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 26019016 bytes 7777053790 (7.2 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
One of the network interfaces, ens3, has multiple virtual interfaces, ens3:1 and ens3:2, each with an associated IP address. These virtual interfaces are used for each node's virtual IP (VIP) and for the SCAN listeners.
The hosts file of this RAC cluster looks similar to the following:
10.0.0.199 node1.pub node1
10.0.0.113 node2.pub node2
192.168.16.18 node1-priv.pub node1-priv
192.168.16.19 node2-priv.pub node2-priv
10.0.0.116 node1-vip.pub node1-vip
10.0.0.192 node2-vip.pub node2-vip
From this we can see that the private interconnect addresses of this cluster are 192.168.16.18 (node 1) and 192.168.16.19 (node 2), or node1-priv and node2-priv.
There are also some addresses not defined in the hosts file; they are the SCAN IPs:
$ srvctl config scan
SCAN name: lvracdb-s01-2022-01-14-123012-scan.pub.racdblab.oraclevcn.com, Network: 1
Subnet IPv4: 10.0.0.0/255.255.255.0/ens3, static
Subnet IPv6:
SCAN 1 IPv4 VIP: 10.0.0.213
SCAN VIP is enabled.
SCAN 2 IPv4 VIP: 10.0.0.229
SCAN VIP is enabled.
SCAN 3 IPv4 VIP: 10.0.0.74
SCAN VIP is enabled.
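These three SCAN VIPs resolve through DNS (on OCI, the VCN resolver) rather than through the hosts file; with the environment variables set earlier, the round-robin resolution can be verified with, for example:
nslookup $SCAN_NAME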
Stop the network interface ens4 (the private interconnect) on node 2:
sudo ifconfig ens4 down
Then check the status: the UP flag is gone, and the virtual interfaces ens3:1, ens3:2, and ens3:4 that were present before have also disappeared:
-- node 2
$ sudo ifconfig -a
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.0.0.113 netmask 255.255.255.0 broadcast 10.0.0.255
ether 02:00:17:01:14:77 txqueuelen 1000 (Ethernet)
RX packets 30967725 bytes 51296240020 (47.7 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 53633546 bytes 76282920523 (71.0 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens4: flags=4098<BROADCAST,MULTICAST> mtu 9000
inet 192.168.16.19 netmask 255.255.255.0 broadcast 192.168.16.255
ether 02:00:17:02:34:91 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Now look at node 1. The ens3:1 (VIP), ens3:2 (SCAN VIP), and ens3:4 (SCAN VIP) that were previously on node 2 have migrated to node 1, where they appear as ens3:3, ens3:4, and ens3:5:
# node 1
$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether 02:00:17:02:8e:3e brd ff:ff:ff:ff:ff:ff
inet 10.0.0.199/24 brd 10.0.0.255 scope global dynamic ens3
valid_lft 60039sec preferred_lft 60039sec
inet 10.0.0.116/24 brd 10.0.0.255 scope global secondary ens3:1
valid_lft forever preferred_lft forever
inet 10.0.0.213/24 brd 10.0.0.255 scope global secondary ens3:2
valid_lft forever preferred_lft forever
inet 10.0.0.74/24 brd 10.0.0.255 scope global secondary ens3:3
valid_lft forever preferred_lft forever
inet 10.0.0.229/24 brd 10.0.0.255 scope global secondary ens3:4
valid_lft forever preferred_lft forever
inet 10.0.0.192/24 brd 10.0.0.255 scope global secondary ens3:5
valid_lft forever preferred_lft forever
3: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether 02:00:17:00:ab:91 brd ff:ff:ff:ff:ff:ff
inet 192.168.16.18/24 brd 192.168.16.255 scope global ens4
valid_lft forever preferred_lft forever
# ifconfig -a
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.0.0.199 netmask 255.255.255.0 broadcast 10.0.0.255
ether 02:00:17:02:8e:3e txqueuelen 1000 (Ethernet)
RX packets 40746815 bytes 56932906281 (53.0 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 63378440 bytes 97224944841 (90.5 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens3:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.0.0.116 netmask 255.255.255.0 broadcast 10.0.0.255
ether 02:00:17:02:8e:3e txqueuelen 1000 (Ethernet)
ens3:2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.0.0.213 netmask 255.255.255.0 broadcast 10.0.0.255
ether 02:00:17:02:8e:3e txqueuelen 1000 (Ethernet)
ens3:3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.0.0.74 netmask 255.255.255.0 broadcast 10.0.0.255
ether 02:00:17:02:8e:3e txqueuelen 1000 (Ethernet)
ens3:4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.0.0.229 netmask 255.255.255.0 broadcast 10.0.0.255
ether 02:00:17:02:8e:3e txqueuelen 1000 (Ethernet)
ens3:5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 10.0.0.192 netmask 255.255.255.0 broadcast 10.0.0.255
ether 02:00:17:02:8e:3e txqueuelen 1000 (Ethernet)
ens4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000
inet 192.168.16.18 netmask 255.255.255.0 broadcast 192.168.16.255
ether 02:00:17:00:ab:91 txqueuelen 1000 (Ethernet)
RX packets 11301060 bytes 15815646827 (14.7 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 10463714 bytes 9803778924 (9.1 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 0 (Local Loopback)
RX packets 2993508 bytes 5703023992 (5.3 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2993508 bytes 5703023992 (5.3 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Task 2: Check the CRSD Log
In the CRSD log on node 2 you can find messages like the following, showing that the node has been removed from the RAC cluster; this is cluster fencing:
$ tail -f /u01/app/grid/diag/crs/`hostname -s`/crs/trace/crsd.trc
2022-01-16 02:42:43.192 : CRSD:3710390016: [ NONE] Created alert : (:CRSD00111:) : Could not init OCR, error: PROC-23: Error in cluster services layer Cluster services error [ [3]
2022-01-16 02:42:43.192 : CRSD:3710390016: [ ERROR] [PANIC] CRSD exiting: Could not init OCR, code: 23
2022-01-16 02:42:43.193 : CRSD:3710390016: [ INFO] Done.
The CRSD log on node 1 confirms this as well:
2022-01-16 02:42:59.335 : CRSD:4176369408: [ NONE] 1:34851:34845 1:34851:34845Server [lvracdb-s01-2022-01-14-1230122] has been un-assigned from the server pool: ora.racMJMWZ_icn189
Checking the cluster status shows that only one node is online:
$ crsctl status server
NAME=lvracdb-s01-2022-01-14-1230121
STATE=ONLINE
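olsnodes gives another view of cluster membership; at this point node 2 should be reported as Inactive (run as the grid user):
olsnodes -s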
Task 3: Restart the Private Interconnect Interface
On node 2, bring the private interconnect interface back up:
# ifconfig ens4 up
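The cluster stack on node 2 can take a few minutes to recover; as the grid user, you can poll its state in the meantime, for example:
crsctl check crs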
Node 2 rejoins the cluster:
$ crsctl status server
NAME=lvracdb-s01-2022-01-14-1230121
STATE=ONLINE
NAME=lvracdb-s01-2022-01-14-1230122
STATE=ONLINE
The IP addresses are now distributed as follows. That is, each VIP has returned to its original node, and the SCAN IPs are balanced across the nodes:
# node 1
$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether 02:00:17:02:8e:3e brd ff:ff:ff:ff:ff:ff
inet 10.0.0.199/24 brd 10.0.0.255 scope global dynamic ens3
valid_lft 59858sec preferred_lft 59858sec
inet 10.0.0.116/24 brd 10.0.0.255 scope global secondary ens3:1
valid_lft forever preferred_lft forever
inet 10.0.0.74/24 brd 10.0.0.255 scope global secondary ens3:3
valid_lft forever preferred_lft forever
inet 10.0.0.229/24 brd 10.0.0.255 scope global secondary ens3:4
valid_lft forever preferred_lft forever
3: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether 02:00:17:00:ab:91 brd ff:ff:ff:ff:ff:ff
inet 192.168.16.18/24 brd 192.168.16.255 scope global ens4
valid_lft forever preferred_lft forever
# node 2
# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether 02:00:17:01:14:77 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.113/24 brd 10.0.0.255 scope global dynamic ens3
valid_lft 68633sec preferred_lft 68633sec
inet 10.0.0.192/24 brd 10.0.0.255 scope global secondary ens3:1
valid_lft forever preferred_lft forever
inet 10.0.0.213/24 brd 10.0.0.255 scope global secondary ens3:2
valid_lft forever preferred_lft forever
3: ens4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
link/ether 02:00:17:02:34:91 brd ff:ff:ff:ff:ff:ff
inet 192.168.16.19/24 brd 192.168.16.255 scope global ens4
valid_lft forever preferred_lft forever
Lab 4: Fast Application Notification
This lab is already covered in the Oracle LiveLabs lab Application Continuity Fundamentals and is skipped here.
Lab 5: Install the Sample Schemas
Download the sample schemas and initialize the environment (the perl command replaces the __SUB__CWD__ placeholder in the scripts with the current working directory):
wget https://github.com/oracle/db-sample-schemas/archive/v19c.zip
unzip v19c.zip
cd db-sample-schemas-19c
perl -p -i.bak -e 's#__SUB__CWD__#'$(pwd)'#g' *.sql */*.sql */*.dat
Add the following net service name to tnsnames.ora:
PDB1 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = lvracdb-s01-2022-01-14-123012-scan.pub.racdblab.oraclevcn.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = pdb1.pub.racdblab.oraclevcn.com)
)
)
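Before installing, check that the new entry resolves and that the PDB service is reachable; a quick sketch (this assumes the SYSPWD variable set in the profile earlier matches the SYSTEM password):
tnsping PDB1
echo "select instance_name from v\$instance;" | sqlplus -s system/$SYSPWD@PDB1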
Install the sample schemas:
export SYSPW=W3lcom3#W3lcom3#
export USERPW=$SYSPW
export CONNSTR=PDB1
sqlplus system/$SYSPW@$CONNSTR @mksample $SYSPW $SYSPW $USERPW $USERPW $USERPW $USERPW $USERPW $USERPW users temp $ORACLE_HOME/demo/schema/log/ $CONNSTR
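As a quick sanity check that the installation succeeded (HR.EMPLOYEES ships with 107 rows in 19c):
echo "select count(*) from hr.employees;" | sqlplus -s system/$SYSPW@$CONNSTR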
Lab 6: Services
Introduction
Services represent groups of applications with common attributes, service-level thresholds, and priorities. Application functions can be divided into workloads identified by services.
A service can span one or more instances of an Oracle database, or multiple databases in a global cluster, and a single instance can support multiple services. The number of instances offering a service is transparent to the application. Services provide a single system image for managing competing applications and allow each workload to be managed as a unit.
Response time and CPU consumption metrics, performance and resource statistics, wait events, threshold-based alerts, and performance indexes are maintained automatically by AWR for all services. Service, module, and action tags are used to identify operations within a service at the server. (MODULE and ACTION are set by the application.) End-to-end monitoring enables aggregation and tracing at the service, module, and action levels to identify high-load operations. Oracle Enterprise Manager administers the service quality thresholds for response time and CPU consumption, monitors the top services, and provides drill-down to the top modules and top actions per service.
Connect-time routing and run-time routing algorithms balance the workload across the instances offering a service. RAC uses services to enable uninterrupted database operations. Planned operations are supported through interfaces that allow services to be relocated or disabled/enabled.
Application Continuity is set as an attribute of a service.
Oracle recommends that all users who share a service have the same service-level requirements. You can define specific characteristics for services, and each service can represent a separate unit of work. There are many options you can take advantage of when using services. Although you do not have to implement these options, using them helps optimize application operation and performance.
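For example, Application Continuity attributes can be set on a service with srvctl; a sketch against the testy service created in Task 1 below (the attribute values here are illustrative, not the lab's):
srvctl modify service -d $DBNAME -s testy -failovertype TRANSACTION -commit_outcome TRUE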
Task 1: Log In and Confirm the Database and Instance Names
Create a service on node 1:
srvctl add service -d $DBNAME -s testy -pdb pdb1 -preferred $INST1 -available $INST2
srvctl start service -d $DBNAME -s testy
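Check where the service is running; the output should name the preferred instance, similar to:
$ srvctl status service -d $DBNAME -s testy
Service testy is running on instance(s) racMJMWZ1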
Then check the cluster status as the grid user:
$ crsctl stat res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.COMMONSTORE.advm
ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30122
ora.LISTENER.lsnr
ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30122
ora.chad
ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30122
ora.data.commonstore.acfs
ONLINE ONLINE lvracdb-s01-2022-01-14-12mounted on /opt/orac
30121 le/dcs/commonstore,S
TABLE
ONLINE ONLINE lvracdb-s01-2022-01-14-12mounted on /opt/orac
30122 le/dcs/commonstore,S
TABLE
ora.net1.network
ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30122
ora.ons
ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30122
ora.proxy_advm
ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30122
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
2 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30122
ora.DATA.dg(ora.asmgroup)
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
2 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30122
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30122
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
ora.RECO.dg(ora.asmgroup)
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
2 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30122
ora.asm(ora.asmgroup)
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12Started,STABLE
30121
2 ONLINE ONLINE lvracdb-s01-2022-01-14-12Started,STABLE
30122
ora.asmnet1.asmnetwork(ora.asmgroup)
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
2 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30122
ora.cvu
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
ora.lvracdb-s01-2022-01-14-1230121.vip
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
ora.lvracdb-s01-2022-01-14-1230122.vip
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30122
ora.qosmserver
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
ora.racmjmwz_icn189.db
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12Open,HOME=/u01/app/o
30121 racle/product/19.0.0
.0/dbhome_1,STABLE
2 ONLINE ONLINE lvracdb-s01-2022-01-14-12Open,HOME=/u01/app/o
30122 racle/product/19.0.0
.0/dbhome_1,STABLE
ora.racmjmwz_icn189.testy.svc
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
ora.scan1.vip
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30122
ora.scan2.vip
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
ora.scan3.vip
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
--------------------------------------------------------------------------------
In the Cluster Resources section, the resource with the .db suffix is the database; here the database name is racmjmwz_icn189:
ora.racmjmwz_icn189.db
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12Open,HOME=/u01/app/o
30121 racle/product/19.0.0
.0/dbhome_1,STABLE
2 ONLINE ONLINE lvracdb-s01-2022-01-14-12Open,HOME=/u01/app/o
30122 racle/product/19.0.0
.0/dbhome_1,STABLE
Confirm that the testy service is running, and note the node it runs on, node 1 in this case:
ora.racmjmwz_icn189.testy.svc
1 ONLINE ONLINE lvracdb-s01-2022-01-14-12STABLE
30121
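The service can also be reached through the SCAN listener. A connection sketch using EZConnect and the variables defined earlier (this assumes the service registers as testy.$SERVICE_DOMAIN, here testy.pub.racdblab.oraclevcn.com):
sqlplus system/$SYSPWD@//$SCAN_NAME:1521/testy.$SERVICE_DOMAIN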
Task 2: Create a Service
Create the service svctest, following the same pattern as the testy service above (for example, this time with instance 2 preferred):
srvctl add service -d $DBNAME -s svctest -pdb pdb1 -preferred $INST2 -available $INST1
srvctl start service -d $DBNAME -s svctest