Scenario 1
In this scenario, I am replacing the private interface eth2 with the new interface eth3 and also changing the subnet of the private network.
·
Get the current cluster interconnect configuration details.
[root@rac1 bin]# ./oifcfg getif
eth1  192.168.56.0  global  public
eth2  192.168.10.0  global  cluster_interconnect,asm
[root@rac1 bin]#
[root@rac2 bin]# ./oifcfg getif
eth1  192.168.56.0  global  public
eth2  192.168.10.0  global  cluster_interconnect,asm
[root@rac2 bin]#
·
Get the list of interfaces known to the OS on all the nodes of the cluster (a loop to run these checks from a single node is sketched after the output below).
[root@rac2 bin]# ./oifcfg iflist -p -n -hdr
INTERFACE_NAME  SUBNET        TYPE     NETMASK
eth1            192.168.56.0  PRIVATE  255.255.255.0
eth2            192.168.10.0  PRIVATE  255.255.255.0
eth2            169.254.0.0   UNKNOWN  255.255.0.0
[root@rac2 bin]#
Node 1:
eth3      Link encap:Ethernet  HWaddr 08:00:27:C6:F8:07
          inet addr:172.16.10.11  Bcast:172.16.10.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fec6:f807/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2954 errors:0 dropped:0 overruns:0 frame:0
          TX packets:578 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:483490 (472.1 KiB)  TX bytes:42397 (41.4 KiB)
Node 2:
eth3      Link encap:Ethernet  HWaddr 08:00:27:6D:AD:3B
          inet addr:172.16.10.12  Bcast:172.16.10.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe6d:ad3b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:153 errors:0 dropped:0 overruns:0 frame:0
          TX packets:54 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:36425 (35.5 KiB)  TX bytes:9769 (9.5 KiB)
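If you prefer to run these checks from a single node, a small loop like the one below does the job; this is only a sketch, and the node names (rac1, rac2) and Grid home path (/u01/app/12.1.0/grid) are assumptions to adjust for your environment.
# Run as root from any node with ssh equivalence to the other nodes
for node in rac1 rac2; do
  echo "### $node"
  ssh $node /u01/app/12.1.0/grid/bin/oifcfg iflist -p -n
  ssh $node /sbin/ifconfig eth3
done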
·
Back up the profile.xml file (the GPnP profile) in $CRS_HOME/gpnp/<hostname>/profiles/peer. This profile stores the private network configuration information along with other details, it is crucial for clusterware startup, and it is updated whenever we make any change to the cluster interconnect configuration using oifcfg (a quick way to inspect the recorded network entries is sketched below).
cd $CRS_HOME/gpnp/<hostname>/profiles/peer
cp -p profile.xml profile.xml.b4change
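Optionally, before changing anything, you can confirm which interface/subnet pairs the profile currently records. This is just a sketch; the grep pattern assumes the usual <gpnp:Network .../> elements found in profile.xml.
# Run from $CRS_HOME/gpnp/<hostname>/profiles/peer as the grid owner
grep -o '<gpnp:Network [^/]*/>' profile.xml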
·
Add the new private network interface eth3 to the cluster configuration using the oifcfg command. Note that at this point the eth3 interface need not be available or known to the OS; use the -global option when the interface is not yet available. Remember to use the same interface name (eth3) on all the nodes of the cluster. We need to pass the network interface name and the new subnet of the private IP (only the subnet of the private IP, not the private IP itself) to the oifcfg command; a quick way to derive the subnet is sketched after the note below.
[root@rac1 bin]# ./oifcfg setif -global eth3/172.16.10.0:cluster_interconnect,asm
NOTE: Please ensure all the nodes of the cluster are running before adding a new interface.
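If you are unsure which subnet value to pass to oifcfg setif, it can be derived from the planned private IP and netmask. The sketch below assumes the RHEL/OEL ipcalc utility is installed and uses the node 1 address from this scenario.
# ipcalc -n prints the network (subnet) for a given IP and netmask
ipcalc -n 172.16.10.11 255.255.255.0
# NETWORK=172.16.10.0  <-- this is the subnet passed to oifcfg setif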
·
Check whether the newly added interface is listed and the configuration details are correct. From 12c we have a new kind of network called the ASM network, and we can use the same private network for ASM. The output below shows that both eth2 and eth3 are used for cluster_interconnect (private IP) as well as for the ASM network.
[root@rac1 bin]# ./oifcfg getif
eth1  192.168.56.0  global  public
eth2  192.168.10.0  global  cluster_interconnect,asm
eth3  172.16.10.0   global  cluster_interconnect,asm
[root@rac1 bin]#
·
Stop the clusterware on all the nodes and make the necessary changes at the OS level to add the new interface and configure its IP. This task may require shutting down the node to add the new hardware and configuring a static private IP at the OS level; a sketch of such a configuration follows this paragraph. After the interface has been added successfully we can list it with the ifconfig command, as shown in the node outputs further below. Ensure the same interface name is used across all the nodes of the cluster.
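A minimal sketch of the OS-level change, assuming Oracle Linux/RHEL 6-style network scripts and the node 1 address from this scenario (adjust the device name, IP, and paths for your environment):
# Stop clusterware on the node first (run as root from the Grid home bin directory)
./crsctl stop crs
# Static configuration for eth3 on node 1
cat > /etc/sysconfig/network-scripts/ifcfg-eth3 <<'EOF'
DEVICE=eth3
BOOTPROTO=static
ONBOOT=yes
IPADDR=172.16.10.11
NETMASK=255.255.255.0
EOF
ifup eth3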
Node 1:
eth3      Link encap:Ethernet  HWaddr 08:00:27:C6:F8:07
          inet addr:172.16.10.11  Bcast:172.16.10.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fec6:f807/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1818 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1822 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:572051 (558.6 KiB)  TX bytes:447108 (436.6 KiB)
Node 2:
eth3      Link encap:Ethernet  HWaddr 08:00:27:6D:AD:3B
          inet addr:172.16.10.12  Bcast:172.16.10.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe6d:ad3b/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:18998 errors:0 dropped:0 overruns:0 frame:0
          TX packets:19424 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:13896372 (13.2 MiB)  TX bytes:16261621 (15.5 MiB)
·
Try to start the clusterware on all the nodes and check for any startup issues; a cluster-wide check is sketched after the output below. We can see the new interface eth3 listed in both the ifconfig and oifcfg iflist output.
[root@rac1 bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@rac1 bin]#
[root@rac2 bin]# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@rac2 bin]#
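Once crs has been started on every node, a cluster-wide health check from any one node is a quick way to confirm the whole stack is up; this uses the standard crsctl syntax (output not shown here).
# Run as root from the Grid home bin directory on any node
./crsctl check cluster -all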
[root@rac2 bin]# ./oifcfg getif
eth1  192.168.56.0  global  public
eth2  192.168.10.0  global  cluster_interconnect,asm
eth3  172.16.10.0   global  cluster_interconnect,asm
[root@rac2 bin]# ./oifcfg iflist -p -n -hdr
INTERFACE_NAME  SUBNET        TYPE     NETMASK
eth1            192.168.56.0  PRIVATE  255.255.255.0
eth3            172.16.10.0   PRIVATE  255.255.255.0
eth3            169.254.0.0   UNKNOWN  255.255.0.0
[root@rac2 bin]#
·
I am using 12c RAC, and I see that the ora.storage resource does not come up on the node. This is because the ASM listener is still configured with the previous private IP subnet; we need to update the ASM listener with the new subnet. (A quick way to scan the agent trace for the relevant errors is sketched after the output below.)
2019-09-18 03:02:10.787 [ORAROOTAGENT(3058)]CRS-5818: Aborted command 'start' for resource 'ora.storage'. Details at (:CRSAGF00113:) {0:9:3} in /u01/app/oracle/diag/crs/rac1/crs/trace/ohasd_orarootagent_root.trc.
2019-09-18 03:02:16.020 [ORAROOTAGENT(3058)]CRS-5017: The resource action "ora.storage start" encountered the following error:
2019-09-18 03:02:16.020+Storage agent start action aborted. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/rac1/crs/trace/ohasd_orarootagent_root.trc".
2019-09-18 03:02:16.023 [OHASD(2894)]CRS-2757: Command 'Start' timed out waiting for response from the resource 'ora.storage'. Details at (:CRSPE00221:) {0:9:3} in /u01/app/oracle/diag/crs/rac1/crs/trace/ohasd.trc.
2019-09-18 03:02:22.670 [OSYSMOND(10554)]CRS-8500: Oracle Clusterware OSYSMOND process is starting with operating system process ID 10554
2019-09-18 03:02:27.972 [ORAROOTAGENT(3058)]CRS-5019: All OCR locations are on ASM disk groups [DATA], and none of these disk groups are mounted. Details are at "(:CLSN00140:)" in "/u01/app/oracle/diag/crs/rac1/crs/trace/ohasd_orarootagent_root.trc".
[root@rac1 bin]# ./crsctl stat res -t -init
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac1                     STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.crf
      1        ONLINE  OFFLINE                               STABLE
ora.crsd
      1        ONLINE  OFFLINE                               STABLE
ora.cssd
      1        ONLINE  ONLINE       rac1                     STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       rac1                     STABLE
ora.ctssd
      1        ONLINE  ONLINE       rac1                     OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       rac1                     STABLE
ora.evmd
      1        ONLINE  INTERMEDIATE rac1                     STABLE
ora.gipcd
      1        ONLINE  ONLINE       rac1                     STABLE
ora.gpnpd
      1        ONLINE  ONLINE       rac1                     STABLE
ora.mdnsd
      1        ONLINE  ONLINE       rac1                     STABLE
ora.storage
      1        ONLINE  OFFLINE      rac1                     STARTING
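To dig into why ora.storage stays OFFLINE, the orarootagent trace referenced in the messages above can be scanned for the storage agent entries. A simple sketch, run as root, using the trace path from the log output:
# Show the most recent ora.storage related entries in the agent trace
grep -i 'ora.storage\|CLSN00107\|CLSN00140' /u01/app/oracle/diag/crs/rac1/crs/trace/ohasd_orarootagent_root.trc | tail -20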
·
To change the subnet of the ASM listener, we need to add a new ASM listener with the new subnet and then remove the previous one; a quick verification of the new listener is sketched at the end of this step.
[oracle@rac2 bin]$ ./srvctl add listener -asmlistener -l ASMNEWLISTENER_ASM -subnet 172.16.10.0
[oracle@rac2 bin]$ ./srvctl update listener -listener ASMNET1LSNR_ASM -asm -remove -force
[root@rac1 bin]# ./srvctl config listener -asmlistener
Name: ASMNEWLISTENER_ASM
Type: ASM Listener
Owner: oracle
Subnet: 172.16.10.0
Home: <CRS home>
End points: TCP:1527
Listener is enabled.
Listener is individually enabled on nodes:
Listener is individually disabled on nodes:
[root@rac1 bin]# ./srvctl config asm
ASM home: <CRS home>
Password file: +DATA/orapwASM
Backup of Password file:
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNEWLISTENER_ASM
[root@rac1 bin]#
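At this point it is worth confirming that the new ASM listener is actually up and listening on the new subnet. The commands below are standard srvctl/lsnrctl calls run as the grid owner from the Grid home bin directory (output not shown).
# Status of the new ASM listener across the cluster
./srvctl status listener -l ASMNEWLISTENER_ASM
# Endpoints should show the 172.16.10.x addresses on port 1527
./lsnrctl status ASMNEWLISTENER_ASM | grep -E 'HOST=|PORT='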
·
After making the above changes we can see that all the cluster services are up. Check in particular the ora.storage resource, which was down before.
[root@rac1 bin]# ./crsctl stat res -t -init
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac1                     STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.crf
      1        ONLINE  ONLINE       rac1                     STABLE
ora.crsd
      1        ONLINE  ONLINE       rac1                     STABLE
ora.cssd
      1        ONLINE  ONLINE       rac1                     STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       rac1                     STABLE
ora.ctssd
      1        ONLINE  ONLINE       rac1                     OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.drivers.acfs
      1        ONLINE  ONLINE       rac1                     STABLE
ora.evmd
      1        ONLINE  ONLINE       rac1                     STABLE
ora.gipcd
      1        ONLINE  ONLINE       rac1                     STABLE
ora.gpnpd
      1        ONLINE  ONLINE       rac1                     STABLE
ora.mdnsd
      1        ONLINE  ONLINE       rac1                     STABLE
ora.storage
      1        ONLINE  ONLINE       rac1                     STABLE
·
Check that all the nodes have joined the cluster (a quick check is sketched below) and, if required, delete the old network interface eth2 from the cluster configuration.
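A minimal way to confirm that every node is back and active before dropping eth2 (olsnodes ships with the Grid home):
# Run from the Grid home bin directory; -s shows Active/Inactive status per node
./olsnodes -s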
[root@rac1 bin]# ./oifcfg delif -global eth2
[root@rac1 bin]#
[root@rac1 bin]# ./oifcfg getif
eth1  192.168.56.0  global  public
eth3  172.16.10.0   global  cluster_interconnect,asm
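As a final sanity check, the interconnect actually in use by the running instances can be queried from GV$CLUSTER_INTERCONNECTS. The sketch below assumes an ASM instance named +ASM1 and the grid owner's environment already set; adjust for your setup.
# Every instance should report an eth3 / 172.16.10.x address
export ORACLE_SID=+ASM1
sqlplus -s / as sysasm <<'EOF'
set linesize 120
column name format a10
column ip_address format a18
column source format a35
select inst_id, name, ip_address, source from gv$cluster_interconnects;
EOF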