Two-node RAC: migrating the underlying storage for the OCR, voting disks, and the DATA and RECO ASM diskgroups.

This document demonstrates how the underlying storage for the OCR/voting disks and for the DATA and RECO ASM diskgroups can be migrated from one storage array to another in a two-node RAC cluster.
The activity is carried out online and needs no downtime: from 11g onwards, the OCR and voting disks can be moved without bringing down the cluster services or node apps. The Oracle version used in this scenario is 19c, but the same procedure applies to 11g and 12c as well.

Note: I restart the CRS stack on both RAC nodes at the end purely to confirm that the cluster services start up cleanly after the storage change.

The following step-wise approach moves the OCR/voting disks to a new ASM diskgroup and swaps the underlying storage disks for the DATA and RECO diskgroups.

1. Check the ASM diskgroup and disk path for the existing and new ASM disks.
SQL> col DISK_FILE_PATH for a30
set line 2000
SELECT
    NVL(a.name, '[CANDIDATE]')      disk_group_name
  , b.path                          disk_file_path
  , b.name                          disk_file_name
  , b.failgroup                     disk_file_fail_group
FROM
    v$asm_diskgroup a   RIGHT OUTER JOIN v$asm_disk b USING (group_number)
ORDER BY a.name;
SQL>  

DISK_GROUP_NAME                DISK_FILE_PATH                 DISK_FILE_NAME                 DISK_FILE_FAIL_GROUP
------------------------------ ------------------------------ ------------------------------ ------------------------------
PSIDATA                        /dev/sdc1                      PSIDATA_0000                   PSIDATA_0000
PSIRECO                        /dev/sdd1                      PSIRECO_0000                   PSIRECO_0000
OCRVOT                         /dev/sde1                      OCRVOT_0000                    OCRVOT_0000
OCRVOT                         /dev/sdg1                      OCRVOT_0002                    OCRVOT_0002
OCRVOT                         /dev/sdf1                      OCRVOT_0001                    OCRVOT_0001
[CANDIDATE]                    /dev/sdl1
[CANDIDATE]                    /dev/sdk1
[CANDIDATE]                    /dev/sdh1
[CANDIDATE]                    /dev/sdj1
[CANDIDATE]                    /dev/sdi1

10 rows selected.

SQL>

** All the disks showing as [CANDIDATE] are the new storage disks, which will be added into the cluster configuration; the ones already part of the cluster will be removed from it.
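
Before changing anything in the cluster, it is worth confirming that the new LUNs are visible with the same names, sizes, ownership and permissions on both RAC nodes. A minimal check, assuming the new devices are /dev/sdh1 through /dev/sdl1 and the second node is reachable as NODE5101 (adjust both to your environment):

# Run as the grid owner on node 1; the device list and peer hostname are assumptions.
for dev in /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1; do
  echo "== $dev =="
  ls -l "$dev"                                          # owner/group should match the existing ASM disks
  lsblk -no NAME,SIZE "$dev"                            # size as seen on node 1
  ssh NODE5101 "ls -l $dev; lsblk -no NAME,SIZE $dev"   # same device as seen on node 2
done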

2. Check the current OCR diskgroup and its integrity as the root user.
[root@NODE5001 bin]# $GRID_HOME/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     491684
         Used space (kbytes)      :      84588
         Available space (kbytes) :     407096
         ID                       :  603958846
         Device/File Name         :    +OCRVOT
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@NODE5001 bin]#
3. Check the GI cluster service status.
[root@NODE5001 bin]# $GRID_HOME/bin/crsctl stat res -init -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.crf
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.crsd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.cssd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.ctssd
      1        ONLINE  ONLINE       NODE5001             OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.gipcd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.gpnpd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.mdnsd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.storage
      1        ONLINE  ONLINE       NODE5001             STABLE
--------------------------------------------------------------------------------
[root@NODE5001 bin]#

4. Validate the current voting disk status.

[root@NODE5001 bin]# $GRID_HOME/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   d37054b8c8d64f29bfa7cc7d485eb30a (/dev/sde1) [OCRVOT]
 2. ONLINE   2da180539aed4fdcbf97de51b4c07cd7 (/dev/sdf1) [OCRVOT]
 3. ONLINE   d96a697fec024f86bf47df7e02ecfe56 (/dev/sdg1) [OCRVOT]
Located 3 voting disk(s).
[root@NODE5001 bin]#

5. Take a backup (export) of the current OCR to a location of your choice:

[root@NODE5001 bin]# $GRID_HOME/bin/ocrconfig -export /u01/software/OCR_BACKUP/ocr_backup_`date +%Y%m%d`.dmp
PROT-58: successfully exported the Oracle Cluster Registry contents to file '/u01/software/OCR_BACKUP/ocr_backup_20210820.dmp'
[root@NODE5001 bin]#

[root@NODE5001 bin]# ls -ltr /u01/software/OCR_BACKUP/
-rw------- 1 root root 208896 Aug 20 17:14 ocr_backup_20210820.dmp
[root@NODE5001 bin]#

6. In case you also wish to take a manual backup of the OCR, use the following command.

[root@NODE5001 bin]# $GRID_HOME/bin/ocrconfig -manualbackup

NODE5101     2021/08/20 17:11:44     +OCRVOT:/PSIdrclus/OCRBACKUP/backup_20210820_171144.ocr.286.1081098705     1944883066
NODE5001     2021/03/17 14:39:26     +OCRVOT:/PSIdrclus/OCRBACKUP/backup_20210317_143926.ocr.289.1067438367     1944883066
[root@NODE5001 bin]#
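
The automatic and manual OCR backups already registered with the cluster (and the node that owns them) can also be listed with the -showbackup option; a quick check via sudo:

# Lists both automatic and manual OCR backups with their locations.
sudo $GRID_HOME/bin/ocrconfig -showbackup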

7. Get the details of the currently available diskgroups.

SQL> col COMPATIBILITY for a13
col DATABASE_COMPATIBILITY for a13
set lin 2000
select GROUP_NUMBER,NAME,STATE,TYPE,TOTAL_MB,FREE_MB,REQUIRED_MIRROR_FREE_MB,USABLE_FILE_MB,COMPATIBILITY,DATABASE_COMPATIBILITY,VOTING_FILES from  v$asm_diskgroup;

GROUP_NUMBER NAME                           STATE       TYPE     TOTAL_MB    FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB COMPATIBILITY DATABASE_COMP V
------------ ------------------------------ ----------- ------ ---------- ---------- ----------------------- -------------- ------------- ----
           1 PSIDATA                        CONNECTED   EXTERN     511996     150848                       0         150848 19.0.0.0.0    10.1.0.0.0    N
           2 PSIRECO                        CONNECTED   EXTERN     307196     301784                       0         301784 19.0.0.0.0    10.1.0.0.0    N
           3 OCRVOT                         MOUNTED     NORMAL     221172     172804                   73724          49540 19.0.0.0.0    10.1.0.0.0    Y

8. Create a new diskgroup for the OCR and voting disks.

SQL>  set timing on
set time on
create diskgroup OCR_VOT normal redundancy disk '/dev/sdj1','/dev/sdk1','/dev/sdl1'
attribute 'compatible.rdbms'='11.2.0.0', 'compatible.asm'='19.0.0.0';

Diskgroup created.

9. Check the status of the newly created diskgroup.

SQL> col COMPATIBILITY for a13
col DATABASE_COMPATIBILITY for a13
set lin 2000
select GROUP_NUMBER,NAME,STATE,TYPE,TOTAL_MB,FREE_MB,REQUIRED_MIRROR_FREE_MB,USABLE_FILE_MB,COMPATIBILITY,DATABASE_COMPATIBILITY,VOTING_FILES from  v$asm_diskgroup;

GROUP_NUMBER NAME                           STATE       TYPE     TOTAL_MB    FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB COMPATIBILITY DATABASE_COMP V
------------ ------------------------------ ----------- ------ ---------- ---------- ----------------------- -------------- ------------- ----
           1 PSIDATA                        MOUNTED     EXTERN     511996     150848                       0         150848 19.0.0.0.0    10.1.0.0.0    N
           2 PSIRECO                        MOUNTED     EXTERN     307196     301784                       0         301784 19.0.0.0.0    10.1.0.0.0    N
           3 OCRVOT                         MOUNTED     NORMAL     221172     172804                   73724          49540 19.0.0.0.0    10.1.0.0.0    Y
           4 OCR_VOT                        MOUNTED     NORMAL     221181     220986                   73727          73629 19.0.0.0.0    11.2.0.0.0    N
SQL>

** Make sure the new diskgroup is mounted on all the nodes (a mount sketch for the second node follows the query below).

SQL> select name,state,usable_file_mb,total_mb,free_mb,required_mirror_free_mb from v$asm_diskgroup;

NAME                           STATE       USABLE_FILE_MB   TOTAL_MB    FREE_MB REQUIRED_MIRROR_FREE_MB
------------------------------ ----------- -------------- ---------- ---------- -----------------------
PSIDATA                        MOUNTED             150848     511996     150848                       0
PSIRECO                        MOUNTED             301784     307196     301784                       0
OCRVOT                         MOUNTED              49540     221172     172804                   73724
OCR_VOT                        MOUNTED              73565     221181     220857                   73727
SQL>
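
If the new diskgroup shows as DISMOUNTED on the second ASM instance, it can be mounted there before moving on. A short sketch for the second node, assuming the grid environment is set there and the local ASM SID is +ASM2:

# On the second node, as the grid user, with ORACLE_HOME pointing at the grid home.
export ORACLE_SID=+ASM2        # assumption: ASM instance name on node 2
asmcmd mount OCR_VOT
asmcmd lsdg OCR_VOT            # State should now report MOUNTED
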
10. Move the OCR from the existing diskgroup (OCRVOT) to the new diskgroup (OCR_VOT) by first adding the new OCR location.
    Note: I am using the grid binary-owner user, which has sudo privilege to root; if you have root credentials you can use those instead.
[grid@NODE5001 bin]$ sudo $GRID_HOME/bin/ocrconfig -add +OCR_VOT

11. Now check the OCR status after adding the new diskgroup to the OCR configuration.

[root@NODE5001 trace]# $GRID_HOME/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     491684
         Used space (kbytes)      :      84632
         Available space (kbytes) :     407052
         ID                       :  603958846
         Device/File Name         :    +OCRVOT
                                    Device/File integrity check succeeded
         Device/File Name         :   +OCR_VOT
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@NODE5001 trace]#

Monitor the alert log and the crsd trace file (crs/NODE17703/crs/trace/crsd.trc) to ensure that no errors are reported while the new OCR diskgroup is added:

2021-08-20 17:27:22.313 [CRSD(6049)]CRS-1007: The OCR/OCR mirror location was replaced by +OCR_VOT/PSIdrclus/OCRFILE/registry.255.1081099639.
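
One simple way to watch for these messages while the change is running is to tail the CRS alert log under ADR; the path below assumes the grid ORACLE_BASE of /u01/app/grid seen elsewhere in this cluster:

# 12c+ CRS alert log location under ADR; ORACLE_BASE and hostname are environment-specific.
tail -f /u01/app/grid/diag/crs/$(hostname -s)/crs/trace/alert.log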

12. Query the voting disk status; it will still be pointing to the old diskgroup.

[grid@NODE5001 bin]$ $GRID_HOME/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   d37054b8c8d64f29bfa7cc7d485eb30a (/dev/sde1) [OCRVOT]
 2. ONLINE   2da180539aed4fdcbf97de51b4c07cd7 (/dev/sdf1) [OCRVOT]
 3. ONLINE   d96a697fec024f86bf47df7e02ecfe56 (/dev/sdg1) [OCRVOT]
Located 3 voting disk(s).
[grid@NODE5001 bin]$

13. Now replace the voting disks with the newly created diskgroup OCR_VOT.

[grid@NODE5001 bin]$ sudo $GRID_HOME/bin/crsctl replace votedisk +OCR_VOT
Successful addition of voting disk 8d84c608635d4fe4bf24be76191ef59f.
Successful addition of voting disk c3b3c443f5224f2ebf8bde96f3501b52.
Successful addition of voting disk 804f8d85c7294f57bf825e4817c1c98b.
Successful deletion of voting disk d37054b8c8d64f29bfa7cc7d485eb30a.
Successful deletion of voting disk 2da180539aed4fdcbf97de51b4c07cd7.
Successful deletion of voting disk d96a697fec024f86bf47df7e02ecfe56.
Successfully replaced voting disk group with +OCR_VOT.
CRS-4266: Voting file(s) successfully replaced
[grid@NODE5001 bin]$

Monitor the alert log to ensure there are no errors:

2021-08-20 17:33:36.440 [OCSSD(3385)]CRS-1605: CSSD voting file is online: /dev/sdj1; details in /u01/app/grid/diag/crs/NODE5001/crs/trace/ocssd.trc.
2021-08-20 17:33:36.441 [OCSSD(3385)]CRS-1605: CSSD voting file is online: /dev/sdk1; details in /u01/app/grid/diag/crs/NODE5001/crs/trace/ocssd.trc.
2021-08-20 17:33:36.441 [OCSSD(3385)]CRS-1605: CSSD voting file is online: /dev/sdl1; details in /u01/app/grid/diag/crs/NODE5001/crs/trace/ocssd.trc.
2021-08-20 17:33:36.458 [OCSSD(3385)]CRS-1626: A Configuration change request completed successfully
2021-08-20 17:33:36.558 [OCSSD(3385)]CRS-1601: CSSD Reconfiguration complete. Active nodes are NODE5001 NODE5101 .

14. Now check the voting disk status; it points to the new diskgroup OCR_VOT.

[grid@NODE5001 bin]$ $GRID_HOME/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   8d84c608635d4fe4bf24be76191ef59f (/dev/sdj1) [OCR_VOT]
 2. ONLINE   c3b3c443f5224f2ebf8bde96f3501b52 (/dev/sdk1) [OCR_VOT]
 3. ONLINE   804f8d85c7294f57bf825e4817c1c98b (/dev/sdl1) [OCR_VOT]
Located 3 voting disk(s).
[grid@NODE5001 bin]$
15. Now remove the old OCR location from the cluster configuration.
[grid@NODE5001 bin]$ sudo $GRID_HOME/bin/ocrconfig -delete +OCRVOT
[grid@NODE5001 bin]$

** Monitor the alert log to ensure the operation completed successfully.

2021-08-20 17:31:18.494 [CRSD(6049)]CRS-1010: The OCR mirror location +OCRVOT/PSIdrclus/OCRFILE/registry.255.1067259605 was removed.

16. Revalidate the OCR status.

[grid@NODE5001 bin]$ $GRID_HOME/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     491684
         Used space (kbytes)      :      84632
         Available space (kbytes) :     407052
         ID                       :  603958846
         Device/File Name         :   +OCR_VOT
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

[grid@NODE5001 bin]$

17. Now re-create the ASM SPFILE in the new diskgroup so the cluster points at the new location.

SQL> create pfile='/tmp/asmspfile.ora' from spfile;
File created.
SQL>  create spfile='+OCR_VOT' from pfile='/tmp/asmspfile.ora';
File created.
SQL>
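
A quick way to confirm which SPFILE the cluster has registered for ASM, without reading the whole GPnP profile, is asmcmd spget:

# Prints the ASM SPFILE path currently recorded in the GPnP profile.
asmcmd spget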

18. Let's recheck using gpnptool to ensure the SPFILE is pointing to the new diskgroup in the cluster (GPnP) profile.

[grid@NODE5001 bin]$ $ORACLE_HOME/bin/gpnptool get
Warning: some command line parameters were defaulted. Resulting command line:
         /u01/app/19.0.0/grid_home/bin/gpnptool.bin get -o-

<?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" ProfileSequence="6" ClusterUId="a41ec45d9d3a6f7fffe1b140bace941c" ClusterName="PSIdrclus" PALocation=""><gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*"><gpnp:Network id="net1" IP="10.13.75.0" Adapter="ens192" Use="public"/><gpnp:Network id="net2" IP="10.21.7.0" Adapter="ens256" Use="asm,cluster_interconnect"/></gpnp:HostNetwork></gpnp:Network-Profile><orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/><orcl:ASM-Profile id="asm" DiscoveryString="/dev/sd*" SPFile="+OCR_VOT/PSIdrclus/ASMPARAMETERFILE/registry.253.1081100211" Mode="remote" Extended="false"/><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>UjQf1EcKTeONpOPSphLNUVmVJV8=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>k9K+Y0BnUGrjXrlXZwaf/0UQZR3XztmD1nAObRfdDLE9qA4oTVGG1YnN2+T58n9SH+FpYKmdcvWPZ1orenghqNdvgsQL174ZKv3Cw5XWHgHxcPxfdG4nxYOzdl8W5c22plHoKJWCnT+DK08MJmWJo7cN38OTzwRBRGBCNDeraVo=</ds:SignatureValue></ds:Signature></gpnp:GPnP-Profile>
Success.
[grid@NODE5001 bin]$
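
Since the profile is printed as one large XML blob, it is easier to pull out just the SPFile attribute; a small helper, assuming gpnptool is run from the grid environment:

# Show only the ASM SPFile entry from the GPnP profile output.
$ORACLE_HOME/bin/gpnptool get 2>/dev/null | grep -o 'SPFile="[^"]*"'
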
19. Now copy the ASM password file to the new diskgroup.
[grid@NODE5001 bin]$ asmcmd
ASMCMD> pwget --asm
+OCRVOT/orapwASM
ASMCMD> pwcopy +OCRVOT/orapwASM +OCR_VOT/orapwASM
copying +OCRVOT/orapwASM -> +OCR_VOT/orapwASM
ASMCMD> ls -lt  +OCR_VOT/orapwASM
Type      Redund  Striped  Time             Sys  Name
PASSWORD  HIGH    COARSE   AUG 20 17:00:00  N    orapwASM => +OCR_VOT/ASM/PASSWORD/pwdasm.256.1081100361

20. Check the current password file location registered for ASM before modifying it.

[grid@NODE5001 bin]$ srvctl config asm
ASM home: <CRS home>
Password file: +OCRVOT/orapwASM
Backup of Password file: +OCRVOT/orapwASM_backup
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM
[grid@NODE5001 bin]$

21. Change the password file location in the cluster configuration.

[grid@NODE5001 bin]$ $GRID_HOME/bin/srvctl modify asm -pwfile +OCR_VOT/orapwASM
[grid@NODE5001 bin]$
[grid@NODE5001 bin]$ srvctl config asm
ASM home: <CRS home>
Password file: +OCR_VOT/orapwASM
Backup of Password file: +OCRVOT/orapwASM_backup
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM
[grid@NODE5001 bin]$
[grid@NODE5001 bin]$

22. Validate the cluster resource status.

[grid@NODE5001 bin]$ $GRID_HOME/bin/crsctl stat res -init -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.crf
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.crsd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.cssd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.ctssd
      1        ONLINE  ONLINE       NODE5001             OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.gipcd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.gpnpd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.mdnsd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.storage
      1        ONLINE  ONLINE       NODE5001             STABLE
--------------------------------------------------------------------------------

[grid@NODE5001 bin]$ $GRID_HOME/bin/crsctl check cluster -all
**************************************************************
NODE5001:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
NODE5101:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[grid@NODE5001 bin]$

23. Restart CRS node by node to ensure the cluster services come up without issue using the new OCR/voting disk location.

[grid@NODE5001 bin]$ sudo $GRID_HOME/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Avai
[grid@NODE5001 bin]$ 
[grid@NODE5001 bin]$ sudo $GRID_HOME/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[grid@NODE5001 bin]$
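
Once node 1 is fully back (all resources ONLINE), repeat the same stop/start on the second node so that only one node is down at a time. A sketch, assuming the second node is NODE5101 and the same sudo setup:

# On NODE5101, as the grid user with sudo to root.
sudo $GRID_HOME/bin/crsctl stop crs      # wait for a clean shutdown
sudo $GRID_HOME/bin/crsctl start crs
$GRID_HOME/bin/crsctl check crs          # CRS, CSS and EVM should report online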

24. Add the new disks to the DATA (PSIDATA) and RECO (PSIRECO) diskgroups.

SQL> ALTER DISKGROUP PSIDATA ADD DISK '/dev/sdh1' rebalance power 8;
Diskgroup altered.

25. Monitor the rebalance operation until it completes.

SQL>  select * from v$asm_operation;
GROUP_NUMBER OPERA PASS      STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES ERROR_CODE                                       CON_ID
------------ ----- --------- ---- ---------- ---------- ---------- ---------- ---------- ----------- -------------------------------------------- ----------
           1 REBAL COMPACT   WAIT          8          8          0          0          0           0                                                       0
           1 REBAL REBALANCE RUN           8          8      13387      45140      15158           2                                                       0
           1 REBAL REBUILD   DONE          8          8          0          0          0           0                                                       0
SQL>  select * from v$asm_operation;
no rows selected

SQL> ALTER DISKGROUP PSIRECO ADD DISK '/dev/sdi1' rebalance power 8;
Diskgroup altered.
SQL>  select * from v$asm_operation;
no rows selected

26. Since the rebalance operation has completed, we can drop the old storage disks.

SQL> ALTER DISKGROUP PSIDATA  drop disk  PSIDATA_0000 rebalance power 8;
Diskgroup altered.
SQL> alter diskgroup PSIRECO drop disk PSIRECO_0000 rebalance power 8;
Diskgroup altered.

27. Monitor the rebalance operation triggered by the disk drops.

SQL>  select * from v$asm_operation;
no rows selected
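
Before handing the old LUNs back to the storage/OS team, confirm that ASM has fully released them: once the drop-triggered rebalance finishes, the dropped disks should show HEADER_STATUS = FORMER. A quick check from the grid user, using the old DATA/RECO device paths from step 1:

# Dropped disks should report FORMER (no longer members of any diskgroup).
sqlplus -s / as sysasm <<'EOF'
set lines 200
col path for a20
select path, header_status, mode_status
from   v$asm_disk
where  path in ('/dev/sdc1', '/dev/sdd1');
EOF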

28. Update the OCR backup location to the new diskgroup.

[grid@NODE5001 bin]$ sudo ./ocrconfig -backuploc +OCR_VOT

29. Initiate a test OCR backup and verify that it goes to the new diskgroup.

[grid@NODE5001 bin]$ sudo $GRID_HOME/bin/ocrconfig -manualbackup
NODE5001     2021/08/20 18:31:10     +OCR_VOT:/PSIdrclus/OCRBACKUP/backup_20210820_183110.ocr.257.1081103473     1944883066
NODE5101     2021/08/20 17:11:44     +OCRVOT:/PSIdrclus/OCRBACKUP/backup_20210820_171144.ocr.286.1081098705     1944883066
NODE5001     2021/03/17 14:39:26     +OCRVOT:/PSIdrclus/OCRBACKUP/backup_20210317_143926.ocr.289.1067438367     1944883066
[grid@NODE5001 bin]$

Note: If the MGMTDB database is present in the cluster configuration, follow the steps here to migrate MGMTDB to the new diskgroup.

30. Check that the old OCR/voting diskgroup is not being used by any database or process other than ASM.

SQL> select a.instance_name,a.db_name,a.status from v$asm_client a, v$asm_diskgroup b
where a.group_number=b.group_number and b.name='OCRVOT';

INSTANCE_NAME         DB_NAME  STATUS
--------------------- -------- ------------
+ASM1                 +ASM     CONNECTED

31. Dismount the old OCR diskgroup on all nodes except node 1.

SQL> alter diskgroup OCRVOT dismount;
Diskgroup altered.
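
The dismount only affects the ASM instance it is issued on, so in a larger cluster it would be repeated on every node except the one that will perform the drop. A quick check that OCRVOT is now dismounted everywhere except that node:

# STATE should be DISMOUNTED on all instances except the one used for the drop.
sqlplus -s / as sysasm <<'EOF'
col name for a12
select inst_id, name, state from gv$asm_diskgroup where name = 'OCRVOT';
EOF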

32. Drop the old OCR diskgroup by logging in to the ASM instance as SYSASM on node 1.

SQL> drop diskgroup OCRVOT including contents;
Diskgroup dropped.

33. Validate the diskgroup and disk path details. All the old disks and the old OCR diskgroup are no longer part of the cluster configuration, and the OS admin can now remove them from the cluster hosts.

SQL> col DISK_FILE_PATH for a30
set line 2000
SELECT
    NVL(a.name, '[CANDIDATE]')      disk_group_name
  , b.path                          disk_file_path
  , b.name                          disk_file_name
  , b.failgroup                     disk_file_fail_group
FROM
    v$asm_diskgroup a   RIGHT OUTER JOIN v$asm_disk b USING (group_number)
ORDER BY a.name;

DISK_GROUP_NAME                DISK_FILE_PATH                 DISK_FILE_NAME                 DISK_FILE_FAIL_GROUP
------------------------------ ------------------------------ ------------------------------ ------------------------------
DATA                           /dev/sdi1                      DATA_0001                      DATA_0001
OCR_VOT                        /dev/sdj1                      OCR_VOT_0002                   OCR_VOT_0002
OCR_VOT                        /dev/sdk1                      OCR_VOT_0001                   OCR_VOT_0001
OCR_VOT                        /dev/sdl1                      OCR_VOT_0000                   OCR_VOT_0000
RECO                           /dev/sdh1                      RECO_0001                      RECO_0001
[CANDIDATE]                    /dev/sdf1
[CANDIDATE]                    /dev/sdd1
[CANDIDATE]                    /dev/sdg1
[CANDIDATE]                    /dev/sde1
[CANDIDATE]                    /dev/sdc1
SQL>

34. Restart the CRS services node by node to ensure that they come up properly using the new OCR/voting disk location.

[grid@NODE1770 bin]$ sudo $GRID_HOME/bin/crsctl stop crs
CRS-4133: Oracle High Availability Services has been stopped.
[grid@NODE1770 bin]$
[grid@NODE1770 bin]$ sudo $GRID_HOME/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[grid@NODE1770 bin]$
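
After both nodes have been bounced, a final round of the earlier health checks confirms the stack is healthy on the new storage:

# Cluster health on both nodes, voting files, and OCR integrity after the restarts.
$GRID_HOME/bin/crsctl check cluster -all
$GRID_HOME/bin/crsctl query css votedisk
$GRID_HOME/bin/ocrcheck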

** The storage disks now shown as [CANDIDATE] are the old disks and can be removed from the host configuration by the OS admin. This completes our storage migration.
