Connection using asmsnmp fails with ORA-01017

+ASM_clst4: Failed. ORA-01017: invalid username/password; logon denied The Connect Descriptor was (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = dbclient01-vip.xxyyzz.com)(PORT = 1521))(ADDRESS = (PROTOCOL = TCP)(HOST = dbclient03-vip.xxyyzz.com)(PORT = 1521)))(CONNECT_DATA = (SERVICE_NAME = +ASM)(SERVER = DEDICATED)))

Solution:
List the users from the local Oracle ASM password file

ASMCMD> lspwusr
Username sysdba sysoper sysasm
SYS TRUE TRUE TRUE
ASMSNMP TRUE FALSE FALSE
ASMCMD>

In case you don't know the current asmsnmp password and need to reset it, use the following command.

ASMCMD> orapwusr --modify asmsnmp
Enter password: ****************
ASMCMD>
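Once the password is reset, it is worth verifying the account through the same listener path that failed. Below is a minimal sketch that assembles the EZConnect string from the connect descriptor in the error above; the sqlplus call itself is left commented out so the sketch only prints the command to run.

```shell
#!/bin/sh
# Host/port/service taken from the failing connect descriptor above.
host=dbclient01-vip.xxyyzz.com
port=1521
service=+ASM
conn="//${host}:${port}/${service}"
echo "verify with: sqlplus -L \"asmsnmp@${conn} as sysdba\""
# sqlplus -L "asmsnmp@${conn} as sysdba"
```

The `-L` flag makes sqlplus exit after one failed logon instead of re-prompting, which keeps a wrong password from locking the account through repeated attempts.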


ORA-27054: NFS file system where the file is created or resides is not mounted with correct options

RMAN> RUN
{
sql 'alter system archive log current';
BACKUP
TAG ImageCopy_DB_Backup
FORMAT '/Bkp/BS/ImageBkps/MLL/GPRD_DB_bk_%s_%p_%t_%T'
AS COPY DATABASE ;
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
RELEASE CHANNEL ch04;
RELEASE CHANNEL ch05;
}

using target database control file instead of recovery catalog
..
Starting backup at 19-SEP-21
channel ch00: starting datafile copy
input datafile file number=00003 name=+DA_DM01/GPRD_dr/datafile/undotbs1_01.dbf
channel ch01: starting datafile copy
input datafile file number=00004 name=+DA_DM01/GPRD_dr/datafile/undotbs2_01.dbf
RMAN-03009: failure of backup command on ch00 channel at 09/19/2021 10:43:14
ORA-19504: failed to create file "/Bkp/BS/ImageBkps/MLL/GPRD_DB_bk_144137_1_1083667393_20210919"
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
Additional information: 3
Additional information: 12

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of backup command on ch05 channel at 09/19/2021 10:43:15
ORA-19504: failed to create file "/Bkp/BS/ImageBkps/MLL/GPRD_DB_bk_144142_1_1083667393_20210919"
ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
Additional information: 3
Additional information: 12

Check if DNFS is enabled

SQL> select * from v$dnfs_servers;

no rows selected

SQL>

Since DNFS is not enabled, the RMAN image copy backup fails. Let's enable DNFS on the RDBMS home as follows:

$ make -f $ORACLE_HOME/rdbms/lib/ins_rdbms.mk dnfs_on

$

select * from v$dnfs_servers;

    ID SVRNAME          DIRNAME          MNTPORT  NFSPORT    WTMAX    RTMAX
------ ---------------- ---------------- -------- -------- -------- --------
     1 xx.xx.xx.152     /export/Bkp         20048     2049  1048576  1048576

[oracle@dbhost ~]$ grep /Bkp /etc/vfstab
1xx.xxx.xx.152:/export/Bkp - /Bkp nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=4
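Independently of dNFS, ORA-27054 means the kernel NFS mount options did not pass Oracle's checks. Below is a small sketch that audits a vfstab/fstab-style entry for commonly required options; the entry is the sample from above, and the option list is an assumption — consult My Oracle Support for your platform's exact requirements.

```shell
#!/bin/sh
# Sample mount entry (on a real host, read it from /etc/vfstab or /etc/fstab).
line='1xx.xxx.xx.152:/export/Bkp - /Bkp nfs - yes rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=4'

missing=""
for opt in hard rsize=131072 wsize=131072 proto=tcp; do
    case ",$line," in
        *",$opt,"*) ;;                  # option present in the entry
        *) missing="$missing $opt" ;;   # option absent
    esac
done

if [ -z "$missing" ]; then
    echo "mount options OK"
else
    echo "missing options:$missing"
fi
```

Run against the sample entry, this reports the mount options as OK, which matches the vfstab line shown above.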

Initiated the RMAN backup again, and it completed successfully.

The RMAN output below is trimmed to reduce the number of lines.

RMAN> RUN
{
sql 'alter system archive log current';
BACKUP
TAG ImageCopy_DB_Backup
FORMAT '/Bkp/BS/ImageBkps/MLL/GPRD_DB_bk_%s_%p_%t_%T'
AS COPY DATABASE ;
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
RELEASE CHANNEL ch04;
RELEASE CHANNEL ch05;
}

using target database control file instead of recovery catalog

———

———

Finished backup at 19-OCT-21

RMAN>

Hope it helps!

ORA-29548: Java system class reported: release of Java system classes in the database (12.1.0.2.160419 1.6) does not match that of the oracle executable (12.1.0.2.170117 1.6)

This error is typically caused when the post-patch JVM update steps were not completed properly. The workaround is to run the JVM update as follows:

SQL> select dbms_java.get_jdk_version() from dual;
select dbms_java.get_jdk_version() from dual
*
ERROR at line 1:
ORA-29548: Java system class reported: release of Java system classes in the
database (12.1.0.2.160419 1.6) does not match that of the oracle executable
(12.1.0.2.170117 1.6)
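The two releases quoted in the message are the Java system classes loaded inside the database versus the patched oracle binary; when they differ, the post-patch JVM step was skipped. A minimal sketch of that comparison, with the versions hard-coded from the error above:

```shell
#!/bin/sh
# Versions copied from the ORA-29548 text above.
db_classes="12.1.0.2.160419"   # Java system classes in the database
executable="12.1.0.2.170117"   # oracle executable (patched home)

if [ "$db_classes" != "$executable" ]; then
    echo "mismatch: complete the post-patch JVM steps (update_javavm_db.sql)"
else
    echo "JVM classes match the oracle executable"
fi
```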

SQL> select comp_id,status,version,modified from dba_registry;

COMP_ID    STATUS   VERSION      MODIFIED
---------- -------- ------------ --------------------
DV VALID 12.1.0.2.0 25-MAY-2019 03:14:29
OLS VALID 12.1.0.2.0 18-FEB-2022 14:19:09
OWM VALID 12.1.0.2.0 25-MAY-2019 03:14:14
XDB VALID 12.1.0.2.0 25-MAY-2019 03:13:36
CATALOG VALID 12.1.0.2.0 25-MAY-2019 03:13:25
CATPROC VALID 12.1.0.2.0 25-MAY-2019 03:13:26
JAVAVM VALID 12.1.0.2.0 25-MAY-2019 03:14:19
XML VALID 12.1.0.2.0 25-MAY-2019 03:14:21
CATJAVA VALID 12.1.0.2.0 25-MAY-2019 03:14:22
RAC VALID 12.1.0.2.0 25-MAY-2019 03:14:23

10 rows selected.

Run the following script as a workaround to update the JVM

SQL> @?/javavm/install/update_javavm_db.sql
SQL> SET FEEDBACK 1
SQL> SET NUMWIDTH 10
SQL> SET LINESIZE 80
SQL> SET TRIMSPOOL ON
SQL> SET TAB OFF
SQL> SET PAGESIZE 100
SQL>
SQL> alter session set "_ORACLE_SCRIPT"=true;

Session altered.

SQL>
SQL> -- If Java is installed, do CJS.
SQL>
SQL> -- If CJS can deal with the SROs inconsistent with the new JDK,
SQL> -- the drop_sros() call here can be removed.
SQL> call initjvmaux.drop_sros();

Call completed.

SQL>
SQL> create or replace java system;
2 /

Java created.

SQL>
SQL> update dependency$
2 set p_timestamp=(select stime from obj$ where obj#=p_obj#)
3 where (select stime from obj$ where obj#=p_obj#)!=p_timestamp and
4 (select type# from obj$ where obj#=p_obj#)=29 and
5 (select owner# from obj$ where obj#=p_obj#)=0;

0 rows updated.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> alter session set "_ORACLE_SCRIPT"=false;

Session altered.

After running the JVM update, the Java version is reported without the error:

SQL> select dbms_java.get_jdk_version() from dual;

DBMS_JAVA.GET_JDK_VERSION()
--------------------------------------------------------------------------------
1.6.0_141

CRS-8501 CRS-2316 Cannot initialize GPnP, CLSGPNP_INIT_FAILED (GPnP facility initialization failed)

root@dbhost040102:/u01/app/12.2.0.1/grid/bin# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
root@dbhost040102:/u01/app/12.2.0.1/grid/bin#

CRS alertlog shows GPnP initialization failure during the CRS startup.

2022-11-12 06:06:41.472 [CLSECHO(1414)]CRS-10132: Oracle High Availability Service was restarted at least 10 times within the last 60 seconds. Stop auto-restarting Oracle High Availability Service.
2022-11-12 06:06:56.664 [GPNPD(1267)]CRS-2316: Cannot initialize GPnP, CLSGPNP_INIT_FAILED (GPnP facility initialization failed).
2022-11-12 06:06:56.673 [GPNPD(1267)]CRS-8501: Oracle Clusterware GPNPD process with operating system process ID 1267 is ending with return value 3

Download the troubleshooting script, unzip it in /tmp, and run it from the problematic node as per the steps below:

$ cd /tmp/
$ tar -zxvf startUpCheck_[OS_Platform].tar.gz
$ chmod +x startUpCheck_[OS_Platform].{sh,py}
You will find two files: startUpCheck_[OS_Platform].sh and startUpCheck_[OS_Platform].py. Execute the shell script as the "root" user as follows:

# ./startUpCheck_[OS_Platform].sh -n <node list> -i <private/asm interface list>

where
-n => list of nodes in the cluster
-i => list of private/asm interfaces

root@dbhost040102:/tmp/gi_scrpt# ./startUpCheck_Solaris.sh -n dbhost020102,dbhost040102 -i eth1,eth2
Logfile location : /tmp/gi_scrpt/crsstartup_dbhost040102_2022-11-12.12:25:52.log

An error occurred while executing '/u01/app/12.2.0.1/grid/bin/gpnptool get' command. Refer log for details
Verifying if script is executed by root user ...PASSED
Verifying runlevel ...PASSED
Verifying if the environment is STANDALONE or RAC ...RAC
Verifying the provided node list against information in /u01/app/12.2.0.1/grid/crs/install/crsconfig_params ...PASSED
Verifying the provided private/asm interface list against information fetched from GPNP ...FAILED
        RESULT: Warning!!! Provided private interconnect details found incorrect... Proceeding with autodetected private interface details
Verifying GI Home details ...DONE
Verifying '/u01/app/12.2.0.1/grid/dbs' is owned by 'grid' user... PASSED
Verifying ownership and permissions on /u01/app/12.2.0.1/grid/bin/oracle ...PASSED
Verifying mount options for GI Home mount point ...PASSED
Verifying OLR integrity ...FAILED
Cause: OLR is corrupted

Status of Oracle Local Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     409568
         Used space (kbytes)      :        896
         Available space (kbytes) :     408672
         ID                       :  158964337
         Device/File Name         : /u01/app/12.2.0.1/grid/cdata/dbhost040102.olr
                                    Device/File integrity check succeeded

         Local registry integrity check succeeded

         Logical corruption check failed

The output of the script above shows that the OLR is corrupted on the node where the script was invoked. The solution is to restore the OLR from the last successful backup as follows:

Before proceeding with the OLR restoration, use the following command to ensure the GI stack is completely down on the problematic node and ohasd.bin is not running.

root@dbhost040102:/tmp/gi_scrpt# ps -ef| grep ohasd.bin
    root 45843 31954   0 12:56:51 pts/8       0:00 grep ohasd.bin
root@dbhost040102:/tmp/gi_scrpt#

In case some services are still up, use the following command to forcefully stop the CRS services.

root@dbhost040102:/u01/app/12.2.0.1/grid/bin# ./crsctl stop crs -f
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Stop failed, or completed with errors.
root@dbhost040102:/u01/app/12.2.0.1/grid/bin# 
Check the available OLR backups:
root@dbhost040102:/u01/app/12.2.0.1/grid/bin# ./ocrconfig -local -showbackup

dbhost040102     2020/06/16 00:39:48     /u01/app/12.2.0.1/grid/cdata/dbhost040102/autobackup_20200616_003948.olr     2808044450

dbhost040102     2020/06/15 00:39:45     /u01/app/12.2.0.1/grid/cdata/dbhost040102/autobackup_20200615_003945.olr     2808044450

dbhost040102     2019/10/22 02:50:19     /u01/app/12.2.0.1/grid/cdata/dbhost040102/backup_20191022_025019.olr     185980871

dbhost040102     2016/06/22 20:27:05     /u01/app/12.1.0.2/grid/cdata/dbhost040102/backup_20160622_202705.olr     3118584562
root@dbhost040102:/u01/app/12.2.0.1/grid/bin# 
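From a -showbackup listing like this, the newest backup can be picked mechanically rather than by eye. A sketch with sample output inlined via a here-document (on the node you would pipe the real `ocrconfig -local -showbackup` output instead):

```shell
#!/bin/sh
# Fields per line: node  date  time  backup_path  checksum.
# Print "date time path", sort descending, keep the newest, emit its path.
latest=$(
awk 'NF {print $2, $3, $4}' <<'EOF' | sort -r | head -1 | awk '{print $3}'
dbhost040102  2020/06/16 00:39:48  /u01/app/12.2.0.1/grid/cdata/dbhost040102/autobackup_20200616_003948.olr  2808044450
dbhost040102  2020/06/15 00:39:45  /u01/app/12.2.0.1/grid/cdata/dbhost040102/autobackup_20200615_003945.olr  2808044450
dbhost040102  2019/10/22 02:50:19  /u01/app/12.2.0.1/grid/cdata/dbhost040102/backup_20191022_025019.olr  185980871
EOF
)
echo "restore candidate: $latest"
```

Lexicographic descending sort works here because the timestamps are in YYYY/MM/DD HH:MM:SS form.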
Use the last successful backup to restore the OLR

For example:

root@dbhost040102:/u01/app/12.2.0.1/grid/bin# ./ocrconfig -local -restore /u01/app/12.2.0.1/grid/cdata/dbhost040102/autobackup_20200616_003948.olr

root@dbhost040102:/u01/app/12.2.0.1/grid/bin#

The OLR is restored; let's try to bring up CRS now.

root@dbhost040102:/u01/app/12.2.0.1/grid/bin# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
root@dbhost040102:/u01/app/12.2.0.1/grid/bin#


root@dbhost040102:/u01/app/12.2.0.1/grid/bin# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
root@dbhost040102:/u01/app/12.2.0.1/grid/bin# ./crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
root@dbhost040102:/u01/app/12.2.0.1/grid/bin#

CRS services have started up successfully. The CRS alert log clearly shows the clusterware processes starting.

CRS Alertlog:
2022-11-12 13:13:35.011 [OCSSD(74199)]CRS-1601: CSSD Reconfiguration complete. Active nodes are dbhost020102 dbhost040102 .
2022-11-12 13:13:37.229 [OCSSD(74199)]CRS-1720: Cluster Synchronization Services daemon (CSSD) is ready for operation.
2022-11-12 13:13:37.678 [OCTSSD(76610)]CRS-8500: Oracle Clusterware OCTSSD process is starting with operating system process ID 76610
2022-11-12 13:13:38.581 [OCTSSD(76610)]CRS-2403: The Cluster Time Synchronization Service on host dbhost040102 is in observer mode.
2022-11-12 13:13:40.067 [OCTSSD(76610)]CRS-2407: The new Cluster Time Synchronization Service reference node is host dbhost020102.
2022-11-12 13:13:40.068 [OCTSSD(76610)]CRS-2401: The Cluster Time Synchronization Service started on host dbhost040102.
2022-11-12 13:13:50.647 [OSYSMOND(77856)]CRS-8500: Oracle Clusterware OSYSMOND process is starting with operating system process ID 77856
2022-11-12 13:13:51.945 [CRSD(77979)]CRS-8500: Oracle Clusterware CRSD process is starting with operating system process ID 77979
2022-11-12 13:14:02.883 [CRSD(77979)]CRS-1012: The OCR service started on node dbhost040102.
2022-11-12 13:14:05.479 [CRSD(77979)]CRS-1201: CRSD started on node dbhost040102.
2022-11-12 13:14:10.836 [ORAAGENT(80013)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 80013
2022-11-12 13:14:11.035 [ORAAGENT(80058)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 80058
2022-11-12 13:14:11.422 [ORAROOTAGENT(80102)]CRS-8500: Oracle Clusterware ORAROOTAGENT process is starting with operating system process ID 80102
2022-11-12 13:15:22.874 [ORAAGENT(89757)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 89757

Admin Server Port is occupied by another process. Please stop process to free port.

Unable to stop the OMS:

[emXXX@XXX00C123 bin]$ ./emctl status oms
Oracle Enterprise Manager Cloud Control 13c Release 4
Copyright (c) 1996, 2020 Oracle Corporation. All rights reserved.
WebTier is Down
Oracle Management Server is Up
JVMD Engine is Up

BI Publisher Server is Down

[emXXX@XXX00C123 bin]$ ./emctl stop oms -all
Oracle Enterprise Manager Cloud Control 13c Release 4
Copyright (c) 1996, 2020 Oracle Corporation. All rights reserved.
Stopping Oracle Management Server…
WebTier Successfully Stopped
Node Manager Not Running
Oracle Management Server is Up
JVMD Engine is Up
BI Publisher Server is Down
Admin Server Port is occupied by another process. Please stop process to free port.
[emXXX@XXX00C123 bin]$

[emXXX@XXX00C123 bin]$ ./emctl stop oms -all -force
Oracle Enterprise Manager Cloud Control 13c Release 4
Copyright (c) 1996, 2020 Oracle Corporation. All rights reserved.
Stopping Oracle Management Server…
WebTier Successfully Stopped
Node Manager Not Running
Oracle Management Server is Up
JVMD Engine is Up

BI Publisher Server is Down
Admin Server Port is occupied by another process. Please stop process to free port.
[emXXX@XXX00C123 bin]$
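Before resorting to kill -9 by process name, it can help to confirm which PID actually owns the Admin Server port. A sketch follows; note that 7102 is only an assumed default for EM 13c, and `emctl status oms -details` reports the actual port for your installation.

```shell
#!/bin/sh
# Assumed port; confirm with: emctl status oms -details
port=7102

# ss lists listening TCP sockets with owning process info (-p needs privilege
# to show other users' PIDs). Fall back to a "free" message if nothing matches.
holder=$(ss -ltnp 2>/dev/null | grep ":$port ")
if [ -n "$holder" ]; then
    echo "port $port held by: $holder"
else
    echo "port $port appears free (or ss unavailable)"
fi
```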

[emXXX@XXX00C123 bin]$ ps -ef | grep EMGC_ADMINSERVER
emXXX 64487 407 0 12:55 pts/1 00:00:00 grep --color=auto EMGC_ADMINSERVER
emXXX 69903 69855 2 Oct30 ? 00:25:44 /oem/em13c/Middleware/oracle_common/jdk/bin/java -server -Xms256M -Xmx8192M -XX:CompileThreshold=8000 -XX:-DoEscapeAnalysis -XX:+UseCodeCacheFlushing -XX:ReservedCodeCacheSize=100M -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSClassUnloadingEnabled -cp /oem/em13c/Middleware/wlserver/server/lib/weblogic-launc

[emXXX@XXX00C123 bin]$ kill -9 69903 64487

[emXXX@XXX00C123 bin]$ ps -ef | grep java
emXXX 12661 12291 3 Oct30 ? 00:48:43 /oem/em13c/gc_inst/agent_13.4.0.0.0/oracle_common/jdk/bin/java -Xmx138M -XX:MaxMetaspaceSize=192M -server

[emXXX@XXX00C123 bin]$ kill -9 12661

[emXXX@XXX00C123 bin]$ ./emctl status oms
Oracle Enterprise Manager Cloud Control 13c Release 4
Copyright (c) 1996, 2020 Oracle Corporation. All rights reserved.
WebTier is Down
Oracle Management Server is Down
JVMD Engine is Down
BI Publisher Server is Down

[emXXX@XXX00C123 bin]$ ./emctl start oms
Oracle Enterprise Manager Cloud Control 13c Release 4
Copyright (c) 1996, 2020 Oracle Corporation. All rights reserved.
Starting Oracle Management Server…
WebTier Successfully Started
Oracle Management Server Successfully Started
Oracle Management Server is Up
JVMD Engine is Up
Starting BI Publisher Server …
BI Publisher Server Successfully Started
BI Publisher Server is Up
[emXXX@XXX00C123 bin]$

[emXXX@XXX00C123 bin]$ ./emctl status oms
Oracle Enterprise Manager Cloud Control 13c Release 4
Copyright (c) 1996, 2020 Oracle Corporation. All rights reserved.
WebTier is Up
Oracle Management Server is Up
JVMD Engine is Up
BI Publisher Server is Up
[emXXX@XXX00C123 bin]$

Two-node RAC: underlying storage migration for OCR, voting disks, and the DATA and RECO ASM diskgroups

This document demonstrates how the underlying storage of the OCR/voting disks and of the DATA and RECO ASM diskgroups can be migrated from one storage array to another in a two-node RAC cluster.
The activity is carried out online and needs no downtime. From 11g onwards, the OCR/voting disks can be moved without bringing down cluster services or node apps. The Oracle version used in this scenario is 19c; however, the same procedure applies to 11g/12c as well.

Note: I’m restarting the CRS services on both the RAC nodes at the end just to ensure that the CRS services are starting up properly after the change in the storage.

Following is the step-wise approach to move the OCR/voting disks to a new ASM diskgroup and to change the underlying storage disks for the DATA/RECO diskgroups.
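At a high level, the OCR/voting-disk portion reduces to five commands. The sketch below only echoes them, so it is safe to run anywhere; on the cluster you would issue the real binaries as root (or via sudo), and the diskgroup names match the steps that follow:

```shell
#!/bin/sh
# Outline only: each command is echoed, not executed.
count=0
for cmd in \
    'ocrconfig -export <backup_file>' \
    'SQL: create diskgroup OCR_VOT ... (on the new storage disks)' \
    'ocrconfig -add +OCR_VOT' \
    'crsctl replace votedisk +OCR_VOT' \
    'ocrconfig -delete +OCRVOT'
do
    count=$((count + 1))
    echo "step $count: $cmd"
done
```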

  1. Check the ASM Disk group and ASM disk path for the existing and new ASM disks.
SQL> col DISK_FILE_PATH for a30
set line 2000
SELECT
    NVL(a.name, '[CANDIDATE]')      disk_group_name
  , b.path                          disk_file_path
  , b.name                          disk_file_name
  , b.failgroup                     disk_file_fail_group
FROM
    v$asm_diskgroup a   RIGHT OUTER JOIN v$asm_disk b USING (group_number)
ORDER BY a.name;
SQL>  

DISK_GROUP_NAME                DISK_FILE_PATH                 DISK_FILE_NAME                 DISK_FILE_FAIL_GROUP
------------------------------ ------------------------------ ------------------------------ ------------------------------
PSIDATA                        /dev/sdc1                      PSIDATA_0000                   PSIDATA_0000
PSIRECO                        /dev/sdd1                      PSIRECO_0000                   PSIRECO_0000
OCRVOT                         /dev/sde1                      OCRVOT_0000                    OCRVOT_0000
OCRVOT                         /dev/sdg1                      OCRVOT_0002                    OCRVOT_0002
OCRVOT                         /dev/sdf1                      OCRVOT_0001                    OCRVOT_0001
[CANDIDATE]                    /dev/sdl1
[CANDIDATE]                    /dev/sdk1
[CANDIDATE]                    /dev/sdh1
[CANDIDATE]                    /dev/sdj1
[CANDIDATE]                    /dev/sdi1

10 rows selected.

SQL>

** All the disks showing as [CANDIDATE] are the new storage disks, which will be added into the cluster configuration; the ones already part of the cluster will be removed from the configuration.

  2. Check the current OCR diskgroup and its integrity as the root user.
[root@NODE5001 bin]# $GRID_HOME/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     491684
         Used space (kbytes)      :      84588
         Available space (kbytes) :     407096
         ID                       :  603958846
         Device/File Name         :    +OCRVOT
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@NODE5001 bin]#
3. Check the GI cluster service status
[root@NODE5001 bin]# $GRID_HOME/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.crf
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.crsd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.cssd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.ctssd
      1        ONLINE  ONLINE       NODE5001             OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.gipcd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.gpnpd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.mdnsd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.storage
      1        ONLINE  ONLINE       NODE5001             STABLE
--------------------------------------------------------------------------------
[root@NODE5001 bin]#

4. Validate the current votedisk status

[root@NODE5001 bin]# $GRID_HOME/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   d37054b8c8d64f29bfa7cc7d485eb30a (/dev/sde1) [OCRVOT]
 2. ONLINE   2da180539aed4fdcbf97de51b4c07cd7 (/dev/sdf1) [OCRVOT]
 3. ONLINE   d96a697fec024f86bf47df7e02ecfe56 (/dev/sdg1) [OCRVOT]
Located 3 voting disk(s).
[root@NODE5001 bin]#

5. Take a backup (export) of the current OCR to the specified location:

[root@NODE5001 bin]#  $GRID_HOME/bin/ocrconfig -export /u01/SW_PKGS/OCR_BKP/ocr_backup_`date +%Y%m%d`.dmp
PROT-58: successfully exported the Oracle Cluster Registry contents to file '/u01/software/OCR_BACKUP/ocr_backup_20210820.dmp'
[root@NODE5001 bin]#

[root@NODE5001 bin]# ls -ltr /u01/software/OCR_BACKUP/
-rw------- 1 root root 208896 Aug 20 17:14 ocr_backup_20210820.dmp
[root@NODE5001 bin]#

6. In case you also wish to take a manual backup of the OCR, use the following command.

[root@NODE5001 bin]# $GRID_HOME/bin/ocrconfig -manualbackup

NODE5101     2021/08/20 17:11:44     +OCRVOT:/PSIdrclus/OCRBACKUP/backup_20210820_171144.ocr.286.1081098705     1944883066
NODE5001     2021/03/17 14:39:26     +OCRVOT:/PSIdrclus/OCRBACKUP/backup_20210317_143926.ocr.289.1067438367     1944883066
[root@NODE5001 bin]#

7. Get the details of currently available Diskgroups.

SQL> col COMPATIBILITY for a13
col DATABASE_COMPATIBILITY for a13
set lin 2000
select GROUP_NUMBER,NAME,STATE,TYPE,TOTAL_MB,FREE_MB,REQUIRED_MIRROR_FREE_MB,USABLE_FILE_MB,COMPATIBILITY,DATABASE_COMPATIBILITY,VOTING_FILES from  v$asm_diskgroup;

GROUP_NUMBER NAME                           STATE       TYPE     TOTAL_MB    FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB COMPATIBILITY DATABASE_COMP V
------------ ------------------------------ ----------- ------ ---------- ---------- ----------------------- -------------- ------------- ----
           1 PSIDATA                        CONNECTED   EXTERN     511996     150848                       0         150848 19.0.0.0.0    10.1.0.0.0    N
           2 PSIRECO                        CONNECTED   EXTERN     307196     301784                       0         301784 19.0.0.0.0    10.1.0.0.0    N
           3 OCRVOT                         MOUNTED     NORMAL     221172     172804                   73724          49540 19.0.0.0.0    10.1.0.0.0    Y

8. Create a new Disk group for OCR and Voting disks.

SQL>  set timing on
set time on
create diskgroup OCR_VOT normal redundancy disk '/dev/sdj1','/dev/sdk1','/dev/sdl1'
attribute 'compatible.rdbms'='11.2.0.0', 'compatible.asm'='19.0.0.0';

Diskgroup created.

9. Check the status of newly created Diskgroup.

SQL> col COMPATIBILITY for a13
col DATABASE_COMPATIBILITY for a13
set lin 2000
select GROUP_NUMBER,NAME,STATE,TYPE,TOTAL_MB,FREE_MB,REQUIRED_MIRROR_FREE_MB,USABLE_FILE_MB,COMPATIBILITY,DATABASE_COMPATIBILITY,VOTING_FILES from  v$asm_diskgroup;

GROUP_NUMBER NAME                           STATE       TYPE     TOTAL_MB    FREE_MB REQUIRED_MIRROR_FREE_MB USABLE_FILE_MB COMPATIBILITY DATABASE_COMP V
------------ ------------------------------ ----------- ------ ---------- ---------- ----------------------- -------------- ------------- ----
           1 PSIDATA                        MOUNTED     EXTERN     511996     150848                       0         150848 19.0.0.0.0    10.1.0.0.0    N
           2 PSIRECO                        MOUNTED     EXTERN     307196     301784                       0         301784 19.0.0.0.0    10.1.0.0.0    N
           3 OCRVOT                         MOUNTED     NORMAL     221172     172804                   73724          49540 19.0.0.0.0    10.1.0.0.0    Y
           4 OCR_VOT                        MOUNTED     NORMAL     221181     220986                   73727          73629 19.0.0.0.0    11.2.0.0.0    N
SQL>

** Make sure the diskgroup is mounted on all the nodes.

SQL> select name,state,usable_file_mb,total_mb,free_mb,required_mirror_free_mb from v$asm_diskgroup;

NAME                           STATE       USABLE_FILE_MB   TOTAL_MB    FREE_MB REQUIRED_MIRROR_FREE_MB
------------------------------ ----------- -------------- ---------- ---------- -----------------------
PSIDATA                        MOUNTED             150848     511996     150848                       0
PSIRECO                        MOUNTED             301784     307196     301784                       0
OCRVOT                         MOUNTED              49540     221172     172804                   73724
OCR_VOT                        MOUNTED              73565     221181     220857                   73727
SQL>
  10. Move the OCR and voting disks from the existing diskgroup to the new diskgroup.
    Note: I'm using the grid binary-owner user, who has sudo privilege to root; if you have root credentials, you can use them directly.
[grid@NODE5001 bin]$ sudo $GRID_HOME/bin/ocrconfig -add +OCR_VOT

11. Now check the OCR status after adding the new diskgroup to the OCR configuration.

[root@NODE5001 trace]# $GRID_HOME/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     491684
         Used space (kbytes)      :      84632
         Available space (kbytes) :     407052
         ID                       :  603958846
         Device/File Name         :    +OCRVOT
                                    Device/File integrity check succeeded
         Device/File Name         :   +OCR_VOT
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@NODE5001 trace]#

Monitor the Alertlog, and ensure there are no errors reported during the new OCR diskgroup addition.

crs/NODE17703/crs/trace/crsd.trc.
2021-08-20 17:27:22.313 [CRSD(6049)]CRS-1007: The OCR/OCR mirror location was replaced by +OCR_VOT/PSIdrclus/OCRFILE/registry.255.1081099639.

12. Query the voting disk status; it will still point to the old diskgroup.

[grid@NODE5001 bin]$ $GRID_HOME/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   d37054b8c8d64f29bfa7cc7d485eb30a (/dev/sde1) [OCRVOT]
 2. ONLINE   2da180539aed4fdcbf97de51b4c07cd7 (/dev/sdf1) [OCRVOT]
 3. ONLINE   d96a697fec024f86bf47df7e02ecfe56 (/dev/sdg1) [OCRVOT]
Located 3 voting disk(s).
[grid@NODE5001 bin]$

13. Now replace the voting disks with the newly created diskgroup OCR_VOT.

[grid@NODE5001 bin]$ sudo $GRID_HOME/bin/crsctl replace votedisk +OCR_VOT
Successful addition of voting disk 8d84c608635d4fe4bf24be76191ef59f.
Successful addition of voting disk c3b3c443f5224f2ebf8bde96f3501b52.
Successful addition of voting disk 804f8d85c7294f57bf825e4817c1c98b.
Successful deletion of voting disk d37054b8c8d64f29bfa7cc7d485eb30a.
Successful deletion of voting disk 2da180539aed4fdcbf97de51b4c07cd7.
Successful deletion of voting disk d96a697fec024f86bf47df7e02ecfe56.
Successfully replaced voting disk group with +OCR_VOT.
CRS-4266: Voting file(s) successfully replaced
[grid@NODE5001 bin]$

Monitor the Alertlog to ensure no errors:

2021-08-20 17:33:36.440 [OCSSD(3385)]CRS-1605: CSSD voting file is online: /dev/sdj1; details in /u01/app/grid/diag/crs/NODE5001/crs/trace/ocssd.trc.
2021-08-20 17:33:36.441 [OCSSD(3385)]CRS-1605: CSSD voting file is online: /dev/sdk1; details in /u01/app/grid/diag/crs/NODE5001/crs/trace/ocssd.trc.
2021-08-20 17:33:36.441 [OCSSD(3385)]CRS-1605: CSSD voting file is online: /dev/sdl1; details in /u01/app/grid/diag/crs/NODE5001/crs/trace/ocssd.trc.
2021-08-20 17:33:36.458 [OCSSD(3385)]CRS-1626: A Configuration change request completed successfully
2021-08-20 17:33:36.558 [OCSSD(3385)]CRS-1601: CSSD Reconfiguration complete. Active nodes are NODE5001 NODE5101 .

14. Now check the voting disk status; it points to the new diskgroup OCR_VOT.

[grid@NODE5001 bin]$ $GRID_HOME/bin/crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   8d84c608635d4fe4bf24be76191ef59f (/dev/sdj1) [OCR_VOT]
 2. ONLINE   c3b3c443f5224f2ebf8bde96f3501b52 (/dev/sdk1) [OCR_VOT]
 3. ONLINE   804f8d85c7294f57bf825e4817c1c98b (/dev/sdl1) [OCR_VOT]
Located 3 voting disk(s).
[grid@NODE5001 bin]$
15. Now drop the old OCR location from the cluster configuration.
[grid@NODE5001 bin]$ sudo $GRID_HOME/bin/ocrconfig -delete +OCRVOT
[grid@NODE5001 bin]$

** Monitor the Alertlog to ensure the operation completed successfully.

2021-08-20 17:31:18.494 [CRSD(6049)]CRS-1010: The OCR mirror location +OCRVOT/PSIdrclus/OCRFILE/registry.255.1067259605 was removed.

16. Revalidate the OCR status

[grid@NODE5001 bin]$ $GRID_HOME/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     491684
         Used space (kbytes)      :      84632
         Available space (kbytes) :     407052
         ID                       :  603958846
         Device/File Name         :   +OCR_VOT
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

[grid@NODE5001 bin]$

17. Now let's move the ASM SPFILE to the new diskgroup.

SQL> create pfile='/tmp/asmspfile.ora' from spfile;
File created.
SQL>  create spfile='+OCR_VOT' from pfile='/tmp/asmspfile.ora';
File created.
SQL>

18. Let's recheck using gpnptool to ensure that the SPFILE points to the new diskgroup in the cluster configuration.

[grid@NODE5001 bin]$ $ORACLE_HOME/bin/gpnptool get
Warning: some command line parameters were defaulted. Resulting command line:
         /u01/app/19.0.0/grid_home/bin/gpnptool.bin get -o-

<?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" ProfileSequence="6" ClusterUId="a41ec45d9d3a6f7fffe1b140bace941c" ClusterName="PSIdrclus" PALocation=""><gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*"><gpnp:Network id="net1" IP="10.13.75.0" Adapter="ens192" Use="public"/><gpnp:Network id="net2" IP="10.21.7.0" Adapter="ens256" Use="asm,cluster_interconnect"/></gpnp:HostNetwork></gpnp:Network-Profile><orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/><orcl:ASM-Profile id="asm" DiscoveryString="/dev/sd*" SPFile="+OCR_VOT/PSIdrclus/ASMPARAMETERFILE/registry.253.1081100211" Mode="remote" Extended="false"/><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>UjQf1EcKTeONpOPSphLNUVmVJV8=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>k9K+Y0BnUGrjXrlXZwaf/0UQZR3XztmD1nAObRfdDLE9qA4oTVGG1YnN2+T58n9SH+FpYKmdcvWPZ1orenghqNdvgsQL174ZKv3Cw5XWHgHxcPxfdG4nxYOzdl8W5c22plHoKJWCnT+DK08MJmWJo7cN38OTzwRBRGBCNDeraVo=</ds:SignatureValue></ds:Signature></gpnp:GPnP-Profile>
Success.
[grid@NODE5001 bin]$
18. Change the ASM password file location by copying the ASM password file to the new diskgroup.
[grid@NODE5001 bin]$ asmcmd
ASMCMD> pwget --asm
+OCRVOT/orapwASM
ASMCMD> pwcopy +OCRVOT/orapwASM +OCR_VOT/orapwASM
copying +OCRVOT/orapwASM -> +OCR_VOT/orapwASM
ASMCMD> ls -lt  +OCR_VOT/orapwASM
Type      Redund  Striped  Time             Sys  Name
PASSWORD  HIGH    COARSE   AUG 20 17:00:00  N    orapwASM => +OCR_VOT/ASM/PASSWORD/pwdasm.256.1081100361

19. Check the current password file location in the cluster configuration.

[grid@NODE5001 bin]$ srvctl config asm
ASM home: <CRS home>
Password file: +OCRVOT/orapwASM
Backup of Password file: +OCRVOT/orapwASM_backup
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM
[grid@NODE5001 bin]$

20. Change the password file location in cluster configuration.

[grid@NODE5001 bin]$ $GRID_HOME/bin/srvctl modify asm -pwfile +OCR_VOT/orapwASM
[grid@NODE5001 bin]$
[grid@NODE5001 bin]$ srvctl config asm
ASM home: <CRS home>
Password file: +OCR_VOT/orapwASM
Backup of Password file: +OCRVOT/orapwASM_backup
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM
[grid@NODE5001 bin]$
[grid@NODE5001 bin]$

21. Validate the cluster resource status.

[grid@NODE5001 bin]$ $GRID_HOME/bin/crsctl stat res -init -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.crf
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.crsd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.cssd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.ctssd
      1        ONLINE  ONLINE       NODE5001             OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.gipcd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.gpnpd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.mdnsd
      1        ONLINE  ONLINE       NODE5001             STABLE
ora.storage
      1        ONLINE  ONLINE       NODE5001             STABLE
--------------------------------------------------------------------------------

[grid@NODE5001 bin]$ $GRID_HOME/bin/crsctl check cluster -all
**************************************************************
NODE5001:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
NODE5101:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[grid@NODE5001 bin]$

22. Restart CRS node by node to ensure the cluster services come up without issue using the new OCR/Vote disk location.

[grid@NODE5001 bin]$ sudo $GRID_HOME/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'NODE5001'
[grid@NODE5001 bin]$ 
[grid@NODE5001 bin]$ sudo $GRID_HOME/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[grid@NODE5001 bin]$

23. Add the new disks to the PSIDATA and PSIRECO diskgroups.

SQL> ALTER DISKGROUP PSIDATA ADD DISK '/dev/sdh1' rebalance power 8;
Diskgroup altered.

24. Monitor the rebalance operation until it completes.

SQL>  select * from v$asm_operation;
GROUP_NUMBER OPERA PASS      STAT      POWER     ACTUAL      SOFAR   EST_WORK   EST_RATE EST_MINUTES ERROR_CODE                                       CON_ID
------------ ----- --------- ---- ---------- ---------- ---------- ---------- ---------- ----------- -------------------------------------------- ----------
           1 REBAL COMPACT   WAIT          8          8          0          0          0           0                                                       0
           1 REBAL REBALANCE RUN           8          8      13387      45140      15158           2                                                       0
           1 REBAL REBUILD   DONE          8          8          0          0          0           0                                                       0
SQL>  select * from v$asm_operation;
no rows selected

SQL> ALTER DISKGROUP PSIRECO ADD DISK '/dev/sdi1' rebalance power 8;
Diskgroup altered.
SQL>  select * from v$asm_operation;
no rows selected
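For a rough sense of how EST_MINUTES relates to the other columns in the monitoring query, the remaining work divided by the estimated rate gives the ETA. A quick sanity check in Python, using the values from the REBALANCE row shown above (column meanings per the V$ASM_OPERATION reference):

```python
# Estimate remaining rebalance time from V$ASM_OPERATION columns.
# Values below are taken from the REBALANCE row in the output above.
sofar = 13387      # allocation units moved so far   (SOFAR)
est_work = 45140   # estimated total units to move   (EST_WORK)
est_rate = 15158   # estimated units moved per minute (EST_RATE)

remaining_minutes = (est_work - sofar) / est_rate
print(round(remaining_minutes))  # 2, matching the EST_MINUTES column
```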

25. Since the rebalance operation has completed, we can drop the old storage disks.

SQL> ALTER DISKGROUP PSIDATA  drop disk  PSIDATA_0000 rebalance power 8;
Diskgroup altered.
SQL> alter diskgroup PSIRECO drop disk PSIRECO_0000 rebalance power 8;
Diskgroup altered.

26. Monitor the rebalance operation.

SQL>  select * from v$asm_operation;
no rows selected

27. Update the backup location for the OCR

[grid@NODE5001 bin]$ sudo ./ocrconfig -backuploc +OCR_VOT

28. Initiate a manual OCR backup and verify that it is written to the new diskgroup.

[grid@NODE5001 bin]$ sudo $GRID_HOME/bin/ocrconfig -manualbackup
NODE5001     2021/08/20 18:31:10     +OCR_VOT:/PSIdrclus/OCRBACKUP/backup_20210820_183110.ocr.257.1081103473     1944883066
NODE5101     2021/08/20 17:11:44     +OCRVOT:/PSIdrclus/OCRBACKUP/backup_20210820_171144.ocr.286.1081098705     1944883066
NODE5001     2021/03/17 14:39:26     +OCRVOT:/PSIdrclus/OCRBACKUP/backup_20210317_143926.ocr.289.1067438367     1944883066
[grid@NODE5001 bin]$

Note: If the MGMTDB database is present in the cluster configuration, follow the steps in the GIMR migration section below to migrate MGMTDB to the new diskgroup.

29. Check that the existing OCR/Vote diskgroup is not being used by any database or process other than ASM.

SQL> select a.instance_name,a.db_name,a.status from v$asm_client a, v$asm_diskgroup b
where a.group_number=b.group_number and b.name='OCRVOT';

INSTANCE_NAME         DB_NAME  STATUS
--------------------- -------- ------------
+ASM1                 +ASM     CONNECTED
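The check above passes when the only connected client is the ASM instance itself. That logic can be sketched as a hypothetical helper; the tuple shape mirrors the three columns selected from V$ASM_CLIENT:

```python
def safe_to_drop(clients):
    """Return True if every client of the diskgroup is an ASM instance.

    `clients` is a list of (instance_name, db_name, status) tuples,
    shaped like the rows of the V$ASM_CLIENT query above (hypothetical
    helper for illustration, not part of any Oracle API).
    """
    return all(db_name == "+ASM" for _, db_name, _ in clients)

# Rows from the output above: only the local ASM instance is connected,
# so the diskgroup is safe to dismount and drop.
rows = [("+ASM1", "+ASM", "CONNECTED")]
print(safe_to_drop(rows))  # True
```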

30. Dismount the existing OCR diskgroup on all nodes except node 1.

SQL> alter diskgroup OCRVOT dismount;
Diskgroup altered.

31. Drop the old OCR diskgroup by logging in to the ASM instance as SYSASM on node 1.

SQL> drop diskgroup OCRVOT including contents;
Diskgroup dropped.

32. Validate the diskgroup and disk path details. All the old disks and the old OCR diskgroup are no longer part of the cluster configuration, so the OS admin can remove them from the cluster hosts.

SQL> col DISK_FILE_PATH for a30
SQL> set line 2000
SQL> SELECT
         NVL(a.name, '[CANDIDATE]')      disk_group_name
       , b.path                          disk_file_path
       , b.name                          disk_file_name
       , b.failgroup                     disk_file_fail_group
     FROM
         v$asm_diskgroup a RIGHT OUTER JOIN v$asm_disk b USING (group_number)
     ORDER BY
         a.name;
DISK_GROUP_NAME                DISK_FILE_PATH                 DISK_FILE_NAME                 DISK_FILE_FAIL_GROUP
------------------------------ ------------------------------ ------------------------------ ------------------------------
DATA                           /dev/sdi1                      DATA_0001                      DATA_0001
OCR_VOT                        /dev/sdj1                      OCR_VOT_0002                   OCR_VOT_0002
OCR_VOT                        /dev/sdk1                      OCR_VOT_0001                   OCR_VOT_0001
OCR_VOT                        /dev/sdl1                      OCR_VOT_0000                   OCR_VOT_0000
RECO                           /dev/sdh1                      RECO_0001                      RECO_0001
[CANDIDATE]                    /dev/sdf1
[CANDIDATE]                    /dev/sdd1
[CANDIDATE]                    /dev/sdg1
[CANDIDATE]                    /dev/sde1
[CANDIDATE]                    /dev/sdc1
SQL>
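The [CANDIDATE] rows above are exactly what the OS admin needs. To hand over just the device paths, the fixed-width listing can be filtered; a quick hypothetical sketch (the listing string is abridged from the output above, header omitted):

```python
# Abridged copy of the disk listing shown above (header rows omitted).
listing = """\
DATA                           /dev/sdi1                      DATA_0001                      DATA_0001
OCR_VOT                        /dev/sdj1                      OCR_VOT_0002                   OCR_VOT_0002
RECO                           /dev/sdh1                      RECO_0001                      RECO_0001
[CANDIDATE]                    /dev/sdf1
[CANDIDATE]                    /dev/sdd1
[CANDIDATE]                    /dev/sdg1
[CANDIDATE]                    /dev/sde1
[CANDIDATE]                    /dev/sdc1
"""

# Collect the device paths whose diskgroup column is [CANDIDATE].
candidates = [
    line.split()[1]
    for line in listing.splitlines()
    if line.startswith("[CANDIDATE]")
]
print(candidates)  # ['/dev/sdf1', '/dev/sdd1', '/dev/sdg1', '/dev/sde1', '/dev/sdc1']
```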

33. Restart the CRS services node by node to ensure that the services come up properly using the new OCR/Vote disk location.

[grid@NODE1770$ sudo $GRID_HOME/bin/crsctl stop crs
CRS-4133: Oracle High Availability Services has been stopped.
[grid@NODE1770$
[grid@NODE1770$ sudo $GRID_HOME/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[grid@NODE1770$

** Storage disks shown as [CANDIDATE] are the old disks and can now be removed from the host configuration by the OS admin. This completes our storage migration.

Migrating Grid Infrastructure Management Repository (GIMR) database MGMTDB to a new Diskgroup

**Download the mdbutil.pl utility from My Oracle Support and follow the instructions below.

  1. Copy mdbutil.pl to the RAC node where MGMTDB is currently running.
  2. Unzip the archive.
  3. Set the execute permissions using chmod 755 mdbutil.pl.
  4. Invoke the utility, specifying the new diskgroup as the target location; in this scenario OCR_VOT is our new diskgroup.

./mdbutil.pl --mvmgmtdb --target=+OCR_VOT
mdbutil.pl version : 1.100
Moving MGMTDB, it will be stopped, are you sure (Y/N)? Y
2021-08-20 20:51:35: I Checking for the required paths under +OCR_VOT
2021-08-20 20:51:37: I Creating new path +OCR_VOT/_MGMTDB/PARAMETERFILE
2021-08-20 20:51:41: I Creating new path +OCR_VOT/_MGMTDB/PASSWORD
2021-08-20 20:51:45: I Creating new path +OCR_VOT/_MGMTDB/CONTROLFILE
2021-08-20 20:51:49: I Creating new path +OCR_VOT/_MGMTDB/ONLINELOG
2021-08-20 20:51:53: I Creating new path +OCR_VOT/_MGMTDB/DATAFILE
2021-08-20 20:51:57: I Creating new path +OCR_VOT/_MGMTDB/TEMPFILE
2021-08-20 20:52:01: I Creating new path +OCR_VOT/_MGMTDB/DATAFILE/PDB$SEED
2021-08-20 20:52:04: I Creating new path +OCR_VOT/_MGMTDB/DATAFILE/TEMPFILE/PDB$SEED
2021-08-20 20:52:08: I Creating new path +OCR_VOT/_MGMTDB/DATAFILE/PSIdrclus
2021-08-20 20:52:11: I Creating new path +OCR_VOT/_MGMTDB/TEMPFILE/PSIdrclus
2021-08-20 20:52:13: I Getting MGMTDB Database files location
2021-08-20 20:52:13: I Getting MGMTDB Temp files location
2021-08-20 20:52:13: I Getting MGMTDB PDB PDB$SEED files location
2021-08-20 20:52:13: I Getting MGMTDB PDB PDB$SEED Temp files location
2021-08-20 20:52:14: I Getting MGMTDB PDB GIMR_DSCREP_10 files location
2021-08-20 20:52:14: I Getting MGMTDB PDB GIMR_DSCREP_10 Temp files location
2021-08-20 20:52:20: I Creating temporary PFILE
2021-08-20 20:52:20: I Creating target SPFILE
2021-08-20 20:52:24: I Stopping the Cluster Health Analysis Resource
2021-08-20 20:52:24: W Cluster Health Analysis Resource was not running
2021-08-20 20:52:24: I Stopping mgmtdb
2021-08-20 20:52:53: I Copying MGMTDB DBFiles to +OCR_VOT
2021-08-20 20:53:01: I Copying MGMTDB PDB$SEED DBFiles to +OCR_VOT
2021-08-20 20:53:13: I Copying MGMTDB PDB DBFiles to +OCR_VOT
2021-08-20 20:54:31: I Creating the CTRL File
2021-08-20 20:55:01: I The CTRL File has been created and MGMTDB is now running from +OCR_VOT
2021-08-20 20:55:01: I Setting MGMTDB SPFile location
2021-08-20 20:55:02: I Modifing the init parameter
2021-08-20 20:55:02: I Managing the password File
2021-08-20 20:55:08: I Setting MGMTDB Password File location
2021-08-20 20:55:09: I Removing old MGMTDB
2021-08-20 20:55:12: I Changing START_DEPENDENCIES
2021-08-20 20:55:13: I Changing STOP_DEPENDENCIES
2021-08-20 20:55:14: I Restarting MGMTDB using target SPFile
2021-08-20 20:56:40: I Starting the Cluster Health Analysis Resource
2021-08-20 20:56:41: E Cannot start Cluster Health Analysis (CHA), exiting…
[grid@NODE5101 MDB]$

5. MGMTDB has been migrated to the new diskgroup successfully.

Issue: Unable to reset the dbsnmp password when DB Vault is enabled in the database.

SQL> alter user dbsnmp identified by xxxxx;
alter user dbsnmp identified by xxxxx
*
ERROR at line 1:
ORA-01031: insufficient privileges

Workaround:

  • Login as a DB Vault Manager user and revoke DV_MONITOR role from dbsnmp user.

SQL> conn vlt_mgr/xxxxxxxxxx
Connected.

SQL> revoke DV_MONITOR from dbsnmp;

Revoke succeeded

  • Connect to vault account manager user and reset the password.

SQL> conn acct_mgr/xxxxxxxxxx
Connected.

SQL> alter user dbsnmp identified by xxxxxx;

User altered.

  • After the password reset, grant the DV_MONITOR role back to dbsnmp.

SQL> conn vlt_mgr/xxxxxxxxxx
Connected.

SQL> grant DV_MONITOR to dbsnmp;

Grant succeeded.

Issue: Error while exporting a table when DB Vault is enabled in the database.

Error:
ORA-31693: Table data object "MXPDD"."EX_OM_TD_RP" failed to load/unload and is being skipped due to error:
ORA-28116: insufficient privileges to do direct path access

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, Oracle Label Security,
OLAP, Advanced Analytics, Oracle Database Vault and Real Application Testing options
Starting "ops"."SYS_EXPORT_TABLE_02": ops/ directory=IMPDIR dumpfile=EX_OM_TD_RP_Rex.TL.20.4.0E.002.dmp logfile=EX_OM_TD_RP_Rex.TL.20.4.0E.002.log tables=MXPDD.EX_OM_TD_RP
ORA-39327: Oracle Database Vault data is being stored unencrypted in dump file set.
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 2 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
ORA-31693: Table data object "MXPDD"."EX_OM_TD_RP" failed to load/unload and is being skipped due to error:
ORA-28116: insufficient privileges to do direct path access
Master table "ops"."SYS_EXPORT_TABLE_02" successfully loaded/unloaded


Dump file set for ops.SYS_EXPORT_TABLE_02 is:
/MTX_BCKP/artprd/expdp/EX_OM_TD_RP_Rex.TL.20.4.0E.002.dmp
Job "ops"."SYS_EXPORT_TABLE_02" completed with 2 error(s) at Tue 24 Aug 14:29:02 2021 elapsed 0 00:00:11

Findings:

  • Check whether the database vault is enabled in the database.

SQL> select PARAMETER,VALUE from v$option where PARAMETER like '%Vault';

PARAMETER                      VALUE
------------------------------ -----
Oracle Database Vault          TRUE

The export of the table failed even though the user taking the export had the DBA role.

SQL> SELECT GRANTEE,GRANTED_ROLE FROM DBA_ROLE_PRIVS WHERE GRANTEE='OPS';

GRANTEE    GRANTED_ROLE
---------- ------------
OPS        DBA
OPS        CONNECT
OPS        RESOURCE

Fix:
Log in as the DB Vault owner account and authorize the user to perform Oracle Data Pump operations. In this scenario the user is 'OPS'.
SQL> EXEC DBMS_MACADM.AUTHORIZE_DATAPUMP_USER('OPS');
PL/SQL procedure successfully completed.
SQL>

After re-initiating the export, the dump completed successfully.

oracle@dbs0103:/MTX_BCKP/artprd/expdp$ expdp directory=IMPDIR dumpfile=EX_OM_TD_RP_Rex.TL.20.4.0E.002.dmp logfile=EX_OM_TD_RP_Rex.TL.20.4.0E.002.log tables=MXPDD.EX_OM_TD_RP

Export: Release 12.1.0.2.0 – Production on Tue 24 Aug 10:39:13 2021

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Username: ops
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, Oracle Label Security,
OLAP, Advanced Analytics, Oracle Database Vault and Real Application Testing options
Starting "ops"."SYS_EXPORT_TABLE_02": ops/ directory=IMPDIR dumpfile=EX_OM_TD_RP_Rex.TL.20.4.0E.002.dmp logfile=EX_OM_TD_RP_Rex.TL.20.4.0E.002.log tables=MXPDD.EX_OM_TD_RP
ORA-39327: Oracle Database Vault data is being stored unencrypted in dump file set.
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 1024 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "MXPDD"."EX_OM_TD_RP" 887.7 KB 5260 rows
Master table "ops"."SYS_EXPORT_TABLE_02" successfully loaded/unloaded


Dump file set for ops.SYS_EXPORT_TABLE_02 is:
/MTX_BCKP/artprd/expdp/EX_OM_TD_RP_Rex.TL.20.4.0E.002.dmp
Job "ops"."SYS_EXPORT_TABLE_02" completed with 1 error(s) at Wed Feb 24 10:39:54 2021 elapsed 0 00:00:33

oracle@dbs0103:/MTX_BCKP/artprd/expdp$