ASM-Scoped Security

In an Oracle SuperCluster environment where the same set of Exadata Storage Servers is shared among multiple grid infrastructures/RAC clusters, it is good practice to configure ASM-Scoped Security.
ASM-Scoped Security is a precautionary measure that prevents one cluster's disk groups from being used accidentally by another cluster,
for example preventing a test RAC cluster from accessing the disk groups of a production RAC cluster within the same SuperCluster.

1. Stop the cluster on both DB zone cluster node members by running the following command as the root user on each node:

# /u01/app/12.1.0.2/grid/bin/crsctl stop crs

2. Create a key on any one of the Exadata storage cells:
CELLCLI> create key
8f2f23ecc48031f48b775f02f050dba2
CELLCLI>

3. Using the dcli command, assign the newly created ASM authentication key to the ASM cluster, identified by its cluster name (special characters are not accepted, so remove them from the cluster name used below).
In this scenario, "sclu16" is the name of the RAC cluster to which the ASM disks/disk groups should be exclusively available.

# dcli -l root -g /root/cell_group "cellcli -e ASSIGN KEY FOR 'sclu16'='8f2f23ecc48031f48b775f02f050dba2'"
cladm01: Key for sclu16 successfully created
cladm02: Key for sclu16 successfully created
cladm03: Key for sclu16 successfully created
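Before proceeding, you can confirm the key was stored on every cell (a verification step I am adding here, not part of the original walkthrough; it reuses the cell group file from step 3):

```shell
# List the assigned ASM keys on all storage cells; each cell should
# report the 'sclu16' key created in step 2.
dcli -l root -g /root/cell_group "cellcli -e LIST KEY"
```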

4. On both cluster nodes, create the cellkey.ora file, adding to it the key generated in step 2 and the cluster name assigned in step 3:

# vi /etc/oracle/cell/network-config/cellkey.ora
key=8f2f23ecc48031f48b775f02f050dba2
asm=sclu16

5. On both cluster nodes, set correct permissions for the cellkey.ora file created in previous step:
# chmod 600 /etc/oracle/cell/network-config/cellkey.ora
# chown grid:oinstall /etc/oracle/cell/network-config/cellkey.ora
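A quick sanity check that the file ended up with the intended mode and ownership (my addition, not one of the original steps):

```shell
# Expect: -rw------- with owner grid and group oinstall
ls -l /etc/oracle/cell/network-config/cellkey.ora
```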

6. Update the "availableTo" attribute of the grid disks belonging to this cluster's disk groups only, setting it to the cluster name chosen in step 3, as follows.
Note: make sure to update the correct grid disks on the correct cells, using the commands shown below.

To get the list of grid disks belonging to this cluster:
# cellcli -e list griddisk | grep OCR16
# cellcli -e list griddisk | grep DATA16
# cellcli -e list griddisk | grep RECO16

On Cell1:
CELLCLI> alter griddisk OCR16_CD_00_cladm01, OCR16_CD_01_cladm01, OCR16_CD_02_cladm01,
OCR16_CD_03_cladm01, OCR16_CD_04_cladm01, OCR16_CD_05_cladm01,
OCR16_CD_06_cladm01, OCR16_CD_07_cladm01, OCR16_CD_08_cladm01,
OCR16_CD_09_cladm01, OCR16_CD_10_cladm01, OCR16_CD_11_cladm01,
DATA16_CD_00_cladm01, DATA16_CD_01_cladm01, DATA16_CD_02_cladm01,
DATA16_CD_03_cladm01, DATA16_CD_04_cladm01, DATA16_CD_05_cladm01,
DATA16_CD_06_cladm01, DATA16_CD_07_cladm01, DATA16_CD_08_cladm01,
DATA16_CD_09_cladm01, DATA16_CD_10_cladm01, DATA16_CD_11_cladm01,
RECO16_CD_00_cladm01, RECO16_CD_01_cladm01, RECO16_CD_02_cladm01,
RECO16_CD_03_cladm01, RECO16_CD_04_cladm01, RECO16_CD_05_cladm01,
RECO16_CD_06_cladm01, RECO16_CD_07_cladm01, RECO16_CD_08_cladm01,
RECO16_CD_09_cladm01, RECO16_CD_10_cladm01, RECO16_CD_11_cladm01 availableTo='sclu16'

On Cell2:
CELLCLI> alter griddisk OCR16_CD_00_cladm02, OCR16_CD_01_cladm02, OCR16_CD_02_cladm02,
OCR16_CD_03_cladm02, OCR16_CD_04_cladm02, OCR16_CD_05_cladm02,
OCR16_CD_06_cladm02, OCR16_CD_07_cladm02, OCR16_CD_08_cladm02,
OCR16_CD_09_cladm02, OCR16_CD_10_cladm02, OCR16_CD_11_cladm02,
DATA16_CD_00_cladm02, DATA16_CD_01_cladm02, DATA16_CD_02_cladm02,
DATA16_CD_03_cladm02, DATA16_CD_04_cladm02, DATA16_CD_05_cladm02,
DATA16_CD_06_cladm02, DATA16_CD_07_cladm02, DATA16_CD_08_cladm02,
DATA16_CD_09_cladm02, DATA16_CD_10_cladm02, DATA16_CD_11_cladm02,
RECO16_CD_00_cladm02, RECO16_CD_01_cladm02, RECO16_CD_02_cladm02,
RECO16_CD_03_cladm02, RECO16_CD_04_cladm02, RECO16_CD_05_cladm02,
RECO16_CD_06_cladm02, RECO16_CD_07_cladm02, RECO16_CD_08_cladm02,
RECO16_CD_09_cladm02, RECO16_CD_10_cladm02, RECO16_CD_11_cladm02
availableTo='sclu16'

On Cell3:
CELLCLI> alter griddisk OCR16_CD_00_cladm03, OCR16_CD_01_cladm03, OCR16_CD_02_cladm03,
OCR16_CD_03_cladm03, OCR16_CD_04_cladm03, OCR16_CD_05_cladm03,
OCR16_CD_06_cladm03, OCR16_CD_07_cladm03, OCR16_CD_08_cladm03,
OCR16_CD_09_cladm03, OCR16_CD_10_cladm03, OCR16_CD_11_cladm03,
DATA16_CD_00_cladm03, DATA16_CD_01_cladm03, DATA16_CD_02_cladm03,
DATA16_CD_03_cladm03, DATA16_CD_04_cladm03, DATA16_CD_05_cladm03,
DATA16_CD_06_cladm03, DATA16_CD_07_cladm03, DATA16_CD_08_cladm03,
DATA16_CD_09_cladm03, DATA16_CD_10_cladm03, DATA16_CD_11_cladm03,
RECO16_CD_00_cladm03, RECO16_CD_01_cladm03, RECO16_CD_02_cladm03,
RECO16_CD_03_cladm03, RECO16_CD_04_cladm03, RECO16_CD_05_cladm03,
RECO16_CD_06_cladm03, RECO16_CD_07_cladm03, RECO16_CD_08_cladm03,
RECO16_CD_09_cladm03, RECO16_CD_10_cladm03, RECO16_CD_11_cladm03 availableTo='sclu16'
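After updating all three cells, you can verify that the attribute took effect (a check I am adding for completeness; the "16" filter matches the OCR16/DATA16/RECO16 prefixes used in this scenario and would differ in yours):

```shell
# Show which cluster each grid disk is available to; the disks scoped
# above should report 'sclu16' in the availableTo column.
cellcli -e "LIST GRIDDISK ATTRIBUTES name, availableTo" | grep 16
```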

7. Once all the above commands have executed successfully, restart CRS on both cluster nodes by running the following command as the root user on each node:

# /u01/app/12.1.0.2/grid/bin/crsctl start crs

That's it.

**In addition, you may have a look at my earlier post on how to create the ASM disk group/grid disks.**
