1. Pre-steps to remove node "testc" from the cluster.
a. Gather the necessary information, such as which node needs to be removed;
in my case, node "testc" out of the three-node RAC cluster.
b. Remove any database instances running on that node (a sketch follows below). If you don't,
the configuration will still list those instances, but there is no other negative impact.
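For example, as the oracle user, a minimal sketch assuming an admin-managed database
named "orcl" with instance "orcl3" on testc (both names are hypothetical; substitute your own):
srvctl stop instance -db orcl -instance orcl3
srvctl remove instance -db orcl -instance orcl3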
c. Validate that no database instance is running on, or registered against, node "testc".
It is OK to have ASM running on this node.
oracle@testc.one.com:$ ps -ef | grep pmon
grid 58522 1 0 Jun04 ? 00:01:21 asm_pmon_+ASM3
oracle 66290 66194 0 15:59 pts/0 00:00:00 grep --color=auto pmon
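In addition to checking for pmon processes, srvctl can confirm where each instance is
running (again assuming the hypothetical database name "orcl"):
srvctl status database -db orcl -verbose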
d. Log in as root and save the output below before the node is removed.
[testc ~]$ sudo su - root
Last login: Mon Jun 17 16:57:16 GMT 2019
[testc.one.com: ~]# . oraenv
ORACLE_SID = [+ASM1] ? +ASM3
The Oracle base has been set to /u01/app/grid
Save the output of the commands below in /tmp or a directory of your choice.
cd /tmp
olsnodes -s -t > oslnodesoutput.log
srvctl config vip -node testa > confignodea.log
srvctl config vip -node testb > confignodeb.log
srvctl config vip -node testc > confignodec.log
crsctl stat res -t|egrep 'testc|STATE|--' > statfornodec.log
crsctl stat res -t > clusterhealth.log
crsctl query crs activeversion -f > querycrsactivevrsion.log
crsctl check cluster > clustercheck.log
$ORACLE_HOME/bin/kfod op=patches > patchesapplied.log
$ORACLE_HOME/bin/kfod op=patchlvl > currentpatches.log
ps -elf | grep tns > tnsbeforeremove.log
srvctl status listener -l listener > listenerstatusbefore.log
srvctl status scan_listener > scanlistenerstatusbefore.log
srvctl status listener -l ASMNET1LSNR_ASM > asmnetlsnr.log
srvctl status listener -l MGMTLSNR > mgmtlsnr.log
ocrconfig -showbackup > ocrbackup.log
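Optionally, a quick sanity check that all the baseline logs were written:
ls -l /tmp/*.log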
2. Steps to remove node "testc" from the cluster.
a. Delete node "testc" from the 12cR2 cluster.
There is no need to stop or disable any listener.
b. As the "grid" user on testc, remove the Oracle Grid Infrastructure software by running the following command on node "testc":
grid@testc.one.com:$ env|grep ORACLE
ORACLE_SID=+ASM3
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
grid@testc.one.com:$ cd /u01/app/12.2.0.1/grid/deinstall
grid@testc.one.com:$ ls -l deinstall
-rwxr-x--- 1 grid oinstall 11313 Jul 31 2018 deinstall
You will need to answer "y" when prompted in order to continue deinstalling the local node "testc".
grid@testc.one.com:$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DECONFIG TOOL START ############
######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##
Checking for existence of the Oracle home location /u01/app/12.2.0.1/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/12.2.0.1/grid
The following nodes are part of this cluster: testc,testa,testb
Checking for sufficient temp space availability on node(s) : 'testc'
## [END] Install check configuration ##
Traces log file: /u01/app/oraInventory/logs//crsdc_2019-06-20_03-26-39-PM.log
Network Configuration check config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2019-06-20_03-26-40-PM.log
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_check2019-06-20_03-26-40-PM.log
Database Check Configuration START
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2019-06-20_03-26-40-PM.log
Oracle Grid Management database was found in this Grid Infrastructure home
Database Check Configuration END
######################### DECONFIG CHECK OPERATION END #########################
####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/12.2.0.1/grid
The following nodes are part of this cluster: testc,testa,testb
The cluster node(s) on which the Oracle home deinstallation will be performed are:testc
Oracle Home selected for deinstall is: /u01/app/12.2.0.1/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Option -local will not modify any ASM configuration.
Oracle Grid Management database was found in this Grid Infrastructure home
Local configuration of Oracle Grid Management database will be removed
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-06-20_03-26-38-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-06-20_03-26-38-PM.err'
######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2019-06-20_03-28-54-PM.log
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_clean2019-06-20_03-28-54-PM.log
ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2019-06-20_03-28-54-PM.log
Network Configuration clean config END
Run the following command as the root user or the administrator on node "testc".
/u01/app/12.2.0.1/grid/crs/install/rootcrs.sh -force -deconfig -paramfile "/tmp/deinstall2019-06-20_03-26-35PM/response/deinstall_OraGI12Home1.rsp"
Press Enter after you finish running the above commands
<----------------------------------------
NOTE: Before I can press Enter here, I need to run the above script in a separate
terminal, so I will log in to the same node "testc" and sudo to root; once the script
finishes, I will press Enter at the prompt marked with the arrow above. <----------------------------------------
This output is from a separate window as root:
[testc.one.com: ~]# id
uid=0(root) gid=0(root) groups=0(root)
[testc.one.com: ~]# /u01/app/12.2.0.1/grid/crs/install/rootcrs.sh -force -deconfig -paramfile "/tmp/deinstall2019-06-20_03-26-35PM/response/deinstall_OraGI12Home1.rsp"
Using configuration parameter file: /tmp/deinstall2019-06-20_03-26-35PM/response/deinstall_OraGI12Home1.rsp
The log of current session can be found at:
/u01/app/oraInventory/logs/crsdeconfig_testc_2019-06-20_03-33-33PM.log
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'testc'
CRS-2673: Attempting to stop 'ora.crsd' on 'testc'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'testc'
CRS-2673: Attempting to stop 'ora.OCR.dg' on 'testc'
CRS-2673: Attempting to stop 'ora.RECO.dg' on 'testc'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'testc'
CRS-2673: Attempting to stop 'ora.chad' on 'testc'
CRS-2677: Stop of 'ora.OCR.dg' on 'testc' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'testc' succeeded
CRS-2677: Stop of 'ora.RECO.dg' on 'testc' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'testc'
CRS-2677: Stop of 'ora.asm' on 'testc' succeeded
CRS-2673: Attempting to stop 'ora.ASMNET1LSNR_ASM.lsnr' on 'testc'
CRS-2677: Stop of 'ora.ASMNET1LSNR_ASM.lsnr' on 'testc' succeeded
CRS-2677: Stop of 'ora.chad' on 'testc' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'testc' has completed
CRS-2677: Stop of 'ora.crsd' on 'testc' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'testc'
CRS-2673: Attempting to stop 'ora.crf' on 'testc'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'testc'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'testc'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'testc'
CRS-2677: Stop of 'ora.drivers.acfs' on 'testc' succeeded
CRS-2677: Stop of 'ora.crf' on 'testc' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'testc' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'testc' succeeded
CRS-2677: Stop of 'ora.asm' on 'testc' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'testc'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'testc' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'testc'
CRS-2673: Attempting to stop 'ora.evmd' on 'testc'
CRS-2677: Stop of 'ora.ctssd' on 'testc' succeeded
CRS-2677: Stop of 'ora.evmd' on 'testc' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'testc'
CRS-2677: Stop of 'ora.cssd' on 'testc' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'testc'
CRS-2677: Stop of 'ora.gipcd' on 'testc' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'testc' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2019/06/20 15:34:49 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2019/06/20 15:36:44 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
2019/06/20 15:36:48 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node
NOTE: Now is a good time to go back to the window that is waiting for Enter, since the requested script has completed successfully.
Press Enter at the prompt marked with the arrow. <----------------------------------------
Press Enter after you finish running the above commands
<----------------------------------------
######################### DECONFIG CLEAN OPERATION END #########################
####################### DECONFIG CLEAN OPERATION SUMMARY #######################
Local configuration of Oracle Grid Management database was removed successfully
Oracle Clusterware is stopped and successfully de-configured on node "testc"
Oracle Clusterware is stopped and de-configured successfully.
#######################################################################
############# ORACLE DECONFIG TOOL END #############
Using properties file /tmp/deinstall2019-06-20_03-26-35PM/response/deinstall_2019-06-20_03-26-38-PM.rsp
Location of logs /u01/app/oraInventory/logs/
############ ORACLE DEINSTALL TOOL START ############
####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-06-20_03-26-38-PM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2019-06-20_03-26-38-PM.err'
######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to testc
Setting CLUSTER_NODES to testc
Setting CRS_HOME to true
Setting oracle.installer.invPtrLoc to /tmp/deinstall2019-06-20_03-26-35PM/oraInst.loc
Setting oracle.installer.local to true
## [END] Preparing for Deinstall ##
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/app/12.2.0.1/grid' from the central inventory on the local node : Done
Delete directory '/u01/app/12.2.0.1/grid' on the local node : Done
The Oracle Base directory '/u01/app/grid' will not be removed on local node. The directory is not empty.
Oracle Universal Installer cleanup was successful.
Oracle Universal Installer clean END
## [START] Oracle install clean ##
## [END] Oracle install clean ##
######################### DEINSTALL CLEAN OPERATION END #########################
####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/app/12.2.0.1/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/12.2.0.1/grid' on the local node.
Oracle Universal Installer cleanup was successful.
Review the permissions and contents of '/u01/app/grid' on nodes(s) 'testc'.
If there are no Oracle home(s) associated with '/u01/app/grid', manually delete '/u01/app/grid' and its contents.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################
############# ORACLE DEINSTALL TOOL END #############
3. Post-steps to remove node "testc" from the cluster.
a. As the root user, from any node that you are not deleting (here, testa), run the
following command from the Grid_home/bin directory to delete the node from
the cluster:
[testa.one.com: ~]# . oraenv
ORACLE_SID = [root] ? +ASM1
The Oracle base has been set to /u01/app/grid
[testa.one.com: ~]# env|grep ORACLE
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
[testa.one.com: ~]# crsctl delete node -n testc
CRS-4661: Node testc successfully deleted.
b. Log in as the grid user.
Run the following CVU command to verify that the specified node has been successfully deleted from the cluster:
grid@testa.one.com:$ cluvfy stage -post nodedel -n testc
Verifying Node Removal ...
Verifying CRS Integrity ...PASSED
Verifying Clusterware Version Consistency ...PASSED
Verifying Node Removal ...FAILED (PRVF-10002)
Post-check for node removal was unsuccessful on all the nodes.
Failures were encountered during execution of CVU verification request "stage -post nodedel".
Verifying Node Removal ...FAILED
testc: PRVF-10002 : Node "testc" is not yet deleted from the Oracle inventory node list
CVU operation performed: stage -post nodedel
Date: Jun 20, 2019 4:13:05 PM
CVU home: /u01/app/12.2.0.1/grid/
User: grid
Here is how to fix that by updating the Oracle inventory:
===========================================
PRVF-10002: Node "{0}" is not yet deleted from the Oracle inventory node list
Cause: The indicated node still exists in the list of nodes for the CRS home in the Oracle inventory.
Action: Use 'runInstaller -updateNodeList' to remove the indicated node from the CRS home node list.
As the grid user, update the oraInventory on the remaining nodes, testa and testb:
cd /u01/app/12.2.0.1/grid/oui/bin/
./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.2.0.1/grid/ CLUSTER_NODES={testa,testb} -local
grid@testa.one.com:$ /u01/app/12.2.0.1/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.2.0.1/grid/ CLUSTER_NODES={testa,testb} -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 20675 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
grid@testb.one.com:$ /u01/app/12.2.0.1/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.2.0.1/grid/ CLUSTER_NODES={testa,testb} -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 20675 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
Let's run the CVU command again to verify that the specified node has been
successfully deleted from the cluster:
grid@testb.one.com:$ cluvfy stage -post nodedel -n testc
Verifying Node Removal ...
Verifying CRS Integrity ...PASSED
Verifying Clusterware Version Consistency ...PASSED
Verifying Node Removal ...PASSED
Post-check for node removal was successful.
CVU operation performed: stage -post nodedel
Date: Jun 20, 2019 4:55:37 PM
CVU home: /u01/app/12.2.0.1/grid/
User: grid
c. Perform post-verification.
Rerun the commands below and compare their output with the pre-step logs in /tmp; node "testc"
should no longer appear anywhere in the RAC cluster configuration (see the diff sketch after this list).
olsnodes -s -t
srvctl config vip -node testa
srvctl config vip -node testb
srvctl config vip -node testc
crsctl stat res -t|egrep 'testc|STATE|--'
crsctl stat res -t
crsctl query crs activeversion -f
crsctl check cluster
$ORACLE_HOME/bin/kfod op=patches
$ORACLE_HOME/bin/kfod op=patchlvl
ps -elf | grep tns
srvctl status listener -l listener
srvctl status scan_listener
srvctl status listener -l ASMNET1LSNR_ASM
srvctl status listener -l MGMTLSNR
ocrconfig -showbackup
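A minimal sketch of the comparison for one of the outputs, assuming the pre-step logs
were saved in /tmp as above (the "_after" file name is just an illustrative choice):
cd /tmp
olsnodes -s -t > oslnodesoutput_after.log
# lines mentioning testc should appear only on the "before" side of the diff
diff oslnodesoutput.log oslnodesoutput_after.log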
That is it; you are all set.