Thursday, May 16, 2019

How to Manually Apply Quarterly Database Patches: 12c Grid and 11g Database

Perform the pre-patching work:

PRE STEPS:
1. Create an OEM blackout for the host being patched (see the emcli sketch below).
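For example, the blackout can be created from the OEM console or with emcli. This is only a sketch: the blackout name, host name, and duration below are placeholders, so verify the flags against emcli help create_blackout for your EM version.
emcli login -username=sysman
emcli create_blackout -name="qtr_patching_host01" -add_targets="host01.example.com:host" -reason="Quarterly GI/DB patching" -schedule="duration:4"    ## 4-hour blackout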
2. Create a proactive Sev 1 SR for any unknown problems we may encounter.
3. Get root access for the servers to be patched.
4. Save a copy of where the services are currently running:
srvctl status service -d dbname
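For example, keep the output in a file so it can be compared against after patching (the file name and dbname are placeholders):
srvctl status service -d dbname > /tmp/services_before_dbname.log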
5. Save the opatch lsinventory output, and check the OPatch version and patch conflicts for both the ORACLE and GRID homes on all nodes.
As oracle, set the environment:

export PATH=$PATH:$ORACLE_HOME/OPatch 

opatch lsinventory > lsinventory_oracle.log

opatch version
#Note: the OPatch version needs to be 11.2.0.3.6 or higher; ideally update it to the latest (see step 7).

opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir  /gg_nas1/software/oracle/patches/1H2019/11gGI/28813878/28729245

opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir  /gg_nas1/software/oracle/patches/1H2019/11gGI/28813878/28729234

opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir  /gg_nas1/software/oracle/patches/1H2019/11gGI/28813878/28729262

As grid, set the environment for ASM:
export PATH=$PATH:$ORACLE_HOME/OPatch 

opatch lsinventory > lsinventory_grid.log

opatch version

#Note: the OPatch version needs to be 11.2.0.3.6 or higher.

opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /gg_nas1/software/oracle/patches/1H2019/12cGI/28828733/28822515

opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /gg_nas1/software/oracle/patches/1H2019/12cGI/28828733/28864846

opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /gg_nas1/software/oracle/patches/1H2019/12cGI/28828733/28870605

opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /gg_nas1/software/oracle/patches/1H2019/12cGI/28828733/28566910

opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /gg_nas1/software/oracle/patches/1H2019/12cGI/28828733/26839277

If you find any conflicts, upload the conflict details to the environment-specific SR.

6. Save a snapshot of the file systems on all nodes:

Log in as root on each node:
df -h > nodefilesystembefore.log   ## do this on all nodes

7. Update OPatch to the latest version (patch 6880880) for both the grid and oracle homes: https://updates.oracle.com/download/6880880.html
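For example, as each home owner (oracle for the DB home, grid for the GI home); the zip file name and location below are placeholders for the p6880880 download that matches your platform and release:
cd $ORACLE_HOME
mv OPatch OPatch.bak.$(date +%Y%m%d)    ## keep a backup of the old OPatch
unzip -o /gg_nas1/software/oracle/patches/p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME
$ORACLE_HOME/OPatch/opatch version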

Start these steps on the day of execution, meaning the day we are patching:

8. Stop the OEM agent. Sudo to the omsagent user.
cd /tools/agents/oracle/occ/agent12c/agent_inst/bin
./emctl stop agent
./emctl status agent

9. Stop Jobs:

a. RMAN jobs if any.
ps -ef | grep rman

b. Disable any jobs that may run during patching and fail. Communicate the patching activity to job owners in the release or through management.

c. For cron jobs, change the job schedule in crontab.
For database scheduler jobs, disable them in the database, for example:
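A minimal sketch for the database scheduler side, run as sysdba in each database; the job name below is hypothetical:
sqlplus / as sysdba
-- list jobs that could fire during the patching window
SELECT owner, job_name FROM dba_scheduler_jobs WHERE enabled = 'TRUE';
-- disable each one, noting owner.job_name so it can be re-enabled after patching
EXEC DBMS_SCHEDULER.DISABLE('SCOTT.NIGHTLY_LOAD_JOB');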

10. Stop the GG (GoldenGate) crontab job, if any.

11. Stop GG if it is running and connected.
Log in as ggpusr2 and, from the ggsci command line, stop the GG processes in this order:
Stop the extracts.
Stop the replicats.
Stop the manager.
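For example, from ggsci (the wildcard covers all groups rather than naming each one; STOP MANAGER! skips the confirmation prompt):
GGSCI> INFO ALL
GGSCI> STOP EXTRACT *
GGSCI> STOP REPLICAT *
GGSCI> STOP MANAGER!
GGSCI> INFO ALL    ## confirm everything shows STOPPED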

12. Save cluster resources:
 login as root
    . oraenv
    +ASM1
    crsctl check cluster
    crsctl stat res -t  ##make sure to compare after patching##
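For example, save the resource state to a file so it can be diffed after patching (the file name is just a suggestion):
crsctl stat res -t > /tmp/crs_res_before.log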


13. Make sure all the pre-steps are done before the apps shutdown request is made, and make sure all the apps come down.

Perform the patching work manually:

1. Login as user oracle.
Set the environment and make sure the statusfileday.log file is generated by the command below.

Stop the database home:
srvctl stop home -o /u01/app/oracle/product/11.2.0.4/dbhome_1 -s /tmp/statusfileday.log -n test1.com

Below is an example of what that log looks like when you cat it: one line per database, in the form db-<dbname> in lowercase. If it does not get populated, you can vi the file manually to make it look like the example below.
$ cat statusfileday.log
db-dbnamea
db-dbnameb

2. Now login as root.

cd /u01/app/12.2.0.1/grid/crs/install/
Run the below script:
./rootcrs.sh -prepatch 

3. Now login as Grid and apply the 5 GI patches.

export PATH=$PATH:$ORACLE_HOME/OPatch
##OCW##
opatch apply -oh /u01/app/12.2.0.1/grid  -local /gg_nas1/software/oracle/patches/1H2019/12cGI/28828733/28870605 

##ACFS##
opatch apply -oh /u01/app/12.2.0.1/grid  -local /gg_nas1/software/oracle/patches/1H2019/12cGI/28828733/28864846

 ##DB WLM##
opatch apply -oh /u01/app/12.2.0.1/grid  -local /gg_nas1/software/oracle/patches/1H2019/12cGI/28828733/26839277

##DB RU##
opatch apply -oh /u01/app/12.2.0.1/grid  -local /gg_nas1/software/oracle/patches/1H2019/12cGI/28828733/28822515

##TOMCAT##
opatch apply -oh /u01/app/12.2.0.1/grid  -local /gg_nas1/software/oracle/patches/1H2019/12cGI/28828733/28566910

4. Now login as oracle.
cd /gg_nas1/software/oracle/patches/1H2019/11gGI/28813878/28729234/custom/server/28729234/custom/scripts

./prepatch.sh -dbhome /u01/app/oracle/product/11.2.0.4/dbhome_1

5. Now login as Oracle and apply the 2 DB patches.

export PATH=$PATH:$ORACLE_HOME/OPatch

opatch napply -oh /u01/app/oracle/product/11.2.0.4/dbhome_1  -local /gg_nas1/software/oracle/patches/1H2019/11gGI/28813878/28729234/custom/server/28729234/

opatch apply -oh /u01/app/oracle/product/11.2.0.4/dbhome_1  -local  /gg_nas1/software/oracle/patches/1H2019/11gGI/28813878/28729262

6. As user oracle
cd /gg_nas1/software/oracle/patches/1H2019/11gGI/28813878/28729234/custom/server/28729234/custom/scripts

./postpatch.sh -dbhome /u01/app/oracle/product/11.2.0.4/dbhome_1

7. Now login as root and run the below scripts.

cd /u01/app/12.2.0.1/grid/rdbms/install/
./rootadd_rdbms.sh


## 12c: if you used prepatch above, use this; it will bring the cluster up ##
cd /u01/app/12.2.0.1/grid/crs/install/
./rootcrs.sh -postpatch 

8. Now login as oracle and start the home. This is where we use the statusfileday.log file we created. After this step you should see that all the databases came up and are running; just check with ps -ef | grep pmon.
If they did not, vi /tmp/statusfileday.log and make it look like the example in the pre-steps (all DB names that run from that home should be listed), then start the home again. The databases will come up. At this point perform the post-patching steps and verify that the patches were applied.


srvctl start home -o /u01/app/oracle/product/11.2.0.4/dbhome_1 -s /tmp/statusfileday.log -n test.com

Perform the post-patching work:

1. Apply the catbundle in all the databases.
cd $ORACLE_HOME/rdbms/admin
Use menu to source the oracle environment for each database.
sqlplus / as sysdba
@catbundle.sql psu apply
@utlrp
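To confirm the bundle was recorded in each database, a standard check (not specific to this environment) is to query the registry history:
SELECT action_time, action, version, comments FROM dba_registry_history ORDER BY action_time;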

2. Run the RMAN catalog upgrade for the PROD databases.
Connect and run the upgrade command twice:
rman catalog rman_dbname/linuxpassword@OMSA
upgrade catalog
upgrade catalog
Do the same for all other databases.

3. Verify the patches are applied on all nodes: the 2 patches for the ORACLE home and the 5 patches for the GRID home. Check the inventory.

login as user oracle:  set environment for oracle 
export PATH=$PATH:$ORACLE_HOME/OPatch

opatch lsinventory  | grep 28729234
opatch lsinventory  | grep 28729262

Login as grid and then set the grid environment for ASM.

export PATH=$PATH:$ORACLE_HOME/OPatch

opatch lsinventory  | grep 28870605
opatch lsinventory  | grep 28864846 
opatch lsinventory  | grep 26839277 
opatch lsinventory  | grep 28822515 
opatch lsinventory  | grep 28566910

4. Verify the cluster.
 login as root
    . oraenv
    +ASM1
    crsctl check cluster
    crsctl stat res -t  ## make sure it matches the output we saved before patching ##

5. Verify the databases.
ps -ef | grep pmon   ## make sure all the DBs came back up
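In addition to checking pmon, srvctl can confirm that every instance of a database is running (dbname is a placeholder):
srvctl status database -d dbname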

6. Relocate the services to where they were before patching so the load is balanced as before, for example:
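A sketch of the 11g relocate syntax; the database, service, and instance names are placeholders taken from the service status output saved in the pre-steps:
srvctl relocate service -d dbname -s service_name -i instance2 -t instance1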


7. Start the OEM agents. Sudo to the omsagent user (this is where you use your root access again to sudo to omsagent).
cd /tools/agents/oracle/occ/agent12c/agent_inst/bin
./emctl start agent
./emctl status agent

8. Make sure the file systems come back on all nodes.
Run df -h and compare it with the df -h log saved in root's home directory before patching. If any file system is missing, mount it, replacing the path below with its name:

mount /nameof/orlocationof/filesystem

9. If we stopped GG, then start GoldenGate.
Start the manager and all extracts and replicats.

10. Start the jobs:

a. Enable the DB scheduler jobs (see the sketch after this list).
b. Change the crontab schedule back to what it was before patching.
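A minimal sketch mirroring the disable step in the pre-steps; the job name is hypothetical:
sqlplus / as sysdba
EXEC DBMS_SCHEDULER.ENABLE('SCOTT.NIGHTLY_LOAD_JOB');
## for cron jobs, edit the crontab and restore the original schedule
crontab -e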

11. Bring up the apps now.
Ask the apps team to start the apps and notify the teams that patching has been done.

That is it; we are done patching the server for 12c grid and 11g DB.



Troubleshooting:
If the postpatch script fails for root, run it as below in non-rolling mode:

./rootcrs.sh -prepatch -nonrolling
./rootcrs.sh -postpatch -nonrolling

The postpatch script will then bring the cluster up.


=========================================
NOTE: below are notes for myself; you can disregard them:

The prepatch script changes the cluster to rolling mode, so there is no need to do this step manually.

##SELECT SYS_CONTEXT('SYS_CLUSTER_PROPERTIES', 'CLUSTER_STATE') FROM DUAL;
##SELECT SYS_CONTEXT('SYS_CLUSTER_PROPERTIES', 'CURRENT_PATCHLVL') FROM DUAL;
##ALTER SYSTEM START ROLLING PATCH;
##ALTER SYSTEM STOP ROLLING PATCH;

./rootcrs.sh -unlock
./clscfg -localpatch
./rootcrs.sh -lock

===========================================

Hope this helps !
Ahmad
