Monday, April 9, 2018

Presteps for PSU patching

The commands below are for my cluster; you will have to update them for your clusters depending on the Oracle home, Grid home, and server names. At this point I am assuming all the pre-steps are in done status. If not, work through the pre-step checklist below.
Perform the pre-patching work:

PRE STEPS:
1. Create Blackout for host being patched in OEM.
how-to-create-database-blackout-in-oem

2. Create a proactive Sev 1 SR with Oracle Support for any unknown problems we may encounter.

3. Get root access to the servers being patched from the OS admins.


4. Save a copy of where the services are running now.

Do this manually once if it is not automated; after patching, verify the services against this copy so load balancing is preserved. The command to check service status is below.

srvctl status service -d DBNAME 
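To make the after-patching comparison mechanical, the saved output can be diffed against a fresh capture. A minimal sketch; the printf lines stand in for real `srvctl status service -d DBNAME` output, and svc_app, db1, db2 are placeholder names:

```shell
# Hedged sketch: save srvctl output before patching, diff it afterwards.
# The printf lines are stand-ins for captured "srvctl status service" output.
printf 'Service svc_app is running on instance(s) db1\n' > services_before.log
printf 'Service svc_app is running on instance(s) db2\n' > services_after.log
if diff services_before.log services_after.log > services.diff; then
    echo "service placement unchanged"
else
    echo "service placement changed -- relocate services to rebalance"
fi
```

In real use, replace the printf stand-ins with `srvctl status service -d DBNAME > services_before.log` before patching and the same redirected to `services_after.log` afterwards.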

5. Save the opatch lsinventory, and check the opatch version and patch conflicts for both the ORACLE and GRID homes, on all nodes.
As oracle: set the environment for any database running on the same ORACLE_HOME.
export PATH=$PATH:$ORACLE_HOME/OPatch 
opatch lsinventory > lsinventory_oracle.log
opatch version
#Note: opatch version needs to be 11.2.0.3.6 or higher, upgrade to latest version ideally.
#Note: the directory /gg_nas1/software/oracle/patches/1H2018/26030799 is where my patch is downloaded. 
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir  /gg_nas1/software/oracle/patches/1H2018/26030799
As grid: set the environment for ASM if on node 1.
. oraenv
+ASM1
export PATH=$PATH:$ORACLE_HOME/OPatch 
opatch lsinventory > lsinventory_grid.log
opatch version
#Note: opatch version needs to be 11.2.0.3.6 or higher, upgrade to latest version ideally.
#Note: the directory /gg_nas1/software/oracle/patches/1H2018/26030799 is where my patch is downloaded. 
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir  /gg_nas1/software/oracle/patches/1H2018/26030799

If you find any conflicts, upload the logs to the SR you opened for this environment and ask Oracle for a solution.
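The 11.2.0.3.6 minimum version can also be checked in a script. A hedged sketch: the sample string stands in for the real `opatch version` output, and `sort -V` (GNU coreutils) does the version comparison:

```shell
# Hedged sketch: verify the OPatch version meets the 11.2.0.3.6 minimum.
# In practice, feed it the output of: $ORACLE_HOME/OPatch/opatch version
ver="OPatch Version: 11.2.0.3.18"     # sample output line, not live output
min="11.2.0.3.6"
have=$(echo "$ver" | awk '{print $3}')
# sort -V orders version strings field by field; if the minimum sorts first
# (or equal), the installed version is new enough.
lowest=$(printf '%s\n%s\n' "$have" "$min" | sort -V | head -1)
if [ "$lowest" = "$min" ]; then
    echo "OPatch $have is OK (>= $min)"
else
    echo "OPatch $have is too old -- upgrade before patching"
fi
```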

6. Save a snapshot of the file systems on all nodes; sometimes some file systems need to be remounted after patching finishes:
Log in as root on each node.
df -h > nodefilesystembefore.log  -- do this on all nodes
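A small sketch of the snapshot-and-compare idea, with the hostname added to the file name so the per-node logs stay distinct (file names are my own convention, not from any tool):

```shell
# Hedged sketch: snapshot this node's mounted file systems so the list can be
# diffed after patching to spot anything left unmounted. Run once per node.
snap="fs_before_$(hostname).log"
df -h > "$snap"
echo "saved file system snapshot to $snap"
# After patching:  df -h > "fs_after_$(hostname).log"
#                  diff "$snap" "fs_after_$(hostname).log"
```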



Start these steps on the day of execution, meaning the day you are patching:

7. Stop the OEM agent. Sudo to the omsagent user and change to the agent location. In my case the agent runs from the location below.
cd /tools/agents/oracle/occ/agent12c/agent_inst/bin
./emctl stop agent
./emctl status agent

8. Stop jobs: make sure no backup jobs are running.

a. RMAN jobs if any.

ps -ef | grep rman

b. Disable any jobs; talk to the application folks about which jobs need to be disabled before patching.

Stage a script ahead of time, and run it to stop the jobs:

exec DBMS_SCHEDULER.DISABLE('OWNER.JOB_NAME');
c. Change or stop the crontab. In my case I am changing the schedule for the servers being patched:
There are three cron jobs which run at 5 AM GMT; I am changing them to 1 PM for the patching week only.
Change the crontab entries for those three jobs from
 00 5 * * 0  ====>> to 00 13 * * 0
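The edit itself is only the minute and hour fields of the cron entry. One way to script it with sed on a copy of the crontab (a sketch: the job path is a placeholder, and you would keep the original via `crontab -l` for restoring after the patching week):

```shell
# Hedged example: shift the 5 AM Sunday entry to 1 PM by rewriting the
# minute/hour fields. /home/oracle/backup.sh is a placeholder job; in real
# use, cron_before.txt would come from: crontab -l > cron_before.txt
echo '00 5 * * 0 /home/oracle/backup.sh' > cron_before.txt
sed 's/^00 5 /00 13 /' cron_before.txt > cron_patchweek.txt
cat cron_patchweek.txt
```

Review cron_patchweek.txt, load it with `crontab cron_patchweek.txt`, and restore the original after patching with `crontab cron_before.txt`.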
9. Stop any other jobs that run on these servers. No jobs should run during patching.

10. Stop GG processes.

Log in as the user which runs the GG processes and, from the ggsci command line, stop them in order:
Stop the extracts.
Stop the replicats.
Stop the manager.
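One way to keep this sequence handy is to stage it as an obey file ahead of time and run it on the GG host from ggsci with `obey stop_gg.oby`. A sketch (the file name is my own choice):

```shell
# Hedged sketch: stage the ggsci stop sequence in an obey file ahead of time.
# Order matters: extracts first, then replicats, then the manager
# ("!" suppresses the confirmation prompt on stop manager).
cat > stop_gg.oby <<'EOF'
stop extract *
stop replicat *
stop manager !
EOF
cat stop_gg.oby
```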
11. Make sure all the pre-steps are done before the apps shutdown request is made, and make sure all the apps come down.
