Let's perform the pre-patching work:
PRE STEPS:
1. Download, stage, and unzip the new patches to a directory.
I am using /datapump/orapatch/ here since it is shared.
cd /datapump/orapatch -- unzip patch files
GI as grid
su - grid
cd /datapump/orapatch
unzip p28828733_122010_Linux-x86-64.zip -- this will create directory 28828733
RDBMS as oracle
su - oracle
cd /datapump/orapatch
unzip p28729262_112040_LINUX.zip -- this will create directory 28729262
2. Create Blackout for host being patched.
3. Create a proactive Sev 1 SR for any unknown problems we may encounter.
4. Get root access for the servers to be patched.
5. Save a copy of where the services are running now.
This is mostly a one-node EC2 instance until AWS offers RAC.
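A minimal sketch of saving that snapshot (the grid home path and output directory are assumptions for this host; adjust as needed):

```shell
# Sketch: snapshot the current resource/service state before patching so the
# post-patch state can be compared. GRID_HOME and SNAPDIR are assumptions.
GRID_HOME=${GRID_HOME:-/u01/app/12.2.0.1/grid}
SNAPDIR=${SNAPDIR:-/tmp}
STAMP=$(date +%Y%m%d_%H%M%S)
OUT="$SNAPDIR/crs_status_before_$STAMP.log"
if [ -x "$GRID_HOME/bin/crsctl" ]; then
  "$GRID_HOME/bin/crsctl" stat res -t > "$OUT"
  echo "saved resource state to $OUT"
fi
```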
6. Save opatch lsinventory; check the OPatch version and conflicts
for both the ORACLE and GRID homes, on all nodes.
As oracle: set database environment.
export PATH=$PATH:$ORACLE_HOME/OPatch
opatch lsinventory > lsinventory_oracle.log
opatch version
#Note: the OPatch version needs to be 11.2.0.3.6 or higher.
Ideally, update OPatch to the latest version.
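One way to script that minimum-version check (a sketch; the home path is this host's assumed one, and p6880880 is the standard OPatch update patch on My Oracle Support -- the exact zip depends on your release):

```shell
# Sketch: check the installed OPatch version against the 11.2.0.3.6 minimum.
# OH defaults to this host's assumed RDBMS home; the same check applies to the
# grid home.
OH=${ORACLE_HOME:-/u01/app/oracle/product/11.2.0.4/dbhome_1}
min_ok() {  # true if dotted version $1 >= dotted version $2
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}
if [ -x "$OH/OPatch/opatch" ]; then
  ver=$("$OH/OPatch/opatch" version | awk '/Version/ {print $3; exit}')
  if min_ok "$ver" 11.2.0.3.6; then
    echo "OPatch $ver meets the 11.2.0.3.6 minimum"
  else
    echo "OPatch $ver is too old -- unzip the latest p6880880 into $OH"
  fi
fi
```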
Check oracle patch conflicts:
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /datapump/orapatch/28729262
As grid: set the ASM environment.
export PATH=$PATH:$ORACLE_HOME/OPatch
opatch lsinventory > lsinventory_grid.log
opatch version
#Note: the OPatch version needs to be 11.2.0.3.6 or higher.
Ideally, update OPatch to the latest version.
Check grid patch conflicts:
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /datapump/orapatch/28828733/28566910
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /datapump/orapatch/28828733/28822515
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /datapump/orapatch/28828733/28864846
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /datapump/orapatch/28828733/28870605
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /datapump/orapatch/28828733/26839277
If you find any conflicts, now is the time to upload them to the environment-specific SR.
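The five grid conflict checks above can be run in one loop (same bundle directories as above; the opatch on PATH is assumed to be the grid home's):

```shell
# Sketch: run the CheckConflictAgainstOHWithDetail prereq for every
# sub-patch directory in the unzipped 28828733 bundle.
BASE=/datapump/orapatch/28828733
for p in 28566910 28822515 28864846 28870605 26839277; do
  if [ -d "$BASE/$p" ] && command -v opatch > /dev/null; then
    opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir "$BASE/$p"
  fi
done
```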
7. Save a snapshot of file system on all nodes:
login as root on each node
df -h > nodefilesystembefore.log -- do this on all nodes; in this case, one EC2 instance.
8. Save crsctl status:
# cd /u01/app/12.2.0.1/grid/bin
# ./crsctl stat res -t -- save this output; we will compare against it after the patch.
9. Update OPatch to the latest version for both the grid and oracle homes.
##On the day of patching ##
Pre-patching steps:
0. Shut down all apps, if there are any.
1. Shut down all databases except ASM.
-- su - oracle
srvctl stop home -o /u01/app/oracle/product/11.2.0.4/dbhome_1 -s /tmp/statusfileday.log
If you stop the home, all databases running from it should stop; otherwise stop them manually.
-- srvctl stop database -d PREEX
-- srvctl stop database -d PRDDX
-- srvctl stop database -d SOBBX
2. Make sure .patch_storage exists in both the GI home and the RDBMS home:
ls -ltra /u01/app/oracle/product/11.2.0.4/dbhome_1/.patch_storage
ls -ltra /u01/app/12.2.0.1/grid/.patch_storage
Both should return output; if not, fix that before applying the patch.
3. shutdown dbconsole and agent:
for each node do as user oracle:
cd /tools/agents/oracle/occ/agent12c/agent_inst/bin
menu -- choose the proper db name -- sets up the db environment
./emctl stop dbconsole
./emctl stop agent
4. ps -ef | grep oc4j -- kill -9 <pid> if any -- ensure no dbconsole is running
5. ps -ef | grep emagent -- kill -9 <pid> if any -- ensure no emagent is running
6. ps -ef | grep omsagent -- kill -9 <pid> if any
7. Stop Jobs:
a. RMAN jobs if any.
ps -ef | grep rman
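Steps 4-7 above can be wrapped in one check (the process patterns are the ones from this runbook; kill -9 only after confirming the PID):

```shell
# Sketch: report any leftover dbconsole/agent/RMAN processes before patching.
leftover() {  # print processes matching $1, excluding the grep itself
  ps -ef | grep -E "$1" | grep -v grep
}
for pat in oc4j emagent omsagent rman; do
  if leftover "$pat"; then
    echo "WARNING: '$pat' processes above are still running -- stop or kill them"
  fi
done
```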
Let's start applying the patches now:
1. GI 12c patch for ASM -- this needs to be applied by the root user
# export PATH=$PATH:/u01/app/12.2.0.1/grid/OPatch
# cd /u01/app/12.2.0.1/grid/crs/install
Run the prepatch script below:
# ./roothas.sh -prepatch
Now let's switch to grid and apply the patch.
su - grid
$ . oraenv
+ASM1
$ export PATH=$PATH:/u01/app/12.2.0.1/grid/OPatch
There are 5 patches to apply as part of this Jan 2019 12c grid patch; let's apply them:
$ opatch apply -oh /u01/app/12.2.0.1/grid -local /datapump/orapatch/28828733/26839277
$ opatch apply -oh /u01/app/12.2.0.1/grid -local /datapump/orapatch/28828733/28566910
$ opatch apply -oh /u01/app/12.2.0.1/grid -local /datapump/orapatch/28828733/28822515
$ opatch apply -oh /u01/app/12.2.0.1/grid -local /datapump/orapatch/28828733/28864846
$ opatch apply -oh /u01/app/12.2.0.1/grid -local /datapump/orapatch/28828733/28870605
Let's log in as root and run the post-patch scripts:
sudo su - root
cd /u01/app/12.2.0.1/grid/rdbms/install/
# ./rootadd_rdbms.sh
# cd /u01/app/12.2.0.1/grid/crs/install
# ./roothas.sh -postpatch
Now check CRS; it should come back online. This takes about 5 minutes, so be patient.
# cd /u01/app/12.2.0.1/grid/bin
# ./crsctl stat res -t -- check that CRS comes back; most of the first section needs to be online.
Compare with the output saved in the pre steps.
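That comparison can be scripted (the snapshot file names and grid home path are assumptions matching the pre steps):

```shell
# Sketch: diff the post-patch crsctl output against the pre-patch snapshot.
GRID_HOME=${GRID_HOME:-/u01/app/12.2.0.1/grid}
BEFORE=/root/crs_status_before.log    # saved during the pre steps
AFTER=/tmp/crs_status_after.log
if [ -x "$GRID_HOME/bin/crsctl" ]; then
  "$GRID_HOME/bin/crsctl" stat res -t > "$AFTER"
fi
compare_state() {  # report whether two status snapshots match
  if diff -u "$1" "$2" > /dev/null; then
    echo "CRS state matches the pre-patch snapshot"
  else
    echo "CHECK: resource state differs -- review: diff -u $1 $2"
  fi
}
if [ -f "$BEFORE" ] && [ -f "$AFTER" ]; then
  compare_state "$BEFORE" "$AFTER"
fi
```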
2. Let's apply the 11g RDBMS database patch -- this needs to be applied by the oracle user
Run as root to check that no processes are holding these library files (-k kills any that are):
# fuser -v -k /u01/app/oracle/product/11.2.0.4/dbhome_1/lib/libclntsh.so.11.1
# fuser -v -k /u01/app/oracle/product/11.2.0.4/dbhome_1/lib/libnmemso.so
The commands should produce no output.
Let's switch to oracle:
su - oracle
$ export PATH=$PATH:/u01/app/oracle/product/11.2.0.4/dbhome_1/OPatch
$ cd /datapump/orapatch/28729262
$ opatch napply /datapump/orapatch/28729262
You will have to confirm that the environment is ready, and you can skip the email prompt.
Once OPatch confirms the patch has been applied,
bring the databases online:
Start the home we stopped; it should bring the databases up. If not, bring them up manually.
srvctl start home -o /u01/app/oracle/product/11.2.0.4/dbhome_1 -s /tmp/statusfileday.log -n testserver1.com
$ srvctl start database -d PREEX
$ srvctl start database -d PRDDX
$ srvctl start database -d SOBBX
Let's perform the post-patching steps:
For each database except +ASM, do the following.
Apply catbundle:
$ menu -- choose the number for the database name -- this sets up the database environment
cd $ORACLE_HOME/rdbms/admin
sqlplus / as sysdba
SQL> @catbundle.sql cpu apply
SQL> @utlrp.sql
SQL> QUIT
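To confirm that catbundle registered and utlrp left nothing invalid, query the standard dictionary views; the wrapper below is a sketch (run per database with its environment set):

```shell
# Sketch: verify the bundle is recorded in dba_registry_history and count
# invalid objects. The views are standard; the wrapper is illustrative.
SQL='select action_time, action, comments from dba_registry_history order by action_time;
select count(*) as invalid_objects from dba_objects where status = '\''INVALID'\'';'
if command -v sqlplus > /dev/null; then
  echo "$SQL" | sqlplus -s / as sysdba
fi
```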
Verify the patches:
1. GI -- as grid user:
su - grid
$ cd /u01/app/12.2.0.1/grid/OPatch
export PATH=$PATH:$ORACLE_HOME/OPatch
opatch lsinventory | grep 28870605
opatch lsinventory | grep 28864846
opatch lsinventory | grep 26839277
opatch lsinventory | grep 28822515
opatch lsinventory | grep 28566910
$ ./opatch lsinventory | grep 190115 -- should have output covering all 5 patches above.
2. RDBMS -- as oracle user:
su - oracle
$ cd /u01/app/oracle/product/11.2.0.4/dbhome_1/OPatch
export PATH=$PATH:$ORACLE_HOME/OPatch
$ ./opatch lsinventory |grep 190115 -- should have output
$ ./opatch lsinventory |grep 28729262
3. Start dbconsole and the agent.
for each db do as user oracle:
menu -- choose proper db name --- setup db env
./emctl start dbconsole
./emctl start agent
./emctl status dbconsole -- log in to the console page as oem_user to verify it is working.
4. Make sure the file systems come back on all nodes.
Run df -h and compare with the df -h log saved in root's home directory before patching.
If any mount is missing, mount it.
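A sketch of that comparison, using the nodefilesystembefore.log saved in pre step 7 (only mount points are compared, since usage numbers will legitimately drift):

```shell
# Sketch: compare current mount points against the pre-patch df snapshot.
mounts() { awk 'NR > 1 {print $NF}' "$1" | sort; }  # mount points only
BEFORE=/root/nodefilesystembefore.log               # from pre step 7
AFTER=/tmp/nodefilesystemafter.log
if [ -f "$BEFORE" ]; then
  df -h > "$AFTER"
  mounts "$BEFORE" > /tmp/mounts_before
  mounts "$AFTER" > /tmp/mounts_after
  if diff /tmp/mounts_before /tmp/mounts_after > /dev/null; then
    echo "all pre-patch mount points are back"
  else
    echo "CHECK: mount point list changed -- mount anything missing"
  fi
fi
```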
5. Once everything looks good, request the start of the applications.