I. Grid Infrastructure binaries installation
Note: GI is delivered as a zip file that must be extracted directly into the target Grid home directory; Oracle Universal Installer is no longer used to lay down the GI binaries.
1. Create directories for the Oracle GI home, Oracle Inventory, and Oracle base on all RAC nodes:
Log on to the server as user root:
chown grid:oinstall /u01/app
chmod -R 775 /u01/app
Log on to the server as user grid:
mkdir -p /u01/app/12.2.0.1/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oraInventory
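A quick check that the directories ended up with the expected ownership and permissions (a minimal sanity check, not part of the official steps):
ls -ld /u01/app /u01/app/12.2.0.1/grid /u01/app/grid /u01/app/oraInventory
--- all four should be owned by grid:oinstall and writable by the oinstall group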
2. Log on as the grid user and unzip the GI media into /u01/app/12.2.0.1/grid on node 1:
cd /u01/app/12.2.0.1/grid
unzip -q /gg_nas1/software/oracle/12c2/linuxx64_12201_grid_home.zip
export ORACLE_HOME=/u01/app/12.2.0.1/grid
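A quick sanity check that the media extracted cleanly and that the setup script is in place (gridSetup.sh is used later in section II):
ls -l $ORACLE_HOME/gridSetup.sh
df -h /u01    --- confirm there is still enough free space left for patching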
3. As user grid, apply patch 25784424 on node 1 (this fixes installer bug 25784424):
a. Modify the script $ORACLE_HOME/oui/bin/attachHome.sh to add a new ORACLE_HOME:
cp $ORACLE_HOME/oui/bin/attachHome.sh patch_attachHome.sh
--- make sure the following are correct:
OHOME=/u01/app/12.2.0.1/grid
OHOMENAME=OraGI12Home1
--- attach the new ORACLE_HOME
./patch_attachHome.sh
b. Change permissions on the following files (this prevents errors during patching):
chmod 775 /u01/app/12.2.0.1/grid/oui/lib/linux64/libsrvm12.so
chmod 775 /u01/app/12.2.0.1/grid/rdbms/lib/config.c
c. Apply the patches.
1) Apply the patch that fixes the installer issue:
$ORACLE_HOME/OPatch/opatch napply -oh /u01/app/12.2.0.1/grid -local /gg_nas1/software/oracle/12c2/patches/25784424/25784424
2) Upgrade OPatch (required before applying the latest GI RU):
cd $ORACLE_HOME
mv OPatch OPatch.old
unzip /gg_nas1/software/oracle/patches/12c2/p6880880_122010_Linux-x86-64.zip
3) Apply the latest GI RU patch 26610291 (each sub-patch is applied locally; a quick verification is sketched after step d below):
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/12.2.0.1/grid -local /gg_nas1/software/oracle/patches/12c2/26610291/26609966
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/12.2.0.1/grid -local /gg_nas1/software/oracle/patches/12c2/26610291/25586399
$ORACLE_HOME/OPatch/opatch apply -oh /u01/app/12.2.0.1/grid -local /gg_nas1/software/oracle/patches/12c2/26610291/26609817
d. Modify the script $ORACLE_HOME/oui/bin/detachHome.sh to detach the ORACLE_HOME that was added in step a:
cp $ORACLE_HOME/oui/bin/detachHome.sh patch_detachHome.sh
--- make sure the following are correct:
OHOME=/u01/app/12.2.0.1/grid
OHOMENAME=OraGI12Home1
--- detach the new ORACLE_HOME
./patch_detachHome.sh
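As a quick sanity check after the patching above (a sketch; the patch numbers are the ones applied in this build), the OPatch version and the list of applied interim patches can be verified:
$ORACLE_HOME/OPatch/opatch version
$ORACLE_HOME/OPatch/opatch lspatches -oh /u01/app/12.2.0.1/grid
--- expect an OPatch 12.2.0.1.x version and patches 25784424, 26609966, 25586399 and 26609817 in the output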
II. Configure RAC cluster
Log on as the grid user.
1. Make a copy of the GI configuration response file:
#> cp /gg_nas1/software/oracle/12c2/scripts/grid.rsp grid_<clustername>.rsp
2. Edit the /gg_nas1/software/oracle/12c2/scripts/grid_<clustername>.rsp file and make sure the following parameters have the correct values for your cluster on node 1 (a filled-in example follows the parameter list):
ORACLE_BASE=
oracle.install.crs.config.gpnp.scanName=
oracle.install.crs.config.gpnp.scanPort=
oracle.install.crs.config.clusterName=
oracle.install.crs.config.clusterNodes=
oracle.install.crs.config.networkInterfaceList=
oracle.install.asm.diskGroup.name=
oracle.install.asm.diskGroup.disksWithFailureGroupNames=
oracle.install.asm.gimrDG.disks=
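For reference, a filled-in example is shown below. The SCAN name, cluster name, node names, interfaces, and disk paths are placeholders for illustration only; the exact value formats (including the interface-type codes) are documented in the comments of the response-file template, so substitute the values for your own cluster:
ORACLE_BASE=/u01/app/grid
oracle.install.crs.config.gpnp.scanName=mycluster-scan.example.com
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.clusterName=mycluster
oracle.install.crs.config.clusterNodes=node1:node1-vip,node2:node2-vip
oracle.install.crs.config.networkInterfaceList=eth0:10.1.1.0:1,eth1:192.168.10.0:5
oracle.install.asm.diskGroup.name=DATA
oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/oracleasm/disks/MAVMAX1_DATA_xxxxxx,
oracle.install.asm.gimrDG.disks=/dev/oracleasm/disks/MAVMAX1_GIMR_xxxxxx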
3. Run gridSetup.sh in silent mode on node 1:
[ma-tdedb01a.cccis.com: bin]$ /u01/app/12.2.0.1/grid/gridSetup.sh -silent -responseFile /gg_nas1/software/oracle/12c2/scripts/grid_<clustername>.rsp
Launching Oracle Grid Infrastructure Setup Wizard...
Log file is under /u01/app/oraInventory/logs/GridSetupActionsxxxxxxxx
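Progress of the silent run can be watched from another session by tailing the current setup log, for example (the exact directory and file names include a timestamp):
tail -f /u01/app/oraInventory/logs/GridSetupActions*/gridSetupActions*.log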
Note: on the test cluster, cvuqdisk-1.0.10-1.rpm was not installed (this should be fixed for new clusters built by the Unix team). To install this rpm:
Log on as root.
Use the following command to check whether an existing version of the cvuqdisk package is installed:
# rpm -qi cvuqdisk
Set the environment variable CVUQDISK_GRP to point to the group that owns cvuqdisk, typically oinstall. For example:
# CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
Install the cvuqdisk package:
# cd /gg_nas1/software/oracle/12c2/rpm
# rpm -iv cvuqdisk-1.0.10-1.rpm
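After installing the rpm, the prerequisite checks can be re-run with the cluster verification utility shipped in the unzipped grid home (a sketch; node1 and node2 are placeholders for your actual host names):
/u01/app/12.2.0.1/grid/runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose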
4. As user root, run the root.sh script on all nodes, as instructed by the installer output:
As a root user, execute the following script(s):
1. /u01/app/12.2.0.1/grid/root.sh
Execute /u01/app/12.2.0.1/grid/root.sh on the following nodes:
[NODE1, NODE2]
Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes.
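Once root.sh has completed on every node, the stack can be checked from any node before moving on (a quick sanity check):
/u01/app/12.2.0.1/grid/bin/crsctl check cluster -all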
5. Check the status of the newly built RAC cluster
As user grid:
[NODE1: bin]$ . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
[NODE1: bin]$ crsctl stat res -t
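A few additional checks that are often useful at this point (all run as the grid user with the +ASM1 environment set above):
crsctl check crs          --- local CRS/CSS/EVM status
olsnodes -n -s            --- cluster node list with node numbers and status
srvctl status asm         --- ASM instances on all nodes
srvctl config scan        --- SCAN name and VIP configuration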
III. Post-installation
Log on as the grid user.
1. Create a disk group for the Fast Recovery Area:
cd /u01/app/12.2.0.1/grid/bin
a. Set up the environment
#> . oraenv
b. Create the disk group RECO.
#> sqlplus / as sysasm
SQL> CREATE DISKGROUP RECO EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/MAVMAX1_RECO_xxxxxx';
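To confirm the new disk group is available, a quick check can be run (note that a disk group created from SQL*Plus is mounted only on the local ASM instance, so it may still need to be mounted on the remaining nodes; NODE2 below is a placeholder for your second node's host name):
asmcmd lsdg
srvctl start diskgroup -diskgroup RECO -node NODE2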
Once the above steps are complete, your clusterware is ready. The next steps are to install the Oracle Database software, create a database, and enable it as a cluster database.
I will add the links to those blogs here.