1. Set instance-specific parameters in your SPFILE. I had to add the following parameters for my new instance (this will be the third instance, so it gets instance number 3, redo thread 3, and its own undo tablespace):
alter system set instance_number=3 scope=spfile sid='test3';
alter system set local_listener='LISTENER_test3' scope=spfile sid='test3';
alter system set log_archive_format='%t_%s_%r.dbf' scope=spfile sid='test3';
alter system set log_archive_trace=0 scope=spfile sid='test3';
alter system set thread=3 scope=spfile sid='test3';
alter system set undo_tablespace='UNDOTBS3' scope=spfile sid='test3';
2. One of the parameters requires the LISTENER_test3 entry in the TNSNAMES.ORA file. Make this entry in the file, along with any other entries you need for this new instance.
test =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = test-scan)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = test)
)
)
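The LOCAL_LISTENER parameter set in step 1 also needs a matching LISTENER_test3 alias in TNSNAMES.ORA. A minimal sketch of that entry follows; the hostname myhost03-vip is an assumption here, so substitute the actual VIP of your new node:

```
LISTENER_test3 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = myhost03-vip)(PORT = 1521))
```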
3. I put my spfile and password file on shared storage. On the new node, I need to create softlinks to those in $ORACLE_HOME/dbs.
cd $ORACLE_HOME/dbs
(in my case, /u01/app/oracle/product/12.2.0.1/dbhome_1/dbs)
Then run ls and make sure the softlinks exist.
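If the softlinks are not already in place, they can be created along these lines. The shared-storage path and file names below are hypothetical, so substitute your own:

```shell
# Hypothetical shared-storage location and file names -- adjust for your site.
cd $ORACLE_HOME/dbs
ln -s /shared/oradata/test/spfiletest.ora spfiletest3.ora
ln -s /shared/oradata/test/orapwtest orapwtest3
ls -l spfiletest3.ora orapwtest3
```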
4. Add online redo log groups.
SQL> select group#,thread#,bytes/1048576 as MB,STATUS from v$log order by 2,1;
GROUP# THREAD# MB STATUS
---------- ---------- ---------- ----------------
1 1 200 INACTIVE
2 1 200 CURRENT
3 2 200 INACTIVE
4 2 200 CURRENT
alter database add logfile thread 3
group 5 ('+DATA','+RECO') size 200M,
group 6 ('+DATA','+RECO') size 200M;
SQL> select group#,thread#,bytes/1048576 as MB,STATUS from v$log order by 2,1;
GROUP# THREAD# MB STATUS
---------- ---------- ---------- ----------------
1 1 200 INACTIVE
2 1 200 CURRENT
3 2 200 INACTIVE
4 2 200 CURRENT
5 3 200 CURRENT
6 3 200 UNUSED
6 rows selected.
5. Add standby redo log files. Standby log groups share the same group numbering as the online log groups, so use group numbers that are not already in use:
alter database add standby logfile thread 3
group 7 ('+DATA','+RECO') size 200M,
group 8 ('+DATA','+RECO') size 200M;
6. Since I have a standby database, I need to add online redo logs and standby redo logs in the standby database as well. See my previous post.
7. Enable redo thread.
column instance format a20
select thread#, status, enabled, groups, substr(instance,1,12) as instance,last_redo_time from v$thread;
alter database enable thread 3;
8. Create the new instance in Grid Infrastructure.
srvctl add instance -d test -i test3 -n myhost03
9. Start the new instance.
srvctl start instance -d test -i test3
10. On the new node, verify SMON is running.
ps -ef|grep smon
11. Verify the instance is open.
select instance_number,instance_name,thread#,status from gv$instance;
12. Modify /etc/oratab on the new node to contain the instance entry.
test3:/u01/app/oracle/product/12.2.0.1/dbhome_1:N
13. On the new node, sign on to the instance with SQL*Plus to ensure you can access the instance.
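A quick session along these lines confirms local access; it assumes the environment on the new node is set for the test3 instance:

```
$ export ORACLE_SID=test3
$ sqlplus / as sysdba
SQL> select instance_name, status from v$instance;
```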
14. Verify instance with GI.
srvctl status database -d test
15. The instance is now up and running on the new node. All that remains is to make any database services available or preferred on the new node, and your connections can start using the new instance.
srvctl modify service -d test -s test_svc -n -i test1,test2,test3
srvctl start service -d test -s test_svc -i test3
srvctl status service -d test
That’s all there is to it!