Detailed steps to convert/migrate your single-instance Oracle 10g database to a 2-node RAC database manually. The current environment is a single-instance Oracle 10g database running on local storage. We will migrate it to shared storage with 2 RAC instances.
HIGH LEVEL STEPS:
1. Preliminary configuration and verification
2. Migrate the existing database to shared storage (raw devices / ASM)
3. Install the CRS software
4. Install the Oracle RAC software
5. Configure listeners and tnsnames files
6. Modify/create initialization parameter and password files
7. Start the first instance
8. Create the RAC data dictionary views
9. Create the undo tablespace and redo log files
10. Bring up the second RAC instance
11. Verify the status of RAC processes/services
DETAILED STEPS:

Pre-Setup:
----------
- Install the Oracle Clusterware and Oracle RAC Database software on both nodes
- Copy/edit the original init file as init$SID1.ora and init$SID2.ora
- Make sure the SID, listeners, tnsnames, profile, and password file are correct on both nodes
Sample init, tnsnames, listener files
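For reference, minimal sketches of the three files follow. These are assumptions for illustration only: the node names (rac1, rac2), VIP hostnames, raw device numbers, and port are not from the original environment, so substitute your own values.

```text
# init$SID.ora highlights (assumed db_name=test10g, instances test10g1/test10g2):
*.cluster_database=true
*.cluster_database_instances=2
*.control_files='/dev/raw/raw6'          # assumed raw device for the controlfile
test10g1.instance_number=1
test10g2.instance_number=2
test10g1.thread=1
test10g2.thread=2
test10g1.undo_tablespace='UNDOTBS1'
test10g2.undo_tablespace='UNDOTBS2'

# tnsnames.ora entry on each node (rac1-vip is an assumed VIP hostname):
TEST10G1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = test10g)(INSTANCE_NAME = test10g1)))

# listener.ora on node 1:
LISTENER_RAC1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rac1-vip)(PORT = 1521))))
```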
Moving DB to shared storage:
----------------------------
Create a pfile from the spfile on the original instance, if using an spfile.
Shut down the original instance.
Copy only the datafiles onto the raw devices (redo logs and controlfiles are recreated later), e.g.:
dd if=/oradata/file_name of=/dev/raw/raw5
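When several datafiles have to be moved, a small shell helper keeps the file-to-device mapping explicit. This is a sketch only; the datafile paths and raw device names in the mapping are illustrative, not taken from the original environment:

```shell
#!/bin/sh
# Copy one datafile, block for block, onto its raw device.
copy_datafile() {
  src="$1"; dst="$2"
  # bs=1M just speeds the copy up; the copied content is identical either way
  dd if="$src" of="$dst" bs=1M 2>/dev/null
}

# Hypothetical mapping -- substitute the output of
#   select name from v$datafile;
# and your own raw devices:
# copy_datafile /u01/app/oracle/oradata/test10g/users01.dbf /dev/raw/raw7
# copy_datafile /u01/app/oracle/oradata/test10g/temp01.dbf  /dev/raw/raw8
```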
Start the instance in mount state:
STARTUP MOUNT;
Rename the datafiles, e.g.:
alter database rename file '/u01/app/oracle/oradata/test10g/users01.dbf' to '/dev/raw/raw7';
alter database rename file '/u01/app/oracle/oradata/test10g/temp01.dbf' to '/dev/raw/raw8';
alter database open;
ADD LOGFILES on shared storage:
(Don't try to drop an active group; switch logfiles until the group is INACTIVE before dropping it)
select group#, bytes, status from v$log;
select group#, member from v$logfile;
e.g.
alter database add logfile group 4 '/dev/raw/raw13' size 128M;
alter database add logfile group 5 '/dev/raw/raw14' size 128M;
select group#, status from v$log;
alter system switch logfile;
alter database drop logfile group 1;
alter database drop logfile group 2;
alter database drop logfile group 3;
select group#, bytes, status from v$log;
alter database backup controlfile to trace;
shutdown immediate;
Edit the init file to point to the raw locations of the controlfiles (rename the spfile so the instance does not start using the spfile).
STARTUP NOMOUNT;
Recreate the controlfile using the CREATE CONTROLFILE statement from the trace, edited for the new file locations.
alter database open;
shutdown immediate;
The database has now been moved to shared storage on the original instance.
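The controlfile re-creation step above can be sketched as follows. The statement is taken from the trace produced by BACKUP CONTROLFILE TO TRACE and edited for the raw devices; the MAX* limits, archivelog mode, and character set here are assumptions, and the LOGFILE/DATAFILE lists must match your own devices:

```sql
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "TEST10G" NORESETLOGS NOARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8       -- must be at least the number of RAC instances
LOGFILE
  GROUP 4 '/dev/raw/raw13' SIZE 128M,
  GROUP 5 '/dev/raw/raw14' SIZE 128M
DATAFILE
  '/dev/raw/raw5',       -- illustrative: system datafile
  '/dev/raw/raw7'        -- illustrative: users01.dbf
CHARACTER SET WE8ISO8859P1;  -- use your database's actual character set
ALTER DATABASE OPEN;
```

Note that tempfiles are not carried through CREATE CONTROLFILE, which is why the tempfile is re-added after the first RAC startup below.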
Now start the DB from RAC instance 1:
(Make sure the RAC instances have the latest init file)
Start the first instance.
Add a tempfile, e.g.:
ALTER TABLESPACE TEMP ADD TEMPFILE '/dev/raw/raw8';
Create the RAC dictionary views:
@$ORACLE_HOME/rdbms/admin/catclust
shutdown immediate
Add the DB to the cluster using srvctl:
srvctl add database -d test10g -o $ORACLE_HOME
srvctl add instance -d test10g -i test10g1 -n rac1
srvctl add instance -d test10g -i test10g2 -n rac2
srvctl start instance -d test10g -i test10g1
alter database add logfile thread 2 group 1 '/dev/raw/raw15' size 128M;
alter database add logfile thread 2 group 2 '/dev/raw/raw16' size 128M;
alter database enable thread 2;
create undo tablespace UNDOTBS2 datafile '/dev/raw/raw10' size 25M;
Start the 2nd instance:
srvctl start instance -d test10g -i test10g2
-------------------------------------------
Verify the RAC services:
crs_stat -t
srvctl status database -d test10g
srvctl stop database -d test10g
srvctl start database -d test10g
select instance_number instance#, instance_name, host_name, status from gv$instance;
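With both instances open, the gv$instance query should return one row per node, along these lines (hostnames rac1/rac2 are assumed here):

```text
 INSTANCE# INSTANCE_NAME    HOST_NAME    STATUS
---------- ---------------- ------------ ------
         1 test10g1         rac1         OPEN
         2 test10g2         rac2         OPEN
```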
-----------------------------------------------------------------------------
You have now converted your single-instance database to a 2-node RAC database.