Sunday 9 November 2014

Rename ASM Disk Group using renamedg

1. Dismount the ASM disk group that you want to rename

SQL> alter diskgroup ORACLE_DATA dismount;

Diskgroup altered.
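Optionally, you can confirm the disk group state from the ASM instance and see which database instances were connected to it; a quick sketch using the standard v$asm_diskgroup and v$asm_client views:

SQL> select name, state from v$asm_diskgroup;
SQL> select db_name, status from v$asm_client;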

 

2. Run a precheck using check=true

[grid]$ renamedg dgname=ORACLE_DATA newdgname=TSTDB_DATA verbose=true check=true asm_diskstring='/dev/*asm*'

 

Parameters in effect:

         Old DG name       : ORACLE_DATA

         New DG name          : TSTDB_DATA

         Phases               :

                 Phase 1

                 Phase 2

         Discovery str        : /dev/*asm*

         Check              : TRUE

         Clean              : TRUE

         Raw only           : TRUE

renamedg operation: dgname=ORACLE_DATA newdgname=TSTDB_DATAverbose=true check=true asm_diskstring=/dev/*asm*

Executing phase 1

Discovering the group

Performing discovery with string:/dev/*asm*

Identified disk UFS:/dev/oracleasm_data03 with disk number:2 and timestamp (34081321 -1577446144)

Identified disk UFS:/dev/oracleasm_data05 with disk number:4 and timestamp (34081321 -1571147520)

Identified disk UFS:/dev/oracleasm_data04 with disk number:3 and timestamp (34081321 -1574295296)

Identified disk UFS:/dev/oracleasm_data02 with disk number:1 and timestamp (34081321 -1584635648)

Identified disk UFS:/dev/oracleasm_data07 with disk number:5 and timestamp (34081321 -1567993600)

Identified disk UFS:/dev/oracleasm_data08 with disk number:6 and timestamp (34081321 -1564857088)

Identified disk UFS:/dev/oracleasm_data01 with disk number:7 and timestamp (34081321 -1527015192)

Identified disk UFS:/dev/oracleasm_data06 with disk number:0 and timestamp (34081321 -1530154752)

Checking for hearbeat...

Re-discovering the group

Performing discovery with string:/dev/*asm*

Identified disk UFS:/dev/oracleasm_data03 with disk number:2 and timestamp (34081321 -1577446144)

Identified disk UFS:/dev/oracleasm_data05 with disk number:4 and timestamp (34081321 -1571147520)

Identified disk UFS:/dev/oracleasm_data04 with disk number:3 and timestamp (34081321 -1574295296)

Identified disk UFS:/dev/oracleasm_data02 with disk number:1 and timestamp (34081321 -1584635648)

Identified disk UFS:/dev/oracleasm_data07 with disk number:5 and timestamp (34081321 -1567993600)

Identified disk UFS:/dev/oracleasm_data08 with disk number:6 and timestamp (34081321 -1564857088)

Identified disk UFS:/dev/oracleasm_data01 with disk number:7 and timestamp (34081321 -1527015192)

Identified disk UFS:/dev/oracleasm_data06 with disk number:0 and timestamp (34081321 -1530154752)

Checking if the diskgroup is mounted or used by CSS

Checking disk number:2

Checking disk number:4

Checking disk number:3

Checking disk number:1

Checking disk number:5

Checking disk number:6

Checking disk number:7

Checking disk number:0

Generating configuration file..

Completed phase 1

Executing phase 2

Looking for /dev/oracleasm_data03

Leaving the header unchanged

Looking for /dev/oracleasm_data05

Leaving the header unchanged

Looking for /dev/oracleasm_data04

Leaving the header unchanged

Looking for /dev/oracleasm_data02

Leaving the header unchanged

Looking for /dev/oracleasm_data07

Leaving the header unchanged

Looking for /dev/oracleasm_data08

Leaving the header unchanged

Looking for /dev/oracleasm_data01

Leaving the header unchanged

Looking for /dev/oracleasm_data06

Leaving the header unchanged

Completed phase 2


3. Rename the disk group

[grid]$ renamedg dgname=ORACLE_DATA newdgname=TSTDB_DATA verbose=true  asm_diskstring='/dev/*asm*'

 

Parameters in effect:

 

         Old DG name       : ORACLE_DATA

         New DG name          : TSTDB_DATA

         Phases               :

                 Phase 1

                 Phase 2

         Discovery str        : /dev/*asm*

         Clean              : TRUE

         Raw only           : TRUE

renamedg operation: dgname=ORACLE_DATA newdgname=TSTDB_DATAverbose=true asm_diskstring=/dev/*asm*

Executing phase 1

Discovering the group

Performing discovery with string:/dev/*asm*

Identified disk UFS:/dev/oracleasm_data03 with disk number:2 and timestamp (34081321 -1577446144)

Identified disk UFS:/dev/oracleasm_data05 with disk number:4 and timestamp (34081321 -1571147520)

Identified disk UFS:/dev/oracleasm_data04 with disk number:3 and timestamp (34081321 -1574295296)

Identified disk UFS:/dev/oracleasm_data02 with disk number:1 and timestamp (34081321 -1584635648)

Identified disk UFS:/dev/oracleasm_data07 with disk number:5 and timestamp (34081321 -1567993600)

Identified disk UFS:/dev/oracleasm_data08 with disk number:6 and timestamp (34081321 -1564857088)

Identified disk UFS:/dev/oracleasm_data01 with disk number:7 and timestamp (34081321 -1527015192)

Identified disk UFS:/dev/oracleasm_data06 with disk number:0 and timestamp (34081321 -1530154752)

Checking for hearbeat...

Re-discovering the group

Performing discovery with string:/dev/*asm*

Identified disk UFS:/dev/oracleasm_data03 with disk number:2 and timestamp (34081321 -1577446144)

Identified disk UFS:/dev/oracleasm_data05 with disk number:4 and timestamp (34081321 -1571147520)

Identified disk UFS:/dev/oracleasm_data04 with disk number:3 and timestamp (34081321 -1574295296)

Identified disk UFS:/dev/oracleasm_data02 with disk number:1 and timestamp (34081321 -1584635648)

Identified disk UFS:/dev/oracleasm_data07 with disk number:5 and timestamp (34081321 -1567993600)

Identified disk UFS:/dev/oracleasm_data08 with disk number:6 and timestamp (34081321 -1564857088)

Identified disk UFS:/dev/oracleasm_data01 with disk number:7 and timestamp (34081321 -1527015192)

Identified disk UFS:/dev/oracleasm_data06 with disk number:0 and timestamp (34081321 -1530154752)

Checking if the diskgroup is mounted or used by CSS

Checking disk number:2

Checking disk number:4

Checking disk number:3

Checking disk number:1

Checking disk number:5

Checking disk number:6

Checking disk number:7

Checking disk number:0

Generating configuration file..

Completed phase 1

Executing phase 2

Looking for /dev/oracleasm_data03

Modifying the header

Looking for /dev/oracleasm_data05

Modifying the header

Looking for /dev/oracleasm_data04

Modifying the header

Looking for /dev/oracleasm_data02

Modifying the header

Looking for /dev/oracleasm_data07

Modifying the header

Looking for /dev/oracleasm_data08

Modifying the header

Looking for /dev/oracleasm_data01

Modifying the header

Looking for /dev/oracleasm_data06

Modifying the header

Completed phase 2

 

4. Mount the renamed disk group

SQL> alter diskgroup TSTDB_DATA mount;

Diskgroup altered.
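To confirm the rename, you can query v$asm_diskgroup from the ASM instance; the disk group should now appear as TSTDB_DATA with state MOUNTED:

SQL> select name, state from v$asm_diskgroup;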

 

Tuesday 4 November 2014

How to upgrade Oracle Clusterware, ASM and Database from 10gR2 [10.2.0.5] RAC to 11gR2 [11.2.0.3] RAC

The current 10gR2 RAC database configuration is as below,

• OS: RHEL 5.8
• Oracle User: oracle
• Grid User: grid
• Oracle Home: /u01/app/oracle/products/10.2.0/db
• Grid Home: /u01/app/grid/products/10.2.0/crs
• The OCR, Voting Disk and Database files are stored on raw devices.

OCR Disks
- /dev/raw/raw1
- /dev/raw/raw2
Voting Disks
- /dev/raw/raw3
- /dev/raw/raw4
- /dev/raw/raw5
Database Files
- /dev/raw/raw6
Archive Log Files
- /dev/raw/raw7
RedoLog Files
- /dev/raw/raw8
- /dev/raw/raw9

======================================================================================================

Complete the below checks before proceeding with the upgrade installation of 11gR2 Clusterware and ASM

- First of all, make sure you have downloaded the below software from the Oracle Support site for the 11gR2 Clusterware and Database installation.
• p10404530_112030_Linux-x86-64_1of7.zip
• p10404530_112030_Linux-x86-64_2of7.zip
• p10404530_112030_Linux-x86-64_3of7.zip

- Configure the SCAN IPs in the DNS. You can also configure the SCAN IP in your 'hosts' file, but Oracle does not recommend it.

- Complete all the preinstallation requirements for Grid Infrastructure and Oracle Software.

- Backup VOTING DISK and OCR
• OCR Backup
ocrconfig -export /restore/crs/ocr_backup.ocr -s online
• Voting Disk Backup
dd if=/dev/raw/raw3 of=/restore/crs/votedisk1.dmp

- Backup CLUSTER HOME, ORACLE HOME and OraInventory on all nodes
• Cluster Home and Oracle Homes [Both Nodes]
tar -czvf CRS_HOME.tar.gz /u01/app/grid/products/10.2.0/crs
tar -czvf ORA_HOME.tar.gz /u01/app/oracle/products/10.2.0/db
tar -czvf ORA_INVENTORY.tar.gz /u01/app/oraInventory

- Create and add the ASM groups required for the 11gR2 Grid Installation to the existing users
# groupadd asmadmin
# groupadd asmdba
# groupadd asmoper
# usermod -a -G asmadmin,asmdba,asmoper grid
# usermod -a -G asmadmin,asmdba,asmoper oracle
# id grid
# id oracle

- Make sure you have added the below entries in the /etc/udev/rules.d/60-raw.rules file. Otherwise you will get a message stating that the "Udev attributes check" failed during the prechecks of the Grid Infrastructure installation.

# vi /etc/udev/rules.d/60-raw.rules

## OCR Disks
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", KERNEL=="raw1", OWNER="root", GROUP="oinstall", MODE="0640"
ACTION=="add", KERNEL=="raw2", OWNER="root", GROUP="oinstall", MODE="0640"

## VOTING Disks
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", KERNEL=="sdf1", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="raw3", OWNER="grid", GROUP="oinstall", MODE="0640"
ACTION=="add", KERNEL=="raw4", OWNER="grid", GROUP="oinstall", MODE="0640"
ACTION=="add", KERNEL=="raw5", OWNER="grid", GROUP="oinstall", MODE="0640"

## DATAFILE, LOGFILE, ARCHIVELOG Disks
ACTION=="add", KERNEL=="sdg1", RUN+="/bin/raw /dev/raw/raw6 %N"
ACTION=="add", KERNEL=="sdh1", RUN+="/bin/raw /dev/raw/raw7 %N"
ACTION=="add", KERNEL=="sdi1", RUN+="/bin/raw /dev/raw/raw8 %N"
ACTION=="add", KERNEL=="sdj1", RUN+="/bin/raw /dev/raw/raw9 %N"
ACTION=="add", KERNEL=="raw6", OWNER="grid", GROUP="oinstall", MODE="0770"
ACTION=="add", KERNEL=="raw7", OWNER="grid", GROUP="oinstall", MODE="0770"
ACTION=="add", KERNEL=="raw8", OWNER="grid", GROUP="oinstall", MODE="0770"
ACTION=="add", KERNEL=="raw9", OWNER="grid", GROUP="oinstall", MODE="0770"

Make sure you set the owner of the OCR devices as root, the owner of the Voting disks as grid and the owner of the disks containing Database files as oracle.

- Run the below command as the "grid" user to check whether the rolling upgrade prechecks have been completed or not. For this, the 10g RAC cluster should be up and running.

$ ./runcluvfy.sh stage -pre crsinst -upgrade -n rac1,rac2 -rolling -src_crshome /u01/app/grid/products/10.2.0/crs -dest_crshome /u01/app/11.2.0/products/crs -dest_version 11.2.0.3.0 -fixup -fixupdir /orainstall/fixupscript -verbose

If you get the error "PRVF-10037 : Failed to retrieve storage type for "/dev/raw/raw2"......" after running the runcluvfy utility, you can ignore this error as per DOC ID 1233505.1 and continue with the installation.

- You can also directly run the cluvfy utility as below to verify whether the prechecks have been done or not. For this, the 10g RAC cluster need not be up and running.

$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose

Verify that all the requirements have "Passed". If any prechecks failed, resolve them and run the cluvfy utility again.

======================================================================================================

- Steps to upgrade your 10g Clusterware and ASM to 11gR2

Once you have completed all the above steps, you can proceed with the installation. [Note – The existing 10g RAC Cluster & ASM should be up and running on both the nodes]

• Unset the environment variables GRID_BASE and GRID_HOME of the current grid user and start the installation.
$ cd grid
$ ./runInstaller

• DOWNLOAD SOFTWARE UPDATES Screen
Select the option "Skip software updates" and click on Next to continue.

• INSTALLATION OPTION Screen
The upgrade option will be automatically selected. Click on Next to continue.

• PRODUCT LANGUAGES Screen
Click on Next to continue.

• NODE SELECTION Screen
Select all the nodes and make sure "Upgrade Cluster Oracle ASM" is selected. Click on Next to continue. After clicking on Next, you will be prompted stating that "Oracle ASM cannot be upgraded using rolling upgrade". Click 'Yes' to continue.
• SCAN INFORMATION Screen
Enter the SCAN IP details and port number to configure for SCAN. Click on Next to continue.

• ASM MONITOR PASSWORD Screen
Enter the password and click on Next to continue.

• OPERATING SYSTEM GROUPS Screen
Verify whether the proper groups are selected. Click on Next to continue.

• INSTALLATION LOCATION Screen
Enter the location where the software will reside. Click on Next to continue.

• PREREQUISITE CHECKS Screen
Make sure that all the prechecks are successful. In this installation, I got a precheck failure for "Task resolv.conf integrity". I ignored this and continued with the installation.

• SUMMARY Screen
Click on Install to start the installation.

• INSTALL PRODUCT Screen
You can monitor the installation process.

• EXECUTE CONFIGURATION SCRIPTS Screen
Run the rootupgrade.sh script on all nodes of the RAC. Once the script has been run on all nodes, click on OK to continue with the installation.

• FINISH Screen
As you can see, the upgrade of Grid Infrastructure was successful.

======================================================================================================

- Post Installation Checks

• Once the upgrade of clusterware and ASM is completed successfully, you can confirm the status of the 11gR2 cluster using the below commands,

$ cd /u01/app/11.2.0/grid/bin

$ ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.3.0]

$ ./crsctl query crs softwareversion
Oracle Clusterware version on node [rac1] is [11.2.0.3.0]

$ ./crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

• OCR and Voting disk checks [Run the below commands as the root user]

# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  : 3
         Total space (kbytes)     : 986324
         Used space (kbytes)      : 6176
         Available space (kbytes) : 980148
         ID                       : 1893642273
         Device/File Name         : /dev/raw/raw1
                                    Device/File integrity check succeeded
         Device/File Name         : /dev/raw/raw2
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

# ./crsctl query css votedisk
##  STATE    File Universal Id                File Name       Disk group
--  -----    -----------------                ---------       ---------
 1. ONLINE   4fa950b22cae4fb5ff331cde39deb2d9 (/dev/raw/raw3) []
 2. ONLINE   9ce7c46916344f26bf90bfce130a96bc (/dev/raw/raw4) []
 3. ONLINE   782ce22735117fa0fff918ef23bc738d (/dev/raw/raw5) []
Located 3 voting disk(s).

• Now change the GRID_HOME and GRID_BASE parameters of the grid user in the .bash_profile file to the new location on both nodes.

• In 11gR2, the owner of the ASM instance is now the grid user. If you try to log in to the ASM instance, it will show you as "Connected to an idle instance". This is because the owner of the DATAGRP, LOGGRP1, LOGGRP2 and ARCHGRP disk groups in 10g RAC was the oracle user. We need to change this owner to the grid user.

• Login as root on both nodes and stop the cluster. Make the below changes on all RAC nodes.

$ vi /etc/udev/rules.d/60-raw.rules
[Change the owner to "grid" for the above mentioned disk groups. In our case the disks are raw6, raw7, raw8, raw9.
Also change the permission to 770, since the oracle user will need to read and write on these raw devices.]

ACTION=="add", KERNEL=="raw6", OWNER="grid", GROUP="oinstall", MODE="0770"

Restart the service
# /etc/init.d/rawdevices restart

Now start the cluster on both nodes.
# /u01/app/11.2.0/grid/bin/crsctl start crs

Once the cluster is up and running, you will be able to log in to the ASM instance as the grid user.

$ export ORACLE_SID=+ASM1
$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.3.0 Production on Mon Oct 22 09:20:17 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

• Once the post upgrade checks are completed, you can detach the old Grid home [cluster and ASM] from the inventory.

$ /u01/app/grid/products/10.2.0/crs/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=/u01/app/grid/products/10.2.0/crs

======================================================================================================

- Install 11gR2 Oracle Database Software

• Unset the environment variables ORACLE_BASE and ORACLE_HOME of the current oracle user and start the installation.
$ cd /orainstall/database
$ ./runInstaller

• CONFIGURE SECURITY UPDATES Screen
Uncheck the option "I wish to receive security updates...." and click on Next to continue the installation.

• DOWNLOAD SOFTWARE UPDATES Screen
Select "Skip Software Updates" and click on Next to continue the installation.

• INSTALLATION OPTION Screen
Select "Install Database Software Only" and click on Next to continue.

• GRID INSTALLATION OPTIONS Screen
By default, the option "Oracle Real Application Clusters database installation" is selected. Make sure that all the nodes are also selected before proceeding with the installation.

• PRODUCT LANGUAGES Screen
Click on Next to continue.

• DATABASE EDITION Screen
Since the 10gR2 database was configured as Standard Edition, I have selected the "Standard Edition" option here.

• INSTALLATION LOCATION Screen
Enter the path for the Oracle Software installation and click on Next to continue.

• OPERATING SYSTEM GROUPS Screen
Check whether the appropriate groups are shown. Click on Next to continue.

• PREREQUISITE CHECKS Screen
Here also, the "Task resolv.conf integrity" check failed. Ignore it and click on Next to continue with the installation.

• SUMMARY Screen
Check whether all the information displayed is correct. To continue with the installation, click on Install.

• INSTALL PRODUCT Screen
You can observe the installation process. Once the remote copy is done, you will get a screen to run the root.sh script as shown below. After you have finished executing the root.sh script on all nodes, click on OK.

• FINISH Screen
This is the end of the installation.

- You have successfully installed the Oracle Database Software. The next step is to upgrade the Database to 11gR2.

======================================================================================================

- Steps to upgrade your 10g RAC Database to 11gR2

There are two ways to upgrade the Database:

1) Using DBUA
Run the Database Upgrade Assistant
$ dbua

2) Manual Upgrade
The below steps are to upgrade the database manually. Follow the steps only on Node 1 of the cluster.

As per Oracle Doc ID 1358166.1, there is no pre-patch required if the timezone version is 4 on 10gR2, but it is advised to upgrade the timezone to 14 after the upgrade. This can be done while upgrading the database.

To check the current timezone, log in to the database and execute the below query.

SQL> select * from v$timezone_file;

FILENAME     VERSION
------------ ----------
timezlrg.dat 4

• Copy the tnsnames.ora from the old Home location to the new location.

$ cp /u01/app/oracle/products/10.2.0/db/network/admin/tnsnames.ora /u01/app/oracle/products/11.2.0/db/network/admin/tnsnames.ora

• Increase the SGA_MAX_SIZE, SGA_TARGET and PGA_AGGREGATE_TARGET

SQL> alter system set sga_max_size=2500M scope=spfile;
System altered.

SQL> alter system set sga_target=2500M scope=spfile;
System altered.

SQL> alter system set pga_aggregate_target=1500M scope=spfile;
System altered.

• Disable archivelog mode before the upgrade if you don't want the logs to be generated. It will also give some performance benefit.

• Increase the RAM and SWAP space of the RAC nodes if required.

• Create an init.ora file and place it in the 11g Oracle Home dbs directory on both nodes, and remove the obsolete initialization parameters. Comment out the background_dump_dest and user_dump_dest parameters since both parameters are deprecated in 11g and replaced by the parameter diagnostic_dest. Add the new parameter in init.ora as shown below,

*.diagnostic_dest='/u01/app/oracle'

Once you start the database, it will automatically create the directory structure like 'diag/rdbms/...' below the above given path '/u01/app/oracle'.

• Make sure the COMPATIBLE initialization parameter is properly set for Oracle Database 11g Release 2 (11.2). The Pre-Upgrade Information Tool displays a warning in the Database section if COMPATIBLE is not properly set.

• Set the CLUSTER_DATABASE initialization parameter to FALSE. After the upgrade, you must set this initialization parameter back to TRUE.

• Comment out the local_listener and remote_listener parameters in the init.ora file.

• Copy the password file from the 10g Oracle Home to the 11g Oracle Home.

• Add an entry in the /etc/oratab file.

• Add online redo log groups if required.

alter database add logfile group 5 size 50M;
alter database add logfile group 6 size 50M;
alter database add logfile group 7 size 50M;
alter database add logfile group 8 size 50M;

• Run the premigration checkup script utlu112i.sql. This script is located in the <11g-Oracle-Home>/rdbms/admin directory. You can copy it to the 10g home and from there execute it on the 10g database before starting the upgrade. The utlu112i.sql output will provide information about additional steps that may be required before the migration.

$ export ORACLE_SID=orcldb1
$ sqlplus / as sysdba
SQL> spool pre_migration_check.log;
SQL> @/u01/app/oracle/products/11.2.0/db/rdbms/admin/utlu112i.sql
SQL> spool off;

• Login as the oracle user and set the new value for the environment variable ORACLE_HOME.

$ export ORACLE_HOME=$ORACLE_BASE/products/11.2.0/db

• After preparing the new Oracle home, you are ready to proceed with the manual upgrade. Make sure the database is down. Login as the oracle user and issue the below commands.

$ export ORACLE_SID=orcldb1
$ sqlplus / as sysdba

SQL> startup upgrade;
ORACLE instance started.

Total System Global Area 2622255104 bytes
Fixed Size                  2231232 bytes
Variable Size             553649216 bytes
Database Buffers         2063597568 bytes
Redo Buffers                2777088 bytes
Database mounted.
Database opened.

SQL> spool upgrade.log
SQL> @?/rdbms/admin/catupgrd.sql

• Once the upgrade finishes, it will shut down the database automatically. Login again as sysdba and startup in normal mode.
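A minimal illustrative snippet of that restart (nothing beyond a plain startup and a status check):

$ sqlplus / as sysdba
SQL> startup;
SQL> select status from v$instance;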
• Check the dba_registry for the components and their status

set lines 200;
set pages 1000;
column comp_name format a40;
column version format a12;
column status format a15;
select comp_name, version, status from dba_registry;

COMP_NAME                                VERSION      STATUS
---------------------------------------- ------------ ---------------
Oracle Enterprise Manager                11.2.0.3.0   VALID
OLAP Catalog                             10.2.0.5.0   OPTION OFF
Spatial                                  10.2.0.5.0   OPTION OFF
Oracle Multimedia                        11.2.0.3.0   VALID
Oracle XML Database                      11.2.0.3.0   VALID
Oracle Text                              11.2.0.3.0   VALID
Oracle Data Mining                       10.2.0.5.0   OPTION OFF
Oracle Expression Filter                 11.2.0.3.0   VALID
Oracle Rules Manager                     11.2.0.3.0   VALID
Oracle Workspace Manager                 11.2.0.3.0   VALID
Oracle Database Catalog Views            11.2.0.3.0   VALID
Oracle Database Packages and Types       11.2.0.3.0   VALID
JServer JAVA Virtual Machine             11.2.0.3.0   VALID
Oracle XDK                               11.2.0.3.0   VALID
Oracle Database Java Packages            11.2.0.3.0   VALID
OLAP Analytic Workspace                  10.2.0.5.0   OPTION OFF
Oracle OLAP API                          10.2.0.5.0   OPTION OFF
Oracle Real Application Clusters         11.2.0.3.0   VALID

• Run catuppst.sql, located in the ORACLE_HOME/rdbms/admin directory, to perform upgrade actions that do not require the database to be in UPGRADE mode.

SQL> @?/rdbms/admin/catuppst.sql

• Run the utlrp.sql script to recompile any invalid objects.

SQL> spool recompile.log
SQL> @?/rdbms/admin/utlrp.sql
SQL> spool off;

• When the utlrp.sql script completes, verify that all the components are upgraded to 11.2.0.3. Run utlu112s.sql to display the results of the upgrade,

SQL> @?/rdbms/admin/utlu112s.sql

• Shutdown the Database. Change the value of the below parameters in the init.ora file

*.cluster_database='true'
*.remote_listener='scan-ip:1515'

Now start the database in normal mode. Create an spfile from the pfile. Again shutdown the database and start it, and check whether it comes up properly using the spfile.

• Now shutdown the database. Change the environment variables to the 10g Oracle Home. We will need to remove the 10g Database and Instances from the cluster.

$ srvctl remove instance -d orcldb -i orcldb1
$ srvctl remove instance -d orcldb -i orcldb2
$ srvctl remove database -d orcldb

• Now again change the environment variables to the 11g Oracle Home and add the database and instance to the cluster.

$ srvctl add database -d orcldb -o /u01/app/oracle/products/11.2.0/db
$ srvctl add instance -d orcldb -i orcldb1 -n rac1

Shutdown the DB and start the DB using the srvctl utility.

[On Node 2]

• Now copy the init.ora file and password file from node 1 to node 2. Start the instance on node 2 and create an spfile.

SQL> startup
SQL> create spfile from pfile;

Shutdown the database and start the DB using the spfile.

• Add the instance from node 2 to the cluster

$ srvctl add instance -d orcldb -i orcldb2 -n rac2

• Shutdown the instances on both nodes and start the database using the srvctl utility

$ srvctl start database -d orcldb

• Create an spfile on a shared location.

SQL> create spfile='+CNTRL_LOG_GRP2' from pfile;

• Run the below command to check the database config.

$ srvctl config database -d orcldb -a
Database unique name: orcldb
Database name:
Oracle home: /u01/app/oracle/products/11.2.0/db
Oracle user: oracle
Spfile:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcldb
Database instances: orcldb1,orcldb2
Disk Groups: CNTRL_LOG_GRP1,DATAGRP,CNTRL_LOG_GRP2
Mount point paths:
Services:
Type: RAC
Database is enabled
Database is administrator managed

• Change the spfile path and database name in the cluster config.

$ srvctl modify database -d orcldb -p '+CNTRL_LOG_GRP2/ORCLDB/PARAMETERFILE/spfile.265.823276117'
$ srvctl modify database -d orcldb -n orcldb

• Detach the 10g Oracle Home from the cluster.

$ /u01/app/oracle/products/10.2.0/db/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=/u01/app/oracle/products/10.2.0/db

• Change the database mode to archivelog.

$ srvctl stop database -d orcldb -o immediate

[Login from any one node]
$ export ORACLE_SID=orcldb1
$ sqlplus / as sysdba
SQL> startup mount;
SQL> alter database archivelog;

• Now stop the cluster on both nodes and check whether the DB goes down and starts up with the cluster (a sketch of this check follows below). That's it; your upgrade is complete.
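A hedged sketch of that final stop/start check, assuming the new Grid home path /u01/app/11.2.0/grid used earlier in this post and the database name orcldb:

# /u01/app/11.2.0/grid/bin/crsctl stop crs      [as root, on each node]
# /u01/app/11.2.0/grid/bin/crsctl start crs     [as root, on each node]
$ srvctl status database -d orcldb              [as oracle, once the cluster is back up]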

Wednesday 27 August 2014

GoldenGate Commands

Cataloged here are some very important, commonly used commands that are issued via the GGSCI interface during GoldenGate configuration and for troubleshooting.
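All of these commands are issued from the GGSCI prompt. As a minimal sketch, GGSCI is started from the GoldenGate installation directory (the path below is only an example, not from this post):

$ cd /u01/app/oracle/ogg
$ ./ggsci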

INFO
INFO MANAGER
Provides details of the Manager process
INFO MGR
Also provides details of the Manager process
STATUS MANAGER
Also displays the status of the Manager process
REFRESH
REFRESH MANAGER
Reloads from the Manager Parameter file
REFRESH MGR
Reloads from the Manager Parameter file
SEND
SEND MANAGER CHILDSTATUS
Displays the status of processes started by Manager.
SEND MANAGER CHILDSTATUS DEBUG
Returns the port numbers allocated by the Manager
SEND MANAGER GETPORTINFO
Displays the list of ports currently allocated by the Manager process
SEND MANAGER GETPORTINFO DETAIL
Provides info on ports and the processes assigned to them.
SEND MANAGER GETPURGEOLDEXTRACTS
Retrieves trail purge retention info.
START
START MANAGER
Starts the Manager Process
START MGR
Starts the Manager Process
STOP
STOP MGR
Stops the Manager Process
STOP MANAGER !
Stops Manager without asking for user confirmation.
STOP MGR !
Stops Manager without asking for user confirmation.
ADD Command
Creates an Extract group.
ADD EXTRACT E_SRC, Tranlog, Begin Now
Used to specify transaction logs as data source for extract.
ADD EXTRACT E_SRC, Begin Now, Passive
Specifies the extract to be run in passive mode.
ADD EXTRACT E_SRC, Extseqno 000008 Extrba 287458, Begin Now
Specifies the extract process starting position
ADD EXTRACT E_SRC, SOURCEISTABLE
Extracts data from data tables for initial loading.
ALTER Command
ALTER EXTRACT e_src, BEGIN NOW
Instructs extract to start processing
ALTER EXTRACT e_src, BEGIN 2013-04-16
Instructs extract to start processing from specific date
ALTER EXTRACT e_src, ETROLLOVER
Extract rolls over to next trail file
ALTER EXTRACT e_src, EXTSEQNO 01, EXTRBA 2887
Alters the extract to start from a specific location in the trail
ALTER EXTRACT e_src, THREAD 4, BEGIN 2012-03-09
Alters extract thread & start date for RAC
ALTER EXTRACT e_src, LSN 1234:123:1
Alters the extract starting position (LSN) for SQL Server
CLEANUP Command
It is used to clear the run history for the specific extract group
CLEANUP EXTRACT e_src
Purges all run history records except the last one
CLEANUP EXTRACT e_src, SAVE 10
Keeps the last 10 records and deletes all others
DELETE Command
It is used to delete the extract process and its checkpoint details, and to unregister the extract group.
DELETE EXTRACT e_src
Deletes the extract process
DELETE EXTRACT e*
Deletes all extract processes whose names start with e
DELETE EXTRACT e* !
Deletes all extract processes whose names start with e, without prompting
INFO Command
It is used to display info about the extract, such as its status, lag, checkpoints, run history, trail info, etc.
INFO EXTRACT e_src, SHOWCH
Displays checkpoint info of the extract
INFO EXTRACT e_src, DETAIL
Displays trail info and run history
INFO EXTRACT e_src, TASKS
Displays extract tasks
KILL Command
It is used to kill an extract that can't be stopped with the STOP command
KILL EXTRACT e_src
LAG Command
It is used to find the lag time between Extract and data source more precisely than the "INFO Command"
LAG EXTRACT e_src
To find lag for extract "e_src"
LAG EXTRACT *
To find lag for all of the extract processes
REGISTER Command
It is used to register the extract process, so that it can retain the archive logs required for its recovery.
REGISTER EXTRACT e_src LOGRETENTION
To register extract "e_src"
SEND Command
It is used to communicate with a running extract, for example to request report creation or stats, or to force the extract to roll over to the next trail file
SEND EXTRACT finance, ROLLOVER
To increment the extract to next file in trail
SEND EXTRACT finance, STOP
To stop the extract process
SEND EXTRACT finance, TRANLOGOPTIONS TRANSCLEANUPFREQUENCY 20
For Oracle RAC, specifies the interval after which OGG scans for and deletes orphan transactions
SEND EXTRACT finance, SKIPTRANS 5.17.27634 THREAD 2
For skipping the transaction in the Oracle RAC environment
SEND EXTRACT e_src, SHOWTRANS
Displays info about the open transactions, such as checkpoint, extract group name, SCN, redo log and RBA, status, etc.
SEND EXTRACT e_src, SHOWTRANS COUNT 2
Displays info for only two transactions
START Command
It is used to start the Extract process
START EXTRACT e_src
STATS Command
It is used to display the stats for the extract process including the DDL and DML operations.
STATS e_src
STATS EXTRACT e_src
Stats will be displayed for the extract E_SRC
STATS EXTRACT e_src REPORTRATE SEC
Displays the stats for the fetch operations per second
STATS EXTRACT e_src, TOTAL, DAILY
The total stats are shown since the start of the day
STATS EXTRACT e_src, TOTAL, HOURLY, REPORTRATE MIN, RESET, REPORTFETCH
By using commas between the keywords, multiple options can be combined in the STATS command
STATUS Command
It is used to check whether the extract process is currently running or not.
STATUS EXTRACT e_src
To check the status for extract "E_SRC"
STATUS EXTRACT e*
To check the status for all extracts starting with "e"
STOP Command
It is used to stop the running extract process
STOP EXTRACT e_src
To stop the extract "e_src"
STOP EXTRACT e*
To stop all of the running extract processes whose names start with "e"
STOP EXTRACT *
To stop all of the running extract processes
UNREGISTER Command
It is used to unregister the extract group by removing its registration from the Oracle database.
UNREGISTER EXTRACT e_src LOGRETENTION
To unregister the extract e_src
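As a quick illustration of how these commands fit together, below is a hedged sketch of creating and starting a simple Extract and checking on it. The trail path ./dirdat/lt is an assumption for this example, and a matching parameter file (e_src.prm) is presumed to already exist in dirprm:

GGSCI> ADD EXTRACT E_SRC, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/lt, EXTRACT E_SRC
GGSCI> START EXTRACT E_SRC
GGSCI> INFO EXTRACT E_SRC, DETAIL
GGSCI> STATS EXTRACT E_SRC, TOTAL, DAILY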