Tuesday 4 November 2014

How to upgrade Oracle Clusterware, ASM and Database from 10gR2 [10.2.0.5] RAC to 11gR2 [11.2.0.3] RAC

The current 10gR2 RAC database configuration is as below:

• OS: RHEL 5.8
• Oracle User: oracle
• Grid User: grid
• Oracle Home: /u01/app/oracle/products/10.2.0/db
• Grid Home: /u01/app/grid/products/10.2.0/crs
• The OCR, Voting Disk and Database files are stored on raw devices.

  OCR Disks
  - /dev/raw/raw1
  - /dev/raw/raw2
  Voting Disks
  - /dev/raw/raw3
  - /dev/raw/raw4
  - /dev/raw/raw5
  Database Files
  - /dev/raw/raw6
  Archive Log Files
  - /dev/raw/raw7
  RedoLog Files
  - /dev/raw/raw8
  - /dev/raw/raw9
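Before making any changes, it is worth recording the state of the existing 10gR2 stack so you have a reference point. A minimal sketch, run against the current 10g CRS home (the database name orcldb is the one used later in this post; adjust it to your environment):

  $ /u01/app/grid/products/10.2.0/crs/bin/crsctl query crs activeversion
  $ /u01/app/grid/products/10.2.0/crs/bin/crsctl query crs softwareversion
  $ /u01/app/grid/products/10.2.0/crs/bin/crs_stat -t
  $ srvctl status database -d orcldb

The first two commands should report 10.2.0.5.0, and all cluster resources and instances should be ONLINE before you start.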
======================================================================================================

Complete the below checks before proceeding with the upgrade installation of 11gR2 Clusterware and ASM

- First of all, make sure you have downloaded the below software from the Oracle Support site for the 11gR2 Clusterware and Database installation.
  • p10404530_112030_Linux-x86-64_1of7.zip
  • p10404530_112030_Linux-x86-64_2of7.zip
  • p10404530_112030_Linux-x86-64_3of7.zip

- Configure the SCAN IPs in the DNS. You can also configure the SCAN IP in your 'hosts' file, but Oracle does not recommend it. (A quick resolution check is sketched at the end of this checklist.)

- Complete all the preinstallation requirements for Grid Infrastructure and the Oracle software.

- Backup the VOTING DISK and OCR
  • OCR Backup
    ocrconfig -export /restore/crs/ocr_backup.ocr -s online
  • Voting Disk Backup
    dd if=/dev/raw/raw3 of=/restore/crs/votedisk1.dmp

- Backup the CLUSTER HOME, ORACLE HOME and OraInventory on all nodes
  • Cluster Home and Oracle Homes [Both Nodes]
    tar -czvf CRS_HOME.tar.gz /u01/app/grid/products/10.2.0/crs
    tar -czvf ORA_HOME.tar.gz /u01/app/oracle/products/10.2.0/db
    tar -czvf ORA_INVENTORY.tar.gz /u01/app/oraInventory

- Create the ASM groups required for the 11gR2 Grid installation and add them to the existing users
  # groupadd asmadmin
  # groupadd asmdba
  # groupadd asmoper
  # usermod -a -G asmadmin,asmdba,asmoper grid
  # usermod -a -G asmadmin,asmdba,asmoper oracle
  # id grid
  # id oracle

- Make sure you have added the below entries in the /etc/udev/rules.d/60-raw.rules file. Otherwise you will get a message stating that the "Udev attributes check" failed during the prechecks of the Grid Infrastructure installation.

  # vi /etc/udev/rules.d/60-raw.rules
  ## OCR Disks
  ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
  ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
  ACTION=="add", KERNEL=="raw1", OWNER="root", GROUP="oinstall", MODE="0640"
  ACTION=="add", KERNEL=="raw2", OWNER="root", GROUP="oinstall", MODE="0640"
  ## VOTING Disks
  ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
  ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
  ACTION=="add", KERNEL=="sdf1", RUN+="/bin/raw /dev/raw/raw5 %N"
  ACTION=="add", KERNEL=="raw3", OWNER="grid", GROUP="oinstall", MODE="0640"
  ACTION=="add", KERNEL=="raw4", OWNER="grid", GROUP="oinstall", MODE="0640"
  ACTION=="add", KERNEL=="raw5", OWNER="grid", GROUP="oinstall", MODE="0640"
  ## DATAFILE, LOGFILE, ARCHIVELOG Disks
  ACTION=="add", KERNEL=="sdg1", RUN+="/bin/raw /dev/raw/raw6 %N"
  ACTION=="add", KERNEL=="sdh1", RUN+="/bin/raw /dev/raw/raw7 %N"
  ACTION=="add", KERNEL=="sdi1", RUN+="/bin/raw /dev/raw/raw8 %N"
  ACTION=="add", KERNEL=="sdj1", RUN+="/bin/raw /dev/raw/raw9 %N"
  ACTION=="add", KERNEL=="raw6", OWNER="grid", GROUP="oinstall", MODE="0770"
  ACTION=="add", KERNEL=="raw7", OWNER="grid", GROUP="oinstall", MODE="0770"
  ACTION=="add", KERNEL=="raw8", OWNER="grid", GROUP="oinstall", MODE="0770"
  ACTION=="add", KERNEL=="raw9", OWNER="grid", GROUP="oinstall", MODE="0770"

  Make sure you set the owner of the OCR devices to root, the owner of the Voting disks to grid and the owner of the disks containing database files to oracle.

- Run the below command as the "grid" user to check whether the rolling upgrade prechecks have been completed or not. For this, the 10g RAC cluster should be up and running.

  $ ./runcluvfy.sh stage -pre crsinst -upgrade -n rac1,rac2 -rolling -src_crshome /u01/app/grid/products/10.2.0/crs -dest_crshome /u01/app/11.2.0/products/crs -dest_version 11.2.0.3.0 -fixup -fixupdir /orainstall/fixupscript -verbose

  If you get the error "PRVF-10037 : Failed to retrieve storage type for "/dev/raw/raw2"......" after running the runcluvfy utility, you can ignore this error as per Doc ID 1233505.1 and continue with the installation.

- You can also directly run the cluvfy utility as below to verify whether the prechecks have been done or not. For this, the 10g RAC cluster need not be up and running.

  $ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose

  Verify that all the requirements have "Passed". If any precheck fails, resolve it and run the cluvfy utility again.
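To double-check a couple of the items above before launching the installer, a short sketch is given below; the SCAN name rac-scan.example.com is only a placeholder for whatever you actually registered in DNS (Oracle recommends that the SCAN resolve to three IP addresses):

  $ nslookup rac-scan.example.com      [should return the SCAN IP(s) configured in DNS]
  $ raw -qa                            [lists the current raw device bindings]
  $ ls -l /dev/raw/raw*                [confirm the owners and permissions set by the udev rules]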
======================================================================================================

- Steps to upgrade your 10g Clusterware and ASM to 11gR2

Once you have completed all the above steps, you can proceed with the installation.
[Note – The existing 10g RAC Cluster & ASM should be up and running on both the nodes]

• Unset the environment variables GRID_BASE and GRID_HOME of the current grid user and start the installation.
  $ cd grid
  $ ./runInstaller
• DOWNLOAD SOFTWARE UPDATES Screen
  Select the option "Skip software updates" and click on Next to continue.
• INSTALLATION OPTION Screen
  The upgrade option will be selected automatically. Click on Next to continue.
• PRODUCT LANGUAGES Screen
  Click on Next to continue.
• NODE SELECTION Screen
  Select all the nodes and make sure "Upgrade Cluster Oracle ASM" is selected. Click on Next to continue.
  After clicking on Next, you will be prompted that "Oracle ASM cannot be upgraded using rolling upgrade". Click 'Yes' to continue.
• SCAN INFORMATION Screen
  Enter the SCAN IP details and the port number to configure for SCAN. Click on Next to continue.
• ASM MONITOR PASSWORD Screen
  Enter the password and click on Next to continue.
• OPERATING SYSTEM GROUPS Screen
  Verify that the proper groups are selected. Click on Next to continue.
• INSTALLATION LOCATION Screen
  Enter the location where the software will reside. Click on Next to continue.
• PREREQUISITE CHECKS Screen
  Make sure that all the prechecks are successful. In this installation, the "Task resolv.conf Integrity" precheck failed; I ignored it and continued with the installation.
• SUMMARY Screen
  Click on Install to start the installation.
• INSTALL PRODUCT Screen
  You can monitor the installation progress.
• EXECUTE CONFIGURATION SCRIPTS Screen
  Run the rootupgrade.sh script on all nodes of the RAC. Once the script has been run on all nodes, click on OK to continue with the installation.
• FINISH Screen
  As you can see, the upgrade of Grid Infrastructure was successful.

======================================================================================================

- Post Installation Checks

• Once the upgrade of Clusterware and ASM has completed successfully, you can confirm the status of the 11gR2 cluster using the below commands.

  $ cd /u01/app/11.2.0/grid/bin

  $ ./crsctl query crs activeversion
  Oracle Clusterware active version on the cluster is [11.2.0.3.0]

  $ ./crsctl query crs softwareversion
  Oracle Clusterware version on node [rac1] is [11.2.0.3.0]

  $ ./crsctl check cluster -all
  **************************************************************
  rac1:
  CRS-4537: Cluster Ready Services is online
  CRS-4529: Cluster Synchronization Services is online
  CRS-4533: Event Manager is online
  **************************************************************
  rac2:
  CRS-4537: Cluster Ready Services is online
  CRS-4529: Cluster Synchronization Services is online
  CRS-4533: Event Manager is online
  **************************************************************

• OCR and Voting disk checks [Run the below commands as the root user]

  # ./ocrcheck
  Status of Oracle Cluster Registry is as follows :
     Version                  :          3
     Total space (kbytes)     :     986324
     Used space (kbytes)      :       6176
     Available space (kbytes) :     980148
     ID                       : 1893642273
     Device/File Name         : /dev/raw/raw1
                                Device/File integrity check succeeded
     Device/File Name         : /dev/raw/raw2
                                Device/File integrity check succeeded
                                Device/File not configured
                                Device/File not configured
                                Device/File not configured
     Cluster registry integrity check succeeded
     Logical corruption check succeeded

  # ./crsctl query css votedisk
  ##  STATE    File Universal Id                 File Name        Disk group
  --  -----    -----------------                 ---------        ----------
   1. ONLINE   4fa950b22cae4fb5ff331cde39deb2d9  (/dev/raw/raw3)  []
   2. ONLINE   9ce7c46916344f26bf90bfce130a96bc  (/dev/raw/raw4)  []
   3. ONLINE   782ce22735117fa0fff918ef23bc738d  (/dev/raw/raw5)  []
  Located 3 voting disk(s).

• Now change the GRID_HOME and GRID_BASE parameters of the grid user in the .bash_profile file to the new location on both nodes (a sample of the new entries is sketched below).
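For reference, the new entries in the grid user's .bash_profile would look something like the lines below. The Grid home /u01/app/11.2.0/grid is the one used in the commands above; the GRID_BASE value is an assumption, so adjust it to whatever base directory your environment uses.

  [~grid/.bash_profile on both nodes]
  export GRID_BASE=/u01/app/grid
  export GRID_HOME=/u01/app/11.2.0/grid
  export PATH=$GRID_HOME/bin:$PATH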
• In 11gR2, the owner of the ASM instance is now the grid user. If you try to log in to the ASM instance, it will show "Connected to an idle instance". This is because the owner of the DATAGRP, LOGGRP1, LOGGRP2 and ARCHGRP diskgroups in 10g RAC was the oracle user. We need to change this owner to the grid user.

• Log in as root on both nodes and stop the cluster. Make the below changes on all RAC nodes.

  $ vi /etc/udev/rules.d/60-raw.rules

  Change the owner to "grid" for the above-mentioned diskgroups; in our case the disks are raw6, raw7, raw8 and raw9. Also change the permission to 770, since the oracle user will need to read and write on these raw devices. For example:

  ACTION=="add", KERNEL=="raw6", OWNER="grid", GROUP="oinstall", MODE="0770"

  Restart the service:
  # /etc/init.d/rawdevices restart

  Now start the cluster on both nodes.
  # /u01/app/11.2.0/grid/bin/crsctl start crs

  Once the cluster is up and running, you will be able to log in to the ASM instance as the grid user.

  $ export ORACLE_SID=+ASM1
  $ sqlplus / as sysasm

  SQL*Plus: Release 11.2.0.3.0 Production on Mon Oct 22 09:20:17 2012
  Copyright (c) 1982, 2011, Oracle.  All rights reserved.

  Connected to:
  Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
  With the Real Application Clusters and Automatic Storage Management options

• Once the post-upgrade checks are completed, you can detach the old Grid home (Clusterware and ASM) from the inventory.

  $ /u01/app/grid/products/10.2.0/crs/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=/u01/app/grid/products/10.2.0/crs

======================================================================================================

- Install 11gR2 Oracle Database Software

• Unset the environment variables ORACLE_BASE and ORACLE_HOME of the current oracle user and start the installation.
  $ cd /orainstall/database
  $ ./runInstaller
• CONFIGURE SECURITY UPDATES Screen
  Uncheck the option "I wish to receive security updates...." and click on Next to continue the installation.
• DOWNLOAD SOFTWARE UPDATES Screen
  Select "Skip Software Updates" and click on Next to continue the installation.
• INSTALLATION OPTION Screen
  Select "Install Database Software Only" and click on Next to continue.
• GRID INSTALLATION OPTIONS Screen
  By default, the option "Oracle Real Application Clusters database installation" is selected. Make sure that all the nodes are also selected before proceeding with the installation.
• PRODUCT LANGUAGES Screen
  Click on Next to continue.
• DATABASE EDITION Screen
  Since the 10gR2 database was configured as Standard Edition, I have selected the "Standard Edition" option here.
• INSTALLATION LOCATION Screen
  Enter the path for the Oracle software installation and click on Next to continue.
• OPERATING SYSTEM GROUPS Screen
  Check whether the appropriate groups are shown. Click on Next to continue.
• PREREQUISITE CHECKS Screen
  Here also the "Task resolv.conf Integrity" check failed. Ignore it and click on Next to continue with the installation.
• SUMMARY Screen
  Check whether all the information displayed is correct. To continue with the installation, click on Install.
• INSTALL PRODUCT Screen
  You can observe the installation progress. Once the remote copy is done, you will get a screen asking you to run the root.sh script. After you have finished executing the root.sh script on all nodes, click on OK.
• FINISH Screen
  This is the end of the installation.

- You have successfully installed the Oracle Database Software. The next step is to upgrade the database to 11gR2.
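Before starting the database upgrade itself, you may want to confirm on both nodes that the new home really is at 11.2.0.3. A trivial check (the path is the new Oracle home used throughout this post):

  $ /u01/app/oracle/products/11.2.0/db/bin/sqlplus -V
  [should report Release 11.2.0.3.0]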
======================================================================================================

- Steps to upgrade your 10g RAC Database to 11gR2

There are two ways to upgrade the database:

1) Using DBUA
   Run the Database Upgrade Assistant:
   $ dbua

2) Manual Upgrade
   The steps below are to upgrade the database manually. Follow the steps only on Node 1 of the cluster.

As per Oracle Doc ID 1358166.1, no pre-upgrade patch is required if the timezone version on 10gR2 is 4, but it is advised to upgrade the timezone file to version 14 after the upgrade. This can be done while upgrading the database.

To check the current timezone version, log in to the database and execute the below query.

  SQL> select * from v$timezone_file;

  FILENAME     VERSION
  ------------ ----------
  timezlrg.dat          4

• Copy the tnsnames.ora from the old home location to the new location.

  $ cp /u01/app/oracle/products/10.2.0/db/network/admin/tnsnames.ora /u01/app/oracle/products/11.2.0/db/network/admin/tnsnames.ora

• Increase SGA_MAX_SIZE, SGA_TARGET and PGA_AGGREGATE_TARGET.

  SQL> alter system set sga_max_size=2500M scope=spfile;
  System altered.
  SQL> alter system set sga_target=2500M scope=spfile;
  System altered.
  SQL> alter system set pga_aggregate_target=1500M scope=spfile;
  System altered.

• Disable archivelog mode before the upgrade if you don't want archive logs to be generated. It will also give some performance benefit.

• Increase the RAM and swap space of the RAC nodes if required.

• Create an init.ora file, place it in the dbs directory of the 11g Oracle home on both nodes and remove the obsolete initialization parameters. Comment out the background_dump_dest and user_dump_dest parameters, since both are deprecated in 11g and replaced by the diagnostic_dest parameter. Add the new parameter in init.ora as shown below:

  *.diagnostic_dest='/u01/app/oracle'

  Once you start the database, it will automatically create the directory structure 'diag/rdbms/...' below the given path '/u01/app/oracle'.

• Make sure the COMPATIBLE initialization parameter is properly set for Oracle Database 11g Release 2 (11.2). The Pre-Upgrade Information Tool displays a warning in the Database section if COMPATIBLE is not properly set.

• Set the CLUSTER_DATABASE initialization parameter to FALSE. After the upgrade, you must set this initialization parameter back to TRUE.

• Comment out the local_listener and remote_listener parameters in the init.ora file.

• Copy the password file from the 10g Oracle home to the 11g Oracle home.

• Add an entry in the /etc/oratab file.

• Add online redo log groups if required.

  alter database add logfile group 5 size 50M;
  alter database add logfile group 6 size 50M;
  alter database add logfile group 7 size 50M;
  alter database add logfile group 8 size 50M;

• Run the pre-upgrade check script utlu112i.sql. This script is located in the <11g-Oracle-Home>/rdbms/admin directory. You can copy it to the 10g home and execute it from there against the 10g database before starting the upgrade. The output of utlu112i.sql will provide information about additional steps that may be required before the migration.

  $ export ORACLE_SID=orcldb1
  $ sqlplus / as sysdba
  SQL> spool pre_migration_check.log;
  SQL> @/u01/app/oracle/products/11.2.0/db/rdbms/admin/utlu112i.sql
  SQL> spool off;

• Log in as the oracle user and set the new value for the ORACLE_HOME environment variable.

  $ export ORACLE_HOME=$ORACLE_BASE/products/11.2.0/db

• After preparing the new Oracle home, you are ready to proceed with the manual upgrade. Make sure the database is down. Log in as the oracle user and issue the below commands.

  $ export ORACLE_SID=orcldb1
  $ sqlplus / as sysdba

  SQL> startup upgrade;
  ORACLE instance started.

  Total System Global Area 2622255104 bytes
  Fixed Size                  2231232 bytes
  Variable Size             553649216 bytes
  Database Buffers         2063597568 bytes
  Redo Buffers                2777088 bytes
  Database mounted.
  Database opened.

  SQL> spool upgrade.log
  SQL> @?/rdbms/admin/catupgrd.sql

• Once the upgrade finishes, it will shut down the database automatically. Log in again as sysdba and start up in normal mode.
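Before moving on to the component checks below, a quick sanity query to confirm that the instance is now running on the 11.2.0.3 binaries:

  SQL> select instance_name, version, status from v$instance;
  [VERSION should show 11.2.0.3.0 and STATUS should be OPEN]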
• Check dba_registry for the components and their status.

  set lines 200;
  set pages 1000;
  column comp_name format a40;
  column version format a12;
  column status format a15;
  select comp_name, version, status from dba_registry;

  COMP_NAME                                VERSION      STATUS
  ---------------------------------------- ------------ ---------------
  Oracle Enterprise Manager                11.2.0.3.0   VALID
  OLAP Catalog                             10.2.0.5.0   OPTION OFF
  Spatial                                  10.2.0.5.0   OPTION OFF
  Oracle Multimedia                        11.2.0.3.0   VALID
  Oracle XML Database                      11.2.0.3.0   VALID
  Oracle Text                              11.2.0.3.0   VALID
  Oracle Data Mining                       10.2.0.5.0   OPTION OFF
  Oracle Expression Filter                 11.2.0.3.0   VALID
  Oracle Rules Manager                     11.2.0.3.0   VALID
  Oracle Workspace Manager                 11.2.0.3.0   VALID
  Oracle Database Catalog Views            11.2.0.3.0   VALID
  Oracle Database Packages and Types       11.2.0.3.0   VALID
  JServer JAVA Virtual Machine             11.2.0.3.0   VALID
  Oracle XDK                               11.2.0.3.0   VALID
  Oracle Database Java Packages            11.2.0.3.0   VALID
  OLAP Analytic Workspace                  10.2.0.5.0   OPTION OFF
  Oracle OLAP API                          10.2.0.5.0   OPTION OFF
  Oracle Real Application Clusters         11.2.0.3.0   VALID

• Run catuppst.sql, located in the ORACLE_HOME/rdbms/admin directory, to perform upgrade actions that do not require the database to be in UPGRADE mode.

  SQL> @?/rdbms/admin/catuppst.sql

• Run the utlrp.sql script to recompile any invalid objects.

  SQL> spool recompile.log
  SQL> @?/rdbms/admin/utlrp.sql
  SQL> spool off;

• When the utlrp.sql script completes, verify that all the components have been upgraded to 11.2.0.3. Run utlu112s.sql to display the results of the upgrade.

  SQL> @?/rdbms/admin/utlu112s.sql

• Shut down the database and change the value of the below parameters in the init.ora file:

  *.cluster_database='true'
  *.remote_listener='scan-ip:1515'

  Now start the database in normal mode and create an spfile from the pfile. Shut down the database once more and start it again, checking that it comes up properly using the spfile.

• Now shut down the database and change the environment variables back to the 10g Oracle home. We need to remove the 10g database and instances from the cluster.

  $ srvctl remove instance -d orcldb -i orcldb1
  $ srvctl remove instance -d orcldb -i orcldb2
  $ srvctl remove database -d orcldb

• Now change the environment variables back to the 11g Oracle home and add the database and instance to the cluster.

  $ srvctl add database -d orcldb -o /u01/app/oracle/products/11.2.0/db
  $ srvctl add instance -d orcldb -i orcldb1 -n rac1

  Shut down the DB and start it using the srvctl utility.

[On Node 2]

• Now copy the init.ora file and the password file from node 1 to node 2. Start the instance on node 2 and create an spfile.

  SQL> startup
  SQL> create spfile from pfile;

  Shut down the database and start the DB using the spfile.

• Add the instance on node 2 to the cluster.

  $ srvctl add instance -d orcldb -i orcldb2 -n rac2

• Shut down the instances on both nodes and start the database using the srvctl utility.

  $ srvctl start database -d orcldb

• Create an spfile on a shared location.

  SQL> create spfile='+CNTRL_LOG_GRP2' from pfile;

• Run the below command to check the database configuration.

  $ srvctl config database -d orcldb -a
  Database unique name: orcldb
  Database name:
  Oracle home: /u01/app/oracle/products/11.2.0/db
  Oracle user: oracle
  Spfile:
  Domain:
  Start options: open
  Stop options: immediate
  Database role: PRIMARY
  Management policy: AUTOMATIC
  Server pools: orcldb
  Database instances: orcldb1,orcldb2
  Disk Groups: CNTRL_LOG_GRP1,DATAGRP,CNTRL_LOG_GRP2
  Mount point paths:
  Services:
  Type: RAC
  Database is enabled
  Database is administrator managed
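The next step needs the full ASM path of the spfile that was just created in +CNTRL_LOG_GRP2. If you do not have it handy, you can look it up as the grid user with asmcmd; a small sketch, and the exact file name will differ in your environment:

  $ export ORACLE_SID=+ASM1
  $ asmcmd find +CNTRL_LOG_GRP2 'spfile*'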
• Change the spfile path and the database name in the cluster configuration.

  $ srvctl modify database -d orcldb -p '+CNTRL_LOG_GRP2/ORCLDB/PARAMETERFILE/spfile.265.823276117'
  $ srvctl modify database -d orcldb -n orcldb

• Detach the 10g Oracle home from the cluster.

  $ /u01/app/oracle/products/10.2.0/db/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=/u01/app/oracle/products/10.2.0/db

• Change the database mode back to archivelog.

  $ srvctl stop database -d orcldb -o immediate

  [Log in from any one node]
  $ export ORACLE_SID=orcldb1
  $ sqlplus / as sysdba
  SQL> startup mount;
  SQL> alter database archivelog;

• Now stop the cluster on both nodes and check whether the DB goes down and comes back up with the cluster (a quick way to do this is sketched below).

That's it, your upgrade is complete.
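A minimal way to do that final check, assuming the Grid home location used earlier in this post (run crsctl as root):

  # /u01/app/11.2.0/grid/bin/crsctl stop crs       [on each node]
  # /u01/app/11.2.0/grid/bin/crsctl start crs      [on each node]
  $ srvctl status database -d orcldb               [both instances should come back automatically]
  $ /u01/app/11.2.0/grid/bin/crsctl stat res -t    [overall resource status]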
