Tuesday, 29 December 2020

get_cs_data.py: error: unrecognized arguments: /opt/exacloud/cs_data.enc

Error while checking the Exadata Storage Server default password on an X8 machine.

[root@exanode1 ~]# /opt/exacloud/get_cs_data.py --data_file /opt/exacloud/cs_data.enc

usage: get_cs_data.py [-h] [--dataonly]

get_cs_data.py: error: unrecognized arguments: --data_file /opt/exacloud/cs_data.enc


[root@exanode1 ~]# /opt/exacloud/get_cs_data.py --dataonly /opt/exacloud/cs_data.enc

usage: get_cs_data.py [-h] [--dataonly]

get_cs_data.py: error: unrecognized arguments: /opt/exacloud/cs_data.enc



Correct Syntax:

[root@exanode1 ~]# /opt/exacloud/get_cs_data.py --dataonly
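The usage line shows why both attempts failed: the parser defines only an optional --dataonly flag, with no --data_file option and no positional argument, so any extra token is rejected. A minimal argparse sketch of such an interface (the real script's internals are not shown here; this only mirrors its usage line):

```python
import argparse

# Hypothetical parser mirroring "usage: get_cs_data.py [-h] [--dataonly]".
# It declares no positional arguments, so a trailing file path is rejected.
parser = argparse.ArgumentParser(prog="get_cs_data.py")
parser.add_argument("--dataonly", action="store_true",
                    help="print only the decrypted data")

args = parser.parse_args(["--dataonly"])   # accepted
print(args.dataonly)                       # True

try:
    parser.parse_args(["--dataonly", "/opt/exacloud/cs_data.enc"])
except SystemExit:
    # argparse prints "error: unrecognized arguments: ..." and exits non-zero
    print("rejected")
```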

***************************

Tuesday, 15 December 2020

Create an IPS package for an Oracle incident

[oracle@ trace]$ adrci

ADRCI: Release 19.0.0.0.0 - Production on Thu Oct 22 02:44:32 2020

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u02/app/oracle"

adrci> show homes

ADR Homes: 

diag/rdbms/racdb01/racdb012

adrci> set home diag/rdbms/racdb01/racdb012

adrci> show problem

ADR Home = /u02/app/oracle/diag/rdbms/racdb01/racdb012:

*************************************************************************

PROBLEM_ID     PROBLEM_KEY                         LAST_INCIDENT          LASTINC_TIME             

------------ ------------------------------------ --------------------  ------------------------- 

2             ORA 600 [KGL-heap-size-exceeded]         892681             2020-10-22 02:05:02.259000 -07:00       

1 row fetched


adrci> show incident

ADR Home = /u02/app/oracle/diag/rdbms/racdb01/racdb012:

*************************************************************************

INCIDENT_ID             PROBLEM_KEY                                 CREATE_TIME                              

------------- ------------------------------------            --------------------------------

898849           ORA 600 [KGL-heap-size-exceeded]              2020-10-22 01:47:17.772000 -07:00       

898850           ORA 600 [KGL-heap-size-exceeded]              2020-10-22 01:47:46.487000 -07:00       

893321           ORA 600 [KGL-heap-size-exceeded]              2020-10-22 01:48:15.119000 -07:00       

895665           ORA 600 [KGL-heap-size-exceeded]              2020-10-22 01:48:48.707000 -07:00       

897425           ORA 600 [KGL-heap-size-exceeded]              2020-10-22 01:51:47.065000 -07:00       

5 rows fetched


adrci> ips pack incident 898849 in /tmp

Generated package 3 in file /tmp/ORA600KGL_20201022024903_COM_1.zip, mode complete

Additional incremental files:

 /tmp/ORA600KGL_20201022024903_INC_2.zip
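adrci also has a non-interactive exec mode, so the same package can be produced from a shell one-liner (same home and incident ID as in the session above; check adrci -help for your release):

```
adrci exec="set home diag/rdbms/racdb01/racdb012; ips pack incident 898849 in /tmp"
```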


Friday, 20 November 2020

ORA-28361: master key not yet set (CREATE TABLESPACE error in an OCI database)

Error while creating a tablespace in an OCI database.
We copied a 19c database from on-premises to an OCI DB system; TDE was not enabled in the on-premises database.

SQL> CREATE BIGFILE  TABLESPACE  USERS1 DATAFILE '+DATAC1'   SIZE 200M AUTOEXTEND ON NEXT 100M MAXSIZE 10G;
CREATE BIGFILE TABLESPACE  USERS1 DATAFILE '+DATAC1' SIZE 200M AUTOEXTEND ON NEXT 100M MAXSIZE 10G
*
ERROR at line 1:
ORA-28361: master key not yet set

The wallet is open, and there are no issues starting the database with the wallet:
SQL> select * from v$encryption_wallet;
WRL_TYPE  WRL_PARAMETER                           STATUS  WALLET_TYPE  WALLET_OR  KEYSTORE  FULLY_BAC  CON_ID
--------  --------------------------------------  ------  -----------  ---------  --------  ---------  ------
FILE      /home/oracle/stgdb01/wallet_root/tde/   OPEN    AUTOLOGIN    SINGLE     NONE      NO         0


Log in to the database as SYSDBA and set the master key:
SQL> ADMINISTER KEY MANAGEMENT SET KEY USING TAG 'rotate_key' FORCE KEYSTORE IDENTIFIED BY "Password" WITH BACKUP USING 'backup_key';

keystore altered.

NOTE: By default, the keystore password is set to the administration password specified when the database deployment is created from the cloud console.

SQL> select * from v$encryption_wallet;
WRL_TYPE  WRL_PARAMETER                           STATUS  WALLET_TYPE  WALLET_OR  KEYSTORE  FULLY_BAC  CON_ID
--------  --------------------------------------  ------  -----------  ---------  --------  ---------  ------
FILE      /home/oracle/stgdb01/wallet_root/tde/   OPEN    AUTOLOGIN    SINGLE     NONE      NO         0

The tablespace is now created without error:
SQL> CREATE BIGFILE  TABLESPACE  USERS1 DATAFILE '+DATAC1'   SIZE 200M AUTOEXTEND ON NEXT 100M MAXSIZE 10G;

Tablespace created.


Thursday, 29 October 2020

Permissions granted to the object storage service principal "" to this bucket are insufficient.

Error while copying an object between tenancies within a region:

Permissions granted to the object storage service principal "objectstorage-ap-mumbai-1" to this bucket are insufficient.


We have to create a policy that authorizes the Object Storage service in the specified region to manage Object Storage namespaces, buckets, and their associated objects in all compartments in the tenancy.
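Per the OCI documentation for copying objects, the policy statement takes this form for our region (verify the region identifier against the one in the error message):

```
Allow service objectstorage-ap-mumbai-1 to manage object-family in tenancy
```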

Create Policy:

1. Open the navigation menu. Under Governance and Administration, click Identity and choose Policies.
2. Click Create Policy and create the policy.
3. With the policy created, retry the copy: click Copy Object.
4. The work request is now submitted without error.
5. Once the work request completes, the object is visible in the target bucket.




Tuesday, 20 October 2020

ORA-00245: control file backup failed; in Oracle RAC, target might not be on shared storage


The RMAN backup failed with the following error.

Error:

channel c4: backup set complete, elapsed time: 00:00:11

RMAN-03009: failure of backup command on c2 channel at 10/20/2020 01:00:30

ORA-00245: control file backup failed; in Oracle RAC, target might not be on shared storage

continuing other job steps, job failed will not be re-run

channel c1: finished piece 1 at Oct 20 1 01:04:35

piece handle=/backup/r/full/full_PRDDB31_1086397215_50020_1 tag=TAG2013401T10017 comment=NONE

channel c1: backup set complete, elapsed time: 00:04:20

released channel: c1

released channel: c2

released channel: c3

released channel: c4

RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================


RMAN-03009: failure of backup command on c2 channel at 10/20/2020 01:00:30

ORA-00245: control file backup failed; in Oracle RAC, target might not be on shared storage


Cause: The backup location is not accessible from all cluster nodes.


Solution:

Mount /backup on all nodes and check permissions on the backup folder.
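If mounting /backup on every node is not feasible, a commonly documented alternative for ORA-00245 is to keep the snapshot controlfile on shared storage, so whichever instance takes the controlfile backup can write it; the disk group and file name below are illustrative:

```
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+DATA/snapcf_prddb3.f';
RMAN> SHOW SNAPSHOT CONTROLFILE NAME;
```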



Monday, 5 October 2020

GI July 2020 RU 19.8.0.0 (31305339) patching issues; resuming a failed patch with opatchauto

We hit a couple of issues while applying the GI July RU 19.8.0.0 patch (31305339) on cluster node 2; the patch had applied successfully on node 1.

Similar errors can occur for both the Grid and DB homes while applying an RU patch, and the solution is the same for both.

Error 1:
opatchauto apply failed with /app/oraInventory/ContentsXML/oui-patch.xml (Permission denied) on node 2

grid patch apply on node2:
root@racdbnode02~ # opatchauto apply /u01/app/19c_Software/31305339 -oh /u01/app/19.7.0.0/grid

OPatchauto session is initiated at Sun Oct  4 02:52:32 2020
System initialization log file is /u01/app/19.7.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-10-04_02-52-36AM.log.

Session log file is /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-10-04_02-52-49AM.log
The id for this session is ST84
Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.7.0.0/grid
Patch applicability verified successfully on home /u01/app/19.7.0.0/grid

Bringing down CRS service on home /u01/app/19.7.0.0/grid
CRS service brought down successfully on home /u01/app/19.7.0.0/grid

Start applying binary patch on home /u01/app/19.7.0.0/grid
Failed while applying binary patches on home /u01/app/19.7.0.0/grid

Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : racdbnode02->/u01/app/19.7.0.0/grid Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/19.7.0.0/grid, host: racdbnode02.
Command failed:  /u01/app/19.7.0.0/grid/OPatch/opatchauto  apply /u01/app/19c_Software/31305339 -oh /u01/app/19.7.0.0/grid -target_type cluster -binary -invPtrLoc /u01/app/19.7.0.0/grid/oraInst.loc -jre /u01/app/19.7.0.0/grid/OPatch/jre -persistresult /u01/app/19.7.0.0/grid/opatchautocfg/db/sessioninfo/sessionresult_racdbnode02_crs_4.ser -analyzedresult /u01/app/19.7.0.0/grid/opatchautocfg/db/sessioninfo/sessionresult_analyze_racdbnode02_crs_4.ser
Command failure output: 
==Following patches FAILED in apply:

Patch: /u01/app/19c_Software/31305339/31281355
Log: /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-10-04_02-55-30AM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: ApplySession failed in system modification phase... 'ApplySession::apply failed: java.io.IOException: oracle.sysman.oui.patch.PatchException: java.io.FileNotFoundException: /app/oraInventory/ContentsXML/oui-patch.xml (Permission denied)' 
After fixing the cause of failure Run opatchauto resume
]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Sun Oct  4 03:02:44 2020
Time taken to complete the session 10 minutes, 12 seconds
 opatchauto failed with error code 42


Cause: file permission issue. Comparing the file's permissions on both nodes shows they differ:
root@racdbnode02~ # ls -ltr /app/oraInventory/ContentsXML/oui-patch.xml
-rw-r--r-- 1 oracle oinstall 174 Oct  4 03:52 /app/oraInventory/ContentsXML/oui-patch.xml

root@racdbnode01~ # ls -ltr /app/oraInventory/ContentsXML/oui-patch.xml
-rw-rw---- 1 oracle oinstall 174 Oct  4 02:38 /app/oraInventory/ContentsXML/oui-patch.xml

Solution for error 1: Change permissions to 660 
root@racdbnode02~ # chmod 660 /app/oraInventory/ContentsXML/oui-patch.xml
root@racdbnode02~ # ls -ltr /app/oraInventory/ContentsXML/oui-patch.xml
-rw-rw---- 1 oracle oinstall 174 Oct  4 03:52 /app/oraInventory/ContentsXML/oui-patch.xml
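Comparing inventory file permissions across nodes by eye is error-prone; a small sketch that reports a file's permission bits (the demo uses a temporary file standing in for oui-patch.xml):

```python
import os
import stat
import tempfile

def mode_of(path: str) -> str:
    """Return a file's permission bits as an octal string, e.g. '660'."""
    return oct(stat.S_IMODE(os.stat(path).st_mode))[2:]

# Demo: create a scratch file and set it to 660, as done on racdbnode02
with tempfile.NamedTemporaryFile(delete=False) as f:
    demo = f.name
os.chmod(demo, 0o660)
print(mode_of(demo))   # 660
os.remove(demo)
```

Run the same check on each node and diff the results.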

Restart the patch apply with the resume option (opatchauto resume resumes the previous patch session):

root@racdbnode02~ # opatchauto resume 
OPatchauto session is initiated at Sun Oct  4 03:31:20 2020
Session log file is /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-10-04_03-31-23AM.log
Resuming existing session with id ST84

Start applying binary patch on home /u01/app/19.7.0.0/grid
Failed while applying binary patches on home /u01/app/19.7.0.0/grid

Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : racdbnode02->/u01/app/19.7.0.0/grid Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/19.7.0.0/grid, host: racdbnode02.
Command failed:  /u01/app/19.7.0.0/grid/OPatch/opatchauto  apply /u01/app/19c_Software/31305339 -oh /u01/app/19.7.0.0/grid -target_type cluster 
-binary -invPtrLoc /u01/app/19.7.0.0/grid/oraInst.loc -jre /u01/app/19.7.0.0/grid/OPatch/jre 
-persistresult /u01/app/19.7.0.0/grid/opatchautocfg/db/sessioninfo/sessionresult_racdbnode02_crs_4.ser 
-analyzedresult /u01/app/19.7.0.0/grid/opatchautocfg/db/sessioninfo/sessionresult_analyze_racdbnode02_crs_4.ser
Command failure output: 
==Following patches FAILED in apply:

Patch: /u01/app/19c_Software/31305339/31281355
Log: 
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: Unable to create patchObject
Possible causes are:
   ORACLE_HOME/inventory/oneoffs/31281355 is corrupted. PatchObject constructor: 
Input file "/u01/app/19.7.0.0/grid/inventory/oneoffs/31281355/etc/config/actions" or "/u01/app/19.7.0.0/grid/inventory/oneoffs/31281355/etc/config/inventory" does not exist. 

After fixing the cause of failure Run opatchauto resume
]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Sun Oct  4 03:31:50 2020
Time taken to complete the session 0 minute, 30 seconds
opatchauto failed with error code 42

Error 2: The patch apply failed again.
Cause: Patch metadata files are missing:
Input file "/u01/app/19.7.0.0/grid/inventory/oneoffs/31281355/etc/config/actions" or "/u01/app/19.7.0.0/grid/inventory/oneoffs/31281355/etc/config/inventory" does not exist.

Solution: We can continue the patch apply in either of two ways.
Solution 1 (the right way): The patch was applied successfully on node 1, so copy the patch inventory directory from node 1.
1. Back up the 31281355 folder on node 1, copy the tar file to node 2, and untar it:
Node 1: tar -cvf 31281355_node1.tar /u01/app/19.7.0.0/grid/inventory/oneoffs/31281355
Node2: 
cd /u01/app/19.7.0.0/grid/inventory/oneoffs/
tar -xvf 31281355_node1.tar

Roll back the failed patch:
root@racdbnode02~ # opatchauto rollback /u01/app/19c_Software/31305339/31281355 -oh /u01/app/19.7.0.0/grid

OPatchauto session is initiated at Sun Oct  4 22:49:45 2020
System initialization log file is /u01/app/19.7.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-10-04_10-49-49PM.log.

Session log file is /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-10-04_10-50-04PM.log
The id for this session is QLI7

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.7.0.0/grid
Patch applicability verified successfully on home /u01/app/19.7.0.0/grid

Bringing down CRS service on home /u01/app/19.7.0.0/grid
CRS service brought down successfully on home /u01/app/19.7.0.0/grid

Start rolling back binary patch on home /u01/app/19.7.0.0/grid

Binary patch rolled back successfully on home /u01/app/19.7.0.0/grid

Starting CRS service on home /u01/app/19.7.0.0/grid
CRS service started successfully on home /u01/app/19.7.0.0/grid

OPatchAuto successful.
--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:
Host:racdbnode02
CRS Home:/u01/app/19.7.0.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY rolled back:
Patch: /u01/app/19c_Software/31305339/31281355
Log: /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-10-04_22-51-53PM_1.log
OPatchauto session completed at Sun Oct  4 23:01:53 2020
Time taken to complete the session 12 minutes, 8 seconds

[grid@racdbnode02 ~]$ opatch lspatches
31335188;TOMCAT RELEASE UPDATE 19.0.0.0.0 (31335188)
31305087;OCW RELEASE UPDATE 19.8.0.0.0 (31305087)
31304218;ACFS RELEASE UPDATE 19.8.0.0.0 (31304218)
30869156;Database Release Update : 19.7.0.0.200414 (30869156)
OPatch succeeded.

Apply Patch:
root@racdbnode02~ # opatchauto apply /u01/app/19c_Software/31305339/31281355 -oh /u01/app/19.7.0.0/grid

OPatchauto session is initiated at Sun Oct  4 23:03:34 2020
System initialization log file is /u01/app/19.7.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-10-04_11-03-38PM.log.

Session log file is /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-10-04_11-03-53PM.log
The id for this session is FEF8

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.7.0.0/grid
Patch applicability verified successfully on home /u01/app/19.7.0.0/grid

Bringing down CRS service on home /u01/app/19.7.0.0/grid
CRS service brought down successfully on home /u01/app/19.7.0.0/grid

Start applying binary patch on home /u01/app/19.7.0.0/grid
Binary patch applied successfully on home /u01/app/19.7.0.0/grid

Starting CRS service on home /u01/app/19.7.0.0/grid
CRS service started successfully on home /u01/app/19.7.0.0/grid
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:

Host:racdbnode02
CRS Home:/u01/app/19.7.0.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /u01/app/19c_Software/31305339/31281355
Log: /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-10-04_23-05-44PM_1.log
OPatchauto session completed at Sun Oct  4 23:20:59 2020
Time taken to complete the session 17 minutes, 25 seconds

[grid@racdbnode02 ~]$ opatch lspatches
31281355;Database Release Update : 19.8.0.0.200714 (31281355)
31335188;TOMCAT RELEASE UPDATE 19.0.0.0.0 (31335188)
31305087;OCW RELEASE UPDATE 19.8.0.0.0 (31305087)
31304218;ACFS RELEASE UPDATE 19.8.0.0.0 (31304218)
OPatch succeeded.

Solution 2:
Copy the missing metadata files from the patch staging folder and apply the patch.
1. mkdir -p /u01/app/19.7.0.0/grid/inventory/oneoffs/31281355/etc/config

cd /u01/app/19c_Software/31305339/31281355/etc/config
[grid@racdbnode02 config]$ ls -ltr
total 2912
-rwxrwxrwx 1 grid oinstall  282218 Jul 10 05:23 inventory.xml
-rwxrwxrwx 1 grid oinstall 2697765 Jul 10 05:23 actions.xml

[grid@racdbnode02 config]$ cp * /u01/app/19.7.0.0/grid/inventory/oneoffs/31281355/etc/config

2. Restart the patch apply with the resume option:
root@racdbnode02 ~ # opatchauto resume 

OPatchauto session is initiated at Sun Oct  4 03:47:42 2020
Session log file is /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-10-04_03-47-45AM.log
Resuming existing session with id ST84

Start applying binary patch on home /u01/app/19.7.0.0/grid
Binary patch applied successfully on home /u01/app/19.7.0.0/grid

Checking shared status of home.....

Starting CRS service on home /u01/app/19.7.0.0/grid
CRS service started successfully on home /u01/app/19.7.0.0/grid

OPatchAuto successful
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:

Host:racdbnode02
CRS Home:/u01/app/19.7.0.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:
Patch: /u01/app/19c_Software/31305339/31281355
Reason: This patch is already been applied, so not going to apply again.

==Following patches were SUCCESSFULLY applied:
Patch: /u01/app/19c_Software/31305339/31304218
Log: /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-10-04_03-48-08AM_1.log

Patch: /u01/app/19c_Software/31305339/31305087
Log: /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-10-04_03-48-08AM_1.log

Patch: /u01/app/19c_Software/31305339/31335188
Log: /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-10-04_03-48-08AM_1.log

OPatchauto session completed at Sun Oct  4 03:58:33 2020
Time taken to complete the session 10 minutes, 51 seconds
root@racdbnode02 ~ #


[grid@racdbnode02 ~]$ opatch lspatches
31281355;Database Release Update : 19.8.0.0.200714 (31281355)
31335188;TOMCAT RELEASE UPDATE 19.0.0.0.0 (31335188)
31305087;OCW RELEASE UPDATE 19.8.0.0.0 (31305087)
31304218;ACFS RELEASE UPDATE 19.8.0.0.0 (31304218)
OPatch succeeded.


The patch apply completes successfully by skipping the previously failed patch (31281355),
and it misleads us by showing the patch in opatch lspatches and opatch lsinventory.

However, checking the full version details from gv$instance, or connecting with sqlplus, shows the node is still at the previous version:

select INSTANCE_NAME,host_name,status,logins,version,VERSION_FULL, to_char(STARTUP_TIME,'DD/MM/YYYY HH24:MI:SS') "STARTUP_TIME" from gv$instance;
INSTANCE_NAME      HOST_NAME          STATUS       LOGINS     VERSION           VERSION_FULL      STARTUP_TIME
----------------  ----------------- ------------ ---------- ----------------- ----------------- -------------------
+ASM2             racdbnode02         OPEN         ALLOWED    19.0.0.0.0        19.7.0.0.0        04/10/2020 06:41:00
+ASM1             racdbnode01         OPEN         ALLOWED    19.0.0.0.0        19.8.0.0.0        04/10/2020 06:41:00


We can also observe it when connecting to the database with sqlplus "/as sysdba":

[grid@racdbnode02 config]$ sqlplus "/as sysdba"
SQL*Plus: Release 19.0.0.0.0 - Production on Mon Oct 5 21:56:36 2020
Version 19.8.0.0.0
Copyright (c) 1982, 2020, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.7.0.0.0

SQL>

Note: The right way is to copy the patch directory from the successfully patched node, roll back the failed patch, and reapply it (Solution 1).

Thursday, 24 September 2020

Cluster Service with and without Domain Parameter

Service with Domain Parameter: 
SQL> show parameter domain
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_domain                            string      corporation.local

Add Service with srvctl: (If you do not qualify the names in this parameter with a domain, Oracle qualifies them with the value of the DB_DOMAIN parameter) 

[oracle@racnode05 ~]$srvctl add service -d poc19c00 -r POC19C001,POC19C002 -s poc19c0_s1 -l PRIMARY -e SELECT -m BASIC -P BASIC
   
[oracle@racnode05 ~]$ srvctl start service -d poc19c00
[oracle@racnode05 ~]$ srvctl status service -d poc19c00
Service poc19c0_s1 is running on instance(s) POC19C001,POC19C002

Service details from listener status:
[oracle@racnode05 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 24-SEP-2020 03:37:12
Copyright (c) 1991, 2020, Oracle.  All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                23-SEP-2020 23:50:42
Uptime                    0 days 3 hr. 46 min. 29 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /oracle/19.0.0/grid/network/admin/listener.ora
Listener Log File         /oracle/grid/diag/tnslsnr/racnode05/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=00.000.00.5)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=00.000.00.6)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADSKGRP1" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADSKGRP2" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_VOTEDISK" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "POC19C00" has 1 instance(s).
  Instance "POC19C001", status READY, has 2 handler(s) for this service...
Service "poc19c0_s1.corporation.local" has 1 instance(s).
  Instance "POC19C001", status READY, has 2 handler(s) for this service...
The command completed successfully

Service without Domain Parameter:
SQL> show parameter domain
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_domain                            string      corporation.local

Unset the domain parameter and restart the database:
SQL>  alter system set db_domain='' scope=spfile sid='*';
System altered.

[oracle@racnode05 ~]$ srvctl status database -d poc19c00
Instance POC19C001 is running on node racnode05
Instance POC19C002 is running on node racnode06

[oracle@racnode05 ~]$ srvctl stop database -d poc19c00
[oracle@racnode05 ~]$ srvctl start database -d poc19c00
[oracle@racnode05 ~]$ srvctl status database -d poc19c00
Instance POC19C001 is running on node racnode05
Instance POC19C002 is running on node racnode06

Check Service details:
[oracle@racnode05 ~]$ lsnrctl status listener

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 24-SEP-2020 03:41:03
Copyright (c) 1991, 2020, Oracle.  All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                23-SEP-2020 23:50:42
Uptime                    0 days 3 hr. 50 min. 20 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /oracle/19.0.0/grid/network/admin/listener.ora
Listener Log File         /oracle/grid/diag/tnslsnr/racnode05/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=00.000.00.5)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=00.000.00.6)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADSKGRP1" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADSKGRP2" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_VOTEDISK" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "POC19C00" has 1 instance(s).
  Instance "POC19C001", status READY, has 2 handler(s) for this service...
Service "poc19c0_s1" has 1 instance(s).
  Instance "POC19C001", status READY, has 2 handler(s) for this service...
The command completed successfully

If DB_DOMAIN is not specified, then no domain is applied to non-qualified SERVICE_NAMES values.
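This matters for client connect strings, which must match the registered service name. A sketch of the CONNECT_DATA line in tnsnames.ora for the two cases (surrounding descriptor omitted):

```
# With DB_DOMAIN=corporation.local, connect to the qualified name:
(CONNECT_DATA = (SERVICE_NAME = poc19c0_s1.corporation.local))

# With DB_DOMAIN unset, connect to the unqualified name:
(CONNECT_DATA = (SERVICE_NAME = poc19c0_s1))
```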

Wednesday, 23 September 2020

Find default ExaCS Storage Server Username and Password

On Exadata Cloud@Customer, the preconfigured user for Exadata Storage Server is cloud_user_clustername, where clustername is the name of the virtual machine (VM) cluster that is being used.


Check Clustername:

[grid@exanode1 ~]$ crsctl get cluster name

CRS-6724: Current cluster name is 'CLUSTER_NAME'


The password for cloud_user_clustername is initially set to a random value


Check default password:

[root@exanode1 ~]# /opt/exacloud/get_cs_data.py --data_file /opt/exacloud/cs_data.enc

**********************  PASSWORD




Tuesday, 8 September 2020

failed to start asm on node4 with error CRS-2552: There are no available instances of resource 'ora.asm' to start.

Unable to start the ASM instance after adding a fourth node to a three-node 19c cluster.

[grid@racnode04 ~]$ srvctl start  asm -n racnode04
PRCR-1013 : Failed to start resource ora.asm
PRCR-1064 : Failed to start resource ora.asm on node racnode04
CRS-2552: There are no available instances of resource 'ora.asm' to start.

Verify asm status 
[grid@racnode03 ~]$ srvctl status asm
ASM is running on racnode02,racnode03,racnode01

Check asm instance count:
[grid@racnode04 ~]$ srvctl config asm
ASM home: <CRS home>
Password file: +VoteDisk/orapwASM
Backup of Password file: +VOTEDISK/orapwASM_backup
ASM listener: 
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

Check cardinality
[grid@racnode01 ]$ crsctl status resource ora.asm -f | grep CARDINALITY=
CARDINALITY=3

Both the instance count and the cardinality are set to 3, which allows only three ASM instances; ASM on node 4 can start only after raising these to 4.

Modify the Oracle ASM instance count (cardinality); set it to 4, or to ALL to allow an instance on every node:
[grid@racnode04 ~]$ srvctl modify asm -count 4
[grid@racnode04 ~]$ srvctl modify asm -count ALL

Verify the Oracle ASM instance count (cardinality):
[grid@racnode04 ~]$ srvctl config asm
ASM home: <CRS home>
Password file: +VoteDisk/orapwASM
Backup of Password file: +VOTEDISK/orapwASM_backup
ASM listener: 
ASM instance count: 4
Cluster ASM listener: ASMNET1LSNR_ASM

[grid@racnode04 ~]$ crsctl status resource ora.asm -f | grep CARDINALITY=
CARDINALITY=4

Start ASM instance on Node4:
[grid@racnode04 ~]$ srvctl start  asm -n racnode04

[grid@racnode04 ~]$ srvctl status asm
ASM is running on racnode02,racnode03,racnode01,racnode04


19.7 Grid upgrade: rootupgrade.sh fails on first node

rootupgrade.sh failed with the error below while upgrading Grid Infrastructure from 18.3 to 19.7.0.0.

Error:

CRS-2676: Start of 'ora.cssdmonitor' on 'node01' succeeded

CRS-1609: This node is unable to communicate with other nodes in the cluster and is going down to preserve cluster integrity;

details at (:CSSNM00086:) in /app/grid/diag/crs/node01/crs/trace/ocssd.trc.

CRS-2883: Resource 'ora.cssd' failed during Clusterware stack start.

CRS-4406: Oracle High Availability Services synchronous start failed.

CRS-41053: checking Oracle Grid Infrastructure for file permission issues

CRS-4000: Command Start failed, or completed with errors. 2020/09/07 09:08:46

CLSRSC-117: (Bad argc for has:clsrsc-117) Died at /u01/app/19.3.0.0/grid/crs/install/crsupgrade.pm line 1617.

 

The "unable to communicate with other nodes" errors in the alert and trace files can send us down the wrong path; we started by checking communication between the nodes.

 

1. Verified SSH connectivity between the nodes; it works fine.

2. Verified ping and traceroute; they look good.

From Node 1:

+ ping -s 9000 -c 4 -I <node1-private address> <node1-private address>

+ ping -s 9000 -c 4 -I <node1-private address> <node2-private address>

 

+ traceroute -s <node1-private address> -r -F <node1-private address> 8972

+ traceroute -s <node1-private address> -r -F <node2-private address> 8972

 

From Node 2:

+ ping -s 9000 -c 4 -I <node2-private address> <node1-private address>

+ ping -s 9000 -c 4 -I <node2-private address> <node2-private address>

 

+ traceroute -s <node2-private address> -r -F <node1-private address> 8972

+ traceroute -s <node2-private address> -r -F <node2-private address> 8972
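The probe sizes above are tied to the jumbo-frame MTU on the interconnect: an 8972-byte don't-fragment probe is the 9000-byte MTU minus the 20-byte IPv4 header and the 8-byte ICMP (or UDP) header, so a successful probe confirms the full MTU end to end:

```python
mtu = 9000        # jumbo-frame MTU on the private interconnect
ip_header = 20    # IPv4 header without options
l4_header = 8     # ICMP echo (or UDP) header
payload = mtu - ip_header - l4_header
print(payload)    # 8972
```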

 

While checking gipcd.trc, we found some handshake failures:

2020-09-07 08:21:27.483 : GIPCTLS:474797824:  gipcmodTlsAuthInit: tls context initialized successfully

2020-09-07 08:21:27.524 :GIPCXCPT:474797824:  gipcmodTlsLogErr: [NZOS], ssl_Handshake failed to perform operation on handshake with NZERROR [29024]

2020-09-07 08:21:27.524 :GIPCXCPT:474797824:  gipcmodTlsAuthStart: ssl_Handshake() failed with nzosErr : 29024, ret gipcretTlsErr (49)

 

As per MOS note 2667217.1, a similar error was reported for the 19.6 upgrade.

Workaround on 19.7:

1) Run rootupgrade.sh on node1

2) When it fails on node 1 with this error, shut down CRS on node 2:

  cd <18c_Gridhome/bin>

   ./crsctl stop crs

3) Rerun rootupgrade.sh on node 1.

Wednesday, 2 September 2020

How to Install and Upgrade Terraform

Install Terraform on macOS:
1. Install Homebrew (the missing package manager for macOS and Linux):
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
2. Install Terraform:
$ brew install terraform
3. Check the Terraform version:
$ terraform -v
Terraform v0.13.2

Install Terraform on Linux:
1. Download terraform_0.13.2_linux_amd64.zip from the Terraform downloads page.
2. Copy the software to the Linux machine and unzip it:
/home/oracle>unzip terraform_0.13.2_linux_amd64.zip
Archive:  terraform_0.13.2_linux_amd64.zip
  inflating: terraform      
         
/home/oracle> ls -ltr
total 117600
-rwxr-xr-x 1 oracle oinstall 85545348 Sep 1 07:39 terraform
-rw-r--r-- 1 oracle oinstall 34869122 Sep 1 08:16 terraform_0.13.2_linux_amd64.zip

3. Move the terraform binary to a directory on the PATH:
/home/oracle> mv terraform /usr/local/bin

/home/oracle> ls -ltr /usr/local/bin
total 83564
-rwxr-xr-x 1 oracle oinstall 85545348 Sep  1 07:39 terraform

4. Check the Terraform version:
/home/oracle> terraform -v
Terraform v0.13.2

Upgrade Terraform to the latest version on macOS:
1. The present version, 0.13.2, is out of date:
$ terraform version
Terraform v0.13.2
Your version of Terraform is out of date! The latest version
is 0.13.4. You can update by downloading from https://www.terraform.io/downloads.html

2. Upgrade to the latest version (0.13.4):
$ brew upgrade terraform
Updating Homebrew...
==> Auto-updated Homebrew!
Updated 1 tap (homebrew/core).
==> New Formulae
asroute               flit                  leaf                  libnetfilter-queue    libxcomposite         libxkbfile            libxv                 server-go             xdpyinfo
blaze                 font-util             libaio                libnfnetlink          libxcursor            libxmu                libxvmc               shtools               xorgproto
box2d                 foreman               libdmx                libpciaccess          libxdamage            libxpm                libxxf86dga           structurizr-cli       xtrans
cloudformation-guard  fpart                 libdrm                libpthread-stubs      libxdmcp              libxrandr             libxxf86vm            tfsec                 zsh-you-should-use
commitizen            gcalcli               libfontenc            libsm                 libxext               libxrender            matplotplusplus       trunk
cvs-fast-export       git-hound             libfs                 libx11                libxfixes             libxres               or-tools              util-macros
device-mapper         gitql                 libgccjit             libxau                libxfont              libxscrnsaver         postgresql@12         vivid
dotnet                googletest            libhandy              libxaw                libxft                libxshmfence          protoc-gen-go-grpc    webify
envoy                 gost                  libice                libxaw3d              libxi                 libxt                 python@3.9            x86_64-elf-gdb
fleet-cli             halide                libmnl                libxcb                libxinerama           libxtst               rustscan              xcb-proto
==> Updated Formulae
Updated 1203 formulae.
==> Renamed Formulae
gst-validate -> gst-devtools
==> Deleted Formulae
meson-internal                                                                                       xu4

==> Upgrading 1 outdated package:
terraform 0.13.2 -> 0.13.4
==> Upgrading terraform 0.13.2 -> 0.13.4 
==> Downloading https://homebrew.bintray.com/bottles/terraform-0.13.4.catalina.bottle.tar.gz
==> Downloading from https://d29vzk4ow07wi7.cloudfront.net/05e7bf567f54c8396df8cf90470e573f5b08ebe8920bad1e638f531a54a152b6?response-content-disposition=attachment%3Bfilename%3D%22terraform-0.13.4.cat
######################################################################## 100.0%
==> Pouring terraform-0.13.4.catalina.bottle.tar.gz
🍺  /usr/local/Cellar/terraform/0.13.4: 6 files, 67.6MB
==> `brew cleanup` has not been run in 30 days, running now...
Removing: /usr/local/Cellar/terraform/0.13.2... (6 files, 67.6MB)
Removing: /Users/jay/Library/Caches/Homebrew/terraform--0.13.2.catalina.bottle.tar.gz... (19.8MB)
Removing: /Users/jay/Library/Logs/Homebrew/terraform... (64B)

3. Verify the Terraform version after the upgrade:
$ terraform version 
Terraform v0.13.4