Tuesday 29 December 2020

get_cs_data.py: error: unrecognized arguments: /opt/exacloud/cs_data.enc

Error while checking the Exadata Storage Server default password on an X8 machine:

[root@exanode1 ~]# /opt/exacloud/get_cs_data.py --data_file /opt/exacloud/cs_data.enc

usage: get_cs_data.py [-h] [--dataonly]

get_cs_data.py: error: unrecognized arguments: --data_file /opt/exacloud/cs_data.enc


[root@exanode1 ~]# /opt/exacloud/get_cs_data.py --dataonly /opt/exacloud/cs_data.enc

usage: get_cs_data.py [-h] [--dataonly]

get_cs_data.py: error: unrecognized arguments: /opt/exacloud/cs_data.enc



Correct syntax (per the usage message, the script takes only the --dataonly flag and no file argument):

[root@exanode1 ~]# /opt/exacloud/get_cs_data.py --dataonly

***************************

Tuesday 15 December 2020

Create an IPS package for an Oracle incident

[oracle@ trace]$ adrci

ADRCI: Release 19.0.0.0.0 - Production on Thu Oct 22 02:44:32 2020

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u02/app/oracle"

adrci> show homes

ADR Homes: 

diag/rdbms/racdb01/racdb012

adrci> set home diag/rdbms/racdb01/racdb012

adrci> show problem

ADR Home = /u02/app/oracle/diag/rdbms/racdb01/racdb012:

*************************************************************************

PROBLEM_ID     PROBLEM_KEY                         LAST_INCIDENT          LASTINC_TIME             

------------ ------------------------------------ --------------------  ------------------------- 

2             ORA 600 [KGL-heap-size-exceeded]         892681             2020-10-22 02:05:02.259000 -07:00       

1 row fetched


adrci> show incident

ADR Home = /u02/app/oracle/diag/rdbms/racdb01/racdb012:

*************************************************************************

INCIDENT_ID             PROBLEM_KEY                                 CREATE_TIME                              

------------- ------------------------------------            --------------------------------

898849           ORA 600 [KGL-heap-size-exceeded]              2020-10-22 01:47:17.772000 -07:00       

898850           ORA 600 [KGL-heap-size-exceeded]              2020-10-22 01:47:46.487000 -07:00       

893321           ORA 600 [KGL-heap-size-exceeded]              2020-10-22 01:48:15.119000 -07:00       

895665           ORA 600 [KGL-heap-size-exceeded]              2020-10-22 01:48:48.707000 -07:00       

897425           ORA 600 [KGL-heap-size-exceeded]              2020-10-22 01:51:47.065000 -07:00       

5 rows fetched


adrci> ips pack incident 898849 in /tmp

Generated package 3 in file /tmp/ORA600KGL_20201022024903_COM_1.zip, mode complete

Additional incremental files:

 /tmp/ORA600KGL_20201022024903_INC_2.zip
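
If needed, the package can also be built at the problem level or for a recent time window; both are standard ADRCI IPS variants (target directory is illustrative):

adrci> ips pack problem 2 in /tmp
adrci> ips pack seconds 300 in /tmp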


Friday 20 November 2020

ORA-28361: master key not yet set - CREATE TABLESPACE error in OCI DB

Error while creating a tablespace in an OCI database: we had copied a 19c database from on-premises to an OCI DB system, and TDE was not enabled in the on-premises database.

SQL> CREATE BIGFILE  TABLESPACE  USERS1 DATAFILE '+DATAC1'   SIZE 200M AUTOEXTEND ON NEXT 100M MAXSIZE 10G;
CREATE BIGFILE TABLESPACE  USERS1 DATAFILE '+DATAC1' SIZE 200M AUTOEXTEND ON NEXT 100M MAXSIZE 10G
*
ERROR at line 1:
ORA-28361: master key not yet set

The wallet is open, and there were no issues starting the database with it:
SQL> select * from v$encryption_wallet;
WRL_TYPE      WRL_PARAMETER   STATUS WALLET_TYPE       WALLET_OR KEYSTORE FULLY_BAC     CON_ID
-------------------- ------------------------------------------------------------ ------------------------------ -------------------- --------- -------- --------- ----------
FILE      /home/oracle/stgdb01/wallet_root/tde/   OPEN AUTOLOGIN       SINGLE NONE NO     0


Log in to the database as SYSDBA and set the master key:
SQL> ADMINISTER KEY MANAGEMENT SET KEY USING TAG 'rotate_key' FORCE KEYSTORE IDENTIFIED BY "Password" WITH BACKUP USING 'backup_key';

keystore altered.

NOTE: By default, the keystore password is set to the administration password that was specified when the database deployment was created from the cloud console.
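
To confirm that a master key now exists, we can also query V$ENCRYPTION_KEYS (columns shown are a subset; values will differ):

SQL> select key_id, tag, creation_time from v$encryption_keys;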

SQL> select * from v$encryption_wallet;
WRL_TYPE      WRL_PARAMETER      STATUS WALLET_TYPE       WALLET_OR KEYSTORE FULLY_BAC     CON_ID
--------------- ------------------------------------    ---------- -------------------- --------- -------- --------- ----------
FILE     /home/oracle/stgdb01/wallet_root/tde/    OPEN AUTOLOGIN       SINGLE NONE NO     0

The tablespace is then created without any error:
SQL> CREATE BIGFILE  TABLESPACE  USERS1 DATAFILE '+DATAC1'   SIZE 200M AUTOEXTEND ON NEXT 100M MAXSIZE 10G;

Tablespace created.


Thursday 29 October 2020

Permissions granted to the object storage service principal "" to this bucket are insufficient.

Error while copying an object between tenancies within a region:

Permissions granted to the object storage service principal "objectstorage-ap-mumbai-1" to this bucket are insufficient.


We have to create a policy that authorizes the service in the specified region to manage Object Storage namespaces, buckets, and their associated objects in all compartments in the tenancy.
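
The policy statement follows the documented pattern below; the service principal name matches the region identifier shown in the error message, so substitute yours:

Allow service objectstorage-ap-mumbai-1 to manage object-family in tenancy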

Create the policy:

1. Open the navigation menu. Under Governance and Administration, click Identity and choose Policies.
2. Click Create Policy, enter the policy statement, and click Create.

With the policy created, let's try the object copy again:

1. Click Copy Object.
2. The work request is now submitted without error.
3. Once the work request completes, the object is visible in the target bucket.




Tuesday 20 October 2020

ORA-00245: control file backup failed; in Oracle RAC, target might not be on shared storage


RMAN backup failed with the error below.

Error:

channel c4: backup set complete, elapsed time: 00:00:11

RMAN-03009: failure of backup command on c2 channel at 10/20/2020 01:00:30

ORA-00245: control file backup failed; in Oracle RAC, target might not be on shared storage

continuing other job steps, job failed will not be re-run

channel c1: finished piece 1 at Oct 20 2020 01:04:35

piece handle=/backup/r/full/full_PRDDB31_1086397215_50020_1 tag=TAG2013401T10017 comment=NONE

channel c1: backup set complete, elapsed time: 00:04:20

released channel: c1

released channel: c2

released channel: c3

released channel: c4

RMAN-00571: ===========================================================

RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============

RMAN-00571: ===========================================================


RMAN-03009: failure of backup command on c2 channel at 10/20/2020 01:00:30

ORA-00245: control file backup failed; in Oracle RAC, target might not be on shared storage


Cause: The backup location is not accessible from all cluster nodes.


Solution:

Mount /backup on all nodes and verify the permissions on the backup directory.
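
If the failing piece is the snapshot controlfile, the usual companion fix for ORA-00245 in RAC is to keep the snapshot controlfile on shared storage, for example in an ASM disk group (disk group and file name below are illustrative):

RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO '+DATAC1/snapcf_prddb31.f';
RMAN> SHOW SNAPSHOT CONTROLFILE NAME;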



Thursday 8 October 2020

Find master node in Oracle 19c Cluster

We can find the Oracle cluster master node using the commands below; a short usage sketch follows the list.


1. ocrconfig -manualbackup (the OCR manual backup is taken by the master node)

2. grep "master node number" in the ocssd trace file (in 19c this lives under $ORACLE_BASE/diag/crs/<hostname>/crs/trace)

3. oclumon manage -get master

4. select MASTER_NODE from v$ges_resource;
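
A minimal usage sketch (host names, paths, and output are illustrative):

[grid@racnode01 ~]$ oclumon manage -get master
Master = racnode01

[grid@racnode01 ~]$ grep -i "master node number" /u01/app/grid/diag/crs/racnode01/crs/trace/ocssd.trc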


Tuesday 6 October 2020

AUTO_SAMPLE_SIZE in DBMS_STATS

DBMS_STATS.AUTO_SAMPLE_SIZE is a constant that can be passed as the ESTIMATE_PERCENT parameter of the Oracle DBMS_STATS procedures to specify that Oracle should automatically determine the appropriate sample size when gathering statistics on a database object.

When AUTO_SAMPLE_SIZE is used, Oracle analyzes the object being sampled and selects an appropriate sample size based on its characteristics. This can help to ensure that accurate statistics are gathered while minimizing the amount of time and system resources required to perform the operation.

From Oracle Database 11g onwards, it is highly recommended to use the default AUTO_SAMPLE_SIZE for ESTIMATE_PERCENT. This is especially important because the newer histogram types (HYBRID and Top Frequency) can only be created if auto sample size is used.
 

Here is an example of how to use DBMS_STATS.AUTO_SAMPLE_SIZE with the DBMS_STATS.GATHER_TABLE_STATS procedure to collect statistics for a table named EMPLOYEES:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'HR',
    tabname          => 'EMPLOYEES',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    degree           => DBMS_STATS.AUTO_DEGREE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);
END;
/

In this example, the ESTIMATE_PERCENT parameter is set to DBMS_STATS.AUTO_SAMPLE_SIZE, indicating that Oracle should automatically determine the appropriate sample size when gathering statistics for the EMPLOYEES table. AUTO_DEGREE similarly lets Oracle choose the degree of parallelism, and CASCADE => TRUE gathers statistics on the table's indexes as well.
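
To see which histogram types were actually created afterwards, we can query the column statistics (output will vary with the data):

SQL> select column_name, histogram
     from dba_tab_col_statistics
     where owner = 'HR' and table_name = 'EMPLOYEES';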


Monday 5 October 2020

GI July 2020 RU 19.8.0.0 (31305339) patching issues; resume failed patch with opatchauto

Had a couple of issues while applying the GI July RU 19.8.0.0 patch (31305339) on cluster node 2; the patch was applied successfully on node 1.

We can expect similar errors for both the grid and DB homes while applying an RU patch, and the solution is the same for both.

Error 1:
opatchauto apply failed with /app/oraInventory/ContentsXML/oui-patch.xml (Permission denied) on node 2

Grid patch apply on node 2:
root@racdbnode02~ # opatchauto apply /u01/app/19c_Software/31305339 -oh /u01/app/19.7.0.0/grid

OPatchauto session is initiated at Sun Oct  4 02:52:32 2020
System initialization log file is /u01/app/19.7.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-10-04_02-52-36AM.log.

Session log file is /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-10-04_02-52-49AM.log
The id for this session is ST84
Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.7.0.0/grid
Patch applicability verified successfully on home /u01/app/19.7.0.0/grid

Bringing down CRS service on home /u01/app/19.7.0.0/grid
CRS service brought down successfully on home /u01/app/19.7.0.0/grid

Start applying binary patch on home /u01/app/19.7.0.0/grid
Failed while applying binary patches on home /u01/app/19.7.0.0/grid

Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : racdbnode02->/u01/app/19.7.0.0/grid Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/19.7.0.0/grid, host: racdbnode02.
Command failed:  /u01/app/19.7.0.0/grid/OPatch/opatchauto  apply /u01/app/19c_Software/31305339 -oh /u01/app/19.7.0.0/grid -target_type cluster -binary -invPtrLoc /u01/app/19.7.0.0/grid/oraInst.loc -jre /u01/app/19.7.0.0/grid/OPatch/jre -persistresult /u01/app/19.7.0.0/grid/opatchautocfg/db/sessioninfo/sessionresult_racdbnode02_crs_4.ser -analyzedresult /u01/app/19.7.0.0/grid/opatchautocfg/db/sessioninfo/sessionresult_analyze_racdbnode02_crs_4.ser
Command failure output: 
==Following patches FAILED in apply:

Patch: /u01/app/19c_Software/31305339/31281355
Log: /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-10-04_02-55-30AM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: ApplySession failed in system modification phase... 'ApplySession::apply failed: java.io.IOException: oracle.sysman.oui.patch.PatchException: java.io.FileNotFoundException: /app/oraInventory/ContentsXML/oui-patch.xml (Permission denied)' 
After fixing the cause of failure Run opatchauto resume
]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Sun Oct  4 03:02:44 2020
Time taken to complete the session 10 minutes, 12 seconds
 opatchauto failed with error code 42


Cause: File permission issue. Comparing the file's permissions on both nodes shows they differ:
root@racdbnode02~ # ls -ltr /app/oraInventory/ContentsXML/oui-patch.xml
-rw-r--r-- 1 oracle oinstall 174 Oct  4 03:52 /app/oraInventory/ContentsXML/oui-patch.xml

root@racdbnode01 ~ # ls -ltr /app/oraInventory/ContentsXML/oui-patch.xml
-rw-rw---- 1 oracle oinstall 174 Oct  4 02:38 /app/oraInventory/ContentsXML/oui-patch.xml

Solution for error 1: Change the permissions to 660.
root@racdbnode02 ~ # chmod 660 /app/oraInventory/ContentsXML/oui-patch.xml
root@racdbnode02 ~ # ls -ltr /app/oraInventory/ContentsXML/oui-patch.xml
-rw-rw---- 1 oracle oinstall 174 Oct  4 03:52 /app/oraInventory/ContentsXML/oui-patch.xml

Restart the patch apply with the resume option (opatchauto resume resumes the previous patch session):

root@racdbnode02~ # opatchauto resume 
OPatchauto session is initiated at Sun Oct  4 03:31:20 2020
Session log file is /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-10-04_03-31-23AM.log
Resuming existing session with id ST84

Start applying binary patch on home /u01/app/19.7.0.0/grid
Failed while applying binary patches on home /u01/app/19.7.0.0/grid

Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : racdbnode02->/u01/app/19.7.0.0/grid Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/19.7.0.0/grid, host: racdbnode02.
Command failed:  /u01/app/19.7.0.0/grid/OPatch/opatchauto  apply /u01/app/19c_Software/31305339 -oh /u01/app/19.7.0.0/grid -target_type cluster 
-binary -invPtrLoc /u01/app/19.7.0.0/grid/oraInst.loc -jre /u01/app/19.7.0.0/grid/OPatch/jre 
-persistresult /u01/app/19.7.0.0/grid/opatchautocfg/db/sessioninfo/sessionresult_racdbnode02_crs_4.ser 
-analyzedresult /u01/app/19.7.0.0/grid/opatchautocfg/db/sessioninfo/sessionresult_analyze_racdbnode02_crs_4.ser
Command failure output: 
==Following patches FAILED in apply:

Patch: /u01/app/19c_Software/31305339/31281355
Log: 
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: Unable to create patchObject
Possible causes are:
   ORACLE_HOME/inventory/oneoffs/31281355 is corrupted. PatchObject constructor: 
Input file "/u01/app/19.7.0.0/grid/inventory/oneoffs/31281355/etc/config/actions" or "/u01/app/19.7.0.0/grid/inventory/oneoffs/31281355/etc/config/inventory" does not exist. 

After fixing the cause of failure Run opatchauto resume
]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Sun Oct  4 03:31:50 2020
Time taken to complete the session 0 minute, 30 seconds
opatchauto failed with error code 42

Error 2: The patch apply failed again.
Cause: Patch metadata files are missing:
Input file "/u01/app/19.7.0.0/grid/inventory/oneoffs/31281355/etc/config/actions" or "/u01/app/19.7.0.0/grid/inventory/oneoffs/31281355/etc/config/inventory" does not exist.

Solution: We can continue the patch apply with either of two options.

Solution 1 (the right way to apply the patch): The patch was applied successfully on node 1, so copy the patch directory from node 1.
1. Tar the 31281355 directory on node 1, copy the tar file to node 2, and extract it in the same location (archiving with -C keeps the paths relative so the extract lands in place):
Node 1:
tar -cvf /tmp/31281355_node1.tar -C /u01/app/19.7.0.0/grid/inventory/oneoffs 31281355
Node 2 (after copying the tar file over):
cd /u01/app/19.7.0.0/grid/inventory/oneoffs/
tar -xvf /tmp/31281355_node1.tar

Roll back the failed patch:
root@racdbnode02~ # opatchauto rollback /u01/app/19c_Software/31305339/31281355 -oh /u01/app/19.7.0.0/grid

OPatchauto session is initiated at Sun Oct  4 22:49:45 2020
System initialization log file is /u01/app/19.7.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-10-04_10-49-49PM.log.

Session log file is /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-10-04_10-50-04PM.log
The id for this session is QLI7

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.7.0.0/grid
Patch applicability verified successfully on home /u01/app/19.7.0.0/grid

Bringing down CRS service on home /u01/app/19.7.0.0/grid
CRS service brought down successfully on home /u01/app/19.7.0.0/grid

Start rolling back binary patch on home /u01/app/19.7.0.0/grid

Binary patch rolled back successfully on home /u01/app/19.7.0.0/grid

Starting CRS service on home /u01/app/19.7.0.0/grid
CRS service started successfully on home /u01/app/19.7.0.0/grid

OPatchAuto successful.
--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:
Host:racdbnode02
CRS Home:/u01/app/19.7.0.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY rolled back:
Patch: /u01/app/19c_Software/31305339/31281355
Log: /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-10-04_22-51-53PM_1.log
OPatchauto session completed at Sun Oct  4 23:01:53 2020
Time taken to complete the session 12 minutes, 8 seconds

[grid@racdbnode02 ~]$ opatch lspatches
31335188;TOMCAT RELEASE UPDATE 19.0.0.0.0 (31335188)
31305087;OCW RELEASE UPDATE 19.8.0.0.0 (31305087)
31304218;ACFS RELEASE UPDATE 19.8.0.0.0 (31304218)
30869156;Database Release Update : 19.7.0.0.200414 (30869156)
OPatch succeeded.

Apply Patch:
root@racdbnode02~ # opatchauto apply /u01/app/19c_Software/31305339/31281355 -oh /u01/app/19.7.0.0/grid

OPatchauto session is initiated at Sun Oct  4 23:03:34 2020
System initialization log file is /u01/app/19.7.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-10-04_11-03-38PM.log.

Session log file is /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-10-04_11-03-53PM.log
The id for this session is FEF8

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.7.0.0/grid
Patch applicability verified successfully on home /u01/app/19.7.0.0/grid

Bringing down CRS service on home /u01/app/19.7.0.0/grid
CRS service brought down successfully on home /u01/app/19.7.0.0/grid

Start applying binary patch on home /u01/app/19.7.0.0/grid
Binary patch applied successfully on home /u01/app/19.7.0.0/grid

Starting CRS service on home /u01/app/19.7.0.0/grid
CRS service started successfully on home /u01/app/19.7.0.0/grid
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:

Host:racdbnode02
CRS Home:/u01/app/19.7.0.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /u01/app/19c_Software/31305339/31281355
Log: /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-10-04_23-05-44PM_1.log
OPatchauto session completed at Sun Oct  4 23:20:59 2020
Time taken to complete the session 17 minutes, 25 seconds

[grid@racdbnode02 ~]$ opatch lspatches
31281355;Database Release Update : 19.8.0.0.200714 (31281355)
31335188;TOMCAT RELEASE UPDATE 19.0.0.0.0 (31335188)
31305087;OCW RELEASE UPDATE 19.8.0.0.0 (31305087)
31304218;ACFS RELEASE UPDATE 19.8.0.0.0 (31304218)
OPatch succeeded.

Solution 2:
Copy the missing metadata files from the patch staging folder and apply the patch.
1. mkdir -p /u01/app/19.7.0.0/grid/inventory/oneoffs/31281355/etc/config

cd /u01/app/19c_Software/31305339/31281355/etc/config
[grid@racdbnode02 config]$ ls -ltr
total 2912
-rwxrwxrwx 1 grid oinstall  282218 Jul 10 05:23 inventory.xml
-rwxrwxrwx 1 grid oinstall 2697765 Jul 10 05:23 actions.xml

[grid@racdbnode02 config]$ cp * /u01/app/19.7.0.0/grid/inventory/oneoffs/31281355/etc/config

2. Restart the patch apply with the resume option:
root@racdbnode02 ~ # opatchauto resume 

OPatchauto session is initiated at Sun Oct  4 03:47:42 2020
Session log file is /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-10-04_03-47-45AM.log
Resuming existing session with id ST84

Start applying binary patch on home /u01/app/19.7.0.0/grid
Binary patch applied successfully on home /u01/app/19.7.0.0/grid

Checking shared status of home.....

Starting CRS service on home /u01/app/19.7.0.0/grid
CRS service started successfully on home /u01/app/19.7.0.0/grid

OPatchAuto successful
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:

Host:racdbnode02
CRS Home:/u01/app/19.7.0.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SKIPPED:
Patch: /u01/app/19c_Software/31305339/31281355
Reason: This patch is already been applied, so not going to apply again.

==Following patches were SUCCESSFULLY applied:
Patch: /u01/app/19c_Software/31305339/31304218
Log: /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-10-04_03-48-08AM_1.log

Patch: /u01/app/19c_Software/31305339/31305087
Log: /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-10-04_03-48-08AM_1.log

Patch: /u01/app/19c_Software/31305339/31335188
Log: /u01/app/19.7.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-10-04_03-48-08AM_1.log

OPatchauto session completed at Sun Oct  4 03:58:33 2020
Time taken to complete the session 10 minutes, 51 seconds
root@racdbnode02 ~ #


[grid@racdbnode02 ~]$ opatch lspatches
31281355;Database Release Update : 19.8.0.0.200714 (31281355)
31335188;TOMCAT RELEASE UPDATE 19.0.0.0.0 (31335188)
31305087;OCW RELEASE UPDATE 19.8.0.0.0 (31305087)
31304218;ACFS RELEASE UPDATE 19.8.0.0.0 (31304218)
OPatch succeeded.


The patch apply completes successfully by skipping the previously failed patch (31281355).
This misleads us, because the patch still shows up in both opatch lspatches and opatch lsinventory.

However, when we check the full version details from gv$instance, or simply connect with sqlplus, we can observe that the instance is still on the previous version:

select INSTANCE_NAME,host_name,status,logins,version,VERSION_FULL, to_char(STARTUP_TIME,'DD/MM/YYYY HH24:MI:SS') "STARTUP_TIME" from gv$instance;
INSTANCE_NAME      HOST_NAME          STATUS       LOGINS     VERSION           VERSION_FULL      STARTUP_TIME
----------------  ----------------- ------------ ---------- ----------------- ----------------- -------------------
+ASM2             racdbnode02         OPEN         ALLOWED    19.0.0.0.0        19.7.0.0.0        04/10/2020 06:41:00
+ASM1             racdbnode01         OPEN         ALLOWED    19.0.0.0.0        19.8.0.0.0        04/10/2020 06:41:00


We can also observe it while connecting to the database with sqlplus "/ as sysdba":

[grid@racdbnode02 config]$ sqlplus "/ as sysdba"
SQL*Plus: Release 19.0.0.0.0 - Production on Mon Oct 5 21:56:36 2020
Version 19.8.0.0.0
Copyright (c) 1982, 2020, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.7.0.0.0

SQL>

Note: The right way is to copy the patch directory from the successfully patched node, roll back the failed patch, and reapply it.
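
For the grid home specifically, an additional check that does not rely on the local inventory is the cluster active patch level (the exact wording and patch-level value will differ):

[grid@racdbnode02 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [19.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [...].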

Friday 2 October 2020

You (oracle) are not allowed to use this program (crontab)

Error while checking crontab entries as the oracle user:

[oracle@dbracnode02 ~]$ crontab -l
You (oracle) are not allowed to use this program (crontab)
See crontab(1) for more information

Check the entries in /etc/cron.allow (the file exists but is empty):

[root@dbracnode02 ~]# cat /etc/cron.allow


When /etc/cron.allow exists, only the users listed in it may use crontab. To solve this error, add oracle (and any other users that will use crontab) to the file as follows.

Add the users and save /etc/cron.allow:
[root@dbracnode02 ~]# vi /etc/cron.allow
root
oracle
grid

[root@dbracnode02 ~]# cat /etc/cron.allow
root
oracle
grid

Check now:
[oracle@dbracnode02 ~]$ crontab -l
no crontab for oracle
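
Equivalently, the entries can be appended without opening an editor (same effect as the vi edit above):

[root@dbracnode02 ~]# echo "oracle" >> /etc/cron.allow
[root@dbracnode02 ~]# echo "grid" >> /etc/cron.allow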


Thursday 24 September 2020

Cluster Service with and without Domain Parameter

Service with the DB_DOMAIN parameter set:
SQL> show parameter domain
NAME      TYPE VALUE
------------------------------------ ----------- ------------------------------
db_domain      string corporation.local

Add a service with srvctl (if you do not qualify the service name with a domain, Oracle qualifies it with the value of the DB_DOMAIN parameter):

[oracle@racnode05 ~]$srvctl add service -d poc19c00 -r POC19C001,POC19C002 -s poc19c0_s1 -l PRIMARY -e SELECT -m BASIC -P BASIC
   
[oracle@racnode05 ~]$ srvctl start service -d poc19c00
[oracle@racnode05 ~]$ srvctl status service -d poc19c00
Service poc19c0_s1 is running on instance(s) POC19C001,POC19C002

Service details from listener status:
[oracle@racnode05 ~]$ lsnrctl status listener
LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 24-SEP-2020 03:37:12
Copyright (c) 1991, 2020, Oracle.  All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                23-SEP-2020 23:50:42
Uptime                    0 days 3 hr. 46 min. 29 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /oracle/19.0.0/grid/network/admin/listener.ora
Listener Log File         /oracle/grid/diag/tnslsnr/racnode05/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=00.000.00.5)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=00.000.00.6)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADSKGRP1" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADSKGRP2" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_VOTEDISK" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "POC19C00" has 1 instance(s).
  Instance "POC19C001", status READY, has 2 handler(s) for this 
service...
Service "poc19c0_s1.corporation.local" has 1 instance(s).
  Instance "POC19C001", status READY, has 2 handler(s) for this service...
The command completed successfully

Service without the DB_DOMAIN parameter:
SQL> show parameter domain
NAME      TYPE VALUE
------------------------------------ ----------- ------------------------------
db_domain      string corporation.local

Unset the domain parameter and restart the database:
SQL>  alter system set db_domain='' scope=spfile sid='*';
System altered.

[oracle@racnode05 ~]$ srvctl status database -d poc19c00
Instance POC19C001 is running on node racnode05
Instance POC19C002 is running on node racnode06

[oracle@racnode05 ~]$ srvctl stop database -d poc19c00
[oracle@racnode05 ~]$ srvctl start database -d poc19c00
[oracle@racnode05 ~]$ srvctl status database -d poc19c00
Instance POC19C001 is running on node racnode05
Instance POC19C002 is running on node racnode06

Check Service details:
[oracle@racnode05 ~]$ lsnrctl status listener

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 24-SEP-2020 03:41:03
Copyright (c) 1991, 2020, Oracle.  All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                23-SEP-2020 23:50:42
Uptime                    0 days 3 hr. 50 min. 20 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /oracle/19.0.0/grid/network/admin/listener.ora
Listener Log File         /oracle/grid/diag/tnslsnr/racnode05/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=00.000.00.5)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=00.000.00.6)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADSKGRP1" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_ORADSKGRP2" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_VOTEDISK" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "POC19C00" has 1 instance(s).
  Instance "POC19C001", status READY, has 2 handler(s) for this 
service...
Service "poc19c0_s1" has 1 instance(s).
  Instance "POC19C001", status READY, has 2 handler(s) for this service...
The command completed successfully

If DB_DOMAIN is not specified, no domain is applied to non-qualified SERVICE_NAMES values.
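
The practical effect is on client connect strings. A minimal EZConnect sketch (the SCAN host name is illustrative): with DB_DOMAIN set, clients must use the fully qualified service name; with it unset, the short name works:

[oracle@racnode05 ~]$ sqlplus system@//rac-scan:1521/poc19c0_s1.corporation.local   # DB_DOMAIN set
[oracle@racnode05 ~]$ sqlplus system@//rac-scan:1521/poc19c0_s1                     # DB_DOMAIN unset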

Wednesday 23 September 2020

Find default ExaCS Storage Server Username and Password

On Exadata Cloud@Customer, the preconfigured user for Exadata Storage Server is cloud_user_clustername, where clustername is the name of the virtual machine (VM) cluster that is being used.


Check the cluster name:

[grid@exanode1 ~]$ crsctl get cluster name

CRS-6724: Current cluster name is 'CLUSTER_NAME'


The password for cloud_user_clustername is initially set to a random value.


Check the default password:

[root@exanode1 ~]# /opt/exacloud/get_cs_data.py --data_file /opt/exacloud/cs_data.enc

**********************  PASSWORD
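
With the user name and password in hand, the storage servers can then be queried with ExaCLI; a sketch, assuming a cell host named exacell01 and your actual cluster name in the user:

[root@exanode1 ~]# exacli -l cloud_user_CLUSTER_NAME -c exacell01 -e 'list cell detail'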




Tuesday 22 September 2020

Bitmap and B-tree indexes

In this article, we'll explain bitmap and B-tree indexes, along with an example and the corresponding command for creating each type of index.

B-Tree Index:

  • B-tree indexes are widely used in database systems, including Oracle. The "B" in B-tree stands for balanced, indicating that these indexes maintain a balanced tree structure.
  • B-tree indexes are most suitable for columns with high cardinality, meaning they have many distinct values. Examples of such columns include primary keys, unique identifiers, and highly selective columns.
  • The structure of a B-tree index consists of multiple levels, starting with a root node and extending to leaf nodes. Leaf nodes contain index key values and pointers to the corresponding rows in the table.
  • B-tree indexes excel in supporting range searches and equality conditions. They efficiently find a specific value or a range of values within the index, making them beneficial for queries involving comparisons such as greater than, less than, or between.
  • Updates and inserts on B-tree indexes can be efficient due to the balanced structure, as the index tree does not require significant restructuring during modifications.
  • B-tree indexes work well for queries that retrieve a small percentage of rows from a table, as they provide efficient navigation to the desired data.


Example: Let's assume we have a table called employees with columns employee_id, first_name, and last_name, and we want to create a B-tree index on the last_name column.
Command to create a B-tree index:
CREATE INDEX idx_employees_last_name ON employees(last_name);
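
A query of the kind this index serves well uses an equality or range predicate on the indexed column (table and data are the illustrative ones above):

SELECT employee_id, first_name, last_name
FROM employees
WHERE last_name BETWEEN 'A' AND 'D';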


Bitmap Index:

  • Bitmap indexes are suitable for columns with low cardinality, meaning they have few distinct values. Examples include boolean fields or columns representing categories or flags.
  • Bitmap indexes create a bitmap for each distinct value in the indexed column. A bitmap is a sequence of bits, where each bit represents a row in the table. The bits are set to 1 if the corresponding row contains the indexed value and 0 otherwise.
  • Bitmap indexes are efficient for queries involving multiple conditions combined with logical AND or OR operations. They can quickly determine which rows satisfy complex combinations of conditions by performing bitmap operations like AND, OR, and NOT on the bitmaps.
  • Bitmap indexes require less storage space compared to B-tree indexes for columns with low cardinality since they represent the presence or absence of a value rather than storing individual pointers.
  • Updates and inserts on bitmap indexes can be relatively expensive, especially if the index covers many rows. Modifying a single row may require updating multiple bitmaps.


Example: Let's consider a table called orders with columns order_id, product_name, and order_date, and we want to create a bitmap index on the product_name column.

Command to create a Bitmap index:
CREATE BITMAP INDEX idx_orders_product_name ON orders(product_name);
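
A query shape where the bitmap index pays off combines low-cardinality predicates with OR/AND (column values are illustrative):

SELECT COUNT(*)
FROM orders
WHERE product_name = 'Laptop' OR product_name = 'Monitor';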

B-tree indexes are well-suited for high cardinality columns and queries involving range searches and equality conditions. Bitmap indexes are effective for low cardinality columns and queries with complex combinations of conditions. It's essential to analyze your data and query patterns to determine the most appropriate index type for your specific use case.