Saturday 27 June 2020

How to Clear shared server sessions from Oracle database

I tried to kill a shared server session in the database with KILL SESSION ... IMMEDIATE, but even after multiple attempts the session status still showed ACTIVE.
Normally such a session terminates when the job is restarted from the background or the application side.

If the session were using a dedicated connection we could simply kill the OS PID. Let's see how to handle it when shared servers are in use.
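First we can confirm the connection type from gv$session: the SERVER column shows SHARED (or NONE while the session is idle) for shared server sessions and DEDICATED for dedicated ones. A quick check using the SID from this example:

SQL> select sid, serial#, inst_id, server from gv$session where sid=7407;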

Clear sid with kill immediate option:
SQL> select sid,serial#,inst_id,status from gv$session where sid='7407';
   SID    SERIAL#    INST_ID STATUS
------ ---------- ---------- --------
  7407      39883          2 ACTIVE

SQL> ALTER SYSTEM KILL SESSION '7407,39883,@2' immediate;
System altered.

SQL> select sid,serial#,inst_id,status from gv$session where sid='7407';
   SID    SERIAL#    INST_ID STATUS
------ ---------- ---------- --------
  7407      39883          2 ACTIVE

--> Session status is still ACTIVE after the kill

Find OS process id from sid:
select a.sid, a.serial#,a.username, a.osuser, b.spid
from v$session a, v$process b
where a.paddr= b.addr
and a.sid='&sid';
   SID    SERIAL# USERNAME        OSUSER          SPID
------ ---------- --------------- --------------- ------------------------
  7407      39883 MANAGE          oracle          115907
  
  
SQL> !ps -ef | grep 115907
oracle   115907      1 13 Jun04 ?        8-17:45:14 ora_s000_prddb012
oracle   140285 139222  0 22:23 pts/0    00:00:00 /bin/bash -c ps -ef | grep 115907
oracle   140287 140285  0 22:23 pts/0    00:00:00 grep 115907

We can see the session is using a shared server process (the ora_s00* name gives it away).
We need to be careful killing it at the OS level, since one shared server process serves multiple sessions. Let's check whether any other sessions are using the same shared server process.

Check session details from pid:
select p.spid,s.sid, s.serial#,s.username, s.osuser
from gv$session s, gv$process p
where s.paddr= p.addr
and p.spid='&spid'
order by p.spid;
SPID                        SID    SERIAL# USERNAME             OSUSER
------------------------ ------ ---------- -------------------- ---------------
115907                     7407      39883 MANAGE               oracle

Also make sure the process is not actively doing work by tracing it with strace:
$ strace -o strace_output_115907.txt -p 115907
$ tail -90 strace_output_115907.txt

No other session is tied to this shared server process, so we are good to kill the PID.

Kill OS PID:
$ kill -9 115907

Check SID status: it's gone
SQL>  select sid,serial#,inst_id,status from gv$session where sid='7407';
no rows selected

PMON will start a new shared server process immediately:
[oracle ~]$ ps -ef | grep ora_s000
oracle   149329      1  0 22:27 ?        00:00:00 ora_s000_prddb012
oracle   150183  84710  0 22:27 pts/0    00:00:00 grep --color=auto ora_s000
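We can also confirm the replacement shared server has registered with the instance; a quick check against v$shared_server (S000 is the process name from this example):

SQL> select name, status, requests from v$shared_server where name='S000';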

Wednesday 24 June 2020

EM Event: Critical: The standby database is approximately X seconds behind the primary database

One of our standby databases was running about 10 minutes behind the primary.
As DBAs we need to look at multiple areas to find the root cause; below is one common cause of a standby redo gap and its solution.

Critical alert from OEM: 
EM Event: Critical:prddbdg01_stdby - The standby database is approximately 736 seconds behind the primary database

Quickly checked MRP process status and alert log:
SYS@prddbdg01>select process, status, thread#, sequence#, block#, blocks from gv$managed_standby;
PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCK#     BLOCKS
--------- ------------ ---------- ---------- ---------- ----------
ARCH      CONNECTED             0          0          0          0
ARCH      CONNECTED             0          0          0          0
ARCH      CONNECTED             0          0          0          0
ARCH      CONNECTED             0          0          0          0
RFS       IDLE                  0          0          0          0
RFS       IDLE                  0          0          0          0
RFS       IDLE                  1      76769      82881          1
RFS       IDLE                  0          0          0          0
MRP0      WAIT_FOR_LOG          1      76769          0          0
9 rows selected.

--> MRP waiting for log 

Alert log:
RFS[7]: No standby redo logfiles available for thread 1 
RFS[7]: Opened log for thread 1 sequence 76760 dbid -647642248 branch 1014078423
Tue Jun 23 8:21:28 2020
Media Recovery Log /san/arch/prddbdg01/1_76759_1014078423.arc
Media Recovery Waiting for thread 1 sequence 76760 (in transit)
Tue Jun 23 08:31:28 2020
Archived Log entry 31993 added for thread 1 sequence 76760 rlc 1014078423 ID 0xd96b2953 dest 3:
RFS[7]: No standby redo logfiles available for thread 1 
RFS[7]: Opened log for thread 1 sequence 76761 dbid -647642248 branch 1014078423
Tue Jun 23 08:31:28 2020
Media Recovery Log /san/arch/prddbdg01/1_76760_1014078423.arc
Media Recovery Waiting for thread 1 sequence 76761 (in transit)
Tue Jun 23 08:38:56 2020
Archived Log entry 31994 added for thread 1 sequence 76761 rlc 1014078423 ID 0xd96b2953 dest 3:
RFS[7]: No standby redo logfiles available for thread 1 
RFS[7]: Opened log for thread 1 sequence 76762 dbid -647642248 branch 1014078423


--> The alert log clearly says that no standby redo logs are available for RFS to write redo into directly, so MRP has to wait for each archived log to be generated.
--> Let's check the redo logs on the primary and the standby redo logs on the standby.
Redo logs on the primary:
SYS@prddbdg01>SELECT thread#, group#, sequence#, bytes, archived ,status FROM v$log ORDER BY thread#, group#;
   THREAD#     GROUP#  SEQUENCE#      BYTES ARC STATUS
---------- ---------- ---------- ---------- --- ----------------
         1          1      76769 4294967296 NO  CURRENT
         1          2      76762 4294967296 YES INACTIVE
         1          3      76763 4294967296 YES INACTIVE
         1          4      76764 4294967296 YES INACTIVE
         1          5      76765 4294967296 YES INACTIVE
         1          6      76766 4294967296 YES INACTIVE
         1          7      76767 4294967296 YES INACTIVE
         1          8      76768 4294967296 YES INACTIVE

8 rows selected.

Standby logs on the DR side:
SYS@prddbdg01>SELECT thread#, group#, sequence#, bytes, archived, status FROM v$standby_log order by thread#, group#;
   THREAD#     GROUP#  SEQUENCE#      BYTES ARC STATUS
---------- ---------- ---------- ---------- --- ----------
         0          9          0 4294967296 YES UNASSIGNED
         0         10          0 4294967296 YES UNASSIGNED
         0         11          0 4294967296 YES UNASSIGNED
         0         12          0 4294967296 YES UNASSIGNED
         0         13          0 4294967296 YES UNASSIGNED
         0         14          0 4294967296 YES UNASSIGNED
         0         15          0 4294967296 YES UNASSIGNED
         0         16          0 4294967296 YES UNASSIGNED
         0         17          0 4294967296 YES UNASSIGNED
         1         19          0 4294967296 YES UNASSIGNED

10 rows selected.

--> Only one standby redo log group was created for thread 1. Best practice is to have one more standby redo log group per thread on the standby than the number of online redo log groups on the primary (the primary has 8 groups here, so the standby should have 9 for thread 1).

Let's create the standby redo logs:
Stop MRP Process:
SYS@prddbdg01> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Database altered.

Standby logs creation:
alter database add standby logfile THREAD 1 group 20 '/san/redo2/prddbdg01/srl_20_1.rdo' SIZE 4096M;
alter database add standby logfile THREAD 1 group 21 '/san/redo2/prddbdg01/srl_21_1.rdo' SIZE 4096M;
alter database add standby logfile THREAD 1 group 22 '/san/redo2/prddbdg01/srl_22_1.rdo' SIZE 4096M;
alter database add standby logfile THREAD 1 group 23 '/san/redo2/prddbdg01/srl_23_1.rdo' SIZE 4096M;
alter database add standby logfile THREAD 1 group 24 '/san/redo2/prddbdg01/srl_24_1.rdo' SIZE 4096M;
alter database add standby logfile THREAD 1 group 25 '/san/redo2/prddbdg01/srl_25_1.rdo' SIZE 4096M;
alter database add standby logfile THREAD 1 group 26 '/san/redo2/prddbdg01/srl_26_1.rdo' SIZE 4096M;
alter database add standby logfile THREAD 1 group 27 '/san/redo2/prddbdg01/srl_27_1.rdo' SIZE 4096M;
alter database add standby logfile THREAD 1 group 28 '/san/redo2/prddbdg01/srl_28_1.rdo' SIZE 4096M;

Verify standby logs:
SYS@prddbdg01>SELECT thread#, group#, sequence#, bytes, archived, status FROM v$standby_log order by thread#, group#;
   THREAD#     GROUP#  SEQUENCE#      BYTES ARC STATUS
---------- ---------- ---------- ---------- --- ----------
         0          9          0 4294967296 YES UNASSIGNED
         0         10          0 4294967296 YES UNASSIGNED
         0         11          0 4294967296 YES UNASSIGNED
         0         12          0 4294967296 YES UNASSIGNED
         0         13          0 4294967296 YES UNASSIGNED
         0         14          0 4294967296 YES UNASSIGNED
         0         15          0 4294967296 YES UNASSIGNED
         0         16          0 4294967296 YES UNASSIGNED
         0         17          0 4294967296 YES UNASSIGNED
         1         19          0 4294967296 YES UNASSIGNED
         1         20      76770 4294967296 YES ACTIVE
         1         21          0 4294967296 YES UNASSIGNED
         1         22          0 4294967296 YES UNASSIGNED
         1         23          0 4294967296 YES UNASSIGNED
         1         24          0 4294967296 YES UNASSIGNED
         1         25          0 4294967296 YES UNASSIGNED
         1         26          0 4294967296 YES UNASSIGNED
         1         27          0 4294967296 YES UNASSIGNED
         1         28          0 4294967296 YES UNASSIGNED

19 rows selected.

--> We can drop the unused thread 0 standby log groups.
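For example, the unused thread 0 groups could be dropped one by one while recovery is still stopped; group numbers are the ones listed above:

SQL> alter database drop standby logfile group 9;
SQL> alter database drop standby logfile group 10;
(repeat for the remaining thread 0 groups 11 through 17)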
Start the MRP process:
SYS@prddbdg01>ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
Database altered.

Apply lag from the OEM Data Guard Performance page:
After adding the standby redo logs, the apply lag graph dropped back down and there is no longer a gap between the primary and the standby.
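The lag can also be cross-checked with SQL on the standby; a simple query against v$dataguard_stats (values will vary):

SQL> select name, value, time_computed from v$dataguard_stats where name in ('apply lag','transport lag');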

Tuesday 23 June 2020

Unable to Connect to Cluster databases due to ORA-27301: OS failure message: No buffer space available

DB alert log:

skgxpvfynet: mtype: 61 process 35044 failed because of a resource problem in the OS. The OS has most likely run out of buffers (rval: 4)

Errors in file /u01/app/oracle/diag/rdbms/racdb01/racdb011/trace/racdb011_ora_35044.trc  (incident=1120006):

ORA-00603: ORACLE server session terminated by fatal error

ORA-27504: IPC error creating OSD context

ORA-27300: OS system dependent operation:sendmsg failed with status: 105

ORA-27301: OS failure message: No buffer space available

ORA-27302: failure occurred at: sskgxpsnd2

Incident details in: /u01/app/oracle/diag/rdbms/racdb01/racdb011/incident/incdir_1120006/racdb011_ora_35044_i1120006.trc

opiodr aborting process unknown ospid (35044) as a result of ORA-603

Wed May 27 15:06:03 2020

Dumping diagnostic data in directory=[cdmp_20200527150603], requested by (instance=1, osid=35044), summary=[incident=1120006].

Wed May 27 15:06:41 2020

 

 

Issue: Connections are failing because of the loopback adapter MTU setting; we need to lower the MTU value of the loopback adapter on all cluster nodes.


We can make this change without bringing down the cluster services.


Loopback Settings:

 1. Change MTU size of loopback adapter as below.

  ifconfig lo mtu 16384

 

 2. To make the change persist after a reboot, add the MTU value in ifcfg-lo:

  vi /etc/sysconfig/network-scripts/ifcfg-lo

  MTU=16384
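 3. Verify the new value on each node (a quick sanity check; ip is from the iproute2 package, ifconfig from net-tools):

  ip link show lo

  ifconfig lo | grep -i mtu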

19c Grid New Features

  • Support for Dry-Run Validation of Oracle Clusterware Upgrade
  • Multiple ASMB
  • Parity Protected Files
  • Secure Cluster Communication
  • Zero-Downtime Oracle Grid Infrastructure Patching Using Oracle Fleet Patching and Provisioning
  • Re-support of Direct File Placement for OCR and Voting Disks
  • Optional Install for the Grid Infrastructure Management Repository

1. Support for Dry-Run Validation of Oracle Clusterware Upgrade in 19c:

 Starting with Oracle Grid Infrastructure 19c, the Oracle Grid Infrastructure installation wizard (gridSetup.sh) enables you to perform a dry-run mode upgrade to check your system’s upgrade readiness.
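For example, a dry-run upgrade can be launched from the unzipped 19c grid home before the real upgrade; a sketch (the grid home path and host prompt here are illustrative):

[grid@node1 ~]$ /u01/app/19.0.0/grid/gridSetup.sh -dryRunForUpgrade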


2. Multiple ASMB

Given that +ASM1 has DG1 mounted but not DG2, and +ASM2 has DG2 mounted but not DG1, the Multiple ASMB project allows for the Database to use both DG1 and DG2 by connecting to both ASM instances simultaneously. Instead of having just ASMB, we can now have ASMBn.

 

This feature increases the availability of the Real Application Clusters (RAC) stack by allowing DB to use multiple disk groups even if a given ASM instance happens not to have all of them mounted.


3. Parity Protected Files:

A great deal of space is consumed when two- or three-way Oracle ASM mirroring is used for files associated with database backup operations. Backup files are write-once files, and this feature allows them to be protected by parity rather than by conventional mirroring, resulting in considerable space savings.

 

If a file is created as HIGH, MIRROR, or UNPROTECTED redundancy, its redundancy can change to HIGH, MIRROR, or UNPROTECTED. If redundancy has been changed, then the REMIRROR column of V$ASM_FILE contains Y to indicate that the file needs new mirroring, initiating a rebalance to put the new redundancy into effect. After the rebalance completes, the value in the REMIRROR column contains N.

 

When a file is created with PARITY redundancy, that file can never change redundancy.

 

When the file group redundancy property is modified from a HIGH, MIRROR, or UNPROTECTED setting to a PARITY setting, the redundancy of the existing files in the file group does not change. This behaviour also applies to a change from PARITY to a HIGH, MIRROR, or UNPROTECTED setting. However, any files created in the future adopt the new redundancy setting.
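As an illustration, the redundancy is set through the file group property on a flex disk group; a sketch, assuming a disk group DATA with an existing file group BACKUP_FG:

SQL> alter diskgroup DATA modify filegroup BACKUP_FG set 'redundancy' = 'parity';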


4. Secure Cluster Communication:

Secure Cluster Communication protects the cluster interconnect from common security threats when used together with Single Network Support. Secure Cluster Communication includes message digest mechanisms, protection against fuzzing, and uses Transport Layer Security (TLS) to provide privacy and data integrity between the cluster members.

 

The increased security for the cluster interconnect is invoked automatically as part of a new Oracle Grid Infrastructure 19c deployment or an upgrade to Oracle Grid Infrastructure 19c. Database administrators or cluster administrators do not need to make any configuration changes for this feature.


5. Zero-Downtime Oracle Grid Infrastructure Patching Using Oracle Fleet Patching and Provisioning:

Use Fleet Patching and Provisioning to patch Oracle Grid Infrastructure without bringing down Oracle RAC database instances.

 

Current methods of patching the Oracle Grid Infrastructure require that you bring down all Oracle RAC database instances on the node where you are patching the Oracle Grid Infrastructure home. This issue is addressed in the Grid Infrastructure layer, whereby the database instances can continue to run during Grid Infrastructure patching.


6. Resupport of Direct File Placement for OCR and Voting Disks:

Starting with Oracle Grid Infrastructure 19c, the desupport for direct OCR and voting disk file placement on shared file systems is rescinded for Oracle Standalone Clusters. For Oracle Domain Services Clusters the requirement to place OCR and voting files in Oracle Automatic Storage Management (Oracle ASM) on top of files hosted on shared file systems and used as ASM disks remains.

 

In Oracle Grid Infrastructure 12c Release 2 (12.2), Oracle announced that it would no longer support the placement of the Oracle Grid Infrastructure Oracle Cluster Registry (OCR) and voting files directly on a shared file system. This desupport is now rescinded. Starting with Oracle Grid Infrastructure 19c (19.3), with Oracle Standalone Clusters, you can again place OCR and voting disk files directly on shared file systems.
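To see where the voting files currently live, the usual clusterware query still applies (run as grid or root; the prompt here is illustrative):

[grid@node1 ~]$ crsctl query css votedisk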


7. Optional Install for the Grid Infrastructure Management Repository:

Starting with Oracle Grid Infrastructure 19c, the Grid Infrastructure Management Repository (GIMR) is optional for new installations of Oracle Standalone Cluster. Oracle Domain Services Clusters still require the installation of a GIMR as a service component.

 

The data contained in the GIMR is the basis for preventative diagnostics based on applied Machine Learning and can help to increase the availability of Oracle Real Application Clusters (Oracle RAC) databases. Having an optional installation for the GIMR allows for more flexible storage space management and faster deployment, especially during the installation of test and development systems.





Add Node to Oracle 18c Cluster

An 18c cluster is running on host racnode01; we are going to add racnode02 to the cluster.

 

Cluster Status:

[grid@racnode01 ~]$ crsctl stat res -t

--------------------------------------------------------------------

Name           Target  State        Server           State details      

--------------------------------------------------------------------

Local Resources

---------------------------------------------------------------------

ora.ASMNET1LSNR_ASM.lsnr

               ONLINE  ONLINE       racnode01            STABLE

ora.LISTENER.lsnr

               ONLINE  ONLINE       racnode01            STABLE

ora.POC_DATA.dg

               ONLINE  ONLINE       racnode01            STABLE

ora.POC_FRA.dg

               ONLINE  ONLINE       racnode01            STABLE

ora.VOTEDISK.GHCHKPT.advm

               OFFLINE OFFLINE      racnode01            STABLE

ora.VOTEDISK.dg

               ONLINE  ONLINE       racnode01            STABLE

ora.chad

               ONLINE  ONLINE       racnode01            STABLE

ora.helper

               OFFLINE OFFLINE      racnode01            IDLE,STABLE

ora.net1.network

               ONLINE  ONLINE       racnode01            STABLE

ora.ons

               ONLINE  ONLINE       racnode01            STABLE

ora.proxy_advm

               ONLINE  ONLINE       racnode01            STABLE

ora.votedisk.ghchkpt.acfs

               OFFLINE OFFLINE      racnode01            STABLE

-----------------------------------------------------------------------

Cluster Resources

-----------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       racnode01            STABLE

ora.LISTENER_SCAN2.lsnr

      1        ONLINE  ONLINE       racnode01            STABLE

ora.LISTENER_SCAN3.lsnr

      1        ONLINE  ONLINE       racnode01            STABLE

ora.MGMTLSNR

      1        ONLINE  ONLINE       racnode01            17.XX.XX.XX,STABLE

ora.asm

      1        ONLINE  ONLINE       racnode01            Started,STABLE

      2        ONLINE  OFFLINE                               STABLE

      3        OFFLINE OFFLINE                               STABLE

ora.cvu

      1        ONLINE  ONLINE       racnode01            STABLE

ora.racnode01.vip

      1        ONLINE  ONLINE       racnode01            STABLE

ora.mgmtdb

      1        ONLINE  ONLINE       racnode01            Open,STABLE

ora.poc1.db

      1        ONLINE  ONLINE       racnode01          Open,HOME=/u01/app/oracle/product/11.2.0_64,STABLE

ora.qosmserver

      1        ONLINE  ONLINE       racnode01            STABLE

ora.rhpserver

      1        OFFLINE OFFLINE                               STABLE

ora.scan1.vip

      1        ONLINE  ONLINE       racnode01            STABLE

ora.scan2.vip

      1        ONLINE  ONLINE       racnode01            STABLE

ora.scan3.vip

      1        ONLINE  ONLINE       racnode01            STABLE

--------------------------------------------------------------------

SSH setup as the root user:

root@racnode01 scripts # cd /u01/app/18.3.0.0/grid/oui/prov/resources/scripts

./sshUserSetup.sh -user grid -hosts "racnode01 racnode02" -noPromptPassphrase -confirm -advanced

 

./sshUserSetup.sh -user oracle -hosts "racnode01 racnode02" -noPromptPassphrase -confirm -advanced
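
Once the setup completes, passwordless SSH can be verified for both users; a quick sanity check using the hostnames above:

[grid@racnode01 ~]$ ssh racnode02 date

[oracle@racnode01 ~]$ ssh racnode02 date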

 

Prechecks:

[grid@racnode01 bin]$ ./cluvfy stage -pre nodeadd -flex -hub racnode02 -verbose

 

Verifying Physical Memory ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  125.4517GB (1.315456E8KB)  8GB (8388608.0KB)         passed   

  racnode01  125.4516GB (1.3154554E8KB)  8GB (8388608.0KB)         passed   

Verifying Physical Memory ...PASSED

Verifying Available Physical Memory ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  124.9865GB (1.31057884E8KB)  50MB (51200.0KB)          passed   

  racnode01  107.7003GB (1.1293196E8KB)  50MB (51200.0KB)          passed   

Verifying Available Physical Memory ...PASSED

Verifying Swap Size ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  32GB (3.3554428E7KB)      16GB (1.6777216E7KB)      passed   

  racnode01  32GB (3.3554428E7KB)      16GB (1.6777216E7KB)      passed   

Verifying Swap Size ...PASSED

Verifying Free Space: racnode02:/usr,racnode02:/etc,racnode02:/u01/app/18.3.0.0/grid,racnode02:/sbin ...

  Path              Node Name     Mount point   Available     Required      Status     

  ----------------  ------------  ------------  ------------  ------------  --------

  /usr              racnode02  /             152.6816GB    25MB          passed     

  /etc              racnode02  /             152.6816GB    25MB          passed     

  /u01/app/18.3.0.0/grid  racnode02  /             152.6816GB    6.9GB         passed     

  /sbin             racnode02  /             152.6816GB    10MB          passed     

Verifying Free Space: racnode02:/usr,racnode02:/etc,racnode02:/u01/app/18.3.0.0/grid,racnode02:/sbin ...PASSED

Verifying Free Space: racnode02:/var ...

  Path              Node Name     Mount point   Available     Required      Status     

  ----------------  ------------  ------------  ------------  ------------  ---------

  /var              racnode02  /var          8.9688GB      5MB           passed     

Verifying Free Space: racnode02:/var ...PASSED

Verifying Free Space: racnode02:/tmp ...

  Path              Node Name     Mount point   Available     Required      Status     

  ----------------  ------------  ------------  ------------  ------------  ---------

  /tmp              racnode02  /tmp          4.832GB       1GB           passed     

Verifying Free Space: racnode02:/tmp ...PASSED

Verifying Free Space: racnode01:/usr,racnode01:/etc,racnode01:/u01/app/18.3.0.0/grid,racnode01:/sbin ...

  Path              Node Name     Mount point   Available     Required      Status     

  ----------------  ------------  ------------  ------------  ------------  -------

  /usr              racnode01  /             112.5479GB    25MB          passed     

  /etc              racnode01  /             112.5479GB    25MB          passed     

  /u01/app/18.3.0.0/grid  racnode01  /             112.5479GB    6.9GB         passed     

  /sbin             racnode01  /             112.5479GB    10MB          passed     

Verifying Free Space: racnode01:/usr,racnode01:/etc,racnode01:/u01/app/18.3.0.0/grid,racnode01:/sbin ...PASSED

Verifying Free Space: racnode01:/var ...

  Path              Node Name     Mount point   Available     Required      Status     

  ----------------  ------------  ------------  ------------  ------------  ------

  /var              racnode01  /var          8.7168GB      5MB           passed     

Verifying Free Space: racnode01:/var ...PASSED

Verifying Free Space: racnode01:/tmp ...

  Path              Node Name     Mount point   Available     Required      Status     

  ----------------  ------------  ------------  ------------  ------------  ------

  /tmp              racnode01  /tmp          4.8018GB      1GB           passed     

Verifying Free Space: racnode01:/tmp ...PASSED

Verifying User Existence: oracle ...

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  racnode02  passed                    exists(54321)          

  racnode01  passed                    exists(54321)          

 

  Verifying Users With Same UID: 54321 ...PASSED

Verifying User Existence: oracle ...PASSED

Verifying User Existence: grid ...

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  racnode02  passed                    exists(54322)           

  racnode01  passed                    exists(54322)          

 

  Verifying Users With Same UID: 54322 ...PASSED

Verifying User Existence: grid ...PASSED

Verifying User Existence: root ...

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  racnode02  passed                    exists(0)              

  racnode01  passed                    exists(0)               

 

  Verifying Users With Same UID: 0 ...PASSED

Verifying User Existence: root ...PASSED

Verifying Group Existence: asmdba ...

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  racnode02  passed                    exists                 

  racnode01  passed                    exists                  

Verifying Group Existence: asmdba ...PASSED

Verifying Group Existence: oinstall ...

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  racnode02  passed                    exists                 

  racnode01  passed                    exists                 

Verifying Group Existence: oinstall ...PASSED

Verifying Group Membership: oinstall ...

  Node Name         User Exists   Group Exists  User in Group  Status          

  ----------------  ------------  ------------  ------------  ----------------

  racnode02     yes           yes           yes           passed         

  racnode01     yes           yes           yes           passed         

Verifying Group Membership: oinstall ...PASSED

Verifying Group Membership: asmdba ...

  Node Name         User Exists   Group Exists  User in Group  Status         

  ----------------  ------------  ------------  ------------  ----------------

  racnode02     yes           yes           yes           passed         

  racnode01     yes           yes           yes           passed         

Verifying Group Membership: asmdba ...PASSED

Verifying Run Level ...

  Node Name     run level                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  3                         3,5                       passed   

  racnode01  3                         3,5                       passed   

Verifying Run Level ...PASSED

Verifying Hard Limit: maximum open file descriptors ...

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  racnode02     hard          65536         65536         passed         

  racnode01     hard          65536         65536         passed          

Verifying Hard Limit: maximum open file descriptors ...PASSED

Verifying Soft Limit: maximum open file descriptors ...

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  racnode02     soft          65536         1024          passed         

  racnode01     soft          65536         1024          passed         

Verifying Soft Limit: maximum open file descriptors ...PASSED

Verifying Hard Limit: maximum user processes ...

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  racnode02     hard          unlimited     16384         passed         

  racnode01     hard          unlimited     16384         passed         

Verifying Hard Limit: maximum user processes ...PASSED

Verifying Soft Limit: maximum user processes ...

  Node Name         Type          Available     Required      Status         

  ----------------  ------------  ------------  ------------  ----------------

  racnode02     soft          unlimited     2047          passed         

  racnode01     soft          unlimited     2047          passed         

Verifying Soft Limit: maximum user processes ...PASSED

Verifying Soft Limit: maximum stack size ...

  Node Name         Type          Available     Required      Status          

  ----------------  ------------  ------------  ------------  ----------------

  racnode02     soft          32768         10240         passed         

  racnode01     soft          32768         10240         passed         

Verifying Soft Limit: maximum stack size ...PASSED

Verifying Architecture ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  x86_64                    x86_64                    passed   

  racnode01  x86_64                    x86_64                    passed   

Verifying Architecture ...PASSED

Verifying OS Kernel Version ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  4.14.35-1902.300.11.el7uek.x86_64  3.8.13                    passed   

  racnode01  4.14.35-1902.300.11.el7uek.x86_64  3.8.13                    passed   

Verifying OS Kernel Version ...PASSED

Verifying OS Kernel Parameter: semmsl ...

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  --------

  racnode01     250           250           250           passed         

  racnode02     250           250           250           passed         

Verifying OS Kernel Parameter: semmsl ...PASSED

Verifying OS Kernel Parameter: semmns ...

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  ------

  racnode01     32000         32000         32000         passed         

  racnode02     32000         32000         32000         passed         

Verifying OS Kernel Parameter: semmns ...PASSED

Verifying OS Kernel Parameter: semopm ...

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  --------

  racnode01     100           100           100           passed         

  racnode02     100           100           100           passed         

Verifying OS Kernel Parameter: semopm ...PASSED

Verifying OS Kernel Parameter: semmni ...

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  -------

  racnode01     200           200           128           passed         

  racnode02     200           200           128           passed         

Verifying OS Kernel Parameter: semmni ...PASSED

Verifying OS Kernel Parameter: shmmax ...

  Node Name         Current       Configured    Required      Status        Comment    

  ----------------  ------------  ------------  ------------  ------------  -------

  racnode01     4398046511104  4398046511104  67351316480   passed         

  racnode02     4398046511104  4398046511104  67351347200   passed         

Verifying OS Kernel Parameter: shmmax ...PASSED

Verifying OS Kernel Parameter: shmmni ...

  Node Name         Current       Configured    Required      Status        Comment    

  -------------  ------------  ------------  ------------  ------------  -------

  racnode01     4096          4096          4096          passed         

  racnode02     4096          4096          4096          passed         

Verifying OS Kernel Parameter: shmmni ...PASSED

Verifying OS Kernel Parameter: shmall ...

  Node Name         Current       Configured    Required      Status        Comment    

  ------------  ------------  ------------  ------------  ----------  ----------

  racnode01     4294967296    4294967296    1073741824    passed         

  racnode02     4294967296    4294967296    1073741824    passed         

Verifying OS Kernel Parameter: shmall ...PASSED

Verifying OS Kernel Parameter: file-max ...

  Node Name         Current       Configured    Required      Status        Comment     

  -------------  ------------  ------------  ------------  ---------  ----------

  racnode01     6815744       6815744       6815744       passed         

  racnode02     6815744       6815744       6815744       passed         

Verifying OS Kernel Parameter: file-max ...PASSED

Verifying OS Kernel Parameter: ip_local_port_range ...

  Node Name         Current       Configured    Required      Status        Comment    

  -------------  ------------  ------------  ------------  ---------  ---------

  racnode01     between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed         

  racnode02     between 9000 & 65500  between 9000 & 65500  between 9000 & 65535  passed         

Verifying OS Kernel Parameter: ip_local_port_range ...PASSED

Verifying OS Kernel Parameter: rmem_default ...

  Node Name         Current       Configured    Required      Status        Comment    

  -----------  ------------  ------------  ------------  ---------  --------

  racnode01     262144        262144        262144        passed         

  racnode02     262144        262144        262144        passed         

Verifying OS Kernel Parameter: rmem_default ...PASSED

Verifying OS Kernel Parameter: rmem_max ...

  Node Name         Current       Configured    Required      Status        Comment    

  ------------  ------------  ------------  ------------  ------------  ------------

  racnode01     125829120     125829120     4194304       passed          

  racnode02     125829120     125829120     4194304       passed         

Verifying OS Kernel Parameter: rmem_max ...PASSED

Verifying OS Kernel Parameter: wmem_default ...

  Node Name         Current       Configured    Required      Status        Comment    

  -------------  ------------  ------------  ------------  ------------  ------------

  racnode01     4194304       4194304       262144        passed         

  racnode02     4194304       4194304       262144        passed          

Verifying OS Kernel Parameter: wmem_default ...PASSED

Verifying OS Kernel Parameter: wmem_max ...

  Node Name         Current       Configured    Required      Status        Comment    

  -------------  ------------  ------------  ------------  ------------  ------------

  racnode01     4194304       4194304       1048576       passed         

  racnode02     4194304       4194304       1048576       passed          

Verifying OS Kernel Parameter: wmem_max ...PASSED

Verifying OS Kernel Parameter: aio-max-nr ...

  Node Name         Current       Configured    Required      Status        Comment    

  -----------  ------------  ------------  ------------  ------------  ------------

  racnode01     1048576       1048576       1048576       passed          

  racnode02     1048576       1048576       1048576       passed         

Verifying OS Kernel Parameter: aio-max-nr ...PASSED

Verifying OS Kernel Parameter: panic_on_oops ...

  Node Name         Current       Configured    Required      Status        Comment    

  --------------    ---------  ------------  ------------  ------------  ------

  racnode01     1             1             1             passed         

  racnode02     1             1             1             passed         

Verifying OS Kernel Parameter: panic_on_oops ...PASSED

Verifying Package: binutils-2.23.52.0.1 ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  binutils-2.27-43.base.0.1.el7_8.1  binutils-2.23.52.0.1      passed   

  racnode01  binutils-2.27-43.base.0.1.el7_8.1  binutils-2.23.52.0.1      passed   

Verifying Package: binutils-2.23.52.0.1 ...PASSED

Verifying Package: compat-libcap1-1.10 ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  compat-libcap1-1.10-7.el7  compat-libcap1-1.10       passed   

  racnode01  compat-libcap1-1.10-7.el7  compat-libcap1-1.10       passed   

Verifying Package: compat-libcap1-1.10 ...PASSED

Verifying Package: libgcc-4.8.2 (x86_64) ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  libgcc(x86_64)-4.8.5-39.0.3.el7  libgcc(x86_64)-4.8.2      passed   

  racnode01  libgcc(x86_64)-4.8.5-39.0.3.el7  libgcc(x86_64)-4.8.2      passed   

Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED

Verifying Package: libstdc++-4.8.2 (x86_64) ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  libstdc++(x86_64)-4.8.5-39.0.3.el7  libstdc++(x86_64)-4.8.2   passed   

  racnode01  libstdc++(x86_64)-4.8.5-39.0.3.el7  libstdc++(x86_64)-4.8.2   passed   

Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED

Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  libstdc++-devel(x86_64)-4.8.5-39.0.3.el7  libstdc++-devel(x86_64)-4.8.2  passed   

  racnode01  libstdc++-devel(x86_64)-4.8.5-39.0.3.el7  libstdc++-devel(x86_64)-4.8.2  passed   

Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED

Verifying Package: sysstat-10.1.5 ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  sysstat-10.1.5-19.el7     sysstat-10.1.5            passed   

  racnode01  sysstat-10.1.5-19.el7     sysstat-10.1.5            passed   

Verifying Package: sysstat-10.1.5 ...PASSED

Verifying Package: ksh ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  ksh                       ksh                       passed   

  racnode01  ksh                       ksh                       passed   

Verifying Package: ksh ...PASSED

Verifying Package: make-3.82 ...

  Node Name     Available                 Required                  Status    

  ------------  ------------------------  ------------------------  ----------

  racnode02  make-3.82-24.el7          make-3.82                 passed   

  racnode01  make-3.82-24.el7          make-3.82                 passed   

Verifying Package: make-3.82 ...PASSED

Verifying Package: glibc-2.17 (x86_64) ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  glibc(x86_64)-2.17-307.0.1.el7.1  glibc(x86_64)-2.17        passed   

  racnode01  glibc(x86_64)-2.17-307.0.1.el7.1  glibc(x86_64)-2.17        passed   

Verifying Package: glibc-2.17 (x86_64) ...PASSED

Verifying Package: glibc-devel-2.17 (x86_64) ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  glibc-devel(x86_64)-2.17-307.0.1.el7.1  glibc-devel(x86_64)-2.17  passed   

  racnode01  glibc-devel(x86_64)-2.17-307.0.1.el7.1  glibc-devel(x86_64)-2.17  passed   

Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED

Verifying Package: libaio-0.3.109 (x86_64) ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  libaio(x86_64)-0.3.109-13.el7  libaio(x86_64)-0.3.109    passed   

  racnode01  libaio(x86_64)-0.3.109-13.el7  libaio(x86_64)-0.3.109    passed   

Verifying Package: libaio-0.3.109 (x86_64) ...PASSED

Verifying Package: libaio-devel-0.3.109 (x86_64) ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  libaio-devel(x86_64)-0.3.109-13.el7  libaio-devel(x86_64)-0.3.109  passed   

  racnode01  libaio-devel(x86_64)-0.3.109-13.el7  libaio-devel(x86_64)-0.3.109  passed   

Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED

Verifying Package: nfs-utils-1.2.3-15 ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  nfs-utils-1.3.0-0.66.0.1.el7  nfs-utils-1.2.3-15        passed   

  racnode01  nfs-utils-1.3.0-0.66.0.1.el7  nfs-utils-1.2.3-15        passed   

Verifying Package: nfs-utils-1.2.3-15 ...PASSED

Verifying Package: smartmontools-6.2-4 ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  smartmontools-7.0-2.el7   smartmontools-6.2-4       passed    

  racnode01  smartmontools-7.0-2.el7   smartmontools-6.2-4       passed   

Verifying Package: smartmontools-6.2-4 ...PASSED

Verifying Package: net-tools-2.0-0.17 ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  net-tools-2.0-0.25.20131004git.el7  net-tools-2.0-0.17        passed   

  racnode01  net-tools-2.0-0.25.20131004git.el7  net-tools-2.0-0.17        passed   

Verifying Package: net-tools-2.0-0.17 ...PASSED

Verifying Users With Same UID: 0 ...PASSED

Verifying Current Group ID ...PASSED

Verifying Root user consistency ...

  Node Name                             Status                 

  ------------------------------------  ------------------------

  racnode02                         passed                 

  racnode01                         passed                  

Verifying Root user consistency ...PASSED

Verifying Package: cvuqdisk-1.0.10-1 ...

  Node Name     Available                 Required                  Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  cvuqdisk-1.0.10-1         cvuqdisk-1.0.10-1         passed   

  racnode01  cvuqdisk-1.0.10-1         cvuqdisk-1.0.10-1         passed   

Verifying Package: cvuqdisk-1.0.10-1 ...PASSED

Verifying Node Addition ...

  Verifying CRS Integrity ...PASSED

  Verifying Clusterware Version Consistency ...PASSED

  Verifying '/u01/app/18.3.0.0/grid' ...PASSED

Verifying Node Addition ...PASSED

Verifying Host name ...PASSED

Verifying Node Connectivity ...

  Verifying Hosts File ...

  Node Name                             Status                 

  ------------------------------------  ------------------------

  racnode01                         passed                 

  racnode02                         passed                  

  Verifying Hosts File ...PASSED

 

Interface information for node "racnode01"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  

 ------ ------------ --------------- ----------- -------- ---------- ------

 bond0  aaa.aa.aa.11   aaa.aa.aa.0     0.0.0.0         aaa.aa.aa.1     00.6:C6:00:02:2F 1500 

 bond0  aaa.aa.aa.14   aaa.aa.aa.0     0.0.0.0         aaa.aa.aa.1     00.6:C6:00:02:2F 1500 

 bond0  aaa.aa.aa.15   aaa.aa.aa.0     0.0.0.0         aaa.aa.aa.1     00.6:C6:00:02:2F 1500 

 bond0  aaa.aa.aa.17   aaa.aa.aa.0     0.0.0.0         aaa.aa.aa.1     00.6:C6:00:02:2F 1500 

 bond0  aaa.aa.aa.16   aaa.aa.aa.0     0.0.0.0         aaa.aa.aa.1     00.6:C6:00:02:2F 1500 

 bond1  aaa.aa.bb.11   aaa.aa.bb.0     0.0.0.0         aaa.aa.aa.1     00.6:C6:00:02:8D 9000 

 

Interface information for node "racnode02"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  

 ------ --------------- --------------- --------------- --------------- ----------

 bond0  aaa.aa.aa.12   aaa.aa.aa.0     0.0.0.0         aaa.aa.aa.1     00.6:C6:00:02:5F 1500 

 bond1  aaa.aa.bb.12   aaa.aa.bb.0     0.0.0.0         aaa.aa.aa.1     00.6:C6:00:02:7D 9000 

 

Check: MTU consistency on the private interfaces of subnet "aaa.aa.bb.0"

  Node              Name          IP Address    Subnet        MTU            

  ----------------  ------------  ------------  ------------  ----------------

  racnode01     bond1         aaa.aa.bb.11  aaa.aa.bb.0   9000           

  racnode02     bond1         aaa.aa.bb.12  aaa.aa.bb.0   9000           

 

Check: MTU consistency of the subnet "aaa.aa.aa.0".

  Node              Name          IP Address    Subnet        MTU            

  ----------------  ------------  ------------  ------------  ----------------

  racnode01     bond0         aaa.aa.aa.11  aaa.aa.aa.0   1500           

  racnode01     bond0         aaa.aa.aa.14  aaa.aa.aa.0   1500           

  racnode01     bond0         aaa.aa.aa.15  aaa.aa.aa.0   1500           

  racnode01     bond0         aaa.aa.aa.17  aaa.aa.aa.0   1500           

  racnode01     bond0         aaa.aa.aa.16  aaa.aa.aa.0   1500           

  racnode02     bond0         aaa.aa.aa.12  aaa.aa.aa.0   1500            

 

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  racnode01[bond0:aaa.aa.aa.11]  racnode01[bond0:aaa.aa.aa.14]  yes             

  racnode01[bond0:aaa.aa.aa.11]  racnode01[bond0:aaa.aa.aa.15]  yes             

  racnode01[bond0:aaa.aa.aa.11]  racnode01[bond0:aaa.aa.aa.17]  yes            

  racnode01[bond0:aaa.aa.aa.11]  racnode01[bond0:aaa.aa.aa.16]  yes            

  racnode01[bond0:aaa.aa.aa.11]  racnode02[bond0:aaa.aa.aa.12]  yes            

  racnode01[bond0:aaa.aa.aa.14]  racnode01[bond0:aaa.aa.aa.15]  yes            

  racnode01[bond0:aaa.aa.aa.14]  racnode01[bond0:aaa.aa.aa.17]  yes            

  racnode01[bond0:aaa.aa.aa.14]  racnode01[bond0:aaa.aa.aa.16]  yes            

  racnode01[bond0:aaa.aa.aa.14]  racnode02[bond0:aaa.aa.aa.12]  yes            

  racnode01[bond0:aaa.aa.aa.15]  racnode01[bond0:aaa.aa.aa.17]  yes            

  racnode01[bond0:aaa.aa.aa.15]  racnode01[bond0:aaa.aa.aa.16]  yes            

  racnode01[bond0:aaa.aa.aa.15]  racnode02[bond0:aaa.aa.aa.12]  yes            

  racnode01[bond0:aaa.aa.aa.17]  racnode01[bond0:aaa.aa.aa.16]  yes            

  racnode01[bond0:aaa.aa.aa.17]  racnode02[bond0:aaa.aa.aa.12]  yes            

  racnode01[bond0:aaa.aa.aa.16]  racnode02[bond0:aaa.aa.aa.12]  yes            

 

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  racnode01[bond1:aaa.aa.bb.11]  racnode02[bond1:aaa.aa.bb.12]  yes            

  Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED

  Verifying subnet mask consistency for subnet "aaa.aa.aa.0" ...PASSED

  Verifying subnet mask consistency for subnet "aaa.aa.bb.0" ...PASSED

Verifying Node Connectivity ...PASSED

Verifying Multicast or broadcast check ...

Checking subnet "aaa.aa.bb.0" for multicast communication with multicast group "4.0.0.251"

Verifying Multicast or broadcast check ...PASSED

Verifying ASM Integrity ...

  Verifying Node Connectivity ...

    Verifying Hosts File ...

  Node Name                             Status                 

  ------------------------------------  ------------------------

  racnode02                         passed                 

    Verifying Hosts File ...PASSED

 

Interface information for node "racnode01"

 

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  

 ------ --------------- --------------- --------------- --------------- ------ ------

 bond1  aaa.aa.bb.11   aaa.aa.bb.0     0.0.0.0         aaa.aa.aa.1     00.6:C6:00:02:8D 9000 

 bond0  aaa.aa.aa.11   aaa.aa.aa.0     0.0.0.0         aaa.aa.aa.1     00.6:C6:00:02:2F 1500 

 bond0  aaa.aa.aa.14   aaa.aa.aa.0     0.0.0.0         aaa.aa.aa.1     00.6:C6:00:02:2F 1500 

 bond0  aaa.aa.aa.15   aaa.aa.aa.0     0.0.0.0         aaa.aa.aa.1     00.6:C6:00:02:2F 1500 

 bond0  aaa.aa.aa.17   aaa.aa.aa.0     0.0.0.0         aaa.aa.aa.1     00.6:C6:00:02:2F 1500 

 bond0  aaa.aa.aa.16   aaa.aa.aa.0     0.0.0.0         aaa.aa.aa.1     00.6:C6:00:02:2F 1500 

 

Interface information for node "racnode02"

 

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  

 ------ --------------- --------------- --------------- --------------- ----

 bond1  aaa.aa.bb.12   aaa.aa.bb.0     0.0.0.0         aaa.aa.aa.1     00.6:C6:00:02:7D 9000 

 bond0  aaa.aa.aa.12   aaa.aa.aa.0     0.0.0.0         aaa.aa.aa.1     00.6:C6:00:02:5F 1500 

 

Check: MTU consistency on the private interfaces of subnet "aaa.aa.bb.0"

 

  Node              Name          IP Address    Subnet        MTU            

  ----------------  ------------  ------------  ------------  ----------------

  racnode01     bond1         aaa.aa.bb.11  aaa.aa.bb.0   9000           

  racnode02     bond1         aaa.aa.bb.12  aaa.aa.bb.0   9000           

 

Check: MTU consistency of the subnet "aaa.aa.aa.0".

 

  Node              Name          IP Address    Subnet        MTU            

  ----------------  ------------  ------------  ------------  ----------------

  racnode01     bond0         aaa.aa.aa.11  aaa.aa.aa.0   1500            

  racnode01     bond0         aaa.aa.aa.14  aaa.aa.aa.0   1500           

  racnode01     bond0         aaa.aa.aa.15  aaa.aa.aa.0   1500           

  racnode01     bond0         aaa.aa.aa.17  aaa.aa.aa.0   1500           

  racnode01     bond0         aaa.aa.aa.16  aaa.aa.aa.0   1500           

  racnode02     bond0         aaa.aa.aa.12  aaa.aa.aa.0   1500           

 

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  racnode01[bond0:aaa.aa.aa.11]  racnode01[bond0:aaa.aa.aa.14]  yes            

  racnode01[bond0:aaa.aa.aa.11]  racnode01[bond0:aaa.aa.aa.15]  yes            

  racnode01[bond0:aaa.aa.aa.11]  racnode01[bond0:aaa.aa.aa.17]  yes            

  racnode01[bond0:aaa.aa.aa.11]  racnode01[bond0:aaa.aa.aa.16]  yes            

  racnode01[bond0:aaa.aa.aa.11]  racnode02[bond0:aaa.aa.aa.12]  yes            

  racnode01[bond0:aaa.aa.aa.14]  racnode01[bond0:aaa.aa.aa.15]  yes            

  racnode01[bond0:aaa.aa.aa.14]  racnode01[bond0:aaa.aa.aa.17]  yes            

  racnode01[bond0:aaa.aa.aa.14]  racnode01[bond0:aaa.aa.aa.16]  yes            

  racnode01[bond0:aaa.aa.aa.14]  racnode02[bond0:aaa.aa.aa.12]  yes            

  racnode01[bond0:aaa.aa.aa.15]  racnode01[bond0:aaa.aa.aa.17]  yes            

  racnode01[bond0:aaa.aa.aa.15]  racnode01[bond0:aaa.aa.aa.16]  yes            

  racnode01[bond0:aaa.aa.aa.15]  racnode02[bond0:aaa.aa.aa.12]  yes            

  racnode01[bond0:aaa.aa.aa.17]  racnode01[bond0:aaa.aa.aa.16]  yes            

  racnode01[bond0:aaa.aa.aa.17]  racnode02[bond0:aaa.aa.aa.12]  yes            

  racnode01[bond0:aaa.aa.aa.16]  racnode02[bond0:aaa.aa.aa.12]  yes             

 

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  racnode01[bond1:aaa.aa.bb.11]  racnode02[bond1:aaa.aa.bb.12]  yes             

    Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED

    Verifying subnet mask consistency for subnet "aaa.aa.aa.0" ...PASSED

    Verifying subnet mask consistency for subnet "aaa.aa.bb.0" ...PASSED

  Verifying Node Connectivity ...PASSED

Verifying ASM Integrity ...PASSED

Verifying Device Checks for ASM ...Disks "/dev/racnode01_pocdb01_data01,/dev/racnode01_pocdb01_data01,/dev/racnode01_votedisk,/dev/racnode01_votedisk,/dev/racnode01_pocdb01_fra01,/dev/racnode01_pocdb01_fra01" are managed by ASM.

Verifying Device Checks for ASM ...PASSED

Verifying Database home availability ...PASSED

Verifying OCR Integrity ...PASSED

Verifying Time zone consistency ...PASSED

Verifying Network Time Protocol (NTP) ...

  Verifying '/etc/ntp.conf' ...

  Node Name                             File exists?           

  ------------------------------------  ------------------------

  racnode02                         no                     

  racnode01                         no                     

 

  Verifying '/etc/ntp.conf' ...PASSED

  Verifying '/etc/chrony.conf' ...

  Node Name                             File exists?           

  ------------------------------------  ------------------------

  racnode02                         no                      

  racnode01                         no                     

 

  Verifying '/etc/chrony.conf' ...PASSED

  Verifying '/var/run/ntpd.pid' ...

  Node Name                             File exists?           

  ------------------------------------  ------------------------

  racnode02                         no                     

  racnode01                         no                     

 

  Verifying '/var/run/ntpd.pid' ...PASSED

  Verifying '/var/run/chronyd.pid' ...

  Node Name                             File exists?           

  ------------------------------------  ------------------------

  racnode02                         no                     

  racnode01                         no                     

 

  Verifying '/var/run/chronyd.pid' ...PASSED

Verifying Network Time Protocol (NTP) ...PASSED

Verifying User Not In Group "root": grid ...

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  racnode02  passed                    does not exist         

  racnode01  passed                    does not exist         

Verifying User Not In Group "root": grid ...PASSED

Verifying resolv.conf Integrity ...

  Node Name                             Status                  

  ------------------------------------  ------------------------

  racnode01                         passed                 

  racnode02                         passed                 

 

checking response for name "racnode01" from each of the name servers

specified in "/etc/resolv.conf"

 

  Node Name     Source                    Comment                   Status   

  ------------  ------------------------  ------------------------  ----------

  racnode01  aaa.aa.1.135              IPv4                      passed   

 

checking response for name "racnode02" from each of the name servers

specified in "/etc/resolv.conf"

 

  Node Name     Source                    Comment                   Status   

  ------------  ------------------------  ------------------------  ----------

  racnode02  aaa.aa.1.135              IPv4                      passed   

Verifying resolv.conf Integrity ...PASSED

Verifying DNS/NIS name service ...PASSED

Verifying User Equivalence ...

  Node Name                             Status                 

  ------------------------------------  ------------------------

  racnode02                         passed                 

Verifying User Equivalence ...PASSED

Verifying /dev/shm mounted as temporary file system ...PASSED

Verifying /boot mount ...PASSED

Verifying zeroconf check ...PASSED

 

Pre-check for node addition was successful.

CVU operation performed:      stage -pre nodeadd

Date:                         Sep 6, 2020 3:23:45 AM

CVU home:                     /u01/app/18.3.0.0/grid/

User:                         grid

 

 

Add Node to Cluster:

[grid@racnode01 addnode]$ pwd

/u01/app/18.3.0.0/grid/addnode

[grid@racnode01 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={racnode02}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racnode02-vip}" "CLUSTER_NEW_NODE_ROLES={hub}"

 

 

Prepare Configuration in progress.

Prepare Configuration successful.

..................................................   7% Done.

Copy Files to Remote Nodes in progress.

..................................................   12% Done.

..................................................   17% Done.

..............................

Copy Files to Remote Nodes successful.

You can find the log of this install session at:

 /app/oraInventory/logs/addNodeActions2020-09-06_03-49-51AM.log

 

Instantiate files in progress.

Instantiate files successful.

..................................................   49% Done.

Saving cluster inventory in progress.

..................................................   83% Done.

Saving cluster inventory successful.

The Cluster Node Addition of /u01/app/18.3.0.0/grid was successful.

Please check '/u01/app/18.3.0.0/grid/inventory/silentInstall2020-09-06_03-49-52AM.log' for more details.

 

Setup Oracle Base in progress.

Setup Oracle Base successful.

..................................................   90% Done.

Update Inventory in progress.

Update Inventory successful.

..................................................   97% Done.

As a root user, execute the following script(s):

        1. /app/oraInventory/orainstRoot.sh

        2. /u01/app/18.3.0.0/grid/root.sh

 

Execute /app/oraInventory/orainstRoot.sh on the following nodes:

[racnode02]

Execute /u01/app/18.3.0.0/grid/root.sh on the following nodes:

[racnode02]

 

The scripts can be executed in parallel on all the nodes.

 

Successfully Setup Software.

..................................................   100% Done.
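
Side note: if more than one node is being added in the same operation, the same addnode.sh call accepts comma-separated lists; this is a sketch only, where racnode03 and racnode03-vip are hypothetical names:

[grid@racnode01 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={racnode02,racnode03}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racnode02-vip,racnode03-vip}" "CLUSTER_NEW_NODE_ROLES={hub,hub}"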

 

Run root scripts as root on the new node (racnode02):

root@racnode02 grid # sh /app/oraInventory/orainstRoot.sh

Changing permissions of /app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

 

Changing groupname of /app/oraInventory to oinstall.

The execution of the script is complete.

 

root@racnode02 grid # sh /u01/app/18.3.0.0/grid/root.sh

Performing root user operation.

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/18.3.0.0/grid

   Copying dbhome to /usr/local/bin ...

   Copying oraenv to /usr/local/bin ...

   Copying coraenv to /usr/local/bin ...

 

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Relinking oracle with rac_on option

Using configuration parameter file: /u01/app/18.3.0.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

  /app/grid/crsdata/racnode02/crsconfig/rootcrs_racnode02_2020-09-06_04-44-52AM.log

2020/07/06 04:45:16 CLSRSC-594: Executing installation step 1 of 20: 'SetupTFA'.

2020/07/06 04:45:16 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

2020/07/06 04:45:39 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

2020/07/06 04:45:39 CLSRSC-594: Executing installation step 2 of 20: 'ValidateEnv'.

2020/07/06 04:45:51 CLSRSC-594: Executing installation step 3 of 20: 'CheckFirstNode'.

2020/07/06 04:45:52 CLSRSC-594: Executing installation step 4 of 20: 'GenSiteGUIDs'.

2020/07/06 04:45:57 CLSRSC-594: Executing installation step 5 of 20: 'SaveParamFile'.

2020/07/06 04:45:59 CLSRSC-594: Executing installation step 6 of 20: 'SetupOSD'.

2020/07/06 04:45:59 CLSRSC-594: Executing installation step 7 of 20: 'CheckCRSConfig'.

2020/07/06 04:46:00 CLSRSC-594: Executing installation step 8 of 20: 'SetupLocalGPNP'.

2020/07/06 04:46:01 CLSRSC-594: Executing installation step 9 of 20: 'CreateRootCert'.

2020/07/06 04:46:01 CLSRSC-594: Executing installation step 10 of 20: 'ConfigOLR'.

2020/07/06 04:46:09 CLSRSC-594: Executing installation step 11 of 20: 'ConfigCHMOS'.

2020/07/06 04:46:09 CLSRSC-594: Executing installation step 12 of 20: 'CreateOHASD'.

2020/07/06 04:46:10 CLSRSC-594: Executing installation step 13 of 20: 'ConfigOHASD'.

2020/07/06 04:46:10 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'

2020/07/06 04:49:24 CLSRSC-594: Executing installation step 14 of 20: 'InstallAFD'.

2020/07/06 04:52:08 CLSRSC-594: Executing installation step 15 of 20: 'InstallACFS'.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode02'

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode02' has completed

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

2020/07/06 04:54:26 CLSRSC-594: Executing installation step 16 of 20: 'InstallKA'.

2020/07/06 04:54:27 CLSRSC-594: Executing installation step 17 of 20: 'InitConfig'.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode02'

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode02' has completed

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racnode02'

CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'racnode02'

CRS-2677: Stop of 'ora.drivers.acfs' on 'racnode02' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racnode02' has completed

CRS-4133: Oracle High Availability Services has been stopped.

2020/07/06 04:54:55 CLSRSC-594: Executing installation step 18 of 20: 'StartCluster'.

CRS-4123: Starting Oracle High Availability Services-managed resources

CRS-2672: Attempting to start 'ora.mdnsd' on 'racnode02'

CRS-2672: Attempting to start 'ora.evmd' on 'racnode02'

CRS-2676: Start of 'ora.mdnsd' on 'racnode02' succeeded

CRS-2676: Start of 'ora.evmd' on 'racnode02' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'racnode02'

CRS-2676: Start of 'ora.gpnpd' on 'racnode02' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on 'racnode02'

CRS-2676: Start of 'ora.gipcd' on 'racnode02' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'racnode02'

CRS-2676: Start of 'ora.cssdmonitor' on 'racnode02' succeeded

CRS-2672: Attempting to start 'ora.crf' on 'racnode02'

CRS-2672: Attempting to start 'ora.cssd' on 'racnode02'

CRS-2672: Attempting to start 'ora.diskmon' on 'racnode02'

CRS-2676: Start of 'ora.diskmon' on 'racnode02' succeeded

CRS-2676: Start of 'ora.crf' on 'racnode02' succeeded

CRS-2676: Start of 'ora.cssd' on 'racnode02' succeeded

CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'racnode02'

CRS-2672: Attempting to start 'ora.ctssd' on 'racnode02'

CRS-2676: Start of 'ora.ctssd' on 'racnode02' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'racnode02'

CRS-2676: Start of 'ora.crsd' on 'racnode02' succeeded

CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'racnode02' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'racnode02'

CRS-2676: Start of 'ora.asm' on 'racnode02' succeeded

CRS-6017: Processing resource auto-start for servers: racnode02

CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'racnode01'

CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'racnode02'

CRS-2672: Attempting to start 'ora.chad' on 'racnode02'

CRS-2672: Attempting to start 'ora.ons' on 'racnode02'

CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'racnode01' succeeded

CRS-2673: Attempting to stop 'ora.scan1.vip' on 'racnode01'

CRS-2677: Stop of 'ora.scan1.vip' on 'racnode01' succeeded

CRS-2672: Attempting to start 'ora.scan1.vip' on 'racnode02'

CRS-2676: Start of 'ora.chad' on 'racnode02' succeeded

CRS-2676: Start of 'ora.scan1.vip' on 'racnode02' succeeded

CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'racnode02'

CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'racnode02' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'racnode02'

ORA-01078: failure in processing system parameters

ORA-29701: unable to connect to Cluster Synchronization Service

ORA-29701: unable to connect to Cluster Synchronization Service

CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'racnode02' succeeded

CRS-2674: Start of 'ora.asm' on 'racnode02' failed

CRS-2679: Attempting to clean 'ora.asm' on 'racnode02'

CRS-2681: Clean of 'ora.asm' on 'racnode02' succeeded

CRS-2674: Start of 'ora.ons' on 'racnode02' failed

===== Summary of resource auto-start failures follows =====

CRS-2807: Resource 'ora.asm' failed to start automatically.

CRS-2807: Resource 'ora.ons' failed to start automatically.

CRS-6016: Resource auto-start has completed for server racnode02

CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources

CRS-4123: Oracle High Availability Services has been started.

2020/07/06 04:57:47 CLSRSC-343: Successfully started Oracle Clusterware stack

2020/07/06 04:57:49 CLSRSC-594: Executing installation step 19 of 20: 'ConfigNode'.

clscfg: EXISTING configuration version 5 detected.

clscfg: version 5 is 12c Release 2.

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

2020/07/06 04:58:16 CLSRSC-594: Executing installation step 20 of 20: 'PostConfig'.

2020/07/06 04:59:25 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

You have mail in /var/spool/mail/root
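
Note: the ora.asm and ora.ons auto-start failures reported while root.sh was starting the stack on racnode02 turned out to be transient here; both resources show ONLINE on racnode02 in the crsctl status output further below. If they had stayed offline, they could be started manually as the grid user, for example:

[grid@racnode01 ~]$ srvctl start asm -node racnode02
[grid@racnode01 ~]$ srvctl start nodeapps -node racnode02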

 

 

Post Checks:

[grid@racnode01 ~]$ crsctl get node role config -all

Node 'racnode01' configured role is 'hub'

Node 'racnode02' configured role is 'hub'
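
Optionally, membership and status of the new node can also be confirmed with olsnodes (node number, status and pinned state), for example:

[grid@racnode01 ~]$ olsnodes -n -s -t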

 

[grid@racnode01 bin]$ ./cluvfy stage -post nodeadd -n racnode02

 

Verifying Node Connectivity ...

  Verifying Hosts File ...PASSED

  Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED

  Verifying subnet mask consistency for subnet "aaa.aa.aa.0" ...PASSED

  Verifying subnet mask consistency for subnet "aaa.aa.bb.0" ...PASSED

Verifying Node Connectivity ...PASSED

Verifying Cluster Integrity ...PASSED

Verifying Node Addition ...

  Verifying CRS Integrity ...PASSED

  Verifying Clusterware Version Consistency ...PASSED

  Verifying '/u01/app/18.3.0.0/grid' ...PASSED

Verifying Node Addition ...PASSED

Verifying Multicast or broadcast check ...PASSED

Verifying Node Application Existence ...PASSED

Verifying Single Client Access Name (SCAN) ...

  Verifying DNS/NIS name service 'racnode-scan' ...

    Verifying Name Service Switch Configuration File Integrity ...PASSED

  Verifying DNS/NIS name service 'racnode-scan' ...PASSED

Verifying Single Client Access Name (SCAN) ...PASSED

Verifying User Not In Group "root": grid ...PASSED

Verifying Clock Synchronization ...PASSED

 

Post-check for node addition was successful.

 

CVU operation performed:      stage -post nodeadd

Date:                         Sep 6, 2020 5:52:29 AM

CVU home:                     /u01/app/18.3.0.0/grid/

User:                         grid

 


 

Verify Cluster Status:

[grid@racnode02 ~]$ crsctl  stat res -t

----------------------------------------------------------------------

Name           Target  State        Server             State details      

----------------------------------------------------------------------

Local Resources

----------------------------------------------------------------------

ora.ASMNET1LSNR_ASM.lsnr

               ONLINE  ONLINE       racnode01            STABLE

               ONLINE  ONLINE       racnode02            STABLE

ora.LISTENER.lsnr

               ONLINE  ONLINE       racnode01            STABLE

               ONLINE  ONLINE       racnode02            STABLE

ora.POC_DATA.dg

               ONLINE  ONLINE       racnode01            STABLE

               ONLINE  ONLINE       racnode02            STABLE

ora.POC_FRA.dg

               ONLINE  ONLINE       racnode01            STABLE

               ONLINE  ONLINE       racnode02            STABLE

ora.VOTEDISK.GHCHKPT.advm

               OFFLINE OFFLINE      racnode01            STABLE

               OFFLINE OFFLINE      racnode02            STABLE

ora.VOTEDISK.dg

               ONLINE  ONLINE       racnode01            STABLE

               ONLINE  ONLINE       racnode02            STABLE

ora.chad

               ONLINE  ONLINE       racnode01            STABLE

               ONLINE  ONLINE       racnode02            STABLE

ora.helper

               OFFLINE OFFLINE      racnode01            STABLE

               OFFLINE OFFLINE      racnode02            IDLE,STABLE

ora.net1.network

               ONLINE  ONLINE       racnode01            STABLE

               ONLINE  ONLINE       racnode02            STABLE

ora.ons

               ONLINE  ONLINE       racnode01            STABLE

               ONLINE  ONLINE       racnode02            STABLE

ora.proxy_advm

               ONLINE  ONLINE       racnode01            STABLE

               ONLINE  ONLINE       racnode02            STABLE

ora.votedisk.ghchkpt.acfs

               OFFLINE OFFLINE      racnode01            STABLE

               OFFLINE OFFLINE      racnode02            STABLE

---------------------------------------------------------------------

Cluster Resources

----------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       racnode02            STABLE

ora.LISTENER_SCAN2.lsnr

      1        ONLINE  ONLINE       racnode01            STABLE

ora.LISTENER_SCAN3.lsnr

      1        ONLINE  ONLINE       racnode01            STABLE

ora.MGMTLSNR

      1        ONLINE  ONLINE       racnode01            XXX.XX.XX.XX STABLE

ora.asm

      1        ONLINE  ONLINE       racnode01            Started,STABLE

      2        ONLINE  ONLINE       racnode02            Started,STABLE

      3        OFFLINE OFFLINE                               STABLE

ora.cvu

      1        ONLINE  ONLINE       racnode01            STABLE

ora.racnode01.vip

      1        ONLINE  ONLINE       racnode01            STABLE

ora.racnode02.vip

      1        ONLINE  ONLINE       racnode02            STABLE

ora.mgmtdb

      1        ONLINE  ONLINE       racnode01            Open,STABLE

ora.poc1.db

      1        ONLINE  ONLINE       racnode01            Open,HOME=/u01/app/o

                                                             racle/product/11.2.0

                                                             _64,STABLE

ora.qosmserver

      1        ONLINE  ONLINE       racnode01            STABLE

ora.rhpserver

      1        OFFLINE OFFLINE                               STABLE

ora.scan1.vip

      1        ONLINE  ONLINE       racnode02            STABLE

ora.scan2.vip

      1        ONLINE  ONLINE       racnode01            STABLE

ora.scan3.vip

      1        ONLINE  ONLINE       racnode01            STABLE