Sunday, 9 November 2025

OCI Ops Insights: Turning Data Into Proactive Intelligence

What Is OCI Ops Insights?

Ops Insights is Oracle’s intelligent observability and analytics service that provides comprehensive visibility into resource usage, capacity, and SQL performance across databases and hosts — whether they run on OCI, on-premises, or in hybrid environments.

Think of it as your command center for operational intelligence — combining analytics, automation, and AI-driven recommendations to keep your systems optimized and predictable.

Core Capabilities

 

1. Database Insights
Gain complete visibility into the performance and health of your databases.
SQL Insights – Analyze SQL performance trends, find inefficient queries, and identify tuning opportunities.
Database Performance – Track database-level metrics and diagnose bottlenecks before they impact users.
ADDM Spotlight & AWR Hub – Access Automatic Workload Repository data across your entire fleet for unified analysis.

2. Capacity Planning
Forecast capacity issues before they happen.
Monitor CPU and storage utilization across databases, hosts, and Exadata systems.
Predict growth trends to plan for future expansion or cost optimization.

3. Exadata Insights
Get specialized performance and capacity visibility for Exadata infrastructure.
Analyze workloads with Exadata Warehouse.
Explore data with Exadata Explorer to pinpoint system-level trends.

4. Dashboards & Reporting
Visualize and communicate insights effectively:
Create custom dashboards using out-of-the-box widgets or saved searches.
Generate news-style reports to share operational summaries with teams and management.
Use the AWR Explorer and Data Object Explorer for deep performance exploration.

5. Administration & Configuration
Seamlessly manage your monitored environment:
Configure agent-managed and Enterprise Manager-managed resources.
Enable Autonomous AI Database Full Feature for advanced analytics.
Manage endpoints, AWR Hubs, and collection configurations with ease.  


Saturday, 1 November 2025

Securing Oracle Databases with Oracle Data Safe

What Is Oracle Data Safe?
Oracle Data Safe is a cloud-based, unified security control center designed specifically for Oracle Databases — whether they reside in Oracle Cloud Infrastructure (OCI), Autonomous Database, or on-premises deployments.

It simplifies the complex, manual tasks involved in securing databases and meeting compliance requirements. With a few clicks, you can evaluate risks, analyze user privileges, discover sensitive data, apply masking policies, and audit activities.
 
Features of Oracle Data Safe:
 
๐Ÿ” 1. Security Assessment
The Security Assessment feature evaluates the security posture of your Oracle Databases.
It reviews configurations, user accounts, and security controls, then provides detailed findings with actionable recommendations to reduce or mitigate risks.

Key aspects:
  • Analyzes configuration settings, user privileges, and security parameters.
  • Compares against industry frameworks like STIG, CIS Benchmarks, EU GDPR, and Oracle best practices.
  • Generates an overall Security Score and a prioritized list of vulnerabilities.
This ensures your databases consistently align with compliance standards and internal security policies.

👥 2. User Assessment
User Assessment identifies users and accounts that may pose security risks due to excessive privileges, weak authentication, or poor password practices.
It analyzes user data stored in the database dictionary and assigns a risk score to each user.

Capabilities include:
  • Identifies highly privileged or inactive accounts.
  • Evaluates password policies, authentication types, and password change frequency.
  • Links directly to related audit trail entries for deeper investigation.
This enables DBAs and security teams to implement least-privilege access controls and strengthen user governance.

🧭 3. Data Discovery

Data Discovery automates the identification of sensitive data within your Oracle Databases.
It scans both data and metadata to locate information that could fall under privacy or compliance regulations.

Highlights:
  • Detects data across multiple sensitivity categories — personal, financial, healthcare, employment, academic, and more.
  • Offers default discovery templates or lets you define custom data models to fit your organization’s classification standards.
  • Produces clear reports listing schemas, tables, and columns containing sensitive data.
With Data Discovery, you know exactly where your critical data resides — a foundational step toward compliance and data protection.

🧩 4. Data Masking
The Data Masking feature helps organizations protect sensitive data when replicating or sharing databases for development, testing, or analytics.
It replaces real values with realistic but fictitious data, maintaining referential integrity while ensuring privacy.

Key benefits:
  • Supports multiple masking formats — randomization, substitution, nullification, and lookup-based.
  • Integrates seamlessly with Data Discovery results for consistent masking policies.
  • Enables safe use of production-like data in non-production environments.
This reduces the risk of data exposure and helps organizations comply with data privacy regulations.

📜 5. Activity Auditing
Activity Auditing provides continuous visibility into who is doing what in your databases.
It captures user activities — from logins and schema changes to data queries and privilege modifications.

Capabilities:
  • Monitors database activity in real time.
  • Generates audit reports for compliance and governance reviews.
  • Detects unusual or unauthorized access patterns.
Auditing is crucial for incident investigation, accountability, and regulatory compliance.

⚡ 6. Alerts
Alerts keep you informed of unusual or high-risk database activities as they occur.
You can define custom thresholds or use predefined alert templates to detect anomalies in user behavior or database operations.
With proactive alerting, teams can respond faster to threats, minimizing potential damage and downtime.

🧱 7. SQL Firewall (New in Oracle AI Database 26ai)
The SQL Firewall introduces an advanced layer of protection directly at the SQL level, helping safeguard databases from SQL injection attacks, compromised accounts, and unauthorized queries.
Oracle Data Safe acts as the central management hub for SQL Firewall policies across all connected databases.

Capabilities:
  • Collects and baselines authorized SQL activities for each user.
  • Generates allowlist-based firewall policies that define approved SQL statements and connection paths.
  • Monitors and reports SQL Firewall violations in real time across your entire database fleet.
This feature enables a zero-trust approach to database access — ensuring only verified SQL statements are executed against your most sensitive systems.
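For databases you manage directly, the same lifecycle can also be driven in SQL through the DBMS_SQL_FIREWALL package (available in 23ai and later). A minimal sketch, assuming a target application user named APP_USER and a session holding the SQL_FIREWALL_ADMIN role:

-- Enable SQL Firewall for the database
EXEC DBMS_SQL_FIREWALL.ENABLE;

-- Capture the SQL and connection paths APP_USER actually uses
BEGIN
  DBMS_SQL_FIREWALL.CREATE_CAPTURE(
    username       => 'APP_USER',
    top_level_only => TRUE,    -- capture only directly issued statements
    start_capture  => TRUE);
END;
/

-- After a representative workload window, stop the capture
EXEC DBMS_SQL_FIREWALL.STOP_CAPTURE('APP_USER');

-- Turn the captured activity into an enforced allow-list
EXEC DBMS_SQL_FIREWALL.GENERATE_ALLOW_LIST('APP_USER');
BEGIN
  DBMS_SQL_FIREWALL.ENABLE_ALLOW_LIST(
    username => 'APP_USER',
    enforce  => DBMS_SQL_FIREWALL.ENFORCE_ALL,  -- enforce SQL and context rules
    block    => TRUE);                          -- block violations, not just log them
END;
/

When managed from Data Safe, the same capture/allow-list workflow is driven from the console instead, with violations reported centrally across the fleet.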
 
Step-by-Step Configuration Guide:

Step 1: Access Data Safe
  • Sign in to your OCI Console with appropriate privileges (Security Administrator or tenancy-level admin).
  • In the left navigation menu, go to Oracle AI Database → Data Safe - Database Security.

Step 2: Register Your Database
Before you can run any assessments or audits, your database needs to be registered with Data Safe.

Supported Target Databases:
  • On-Premises Oracle AI Database
  • Oracle Autonomous AI Database on Dedicated Exadata Infrastructure 
  • Oracle Autonomous AI Database on Exadata Cloud@Customer
  • Oracle Autonomous AI Database Serverless
  • Oracle Base Database Service
  • Oracle AI Database on a compute instance in Oracle Cloud Infrastructure
  • Oracle Exadata Database Service on Cloud@Customer
  • Oracle Exadata Database Service on Dedicated Infrastructure
  • Oracle Exadata Database Service on Exascale Infrastructure
  • Amazon RDS for Oracle
  • Oracle Database@AWS
  • Oracle Database@Azure
  • Oracle Database@Google Cloud
Let's Register an Autonomous Database
In the OCI Console, navigate to Data Safe → Targets → Register Target Database.

For Database Type, select Autonomous Database.
Under Data Safe Target Information:
  • Choose the Compartment where your database resides.
  • Select your database from the drop-down list of available Autonomous Databases.
  • Enter a Display Name for your Data Safe target.
  • (Optional) Add a Description to help identify the purpose or environment of this database (e.g., “Data Safe practice environment”).
  • Choose a Compartment for the target registration and (Optional) apply Tags for easier management and automation.
  • Review the connection details to ensure the selected database and compartment information are correct. 


Click Register to complete the process.
 
 
Step 3: Explore the Data Safe Dashboard

After completing the registration, your target database will now appear in the Targets list with an Active status — confirming a successful connection to Oracle Data Safe.



Now, let’s move to the Oracle Data Safe Dashboard, the central console where you can view, monitor, and manage all your database security operations.
 
In the OCI Console, navigate to Oracle AI Database → Data Safe - Database Security → Dashboard.

This takes you to Data Safe → Security Center → Dashboard, where you can view an integrated overview of your database security posture — including assessments, user risks, sensitive data discovery, and audit summaries across all registered databases.

You can view quick summaries such as:

Security assessment:

User assessment:

From this dashboard, you can easily navigate to each of the key features:
Assessments – Run or view Security and User Assessments
Data Discovery & Masking – Identify and protect sensitive data
Auditing – Monitor and analyze database activities
SQL Firewall & Alerts – Manage SQL protection and incident notifications

This blog covers the high-level steps to set up Oracle Data Safe.
In the next post, I will share more detailed insights and advanced configurations to get the most out of Data Safe.

Sunday, 28 September 2025

Instantly Assigning a Public IP to an Existing OCI Compute Instance

Assigning the Ephemeral Public IP

Follow these steps directly in the OCI Console:

Step 1: Navigate to Your Instance

  1. Log in to the OCI Console.

  2. Open the navigation menu and go to Compute → Instances.

  3. Select the Compartment where your instance is located.

  4. Click the Name of the target Compute instance.

Step 2: Locate the Primary VNIC

  1. On the Instance Details page, scroll down to the Resources section and click Attached VNICs.

  2. You will typically see one entry—the Primary VNIC. Click on its Name (which is usually a long string starting with ocid1.vnic...).

Step 3: Edit the Private IP Address

  1. On the VNIC Details page, navigate to Resources on the left and click IP Addresses.

  2. You will see a table listing the Primary Private IP Address. Notice the Public IP column currently shows "Not Assigned".

  3. Click the Actions menu next to the private IP address and select Edit.

Step 4: Assign the Ephemeral Public IP

  1. The Edit Private IP Address dialog box will open.

  2. Under the Public IP Type section, select the radio button for Ephemeral public IP.

  3. (Optional) You can give the Ephemeral IP a name, but it is not required.

  4. Click the Update button.
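
If you prefer to script this, the OCI CLI can perform the same assignment; a sketch with placeholder OCIDs (verify the parameters against your CLI version):

# Find the private IP OCID on the instance's primary VNIC
oci network private-ip list --vnic-id ocid1.vnic.oc1..example

# Bind a new ephemeral public IP to that private IP
oci network public-ip create \
    --compartment-id ocid1.compartment.oc1..example \
    --lifetime EPHEMERAL \
    --private-ip-id ocid1.privateip.oc1..example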

Sunday, 10 August 2025

DML Error Logging

What Is DBMS_ERRLOG?
DBMS_ERRLOG is a built-in Oracle package that helps you manage errors during DML operations (like INSERT, UPDATE, or DELETE). Its core function is to create and use an error logging table. Instead of failing the entire statement when it encounters a bad row, the database simply logs the error and moves on to the next one.

How Does It Work?
The process is simple and involves two main steps:
    Create an error logging table: use the DBMS_ERRLOG.CREATE_ERROR_LOG procedure to create a table that will store information about any rows that fail your DML statement. You only need to do this once for each target table.
    Enable error logging in your DML statement: add the LOG ERRORS clause to your INSERT, UPDATE, or DELETE statement. This tells the database to use the error logging table you created.

Let's try it with a sample table:
SQL> CREATE TABLE contacts (
    contact_id NUMBER PRIMARY KEY,
    first_name VARCHAR2(50),
    last_name VARCHAR2(50)
);  

Table created.
Create an error log table for the contacts table using the
DBMS_ERRLOG.CREATE_ERROR_LOG procedure. This only needs to be done once per target table.
SQL> BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG('contacts', 'contacts_err_log');
END;
/  
PL/SQL procedure successfully completed.


Insert data to contacts table
SQL> INSERT INTO contacts (contact_id, first_name, last_name) VALUES (1, 'John', 'Doe') LOG ERRORS INTO contacts_err_log ('bulk_insert_test') REJECT LIMIT UNLIMITED;
1 row created.

SQL> INSERT INTO contacts (contact_id, first_name, last_name) VALUES (2, NULL, 'Smith') LOG ERRORS INTO contacts_err_log ('bulk_insert_test') REJECT LIMIT UNLIMITED;
1 row created.

SQL> INSERT INTO contacts (contact_id, first_name, last_name) VALUES (1, 'Jane', 'Doe') LOG ERRORS INTO contacts_err_log ('bulk_insert_test') REJECT LIMIT UNLIMITED;
0 rows created.

SQL> INSERT INTO contacts (contact_id, first_name, last_name) VALUES (3, 'Peter', 'Jones') LOG ERRORS INTO contacts_err_log ('bulk_insert_test') REJECT LIMIT UNLIMITED;
1 row created.

SQL> commit;
Commit complete.

Let's check the error details from the error log table:

SQL> SELECT *  FROM contacts_err_log;
   ORA_ERR_NUMBER$ ORA_ERR_MESG$                                                            ORA_ERR_ROWID$         OR ORA_ERR_TAG$                 CONTACT_ID               FIRST_NAME     LAST_NAME
------------------ ------------------------------------------------------------------------ ---------------------- -- ---------------------------- ------------------------ -------------- ----------------
                 1 ORA-00001: unique constraint (SYS.SYS_C008882) violated                                         I  bulk_insert_test             1                        Jane           Doe


The output will show you the specific error codes and messages for each failed row, allowing you to easily identify and correct the data issues.
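
The same clause also works for UPDATE and DELETE, with one caveat: unique-constraint violations raised by an UPDATE are not logged and still fail the statement. A small illustration that funnels a "value too large" error into the same log table (the tag is arbitrary):

SQL> UPDATE contacts SET last_name = RPAD('Y', 60, 'Y') WHERE contact_id = 3 LOG ERRORS INTO contacts_err_log ('update_test') REJECT LIMIT UNLIMITED;

0 rows updated.

The offending row lands in contacts_err_log tagged 'update_test' with an ORA-12899 entry, while the statement itself completes without raising an error.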


Sunday, 20 July 2025

Oracle Optimizer Hints

What Exactly is an Optimizer Hint?
A hint is a comment placed within an SQL statement (after SELECT, UPDATE, INSERT, MERGE, or DELETE) that instructs the Cost-Based Optimizer (CBO) to use a particular execution approach, even if the CBO might calculate that path to be more "expensive."
Hints are enclosed in special comment syntax: /*+ hint_name(arguments) */.
For example, to prioritize the fastest return of the first 10 rows:
SELECT /*+ FIRST_ROWS(10) */ employee_id, last_name FROM employees;
While powerful for testing and troubleshooting, remember: hints are extra code to manage and can become obsolete or even detrimental after database changes. For long-term stability, tools like SQL Plan Baselines are often preferred.

Types of Hints: Scope of Influence
Oracle hints are categorized by the scope of the SQL they affect, determining the specific part of the statement they influence:
    Single-Table Hints: These are specified on a single table or view within the query. They are used to influence operations directly against that specific object. An example is the INDEX hint, which instructs the optimizer to use a particular index when accessing the table.
    Multi-Table Hints: Similar to single-table hints, but they can specify one or more tables or views, often to influence the relationship or order between them. The LEADING hint is a prime example, used to enforce a specific order in which tables are joined. Note that a hint like USE_NL(table1 table2) is often just a shorthand for two single-table hints.
    Query Block Hints: These hints operate on an entire query block, such as the main query or a named subquery. They typically affect transformations or high-level execution strategies for that block. Examples include STAR_TRANSFORMATION and UNNEST.
    Statement Hints: These apply to the entire SQL statement, regardless of complexity or query blocks. They often control global execution parameters. The ALL_ROWS hint, which optimizes for best overall throughput rather than fastest initial response, is a common statement hint.
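
To make these scopes concrete, here is a sketch combining a statement hint, a multi-table join-order hint, and single-table hints (the tables are the familiar HR sample schema; the index name is hypothetical):

SELECT /*+ ALL_ROWS LEADING(d e) USE_NL(e) INDEX(e emp_department_ix) */
       e.employee_id, e.last_name, d.department_name
FROM   departments d
JOIN   employees e ON e.department_id = d.department_id
WHERE  d.location_id = 1700;

Here ALL_ROWS applies to the whole statement, LEADING fixes the join order (departments first), and USE_NL and INDEX steer how the employees table is joined and accessed.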

Hints by Category:
Oracle's many hints are logically grouped into categories, providing granular control over different phases of query optimization and execution:
1. Hints for Optimization Approaches and Goals:
These hints set the overarching objective for the optimizer, overriding the session or system settings:
    ALL_ROWS
    FIRST_ROWS(n)
2. Hints for Enabling Optimizer Features
This hint allows you to maintain plan stability across database versions:
    OPTIMIZER_FEATURES_ENABLE
3. Hints for Access Paths
These hints instruct the optimizer to use a specific method for accessing data in a table:
    FULL
    CLUSTER
    HASH
    INDEX and NO_INDEX
    INDEX_ASC and INDEX_DESC
    INDEX_COMBINE
    INDEX_JOIN
    INDEX_FFS and NO_INDEX_FFS
    INDEX_SS and NO_INDEX_SS
    INDEX_SS_ASC and INDEX_SS_DESC
4. Hints for Join Orders
These hints suggest the order in which tables should be joined:
    LEADING
    ORDERED
5. Hints for Join Operations
These hints instruct the optimizer on the algorithm to use for joining tables:
    USE_NL and NO_USE_NL
    USE_NL_WITH_INDEX
    USE_MERGE and NO_USE_MERGE
    USE_HASH and NO_USE_HASH
6. Hints for Online Application Upgrade
These specialized hints manage concurrency during Edition-Based Redefinition (EBR) upgrades:
    CHANGE_DUPKEY_ERROR_INDEX
    IGNORE_ROW_ON_DUPKEY_INDEX
    RETRY_ON_ROW_CHANGE
7. Hints for Parallel Execution
These hints control whether and how the SQL statement should be executed in parallel:
    PARALLEL and NO_PARALLEL
    PARALLEL_INDEX and NO_PARALLEL_INDEX
    PQ_DISTRIBUTE
8. Hints for Query Transformations
These hints allow you to enable or disable specific logical rewrites of the SQL statement:
    NO_QUERY_TRANSFORMATION
    USE_CONCAT
    NO_EXPAND
    REWRITE and NO_REWRITE
    MERGE and NO_MERGE
    STAR_TRANSFORMATION and NO_STAR_TRANSFORMATION
    FACT and NO_FACT
    UNNEST and NO_UNNEST
9. Additional Hints
This is a collection of hints for various other specific needs and behaviors:
    APPEND, APPEND_VALUES, and NOAPPEND
    CACHE and NOCACHE
    PUSH_PRED and NO_PUSH_PRED
    PUSH_SUBQ and NO_PUSH_SUBQ
    QB_NAME
    CURSOR_SHARING_EXACT
    DRIVING_SITE
    DYNAMIC_SAMPLING
    MODEL_MIN_ANALYSIS

A key addition in Oracle Database 23ai is related to vector data processing, essential for AI-driven similarity searches:
    VECTOR_INDEX_SCAN: This hint instructs the optimizer to explicitly use a Vector Index to satisfy the query. This is vital for accelerating similarity searches on vector data, ensuring the database leverages its specialized vector processing capabilities for these modern data types.
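
As a sketch of how this might look (the table, column, and bind names are made up, and the exact hint argument syntax should be verified against the 23ai documentation):

-- Approximate similarity search that should use the vector index
SELECT /*+ VECTOR_INDEX_SCAN(d) */
       d.doc_id, d.title
FROM   docs d
ORDER  BY VECTOR_DISTANCE(d.embedding, :query_vec, COSINE)
FETCH APPROX FIRST 10 ROWS ONLY;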

Why Local Indexes Stay Usable in Oracle Partitioned/Subpartitioned Tables

Understanding the Players: Partitioning and Indexes
Before we jump into TRUNCATE, let's quickly recap the two stars of our show:
    Partitioning: Imagine your massive database table as a giant book. Partitioning is like dividing that book into smaller, more manageable chapters (partitions) and even sections within those chapters (subpartitions). This helps with performance, maintenance, and data lifecycle management.
    Indexes: Indexes are like the index at the back of our book. They help Oracle quickly find specific rows without scanning the entire table.
Now, when you have a partitioned table, you can have two main types of indexes:
 Global Indexes: These are like a single, comprehensive index for the entire book. They don't care about your chapters; they just list every keyword and its page number, no matter which chapter it's in.
 Local Indexes: These are the game-changers for partitioned tables. With a local index, for every chapter (or sub-chapter), you get its own mini-index. So, "Chapter 1 Index," "Chapter 2 Index," and so on. Critically, the local index is partitioned exactly the same way as its table. This means the index entries for data in Chapter 1 only reside in Chapter 1 Index.

Power of TRUNCATE
The TRUNCATE command is a Data Definition Language (DDL) operation. Unlike DELETE (which logs every row deletion and generates undo), TRUNCATE rapidly deallocates all space used by the data in a table or partition/subpartition. It essentially "resets" the high-water mark, making it incredibly efficient for emptying large segments.

Why TRUNCATE Keeps Local Indexes Usable
This is where the magic happens! When you TRUNCATE a subpartition (or a partition):
 Direct Correspondence: Because a local index subpartition is explicitly linked, one-to-one, with its corresponding table subpartition, Oracle knows exactly which part of the index needs to be cleared.
 Atomic Operation: When you issue TRUNCATE SUBPARTITION, Oracle doesn't just clear the data. It simultaneously and atomically truncates the associated local index subpartition. The index entries belonging to that specific data segment are removed right alongside the data.
 No Inconsistency: Since the index entries are removed from their respective local index subpartition, there are no "dangling pointers" or inconsistencies. The local index remains valid and perfectly synchronized with its remaining data. Other local index subpartitions (for untouched data) are completely unaffected.
 No Rebuild Needed: This is the best part! Because the local index subpartition is reset and ready, there's no need for a costly ALTER INDEX REBUILD operation. Your index is immediately usable for any new data inserted into the truncated subpartition.

Contrast this with Global Indexes: If you TRUNCATE a subpartition when you have a global index, that global index will become UNUSABLE. Why? Because it contains entries from all subpartitions. When one goes away, the global index is left with pointers to non-existent data, making it inconsistent and requiring a full rebuild.
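
A minimal sketch of both behaviours, with hypothetical table and subpartition names:

-- Local indexes: the matching index subpartition is truncated along with the data
ALTER TABLE sales TRUNCATE SUBPARTITION p2024_q1_east;

-- If global indexes exist, add UPDATE INDEXES to keep them usable
-- (otherwise they are marked UNUSABLE and need a rebuild)
ALTER TABLE sales TRUNCATE SUBPARTITION p2024_q1_east UPDATE INDEXES;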

Sunday, 13 July 2025

What is "enq: TX - index contention"?

At its core, "enq: TX - index contention" signifies that multiple sessions are trying to acquire a Transaction (TX) enqueue on the same index block concurrently. An enqueue is a locking mechanism used by Oracle to protect shared resources. When sessions are waiting on a TX enqueue for an index, it means they are effectively "stuck" trying to modify (insert, update, or delete) rows that would impact the same small portion of an index.  

Think of it like this: Imagine a very popular book in a library. Many people want to borrow it, but there's only one copy. Everyone lines up, waiting for their turn. In our database analogy, the "book" is a specific index leaf block, and the "people" are concurrent transactions trying to write to it. The "line" is the "enq: TX - index contention" wait.

The Root Causes: Why Does This Happen?
"enq: TX - index contention" isn't random; it stems from specific access patterns that create "hot spots" within your indexes.

1. The Monotonically Increasing Key (The Most Common Culprit)
This is by far the leading cause. When you have an index built on a column whose values are always increasing, like:
    Sequence-generated Primary Keys (e.g., ID NUMBER DEFAULT MY_SEQ.NEXTVAL)
    DATE or TIMESTAMP columns using SYSDATE / SYSTIMESTAMP
    Application-generated unique identifiers that increment sequentially

Every new INSERT operation will attempt to add a new entry to the "right-hand side" or the highest value end of the index. If you have many sessions inserting concurrently, they all converge on the same few index leaf blocks at the "end" of the index. This creates a severe bottleneck as sessions contend for exclusive access to these blocks, leading to:
    Frequent Block Splits: As the hot block fills up, Oracle has to split it into two blocks to make room for new entries. This is an expensive operation and exacerbates contention.
    Increased Buffer Cache Activity: The hot block is constantly being read into and written from the buffer cache.
    High Interconnect Traffic (in RAC): In a RAC environment, these hot blocks are constantly "ping-ponging" between instances as different nodes try to insert into them, leading to high "gc buffer busy" waits in addition to "enq: TX" waits.

Example Scenario:
Imagine an ORDERS table with ORDER_ID as its primary key, populated by ORDER_SEQ.NEXTVAL. In a busy e-commerce application, hundreds of orders are placed per second. Each INSERT INTO ORDERS (ORDER_ID, ...) attempts to write a new, higher ORDER_ID into the ORDER_PK index. All these inserts bottleneck on the same few index leaf blocks, resulting in "enq: TX - index contention."

2. Heavy Deletes Followed by Inserts (Index Fragmentation)
While less common than monotonic keys, this can also contribute. If you perform large-scale DELETE operations on a table, the index entries are marked as deleted but the space isn't immediately reclaimed. Subsequent INSERT operations might then try to reuse this freed space. If many concurrent sessions are attempting to insert into these "fragmented" or "sparse" index blocks, it can lead to contention as they fight for ITL (Interested Transaction List) slots or space within those blocks.

3. Low PCT_FREE for the Index
PCT_FREE specifies the minimum percentage of free space reserved in each data block for future updates. While more critical for data blocks, a very low PCT_FREE on an index could theoretically contribute to contention if updates to existing index entries cause the blocks to become excessively full, leading to more frequent splits or difficulty in allocating ITL slots for concurrent transactions. This is a less frequent direct cause but can be a contributing factor.
The Impact: How Does It Hurt Performance?
The consequences of "enq: TX - index contention" are significant:
    Reduced Throughput: Transactions wait, slowing down overall DML operations.
    Increased Response Times: User experience suffers as applications become sluggish.
    High CPU Utilization: Index block splits, consistent read generation, and frequent buffer cache operations consume more CPU.
    Increased I/O (Indirectly): While primarily a CPU/contention issue, increased block splits can indirectly lead to more I/O.
    RAC-Specific Woes: In RAC, this contention translates directly to high "gc buffer busy" waits, as instances spend excessive time transferring the same hot blocks back and forth across the interconnect.

The Fixes: Solutions to Alleviate Index Contention
Once you've identified "enq: TX - index contention" as a problem (typically via AWR reports showing high waits for this event and pinpointing the exact index), here are the most effective strategies:
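
Before applying a fix, confirm which index is actually hot. One way is Active Session History (requires the Diagnostics Pack license); a sketch:

SELECT o.owner, o.object_name, o.subobject_name, COUNT(*) AS samples
FROM   v$active_session_history ash
JOIN   dba_objects o ON o.object_id = ash.current_obj#
WHERE  ash.event = 'enq: TX - index contention'
GROUP  BY o.owner, o.object_name, o.subobject_name
ORDER  BY samples DESC;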

Solution 1: Reverse Key Indexes (The Go-To for Monotonic Keys)
    A reverse key index physically reverses the byte order of the column's values before storing them in the index. This effectively scrambles the sequence of monotonically increasing keys, distributing them randomly across the index's leaf blocks instead of concentrating them at one end.
    When to Use: Ideal for indexes on sequence-generated primary keys or other monotonically increasing values where range scans are not critical.
    -- Original index (if it exists)
    DROP INDEX my_table_pk;
    -- Create a reverse key index
    CREATE UNIQUE INDEX my_table_pk ON my_table (id) REVERSE;
    Pros: Highly effective at eliminating hot blocks and reducing contention for inserts.
    Cons: Significantly degrades performance for index range scans (e.g., WHERE ID BETWEEN 100 AND 200). This is because the values are no longer stored in logical order. You would need to perform a full index scan or a full table scan for such queries, which can be much slower.
    Consideration: Use this only if your application primarily performs direct lookups (WHERE ID = 123) and high-volume inserts. If range scans are frequent, look at other solutions first.

Example:
If ORDER_ID is a sequence, creating CREATE UNIQUE INDEX ORDER_PK ON ORDERS (ORDER_ID) REVERSE; would distribute new order inserts across the index. Queries like SELECT * FROM ORDERS WHERE ORDER_ID = 54321; would still be efficient, but SELECT * FROM ORDERS WHERE ORDER_ID BETWEEN 1000 AND 2000; would likely be slow.

Solution 2: Hash Partitioned Global Indexes (Requires Partitioning License)
    Concept: If you have the Oracle Partitioning option, you can create a Global Hash Partitioned Index. Oracle applies a hash function to the index key, distributing the index entries across multiple partitions. This effectively spreads out the hot spot caused by monotonic keys over multiple physical index segments.
    When to Use: When you need the benefits of distributing contention but also require efficient range scans (which reverse key indexes sacrifice).

    -- Assuming MY_TABLE is already hash partitioned by some column,
    -- or you just want to partition the index itself.
    CREATE UNIQUE INDEX my_table_pk ON my_table (id)
    GLOBAL PARTITION BY HASH (id)
    (
        PARTITION p1,
        PARTITION p2,
        PARTITION p3,
        PARTITION p4
    );

    Pros: Excellent for distributing inserts and maintaining good range scan performance.
    Cons: Requires the Oracle Partitioning option (additional license). Can be more complex to manage than simple indexes.

Solution 3: Increase Sequence CACHE Size
    If your index key is derived from a sequence (e.g., MY_SEQ.NEXTVAL), increasing the CACHE size for that sequence can help. Instead of fetching one number at a time, each session (or instance in RAC) can fetch a block of numbers (e.g., 100 or 1000). This reduces the contention on the sequence object itself. More importantly, in RAC, using CACHE with NOORDER allows different instances to use different ranges of sequence numbers simultaneously, indirectly distributing the inserts across the index.
    When to Use: Always consider this if sequences are generating your contended keys.
    ALTER SEQUENCE my_schema.my_sequence CACHE 1000 NOORDER;
    (Adjust 1000 based on your concurrency, often 10000 or more for very high volumes).
    Pros: Simple to implement, no impact on index scan performance.
    Cons: Does not directly address the index block contention but reduces pressure on the sequence and can help distribute inserts if combined with other factors like different session start times. It's more of a complementary solution.

Solution 4: Rebuild/Shrink Indexes After Large Deletes
If your contention arises after massive DELETE operations (which leave "holes" in the index), rebuilding or shrinking the index reclaims the space and reorganizes the index structure. This can reduce contention by ensuring blocks are full and contiguous, minimizing splits when subsequent inserts occur.
    When to Use: After significant data purges.

    ALTER INDEX my_schema.my_index REBUILD ONLINE; -- Rebuilds index
    -- OR
    ALTER INDEX my_schema.my_index SHRINK SPACE; -- Shrinks index (less disruptive)

    Pros: Reclaims space, improves index efficiency, can reduce future block splits.
    Cons: Rebuilds can be resource-intensive and lock the index (though ONLINE mitigates this).

Solution 5: Increase PCT_FREE for the Index (Less Common Primary Fix)
PCT_FREE reserves space within blocks. If index blocks are consistently filling up too fast, increasing PCT_FREE provides more space for new entries and ITL slots, potentially reducing immediate contention.

When to Use: If analysis shows contention is due to rapid block filling and ITL waits.
    ALTER INDEX my_schema.my_index PCTFREE 20; -- Default is 10
    Pros: Simple change.
    Cons: Can increase index size (less efficient storage), and often not the primary solution for monotonic key contention.


Monday, 30 June 2025

Oracle Indexes: What BLEVEL, LEAF_BLOCKS, and CLUSTERING_FACTOR Really Tell You

BLEVEL: The GPS Navigation System
Imagine you're trying to find a book in a massive library. BLEVEL tells you how many floors you need to climb to reach your destination.

What BLEVEL really means:
- 0: You're already on the right floor (tiny indexes)
- 1: One elevator ride to the correct section (most common)
- 2+: Multiple elevator rides and stair climbs (very large tables)

Real Story:  BLEVEL is your index's "navigation depth." It counts how many levels Oracle must traverse from the root block to reach your data. Think of it as the number of "hops" your query makes through the index structure.
When to Worry:  A BLEVEL that suddenly jumps from 1 to 3 without a corresponding data increase might signal index fragmentation. But don't panic—modern Oracle versions handle this pretty well.

LEAF_BLOCKS: The Storage Bill
LEAF_BLOCKS is straightforward: it's how much disk space your index is consuming at the leaf level.
The Simple Truth:  More data = more leaf blocks = bigger index size.

Performance Reality Check:
- Single-row lookups: Don't care much about leaf blocks (you only read one)
- Range scans: Care a lot (you read multiple blocks)

Pro Tip: If your LEAF_BLOCKS is growing faster than your table data, you might be over-indexing or indexing the wrong columns.

CLUSTERING_FACTOR: The Performance Crystal Ball
This is where things get interesting. CLUSTERING_FACTOR is like a fortune teller for your query performance.
CLUSTERING_FACTOR = Number of times Oracle switches table blocks when reading index entries in order

What the Numbers Mean:
- Low CF (close to LEAF_BLOCKS): Your data is well-organized—like books sorted by author on library shelves
- High CF (close to table NUM_ROWS): Your data is scattered—like books randomly placed throughout the library

The Performance Impact: Range scans become expensive when Oracle has to jump between hundreds of different table blocks instead of reading consecutive blocks.

Let's Build a Real Example

Step 1: Create Our Test Environment
CREATE TABLE employees_test (
    employee_id NUMBER(6) PRIMARY KEY,
    first_name VARCHAR2(20),
    last_name VARCHAR2(25),
    email VARCHAR2(25),
    phone_number VARCHAR2(20),
    hire_date DATE,
    job_id VARCHAR2(10),
    salary NUMBER(8,2),
    commission_pct NUMBER(2,2),
    manager_id NUMBER(6),
    department_id NUMBER(4)
);

-- Populate with 100,000 employees
BEGIN
    FOR i IN 1..100000 LOOP
        INSERT INTO employees_test VALUES (
            i,                                    -- Sequential employee_id (good clustering)
            'FirstName' || i,
            'LastName' || i,
            'email' || i || '@example.com',
            '555-123-' || LPAD(i, 4, '0'),
            SYSDATE - DBMS_RANDOM.VALUE(1, 365*5),
            CASE MOD(i, 5) WHEN 0 THEN 'IT_PROG' ELSE 'SA_REP' END,
            ROUND(DBMS_RANDOM.VALUE(3000, 15000), 2),
            ROUND(DBMS_RANDOM.VALUE(0, 0.4), 2),
            CASE WHEN MOD(i, 100) = 0 THEN NULL 
                 ELSE ROUND(DBMS_RANDOM.VALUE(100, 200)) END,
            CASE MOD(i, 3) WHEN 0 THEN 90 
                           WHEN 1 THEN 60 
                           ELSE 30 END            -- Scattered department_id (poor clustering)
        );
    END LOOP;
    COMMIT;
END;
/

-- Create our test index
CREATE INDEX idx_emp_dept_id ON employees_test (department_id);
-- Gather fresh statistics
EXEC DBMS_STATS.GATHER_TABLE_STATS(
    OWNNAME => USER, 
    TABNAME => 'EMPLOYEES_TEST', 
    CASCADE => TRUE
);

Step 2: The Moment of Truth
Now let's see what our indexes look like:
-- Check our table stats
SELECT table_name, blocks, num_rows  FROM dba_tables  WHERE table_name = 'EMPLOYEES_TEST';
TABLE_NAME                           BLOCKS   NUM_ROWS
-------------------------------- ---------- ----------
EMPLOYEES_TEST                         1467     100000

-- Check our index stats
SELECT index_name, table_name, CLUSTERING_FACTOR,blevel, leaf_blocks, num_rows, distinct_keys FROM user_indexes WHERE table_name = 'EMPLOYEES_TEST';
INDEX_NAME                       TABLE_NAME                       CLUSTERING_FACTOR     BLEVEL LEAF_BLOCKS   NUM_ROWS DISTINCT_KEYS
-------------------------------- -------------------------------- ----------------- ---------- ----------- ---------- -------------
SYS_C008470                      EMPLOYEES_TEST                                1464          1         187     100000        100000
IDX_EMP_DEPT_ID                  EMPLOYEES_TEST                                4392          1         196     100000             3

Decoding the Results
Good Guy:  Primary Key Index (SYS_C008470)
BLEVEL = 1:  Perfect! One hop to reach any employee ID
LEAF_BLOCKS = 187:  Reasonable size for 100,000 unique values
CLUSTERING_FACTOR = 1464:  Excellent! Very close to table blocks (1467)
Why This Rocks:  Employee IDs were inserted sequentially (1, 2, 3...), so the physical table order matches the index order. When you query `WHERE employee_id BETWEEN 1000 AND 1050`, Oracle reads consecutive table blocks—like reading pages in a book.
Troublemaker:  Department Index (IDX_EMP_DEPT_ID)
BLEVEL = 1: Same efficient navigation
LEAF_BLOCKS = 196: Slightly larger (non-unique values)
CLUSTERING_FACTOR = 4392: Uh-oh! Much higher than table blocks
Why This Hurts: Department IDs are scattered (90, 60, 30, 90, 60, 30...). A query like `WHERE department_id = 90` will find all matching rows in the index, but then jump all over the table to fetch them—like trying to read a book where every page is in a different room.

Performance Showdown
Let's see these differences in action with real explain plans:
Fast Query (Good Clustering)
SELECT * FROM employees_test WHERE employee_id BETWEEN 1000 AND 1050;
Explain Plan:
Plan hash value: 240947081
------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                |    52 |  5200 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| EMPLOYEES_TEST |    52 |  5200 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | SYS_C008470    |    52 |       |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
   2 - access("EMPLOYEE_ID">=1000 AND "EMPLOYEE_ID"<=1050)

Scorecard:
- Cost: 3 - Lightning fast
- INDEX RANGE SCAN - Oracle loves this index
- 52 rows retrieved efficiently

Slow Query (Poor Clustering)

SELECT * FROM employees_test  WHERE department_id = 90;
Explain Plan:
Plan hash value: 304096605
------------------------------------------------------------------------------------
| Id  | Operation         | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                | 33333 |  3255K|   400   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| EMPLOYEES_TEST | 33333 |  3255K|   400   (1)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
   1 - filter("DEPARTMENT_ID"=90)

Scorecard:
- Cost: 400 - 133x slower!
- TABLE ACCESS FULL - Oracle completely ignored the index!
- 33,333 rows processed with full table scan

Plot Twist:  Why Oracle Ignored the Index
Here's the fascinating part:  Oracle's optimizer looked at the department index and said, "No thanks, I'll do a full table scan instead."

Math Behind the Decision:
- Department 90:  ~33,333 rows (1/3 of 100,000)
- Clustering Factor:  4,392 (very high)
- Table Blocks:  1,467
Oracle calculated that using the index would require reading nearly 4,400 different table blocks randomly. A full table scan (1,467 blocks sequentially) was actually faster! 

Lesson: High clustering factors can make indexes so inefficient that the optimizer prefers full table scans. This is why monitoring CLUSTERING_FACTOR is crucial.
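
To run this check against your own schema, compare each index's CLUSTERING_FACTOR to the table's BLOCKS and NUM_ROWS; a ratio near 1 is ideal (a quick sketch using the test table above):

SELECT i.index_name,
       i.clustering_factor,
       t.blocks,
       t.num_rows,
       ROUND(i.clustering_factor / NULLIF(t.blocks, 0), 1) AS cf_to_blocks
FROM   user_indexes i
JOIN   user_tables t ON t.table_name = i.table_name
WHERE  t.table_name = 'EMPLOYEES_TEST'
ORDER  BY cf_to_blocks DESC;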

Sunday, 15 June 2025

Global Hash-Partitioned Index

What is a Global Hash-Partitioned Index?
A global index is a single index structure that spans across all partitions of a partitioned table. It's not tied to the table's partitioning strategy.

Hash Partitioning:
The index itself is divided into a fixed number of partitions using a hash algorithm on the index key. This aims to distribute index entries evenly across these index partitions.
Independence from Table Partitioning: The partitioning of a global index does not necessarily align with the partitioning of the underlying table. You can have a range-partitioned table with a hash-partitioned global index, for example.
Supports Unique Constraints: Global indexes can enforce unique constraints across the entire table, even if the table is partitioned. This is a key advantage over local indexes in certain scenarios where the unique key doesn't include the table's partition key.

How do Global Hash-Partitioned Indexes Improve Performance?
Global hash-partitioned indexes are particularly beneficial in specific scenarios, primarily by:
Reducing Contention for "Hot Blocks" (Especially during Inserts):
In OLTP (Online Transaction Processing) environments, where there are a high number of concurrent inserts into a table, especially when the indexed column is a sequence-generated primary key (e.g., an IDENTITY column or a sequence NEXTVAL), a standard B-tree index can become a "hot spot." This is because new entries are constantly being added to the "rightmost" leaf block of the index, leading to contention for that single block.

Hash partitioning distributes these new index entries across multiple index partitions (and thus multiple physical blocks) based on the hash of the key. This spreads the insert workload, reducing contention, "buffer busy waits," and improving overall insert throughput. This is often referred to as addressing the "90-10 split problem."
Improving Point Queries:
For equality queries (e.g., WHERE index_column = :value), the database can quickly calculate the hash of the value and go directly to the specific index partition where that value's entry is located. This allows for efficient access to data, often with fewer logical I/Os compared to a non-partitioned index on a very large table, as the index tree to traverse within a partition is smaller.
Facilitating Parallelism:
While not as inherently parallel for range scans as local non-prefixed indexes, global hash-partitioned indexes can still benefit from parallel query execution for certain operations, as the workload can be distributed across the index partitions.
Managing Large Indexes:
Partitioning a very large index into smaller, more manageable pieces can simplify maintenance tasks (e.g., rebuilding a single partition instead of the entire index), though global indexes do require more manual maintenance if table partitions are dropped or added, as the global index may become unusable and need rebuilding or UPDATE INDEXES clause to stay valid.

When to Consider Using Global Hash-Partitioned Indexes:

High-volume OLTP systems: Especially when suffering from "hot block" contention on indexes due to sequential key generation.
Indexes on non-partitioning key columns: When you need a unique constraint or fast access on columns that are not part of the table's partitioning key.
Queries primarily involving equality (=) or IN list predicates: These types of queries can leverage partition pruning on the index. Range predicates (like BETWEEN, >, <) generally won't benefit from hash partition pruning on the index side.
When even data distribution is crucial: Hash partitioning aims to distribute data evenly, which can be beneficial for load balancing. For optimal distribution, the number of hash partitions should ideally be a power of 2 (e.g., 2, 4, 8, 16, 32).
When the partitioning key column has high cardinality and is fairly unique.
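
For reference, the DDL itself is compact; a sketch with hypothetical table and column names, using a power-of-2 partition count as recommended above:

CREATE UNIQUE INDEX orders_pk_ghx ON orders (order_id)
GLOBAL PARTITION BY HASH (order_id)
PARTITIONS 8;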


Potential Drawbacks and Considerations:

No Partition Pruning for Range Scans: A significant limitation is that global hash-partitioned indexes typically do not benefit from partition pruning for range queries. If your queries frequently use BETWEEN, >, or < on the indexed column, a different index strategy (like a global range-partitioned index or a local index if applicable) might be more suitable.

Maintenance Overhead: When table partitions are dropped or truncated, global indexes (both hash and range) can become UNUSABLE and require rebuilding or an UPDATE INDEXES clause during the table partition operation. This can lead to downtime for large indexes. Local indexes are often preferred for tables where partitions are frequently added/dropped (e.g., historical data).
Choosing the Right Partitioning Key: Just like with table partitioning, selecting an appropriate partitioning key for the index is crucial for achieving even data distribution and performance benefits. A poor choice can lead to skewed data and "hot" index partitions, negating the benefits.
Complexity: Managing partitioned indexes adds a layer of complexity to database administration.

Saturday, 31 May 2025

Understanding GV$INSTANCE_PING: RAC Interconnect Performance Monitoring

In an Oracle Real Application Clusters (RAC) environment, maintaining optimal interconnect performance is crucial for database operations. The GV$INSTANCE_PING view serves as a vital tool for Database Administrators (DBAs) to monitor and analyze the latency between RAC instances. This blog post explores how this view helps DBAs ensure optimal RAC performance.

What is GV$INSTANCE_PING?
GV$INSTANCE_PING is a dynamic performance view that provides detailed information about the measured latency of the interconnect between all instances in an Oracle RAC environment. Every few seconds, the PING process of each instance checks the response time of the interconnect to all other instances in the same database.

Key Features and Measurements
The view tracks two types of messages:
1. 500-byte messages (500B): smaller test messages
2. 8-kilobyte messages (8K): larger test messages

For each message type, the view provides comprehensive metrics:
Current Performance Metrics
- CURRENT_500B and CURRENT_8K: Latest response times in microseconds
- AVERAGE_500B and AVERAGE_8K: Average response times since instance startup
- MAX_500B and MAX_8K: Maximum observed response times

Cumulative Statistics
- COUNT_500B and COUNT_8K: Number of measurements taken
- WAIT_TIME_500B and WAIT_TIME_8K: Total accumulated response times
- WAIT_TIME_SQUARED_500B and WAIT_TIME_SQUARED_8K: Sum of squared response times (useful for statistical analysis)

Let's analyze an actual output from a 4-node RAC environment:
SQL> select * from GV$INSTANCE_PING;
   INST_ID   INSTANCE CURRENT_500B AVERAGE_500B   MAX_500B COUNT_500B WAIT_TIME_500B WAIT_TIME_SQUARED_500B CURRENT_8K AVERAGE_8K     MAX_8K   COUNT_8K WAIT_TIME_8K WAIT_TIME_SQUARED_8K     CON_ID
---------- ---------- ------------ ------------ ---------- ---------- -------------- ---------------------- ---------- ---------- ---------- ---------- ------------ -------------------- ----------
         2          1            7            7        250     370458        2924730                   1253          8          8        941     370458      3380206                 2547          0
         2          2            3            3       2249     370458        1217885                   5695          4          4        551     370458      1539529                  881          0
         2          3            7            7        304     370457        2887515                   1393          8          8      14780     370457      3389103               220212          0
         2          4            7            7        349     370458        2847595                   1298         10          8        515     370458      3313461                 1551          0
         3          1            8            7        227     370454        2952602                   1285          8          8        919     370454      3370483                 2855          0
         3          2            8            7      17647     370454        2901210                 314283          8          8       2206     370454      3416102                 6594          0
         3          3            3            3        531     370454        1226169                    760          4          3        611     370454      1557469                 1527          0
         3          4            7            7      17321     370454        2922328                 326255          8          8        617     370454      3322511                 1796          0
         4          1            7            7       1967     370454        2905185                   4652          8          8       1683     370454      3344914                 4081          0
         4          2            8            7        443     370454        2877140                   1129         22          8        709     370454      3396211                 1514          0
         4          3            7            7       1478     370454        2893488                   3060          8          9        367     370454      3372591                 1441          0
         4          4            3            3        749     370455        1211323                    790          4          4        199     370455      1517458                  421          0
         1          1            4            3        458     370459        1215589                    732          4          3        569     370459      1539772                 1330          0
         1          2            7            7        208     370458        2921436                   1109         10          9        561     370458      3448708                 1981          0
         1          3           15            7        151     370458        2929392                   1054          8          8        214     370458      3420401                 1464          0
         1          4            7            7        234     370459        2902809                   1181          8          8        203     370459      3362650                 1415          0

16 rows selected.


Key Observations from the Data:
1. Instance 2's Communication Patterns
   - Current 500B latency to Instance 1: 7 microseconds
   - Current 500B latency to Instance 2: 3 microseconds (self-ping)
   - Current 500B latency to Instance 3: 7 microseconds
   - Current 500B latency to Instance 4: 7 microseconds

2. Performance Characteristics
   - Average 500B latency is consistently 7 microseconds for most connections
   - Self-ping (Instance 2 to 2) shows better performance at 3 microseconds
   - Maximum latencies show some spikes:
     * 2249 microseconds to Instance 2
     * 304 microseconds to Instance 3
     * 349 microseconds to Instance 4

3. 8K Message Performance
   - Current 8K latencies range from 4 to 10 microseconds
   - Maximum 8K latency shows significant variation:
     * 941 microseconds to Instance 1
     * 551 microseconds to Instance 2
     * 14780 microseconds to Instance 3 (notable spike)
     * 515 microseconds to Instance 4

4. Measurement Volume
   - Approximately 370,458 measurements taken for each connection
   - Consistent measurement count across instances indicates stable monitoring

Why is it Important for DBAs?
1. Performance Monitoring
   - Helps identify interconnect latency issues
   - Enables proactive detection of performance degradation
   - Provides historical data for trend analysis

2. Troubleshooting
   - Assists in diagnosing RAC performance problems
   - Helps identify instance-to-instance communication issues
   - Enables comparison of current vs. historical performance
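
As a practical starting point for both, a simple sketch that flags interconnect paths whose 8K round trips have spiked past 1 millisecond (the threshold is an arbitrary example; values are in microseconds):

SELECT inst_id  AS source_inst,
       instance AS target_inst,
       current_8k, average_8k, max_8k
FROM   gv$instance_ping
WHERE  inst_id <> instance   -- skip self-pings
  AND  max_8k  > 1000        -- spikes over 1 ms
ORDER  BY max_8k DESC;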

Wednesday, 14 May 2025

Oracle Database 19c: Lost Write Protection Gets Smarter with AUTO

One of the most under-the-radar but important changes introduced in Oracle Database 19c Release Update (RU) 19.26 is the new default setting for the parameter DB_LOST_WRITE_PROTECT.
🔄 From NONE to AUTO: A Quiet but Powerful Shift

Previously, the default setting for DB_LOST_WRITE_PROTECT was:
DB_LOST_WRITE_PROTECT = NONE

This meant lost write detection was completely disabled by default unless explicitly turned on.

Starting with RU 19.26, Oracle has made a significant shift:
DB_LOST_WRITE_PROTECT = AUTO

This change is also reflected in Oracle Database 23ai. It’s more than just a default toggle — it's a smarter, adaptive mechanism that brings better protection with minimal performance trade-offs.
💡 What Is a Lost Write?

A Lost Write is a critical data corruption event where:
    Oracle writes a data block to storage,
    The storage subsystem acknowledges the write as successful,
    But the data block is never actually written to persistent storage.

These can be caused by memory errors, firmware bugs, OS issues, or storage controller failures. Lost writes are notoriously difficult to detect and often surface during recovery or standby failovers.

Oracle defines a lost write as:
    "A data block lost write occurs when an I/O subsystem acknowledges the completion of the block write, while in fact the write did not occur in the persistent storage."

🧠 Why AUTO Matters: Intelligent Lost Write Detection
The new AUTO value for DB_LOST_WRITE_PROTECT changes how Oracle handles lost write detection:
▶️ On a Primary Database:
    Oracle only logs buffer cache reads in the redo log if:
        Physical standby databases exist, and
        Real-time redo apply is enabled.

▶️ On a Standby Database:
    Oracle performs lost write detection only if the apply lag is within reasonable bounds.
    If apply lag exceeds:
        60 seconds, or
        2/3 of the FSFO apply lag threshold (if Fast-Start Failover is enabled),
    Then lost write detection is temporarily skipped to reduce overhead and ensure smooth role transitions.

⚙️ Comparison of DB_LOST_WRITE_PROTECT Modes
Value        Description                                                Performance Impact
NONE         No lost write detection.                                   🔋 Low
TYPICAL      Logs reads for read/write tablespaces.                     ⚖️ Moderate
FULL         Logs reads for all tablespaces (read-only + read/write).   🛠️ High
AUTO (new)   Adapts based on real-time redo and apply lag.              ⚙️ Balanced


🚨 Real-World Impact: Why It Matters
With older RUs and the default NONE, lost write protection was disabled by default — even in Data Guard environments. This left many systems unprotected by mistake, simply because DBAs weren’t aware or didn’t configure it explicitly.

Now, AUTO intelligently enables protection only when it's needed, helping DBAs avoid the all-or-nothing trade-off between performance and data integrity.
🧪 What Should You Do?

Here are a few actions to take if you’ve recently applied Oracle 19.26 or higher:
✅ 1. Check the Current Setting
SHOW PARAMETER DB_LOST_WRITE_PROTECT;

If you see AUTO, you're on the new default.
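
If you would rather pin the behaviour explicitly than rely on the adaptive default, the parameter can be changed with ALTER SYSTEM; a sketch (choose the scope that fits your change process):

-- Force classic Data Guard-style protection on primary and standby
ALTER SYSTEM SET DB_LOST_WRITE_PROTECT = TYPICAL SCOPE = BOTH;

-- Or return to the adaptive default
ALTER SYSTEM SET DB_LOST_WRITE_PROTECT = AUTO SCOPE = BOTH;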
✅ 2. Monitor Redo Generation
You may notice an increase in redo generation, especially if:
    You have a physical standby.
    Real-time redo apply is enabled.
See MOS Note: 37260974
“Database Generating Large Amount Of Redo After Applying Jan 2025 DBRU”


✅ 3. Understand the Error ORA-00752

If you ever encounter:
ORA-00752: recovery detected a lost write of a data block
Do not attempt to fix this manually. Instead:
    Shut down the primary database.
    Fail over to the standby.
    Open a support SR with Oracle immediately.