Tuesday, 2 December 2025

Oracle AI World 2025: What Really Stood Out This Year

Oracle AI World 2025 felt different. Not just because the event got a new name, or because it took over Las Vegas with thousands of attendees, or even because there were two new CEOs at the helm. It felt different because Oracle finally showed what it looks like when a company built on enterprise data decides to go all-in on AI.

Most people still think of Oracle as “the database company.” And sure—that’s part of its DNA. But this year’s event made one thing very clear: Oracle is no longer just storing enterprise data. It’s activating it. With AI. Everywhere.

Below is my take on the biggest announcements, the boldest ideas, and why Oracle AI World 2025 may be one of the company’s most important conferences in years.

Big Announcements You Should Know About
AI Agent Marketplace
This was one of the most talked-about launches—an online store of pre-built AI agents that plug directly into Fusion Apps.
And these aren’t little chatbots. These are real workflow-level agents built by partners like Deloitte, Accenture, Stripe, IBM, and others.
Think of it like an App Store, but for enterprise AI automation.

AI Agent Studio & Agent Hub
AI Agent Studio is basically the design room where you build or fine-tune AI agents for Oracle Fusion Cloud Applications. You don’t need to be a hardcore developer—it's built for no-code/low-code creation.
You can start with Oracle’s pre-built agent templates or build your own from scratch. These agents can do everything from predicting equipment failures to summarizing meetings to handling procurement tasks.
Because it’s tightly connected to Fusion Apps, the agents can securely tap into real business data and take real actions. Developers and admins use the Studio to build and deploy, while business users simply interact with the agents through AI Chat or Ask Oracle right inside their Fusion screens.

If the Studio is where you build, Agent Hub is where you run and scale.
Agent Hub, part of OCI Generative AI, helps enterprises manage, monitor, secure, and expand their AI agents across the organization. It includes an AI Agent Marketplace (so you can pull in new, pre-built agent capabilities from Oracle partners) and support for a wider range of LLMs to power more sophisticated behaviors.

Database 26ai: Oracle Rewrites Its Engine for the AI Era
The upgrade from 23ai to 26ai isn’t incremental—it’s a reinvention.

Key takeaways:

  • Vector and RAG are built into the engine
  • Iceberg support brings openness
  • It reduces data duplication
  • It enables in-database AI agents
  • It’s the foundation for Oracle’s new AI lakehouse

Some people say Oracle is behind Snowflake or Databricks in vectors/RAG.
But Oracle made a simple point:

If your entire business runs on Oracle data, do you want to move it to a new AI store…
or just bring AI to where the data already lives?

OCI Infrastructure & The Zettascale Supercomputer
OCI Dedicated Region 25: Bringing an entire OCI region into as little as three racks for sovereign cloud, low latency, and regulatory control.

NVIDIA Zettascale10 AI Supercomputer: A stunning commitment to scale: a potential world’s-largest cloud-based AI supercomputer with hundreds of thousands of NVIDIA GPUs and 16 zettaFLOPS of peak performance, built on Oracle’s low-latency Acceleron RoCE network.

Oracle AI Data Platform:
Oracle unveiled its new Autonomous AI Lakehouse—open, multicloud, Iceberg-native, and deeply integrated with Database 26ai.
Everyone is building a lakehouse these days, but Oracle’s spin is unique:
Most vendors approach AI from the unstructured-storage angle.
Oracle approaches it from the structured-enterprise-data angle.

Fusion Apps: AI at No Extra Cost (Yes, Really)
Here’s the part that shocked a lot of people:
Oracle is bundling 600+ AI agents and 100+ new AI features directly into Fusion Apps for no additional license fee.
No AI surcharge.
No per-user pricing.
No “AI edition.”

Final Thoughts: Oracle Is Playing the Long Game
Oracle AI World 2025 felt like a company fully aligned around AI—from Larry’s long-term industry vision to Clay’s infrastructure strategy to Sicilia’s application intelligence.

If you’re an enterprise already in the Oracle ecosystem, the value story is compelling:

  • AI agents included
  • AI in the database
  • AI in the lakehouse
  • AI in every Fusion workflow
  • AI-ready cloud infrastructure
  • Multicloud flexibility


Oracle AI World 2025 made one thing clear: Oracle is no longer just keeping up in the AI race—it’s shaping its own lane.
 

Adding some of my favorite photos from Oracle AI World 2025. Speaking at the event was truly a highlight for me. 

Co-presented "Oracle AI Database Cloud Service: What's New and What's Next" alongside Binoy Sukumaran and Tushar Pandit.  

Guest Speaker: Modern DevX OraOperator + AI-Centric App Development with Oracle Database 23ai

Thursday, 27 November 2025

Java Garbage Collection (GC): How Modern JVM GC Works, Evolves, and Scales

Garbage Collection is the JVM’s silent guardian. It quietly reclaims memory from objects your application no longer needs—no manual freeing, no memory leaks (well, mostly), no pointer nightmares.

But as applications scale and heap sizes grow into gigabytes or even terabytes, those tiny moments when GC stops your application (known as Stop-The-World pauses) can become the single biggest threat to performance.

To understand why GC pauses happen—and how modern collectors like G1, ZGC, and Shenandoah nearly eliminate them—we need to start with the basics: how Java organises memory.

Java Heap: Where Objects Live and Die
The design of the Java Heap is based on one powerful, observed truth: the "Weak Generational Hypothesis"—that is, most objects die very young. This insight led to Generational Garbage Collection, where the heap is strategically partitioned based on an object's expected lifespan.


GC Roots
This is the starting line for the GC process. An object is only considered "live" if the GC can trace a path to it from one of these roots. They are the application's solid reference points, the objects that must absolutely not be collected:
    Local variables on your thread stacks.
    Static fields of loaded classes.
    Active threads and native JNI references.

Eden:
Every new object you create with new is born here. This is the most volatile area, constantly being collected by the Minor GC. It acts like a nursery where the majority of objects (≈90% or more) are created and die almost instantly, never leaving this space.

Survivor Spaces (S0 / S1):
Objects that managed to survive their first encounter with the Minor GC in Eden are moved here. They ping-pong back and forth between the two small spaces (S0 and S1). Each time an object survives this trip, its "age" counter ticks up, proving its longevity.

Old Generation: 
Objects that successfully pass a predefined age threshold (usually around 15 minor collections) are considered long-lived and are promoted to the Old Generation. This area contains the stable, long-term residents, and consequently, it is collected much less often by a Major GC or Full GC.

Metaspace: 
This area is technically outside the Heap in native system memory. Since Java 8 (it replaced the old PermGen), Metaspace holds the metadata about the classes your application loads—the structure, names, and methods. It's the blueprint archive for your application's code. 
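
To make these regions concrete, here is an illustrative set of the standard HotSpot flags that size them (the values and the jar name are placeholders, not recommendations):

java -Xms4g -Xmx4g -Xmn1g -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=15 -XX:MaxMetaspaceSize=512m -jar my-app.jar

    -Xms / -Xmx: initial and maximum total heap size (kept equal here to avoid resizing).
    -Xmn: size of the Young Generation (Eden plus both Survivor spaces).
    -XX:SurvivorRatio=8: Eden is eight times the size of each Survivor space.
    -XX:MaxTenuringThreshold=15: objects that survive this many Minor GCs are promoted to the Old Generation.
    -XX:MaxMetaspaceSize=512m: caps class metadata, which lives in native memory outside the heap.

Note that some modern collectors size the generations adaptively, so explicit generation sizing is best treated as a tuning lever rather than a default.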

GC Mechanisms: How Garbage is Found and Removed
How does the JVM actually clean up? There are three primary mechanisms that all GCs use in some combination.
Mark & Sweep:
Mark Phase: The GC walks the object graph starting from the GC Roots and marks everything reachable (live).
Sweep Phase: The GC scans the heap and reclaims memory from unmarked (garbage) objects.
The catch? This leaves the heap with Swiss-cheese-like holes, known as fragmentation. This fragmentation can lead to a dreaded Full GC when the JVM can't find a contiguous space large enough for a new object, even if there is technically enough free memory overall.

Mark–Sweep–Compact: Solving the Fragmentation Problem
To fix fragmentation, a third step is added:
    Compact Phase: All live objects are shuffled to one side of the heap, leaving the free space as one large, clean block. This is great for allocation, but compaction takes time, adding significantly to the STW pause.

The Copying Algorithm :
In the Young Generation, the JVM uses a far faster trick: copying. Instead of marking, sweeping, and compacting, it simply copies live objects from the active spaces (Eden + S0) into the empty space (S1). It then wipes the old spaces clean. Copying is naturally compacting and lightning-fast—this is why Minor GCs are usually so quick.

Tri-Color Marking:
For modern GCs (G1, ZGC, Shenandoah) to work concurrently—meaning the application runs while the GC cleans—they use Tri-Color Marking. This helps the GC understand the current state of objects even as application threads (Mutators) are busy changing references.
    White: Unvisited (suspected garbage).
    Gray: Visited, but its object references have not yet been scanned.
    Black: Visited, and all of its references have been scanned (known-live).

To prevent the application from accidentally hiding a live object (the "tri-color invariant" violation), these GCs use write barriers or load barriers—tiny, quick bits of code inserted by the JVM compiler to manage references whenever the application touches memory.

GC Evolution & Timelines:

Java Version | Collector          | Notes
Java 1.3     | Serial GC          | First simple GC
Java 1.4     | Parallel GC        | Multithreaded, throughput-focused
Java 5       | CMS                | First low-pause GC
Java 7       | G1 (experimental)  | Region-based innovation
Java 9       | G1 default         | CMS deprecated
Java 11      | ZGC (experimental) | Sub-millisecond pauses
Java 15      | ZGC GA             | Production-ready
Java 12–15   | Shenandoah         | Ultra-low latency
Java 14      | CMS removed        | End of an era


Serial GC:
Think of Serial GC as a single janitor who locks the doors before cleaning.
    The Vibe: Simple and sequential. It uses a single thread for all collection work.
    The Cost: This is the definition of a Stop-The-World (STW) pause. Every single application thread must halt for both Young and Old generation collections.
    Best For: Tiny stuff. We're talking small clients, embedded systems, or containers with heaps well under 100MB. If you have plenty of CPU cores, don't use this.
    Enable: -XX:+UseSerialGC

Parallel GC:
Parallel GC is the natural evolution of Serial: "If one thread is slow, use ten!"
    The Goal: It’s nicknamed the Throughput Collector because its mission is to maximize the total amount of work your application gets done. It does this by using multiple GC threads to speed up the collection phase.
    The Tradeoff: It still pauses the world (it’s an STW collector), but the pauses are much shorter than Serial. However, on multi-gigabyte heaps, these pauses can still be noticeable—sometimes hitting the half-second or even one-second mark.
    Mechanism: It uses multi-threaded Mark–Sweep–Compact for both Young and Old collections.
    Enable: -XX:+UseParallelGC

CMS (Concurrent Mark Sweep):
CMS was Java's first serious attempt at achieving low latency. It was a game-changer but came with baggage.
    The Breakthrough: It figured out how to do most of the marking concurrently—meaning the GC was tracking objects while your application threads were still running. This dramatically minimized the longest STW pauses.
    The Flaw: CMS was a non-compacting collector. Over time, the heap became terribly fragmented (Swiss cheese holes!). Eventually, the JVM would fail to find a large enough contiguous block for a new object, falling back to a long, single-threaded STW Full GC just to compact everything, often the longest pause the application would ever see.
    Status: Due to its complexity and fragmentation issues, CMS is considered legacy—it was deprecated in Java 9 and removed entirely in Java 14.
    Enable: -XX:+UseConcMarkSweepGC
    
G1 GC (Garbage-First):
G1 is the modern standard, a massive leap forward that shifted the focus from the whole heap to manageable regions.
    Core Idea: Instead of treating the heap as three fixed blocks (Eden/Survivor/Old), G1 carves it up into ≈2048 fixed-size regions. These regions dynamically switch roles (Young, Old, Humongous) as needed.
    Pause Prediction: G1 tracks which regions have the most garbage (the best "return on investment"). It follows the Garbage-First principle, prioritizing those regions to meet your specified pause time goal (e.g., "I promise to pause no longer than 200ms").
    Collection: It uses Evacuation (copying) to move live objects out of the selected regions. This means it compacts memory as it cleans, eliminating the fragmentation nightmare that plagued CMS. G1 is the default collector since Java 9 for a reason: it's a great all-around performer.
    Enable: -XX:+UseG1GC
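
As a rough sketch (heap sizes and the jar name are placeholders), enabling G1 with a pause-time goal and unified GC logging looks like this:

java -Xms8g -Xmx8g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xlog:gc*:file=gc.log:time,uptime,level,tags -jar my-app.jar

-XX:MaxGCPauseMillis is a goal, not a guarantee: G1 adjusts how many regions it evacuates in each cycle to try to stay under it. The unified -Xlog:gc* logging (JDK 9+) records the actual pause times so you can verify the goal is being met.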

ZGC (Ultra-Low Latency):
ZGC is the future. Its design goal was radical: pause times must be independent of the heap size. You can run a TB-sized heap, and your application will pause for the same fraction of a millisecond as a 1GB heap.
    Concurrent Everything: It does marking, relocation, and reference processing all concurrently with the application.
    The Magic: It achieves this via Colored Pointers and Load Barriers. The GC can literally move an object while your application is using it. When your code tries to access the object, the Load Barrier briefly intercepts the call, corrects the old pointer to the object's new location, and lets the application continue. The pause for this fix-up is incredibly brief.
    Pause Time: Typically well under 10 ms, and usually sub-millisecond on recent JDKs, independent of heap size. This is the choice for extreme low-latency and massive memory systems.
    Enable: -XX:+UseZGC

Shenandoah:
Developed by Red Hat (now part of OpenJDK), Shenandoah shares ZGC's goal of achieving ultra-low pause times independent of heap size.
    Similarities: It is also region-based and uses a concurrent approach.
    Distinction: Shenandoah's key innovation is its highly optimized concurrent compaction. It can perform memory consolidation while your application is fully running, ensuring the heap stays compact and healthy without any long STW events.
    Best For: Scenarios similar to ZGC—very large heaps and demanding latency requirements.
    Enable: -XX:+UseShenandoahGC
    
Sunday, 9 November 2025

OCI Ops Insights: Turning Data Into Proactive Intelligence

 What Is OCI Ops Insights? 

Ops Insights is Oracle’s intelligent observability and analytics service that provides comprehensive visibility into resource usage, capacity, and SQL performance across databases and hosts — whether they run on OCI, on-premises, or in hybrid environments.

Think of it as your command center for operational intelligence — combining analytics, automation, and AI-driven recommendations to keep your systems optimized and predictable.

Core Capabilities

 

1. Database Insights
Gain complete visibility into the performance and health of your databases.
SQL Insights – Analyze SQL performance trends, find inefficient queries, and identify tuning opportunities.
Database Performance – Track database-level metrics and diagnose bottlenecks before they impact users.
ADDM Spotlight & AWR Hub – Access Automatic Workload Repository data across your entire fleet for unified analysis.

2. Capacity Planning
Forecast capacity issues before they happen.
Monitor CPU and storage utilization across databases, hosts, and Exadata systems.
Predict growth trends to plan for future expansion or cost optimization.

3. Exadata Insights
Get specialized performance and capacity visibility for Exadata infrastructure.
Analyze workloads with Exadata Warehouse.
Explore data with Exadata Explorer to pinpoint system-level trends.

4. Dashboards & Reporting
Visualize and communicate insights effectively:
Create custom dashboards using out-of-box widgets or saved searches.
Generate news-style reports to share operational summaries with teams and management.
Use the AWR Explorer and Data Object Explorer for deep performance exploration.

5. Administration & Configuration
Seamlessly manage your monitored environment:
Configure agent-managed and Enterprise Manager-managed resources.
Enable Autonomous AI Database Full Feature for advanced analytics.
Manage endpoints, AWR Hubs, and collection configurations with ease.  


Saturday, 1 November 2025

Securing Oracle Databases with Oracle Data Safe

What Is Oracle Data Safe?
Oracle Data Safe is a cloud-based, unified security control center designed specifically for Oracle Databases — whether they reside in Oracle Cloud Infrastructure (OCI), Autonomous Database, or on-premises deployments.

It simplifies the complex, manual tasks involved in securing databases and meeting compliance requirements. With a few clicks, you can evaluate risks, analyze user privileges, discover sensitive data, apply masking policies, and audit activities.
 
Features of Oracle Data Safe:
 
🔍 1. Security Assessment
The Security Assessment feature evaluates the security posture of your Oracle Databases.
It reviews configurations, user accounts, and security controls, then provides detailed findings with actionable recommendations to reduce or mitigate risks.

Key aspects:
  • Analyzes configuration settings, user privileges, and security parameters.
  • Compares against industry frameworks like STIG, CIS Benchmarks, EU GDPR, and Oracle best practices.
  • Generates an overall Security Score and a prioritized list of vulnerabilities.
This ensures your databases consistently align with compliance standards and internal security policies.

👥 2. User Assessment
User Assessment identifies users and accounts that may pose security risks due to excessive privileges, weak authentication, or poor password practices.
It analyzes user data stored in the database dictionary and assigns a risk score to each user.

Capabilities include:
  • Identifies highly privileged or inactive accounts.
  • Evaluates password policies, authentication types, and password change frequency.
  • Links directly to related audit trail entries for deeper investigation.
This enables DBAs and security teams to implement least-privilege access controls and strengthen user governance.

🧭 3. Data Discovery

Data Discovery automates the identification of sensitive data within your Oracle Databases.
It scans both data and metadata to locate information that could fall under privacy or compliance regulations.

Highlights:
  • Detects data across multiple sensitivity categories — personal, financial, healthcare, employment, academic, and more.
  • Offers default discovery templates or lets you define custom data models to fit your organization’s classification standards.
  • Produces clear reports listing schemas, tables, and columns containing sensitive data.
With Data Discovery, you know exactly where your critical data resides — a foundational step toward compliance and data protection.
🧩 4. Data Masking
The Data Masking feature helps organizations protect sensitive data when replicating or sharing databases for development, testing, or analytics.
It replaces real values with realistic but fictitious data, maintaining referential integrity while ensuring privacy.

Key benefits:
  • Supports multiple masking formats — randomization, substitution, nullification, and lookup-based.
  • Integrates seamlessly with Data Discovery results for consistent masking policies.
  • Enables safe use of production-like data in non-production environments.
This reduces the risk of data exposure and helps organizations comply with data privacy regulations.
📜 5. Activity Auditing
Activity Auditing provides continuous visibility into who is doing what in your databases.
It captures user activities — from logins and schema changes to data queries and privilege modifications.

Capabilities:
  • Monitors database activity in real time.
  • Generates audit reports for compliance and governance reviews.
  • Detects unusual or unauthorized access patterns.
Auditing is crucial for incident investigation, accountability, and regulatory compliance.
⚡ 6. Alerts
Alerts keep you informed of unusual or high-risk database activities as they occur.
You can define custom thresholds or use predefined alert templates to detect anomalies in user behavior or database operations.
With proactive alerting, teams can respond faster to threats, minimizing potential damage and downtime.

🧱 7. SQL Firewall (New in Oracle AI Database 26ai)
The SQL Firewall introduces an advanced layer of protection directly at the SQL level, helping safeguard databases from SQL injection attacks, compromised accounts, and unauthorized queries.
Oracle Data Safe acts as the central management hub for SQL Firewall policies across all connected databases.

Capabilities:
  • Collects and baselines authorized SQL activities for each user.
  • Generates allowlist-based firewall policies that define approved SQL statements and connection paths.
  • Monitors and reports SQL Firewall violations in real time across your entire database fleet.
This feature enables a zero-trust approach to database access — ensuring only verified SQL statements are executed against your most sensitive systems.
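
Behind the scenes, the same allowlist workflow can also be driven directly in the database through the DBMS_SQL_FIREWALL PL/SQL package; Data Safe essentially centralizes and automates these steps across your fleet. A minimal sketch (APP_USER is a placeholder schema, the session is assumed to have the SQL_FIREWALL_ADMIN role, and you should verify exact signatures against your release):

-- Enable SQL Firewall and start capturing APP_USER's normal SQL
EXEC DBMS_SQL_FIREWALL.ENABLE;
EXEC DBMS_SQL_FIREWALL.CREATE_CAPTURE(username => 'APP_USER', top_level_only => TRUE, start_capture => TRUE);

-- ...run a representative application workload, then stop the capture
EXEC DBMS_SQL_FIREWALL.STOP_CAPTURE('APP_USER');

-- Turn the captured statements into an allowlist and enforce it
EXEC DBMS_SQL_FIREWALL.GENERATE_ALLOW_LIST('APP_USER');
EXEC DBMS_SQL_FIREWALL.ENABLE_ALLOW_LIST(username => 'APP_USER', enforce => DBMS_SQL_FIREWALL.ENFORCE_SQL, block => TRUE);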
 
Step-by-Step Configuration Guide:

Step 1: Access Data Safe
  • Sign in to your OCI Console with appropriate privileges (Security Administrator or tenancy-level admin).
  • In the left navigation menu, go to Oracle AI Database → Data Safe - Database Security.
 
Step 2: Register Your Database
Before you can run any assessments or audits, your database needs to be registered with Data Safe.

Supported Target Databases:
  • On-Premises Oracle AI Database
  • Oracle Autonomous AI Database on Dedicated Exadata Infrastructure 
  • Oracle Autonomous AI Database on Exadata Cloud@Customer
  • Oracle Autonomous AI Database Serverless
  • Oracle Base Database Service
  • Oracle AI Database on a compute instance in Oracle Cloud Infrastructure
  • Oracle Exadata Database Service on Cloud@Customer
  • Oracle Exadata Database Service on Dedicated Infrastructure
  • Oracle Exadata Database Service on Exascale Infrastructure
  • Amazon RDS for Oracle
  • Oracle Database@AWS
  • Oracle Database@Azure
  • Oracle Database@Google Cloud
Let's register an Autonomous Database:
In the OCI Console, navigate to Data Safe → Targets → Register Target Database.
 
For Database Type, select Autonomous Database.
Under Data Safe Target Information:
  • Choose the Compartment where your database resides.
  • Select your database from the drop-down list of available Autonomous Databases.
  • Enter a Display Name for your Data Safe target.
  • (Optional) Add a Description to help identify the purpose or environment of this database (e.g., “Data Safe practice environment”).
  • Choose a Compartment for the target registration and (Optional) apply Tags for easier management and automation.
  • Review the connection details to ensure the selected database and compartment information are correct. 

 

Click Register to complete the process.
 
 
Step 3: Explore the Data Safe Dashboard

After completing the registration, your target database will now appear in the Targets list with an Active status — confirming a successful connection to Oracle Data Safe.



Now, let’s move to the Oracle Data Safe Dashboard, the central console where you can view, monitor, and manage all your database security operations.
 
In the OCI Console, navigate to Oracle AI Database → Data Safe - Database Security → Dashboard.

This will take you to the Data Safe → Security Center → Dashboard, where you can view an integrated overview of your database security posture — including assessments, user risks, sensitive data discovery, and audit summaries across all registered databases.


You can view quick summaries such as:

Security assessment:

 

User assessment:

 

From this dashboard, you can easily navigate to each of the key features:
  • Assessments – Run or view Security and User Assessments
  • Data Discovery & Masking – Identify and protect sensitive data
  • Auditing – Monitor and analyze database activities
  • SQL Firewall & Alerts – Manage SQL protection and incident notifications

This blog covers the high-level steps to set up Oracle Data Safe.
In the next post, I will share more detailed insights and advanced configurations to get the most out of Data Safe.



Sunday, 28 September 2025

Instantly Assigning a Public IP to an Existing OCI Compute Instance

Assigning the Ephemeral Public IP

Follow these steps directly in the OCI Console:

Step 1: Navigate to Your Instance

  1. Log in to the OCI Console.

  2. Open the navigation menu and go to Compute → Instances.

  3. Select the Compartment where your instance is located.

  4. Click the Name of the target Compute instance.

Step 2: Locate the Primary VNIC

  1. On the Instance Details page, scroll down to the Resources section and click Attached VNICs.

  2. You will typically see one entry—the Primary VNIC. Click on its Name (which is usually a long string starting with ocid1.vnic...).

Step 3: Edit the Private IP Address

  1. On the VNIC Details page, navigate to Resources on the left and click IP Addresses.

  2. You will see a table listing the Primary Private IP Address. Notice the Public IP column currently shows "Not Assigned".

  3. Click the Actions menu next to the private IP address and select Edit.

Step 4: Assign the Ephemeral Public IP

  1. The Edit Private IP Address dialog box will open.

  2. Under the Public IP Type section, select the radio button for Ephemeral public IP.

  3. (Optional) You can give the Ephemeral IP a name, but it is not required.

  4. Click the Update button.

Sunday, 10 August 2025

DML Error Logging

What Is DBMS_ERRLOG?
DBMS_ERRLOG is a built-in Oracle package that helps you manage errors during DML operations (like INSERT, UPDATE, or DELETE). Its core function is to create and use an error logging table. Instead of failing the entire statement when it encounters a bad row, the database simply logs the error and moves on to the next one.

How Does It Work?
The process is simple and involves two main steps:
    Create an Error Logging Table: Use the DBMS_ERRLOG.CREATE_ERROR_LOG procedure to create a table that will store information about any rows that fail your DML statement. You only need to do this once for each target table.
    Enable Error Logging in Your DML Statement: Add the LOG ERRORS clause to your INSERT, UPDATE, or DELETE statement. This tells the database to use the error logging table you created.

Let's try it with a sample table:
SQL> CREATE TABLE contacts (
    contact_id NUMBER PRIMARY KEY,
    first_name VARCHAR2(50),
    last_name VARCHAR2(50)
);  

Table created.
Create an error log table for the contacts table using the
DBMS_ERRLOG.CREATE_ERROR_LOG procedure. This only needs to be done once per target table.
SQL> BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG('contacts', 'contacts_err_log');
END;
/  
PL/SQL procedure successfully completed.


Insert data to contacts table
SQL> INSERT INTO contacts (contact_id, first_name, last_name) VALUES (1, 'John', 'Doe') LOG ERRORS INTO contacts_err_log ('bulk_insert_test') REJECT LIMIT UNLIMITED;
1 row created.

SQL> INSERT INTO contacts (contact_id, first_name, last_name) VALUES (2, NULL, 'Smith') LOG ERRORS INTO contacts_err_log ('bulk_insert_test') REJECT LIMIT UNLIMITED;
1 row created.

SQL> INSERT INTO contacts (contact_id, first_name, last_name) VALUES (1, 'Jane', 'Doe') LOG ERRORS INTO contacts_err_log ('bulk_insert_test') REJECT LIMIT UNLIMITED;
0 rows created.

SQL> INSERT INTO contacts (contact_id, first_name, last_name) VALUES (3, 'Peter', 'Jones') LOG ERRORS INTO contacts_err_log ('bulk_insert_test') REJECT LIMIT UNLIMITED;
1 row created.

SQL> commit;
Commit complete.

Lets check error details from error log table:

SQL> SELECT *  FROM contacts_err_log;
   ORA_ERR_NUMBER$ ORA_ERR_MESG$                                                            ORA_ERR_ROWID$         OR ORA_ERR_TAG$                 CONTACT_ID               FIRST_NAME     LAST_NAME
------------------ ------------------------------------------------------------------------ ---------------------- -- ---------------------------- ------------------------ -------------- ----------------
                 1 ORA-00001: unique constraint (SYS.SYS_C008882) violated                                         I  bulk_insert_test             1                        Jane           Doe


The output will show you the specific error codes and messages for each failed row, allowing you to easily identify and correct the data issues.
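
Since the error log table created by DBMS_ERRLOG is just an ordinary table, you can report on and purge it with plain SQL, for example using the tag supplied in the LOG ERRORS clause:

SQL> SELECT ora_err_tag$, COUNT(*) FROM contacts_err_log GROUP BY ora_err_tag$;
SQL> DELETE FROM contacts_err_log WHERE ora_err_tag$ = 'bulk_insert_test';
SQL> COMMIT;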


Sunday, 20 July 2025

Oracle Optimizer Hints

What Exactly is an Optimizer Hint?
A hint is a comment placed within an SQL statement (after SELECT, UPDATE, INSERT, MERGE, or DELETE) that instructs the Cost-Based Optimizer (CBO) to use a particular execution approach, even if the CBO might calculate that path to be more "expensive."
Hints are enclosed in special comment syntax: /*+ hint_name(arguments) */.
For example, to prioritize the fastest return of the first 10 rows:
SELECT /*+ FIRST_ROWS(10) */ employee_id, last_name FROM employees;
While powerful for testing and troubleshooting, remember: hints are extra code to manage and can become obsolete or even detrimental after database changes. For long-term stability, tools like SQL Plan Baselines are often preferred.

Types of Hints: Scope of Influence
Oracle hints are categorized by the scope of the SQL they affect, determining the specific part of the statement they influence:
    Single-Table Hints: These are specified on a single table or view within the query. They are used to influence operations directly against that specific object. An example is the INDEX hint, which instructs the optimizer to use a particular index when accessing the table.
    Multi-Table Hints: Similar to single-table hints, but they can specify one or more tables or views, often to influence the relationship or order between them. The LEADING hint is a prime example, used to enforce a specific order in which tables are joined. Note that a hint like USE_NL(table1 table2) is often just a shorthand for two single-table hints.
    Query Block Hints: These hints operate on an entire query block, such as the main query or a named subquery. They typically affect transformations or high-level execution strategies for that block. Examples include STAR_TRANSFORMATION and UNNEST.
    Statement Hints: These apply to the entire SQL statement, regardless of complexity or query blocks. They often control global execution parameters. The ALL_ROWS hint, which optimizes for best overall throughput rather than fastest initial response, is a common statement hint.
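
To make those scopes concrete, here is a small illustrative query (the table and index names follow the classic HR sample schema and are assumptions, not requirements) that combines a multi-table join-order hint with a single-table access-path hint:

SELECT /*+ LEADING(d e) USE_NL(e) INDEX(e emp_department_ix) */
       e.last_name, e.salary, d.department_name
FROM   departments d
       JOIN employees e ON e.department_id = d.department_id
WHERE  d.location_id = 1700;

Here LEADING(d e) fixes the join order, USE_NL(e) requests a nested-loops join into EMPLOYEES, and INDEX(e emp_department_ix) requests a specific access path for that one table.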

Hints by Category:
Oracle's many hints are logically grouped into categories, providing granular control over different phases of query optimization and execution:
1. Hints for Optimization Approaches and Goals:
These hints set the overarching objective for the optimizer, overriding the session or system settings:
    ALL_ROWS
    FIRST_ROWS(n)
2. Hints for Enabling Optimizer Features
This hint allows you to maintain plan stability across database versions:
    OPTIMIZER_FEATURES_ENABLE
3. Hints for Access Paths
These hints instruct the optimizer to use a specific method for accessing data in a table:
    FULL
    CLUSTER
    HASH
    INDEX and NO_INDEX
    INDEX_ASC and INDEX_DESC
    INDEX_COMBINE
    INDEX_JOIN
    INDEX_FFS and NO_INDEX_FFS
    INDEX_SS and NO_INDEX_SS
    INDEX_SS_ASC and INDEX_SS_DESC
4. Hints for Join Orders
These hints suggest the order in which tables should be joined:
    LEADING
    ORDERED
5. Hints for Join Operations
These hints instruct the optimizer on the algorithm to use for joining tables:
    USE_NL and NO_USE_NL
    USE_NL_WITH_INDEX
    USE_MERGE and NO_USE_MERGE
    USE_HASH and NO_USE_HASH
6. Hints for Online Application Upgrade
These specialized hints manage concurrency during Edition-Based Redefinition (EBR) upgrades:
    CHANGE_DUPKEY_ERROR_INDEX
    IGNORE_ROW_ON_DUPKEY_INDEX
    RETRY_ON_ROW_CHANGE
7. Hints for Parallel Execution
These hints control whether and how the SQL statement should be executed in parallel:
    PARALLEL and NO_PARALLEL
    PARALLEL_INDEX and NO_PARALLEL_INDEX
    PQ_DISTRIBUTE
8. Hints for Query Transformations
These hints allow you to enable or disable specific logical rewrites of the SQL statement:
    NO_QUERY_TRANSFORMATION
    USE_CONCAT
    NO_EXPAND
    REWRITE and NO_REWRITE
    MERGE and NO_MERGE
    STAR_TRANSFORMATION and NO_STAR_TRANSFORMATION
    FACT and NO_FACT
    UNNEST and NO_UNNEST
9. Additional Hints
This is a collection of hints for various other specific needs and behaviors:
    APPEND, APPEND_VALUES, and NOAPPEND
    CACHE and NOCACHE
    PUSH_PRED and NO_PUSH_PRED
    PUSH_SUBQ and NO_PUSH_SUBQ
    QB_NAME
    CURSOR_SHARING_EXACT
    DRIVING_SITE
    DYNAMIC_SAMPLING
    MODEL_MIN_ANALYSIS

A key addition in Oracle Database 23ai is related to vector data processing, essential for AI-driven similarity searches:
    VECTOR_INDEX_SCAN: This hint instructs the optimizer to explicitly use a Vector Index to satisfy the query. This is vital for accelerating similarity searches on vector data, ensuring the database leverages its specialized vector processing capabilities for these modern data types.
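
As a rough sketch only (the table, vector column, and index names are made up, and the approximate-fetch syntax can vary slightly by release), a similarity search that explicitly asks for the vector index might look like:

SELECT /*+ VECTOR_INDEX_SCAN(d docs_vec_idx) */ d.doc_id, d.title
FROM   documents d
ORDER  BY VECTOR_DISTANCE(d.embedding, :query_vec, COSINE)
FETCH APPROX FIRST 10 ROWS ONLY;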
    
    

Why Local Indexes Stay Usable in Oracle Partitioned/Subpartitioned Tables

Understanding the Players: Partitioning and Indexes
Before we jump into TRUNCATE, let's quickly recap the two stars of our show:
    Partitioning: Imagine your massive database table as a giant book. Partitioning is like dividing that book into smaller, more manageable chapters (partitions) and even sections within those chapters (subpartitions). This helps with performance, maintenance, and data lifecycle management.
    Indexes: Indexes are like the index at the back of our book. They help Oracle quickly find specific rows without scanning the entire table.
Now, when you have a partitioned table, you can have two main types of indexes:
    Global Indexes: These are like a single, comprehensive index for the entire book. They don't care about your chapters; they just list every keyword and its page number, no matter which chapter it's in.
    Local Indexes: These are the game-changers for partitioned tables. With a local index, for every chapter (or sub-chapter), you get its own mini-index. So, "Chapter 1 Index," "Chapter 2 Index," and so on. Critically, the local index is partitioned exactly the same way as its table. This means the index entries for data in Chapter 1 only reside in Chapter 1 Index.

Power of TRUNCATE
The TRUNCATE command is a Data Definition Language (DDL) operation. Unlike DELETE (which logs every row deletion and generates undo), TRUNCATE rapidly deallocates all space used by the data in a table or partition/subpartition. It essentially "resets" the high-water mark, making it incredibly efficient for emptying large segments.

Why TRUNCATE Keeps Local Indexes Usable
This is where the magic happens! When you TRUNCATE a subpartition (or a partition):
 Direct Correspondence: Because a local index subpartition is explicitly linked, one-to-one, with its corresponding table subpartition, Oracle knows exactly which part of the index needs to be cleared.
 Atomic Operation: When you issue TRUNCATE SUBPARTITION, Oracle doesn't just clear the data. It simultaneously and atomically truncates the associated local index subpartition. The index entries belonging to that specific data segment are removed right alongside the data.
 No Inconsistency: Since the index entries are removed from their respective local index subpartition, there are no "dangling pointers" or inconsistencies. The local index remains valid and perfectly synchronized with its remaining data. Other local index subpartitions (for untouched data) are completely unaffected.
 No Rebuild Needed: This is the best part! Because the local index subpartition is reset and ready, there's no need for a costly ALTER INDEX REBUILD operation. Your index is immediately usable for any new data inserted into the truncated subpartition.

Contrast this with Global Indexes: If you TRUNCATE a subpartition when you have a global index, that global index will become UNUSABLE. Why? Because it contains entries from all subpartitions. When one goes away, the global index is left with pointers to non-existent data, making it inconsistent and requiring a full rebuild.
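
A quick way to see this behaviour for yourself (a minimal sketch; all names are arbitrary):

CREATE TABLE sales_demo (
    sale_id   NUMBER,
    sale_date DATE,
    amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
    PARTITION p2024 VALUES LESS THAN (DATE '2025-01-01'),
    PARTITION p2025 VALUES LESS THAN (DATE '2026-01-01')
);

CREATE INDEX sales_demo_dt_ix ON sales_demo (sale_date) LOCAL;

-- Empty one partition; its matching local index partition is truncated with it
ALTER TABLE sales_demo TRUNCATE PARTITION p2024;

-- Both index partitions still report USABLE, so no rebuild is required
SELECT partition_name, status
FROM   user_ind_partitions
WHERE  index_name = 'SALES_DEMO_DT_IX';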

Sunday, 13 July 2025

What is "enq: TX - index contention"?

At its core, "enq: TX - index contention" signifies that multiple sessions are trying to acquire a Transaction (TX) enqueue on the same index block concurrently. An enqueue is a locking mechanism used by Oracle to protect shared resources. When sessions are waiting on a TX enqueue for an index, it means they are effectively "stuck" trying to modify (insert, update, or delete) rows that would impact the same small portion of an index.  

Think of it like this: Imagine a very popular book in a library. Many people want to borrow it, but there's only one copy. Everyone lines up, waiting for their turn. In our database analogy, the "book" is a specific index leaf block, and the "people" are concurrent transactions trying to write to it. The "line" is the "enq: TX - index contention" wait.

The Root Causes: Why Does This Happen?
"enq: TX - index contention" isn't random; it stems from specific access patterns that create "hot spots" within your indexes.

1. The Monotonically Increasing Key (The Most Common Culprit)
This is by far the leading cause. When you have an index built on a column whose values are always increasing, like:
    Sequence-generated Primary Keys (e.g., ID NUMBER DEFAULT MY_SEQ.NEXTVAL)
    DATE or TIMESTAMP columns using SYSDATE / SYSTIMESTAMP
    Application-generated unique identifiers that increment sequentially

Every new INSERT operation will attempt to add a new entry to the "right-hand side" or the highest value end of the index. If you have many sessions inserting concurrently, they all converge on the same few index leaf blocks at the "end" of the index. This creates a severe bottleneck as sessions contend for exclusive access to these blocks, leading to:
    Frequent Block Splits: As the hot block fills up, Oracle has to split it into two blocks to make room for new entries. This is an expensive operation and exacerbates contention.
    Increased Buffer Cache Activity: The hot block is constantly being read into and written from the buffer cache.
    High Interconnect Traffic (in RAC): In a RAC environment, these hot blocks are constantly "ping-ponging" between instances as different nodes try to insert into them, leading to high "gc buffer busy" waits in addition to "enq: TX" waits.

Example Scenario:
Imagine an ORDER table with ORDER_ID as a primary key, populated by ORDER_SEQ.NEXTVAL. In a busy e-commerce application, hundreds of orders are placed per second. Each INSERT INTO ORDERS (ORDER_ID, ...) attempts to write a new, higher ORDER_ID into the ORDER_PK index. All these inserts bottleneck on the same few index leaf blocks, resulting in "enq: TX - index contention."

2. Heavy Deletes Followed by Inserts (Index Fragmentation)
While less common than monotonic keys, this can also contribute. If you perform large-scale DELETE operations on a table, the index entries are marked as deleted but the space isn't immediately reclaimed. Subsequent INSERT operations might then try to reuse this freed space. If many concurrent sessions are attempting to insert into these "fragmented" or "sparse" index blocks, it can lead to contention as they fight for ITL (Interested Transaction List) slots or space within those blocks.

3. Low PCT_FREE for the Index
PCT_FREE specifies the minimum percentage of free space reserved in each data block for future updates. While more critical for data blocks, a very low PCT_FREE on an index could theoretically contribute to contention if updates to existing index entries cause the blocks to become excessively full, leading to more frequent splits or difficulty in allocating ITL slots for concurrent transactions. This is a less frequent direct cause but can be a contributing factor.
The Impact: How Does It Hurt Performance?
The consequences of "enq: TX - index contention" are significant:
    Reduced Throughput: Transactions wait, slowing down overall DML operations.
    Increased Response Times: User experience suffers as applications become sluggish.
    High CPU Utilization: Index block splits, consistent read generation, and frequent buffer cache operations consume more CPU.
    Increased I/O (Indirectly): While primarily a CPU/contention issue, increased block splits can indirectly lead to more I/O.
    RAC-Specific Woes: In RAC, this contention translates directly to high "gc buffer busy" waits, as instances spend excessive time transferring the same hot blocks back and forth across the interconnect.

The Fixes: Solutions to Alleviate Index Contention
Once you've identified "enq: TX - index contention" as a problem (typically via AWR reports showing high waits for this event), the next step is to pinpoint the exact index and then apply one of the strategies below.
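
If you have the Diagnostics Pack, Active Session History can do that pinpointing for you. A sketch (adjust the time window; CURRENT_OBJ# usually resolves to the index or its underlying table):

SELECT o.owner, o.object_name, o.object_type, COUNT(*) AS samples
FROM   v$active_session_history ash
       JOIN dba_objects o ON o.object_id = ash.current_obj#
WHERE  ash.event = 'enq: TX - index contention'
AND    ash.sample_time > SYSDATE - 1/24
GROUP  BY o.owner, o.object_name, o.object_type
ORDER  BY samples DESC;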

Solution 1: Reverse Key Indexes (The Go-To for Monotonic Keys)
    A reverse key index physically reverses the byte order of the column's values before storing them in the index. This effectively scrambles the sequence of monotonically increasing keys, distributing them randomly across the index's leaf blocks instead of concentrating them at one end.
    When to Use: Ideal for indexes on sequence-generated primary keys or other monotonically increasing values where range scans are not critical.
    -- Original index (if it exists)
    DROP INDEX my_table_pk;
    -- Create a reverse key index
    CREATE UNIQUE INDEX my_table_pk ON my_table (id) REVERSE;
    Pros: Highly effective at eliminating hot blocks and reducing contention for inserts.
    Cons: Significantly degrades performance for index range scans (e.g., WHERE ID BETWEEN 100 AND 200). This is because the values are no longer stored in logical order. You would need to perform a full index scan or a full table scan for such queries, which can be much slower.
    Consideration: Use this only if your application primarily performs direct lookups (WHERE ID = 123) and high-volume inserts. If range scans are frequent, look at other solutions first.

Example:
If ORDER_ID is a sequence, creating CREATE UNIQUE INDEX ORDER_PK ON ORDERS (ORDER_ID) REVERSE; would distribute new order inserts across the index. Queries like SELECT * FROM ORDERS WHERE ORDER_ID = 54321; would still be efficient, but SELECT * FROM ORDERS WHERE ORDER_ID BETWEEN 1000 AND 2000; would likely be slow.

Solution 2: Hash Partitioned Global Indexes (Requires Partitioning License)
    Concept: If you have the Oracle Partitioning option, you can create a Global Hash Partitioned Index. Oracle applies a hash function to the index key, distributing the index entries across multiple partitions. This effectively spreads out the hot spot caused by monotonic keys over multiple physical index segments.
    When to Use: When you need the benefits of distributing contention but also require efficient range scans (which reverse key indexes sacrifice).

    -- Assuming MY_TABLE is already hash partitioned by some column,
    -- or you just want to partition the index itself.
    CREATE UNIQUE INDEX my_table_pk ON my_table (id)
    GLOBAL PARTITION BY HASH (id)
    (
        PARTITION p1,
        PARTITION p2,
        PARTITION p3,
        PARTITION p4
    );

    Pros: Excellent for distributing inserts and maintaining good range scan performance.
    Cons: Requires the Oracle Partitioning option (additional license). Can be more complex to manage than simple indexes.

Solution 3: Increase Sequence CACHE Size
    If your index key is derived from a sequence (e.g., MY_SEQ.NEXTVAL), increasing the CACHE size for that sequence can help. Instead of fetching one number at a time, each session (or instance in RAC) can fetch a block of numbers (e.g., 100 or 1000). This reduces the contention on the sequence object itself. More importantly, in RAC, using CACHE with NOORDER allows different instances to use different ranges of sequence numbers simultaneously, indirectly distributing the inserts across the index.
    When to Use: Always consider this if sequences are generating your contended keys.
    ALTER SEQUENCE my_schema.my_sequence CACHE 1000 NOORDER;
    (Adjust 1000 based on your concurrency, often 10000 or more for very high volumes).
    Pros: Simple to implement, no impact on index scan performance.
    Cons: Does not directly address the index block contention but reduces pressure on the sequence and can help distribute inserts if combined with other factors like different session start times. It's more of a complementary solution.

Solution 4: Rebuild/Shrink Indexes After Large Deletes
If your contention arises after massive DELETE operations (which leave "holes" in the index), rebuilding or shrinking the index reclaims the space and reorganizes the index structure. This can reduce contention by ensuring blocks are full and contiguous, minimizing splits when subsequent inserts occur.
    When to Use: After significant data purges.

    ALTER INDEX my_schema.my_index REBUILD ONLINE; -- Rebuilds index
    -- OR
    ALTER INDEX my_schema.my_index SHRINK SPACE; -- Shrinks index (less disruptive)

    Pros: Reclaims space, improves index efficiency, can reduce future block splits.
    Cons: Rebuilds can be resource-intensive and lock the index (though ONLINE mitigates this).

Solution 5: Increase PCT_FREE for the Index (Less Common Primary Fix)
PCT_FREE reserves space within blocks. If index blocks are consistently filling up too fast, increasing PCT_FREE provides more space for new entries and ITL slots, potentially reducing immediate contention.

When to Use: If analysis shows contention is due to rapid block filling and ITL waits.
    ALTER INDEX my_schema.my_index PCTFREE 20; -- Default is 10
    Pros: Simple change.
    Cons: Can increase index size (less efficient storage), and often not the primary solution for monotonic key contention.


Monday, 30 June 2025

Oracle Indexes: What BLEVEL, LEAF_BLOCKS, and CLUSTERING_FACTOR Really Tell You

BLEVEL: The GPS Navigation System
Imagine you're trying to find a book in a massive library. BLEVEL tells you how many floors you need to climb to reach your destination.

What BLEVEL really means:
- 0: You're already on the right floor (tiny indexes)
- 1: One elevator ride to the correct section (most common)
- 2+: Multiple elevator rides and stair climbs (very large tables)

Real Story:  BLEVEL is your index's "navigation depth." It counts how many levels Oracle must traverse from the root block to reach your data. Think of it as the number of "hops" your query makes through the index structure.
When to Worry:  A BLEVEL that suddenly jumps from 1 to 3 without a corresponding data increase might signal index fragmentation. But don't panic—modern Oracle versions handle this pretty well.

LEAF_BLOCKS: The Storage Bill
LEAF_BLOCKS is straightforward: it's how much disk space your index is consuming at the leaf level.
The Simple Truth:  More data = more leaf blocks = bigger index size.

Performance Reality Check:
- Single-row lookups: Don't care much about leaf blocks (you only read one)
- Range scans: Care a lot (you read multiple blocks)

Pro Tip: If your LEAF_BLOCKS is growing faster than your table data, you might be over-indexing or indexing the wrong columns.

CLUSTERING_FACTOR: The Performance Crystal Ball
This is where things get interesting. CLUSTERING_FACTOR is like a fortune teller for your query performance.
CLUSTERING_FACTOR = Number of times Oracle switches table blocks when reading index entries in order

What the Numbers Mean:
- Low CF (close to LEAF_BLOCKS): Your data is well-organized—like books sorted by author on library shelves
- High CF (close to table NUM_ROWS): Your data is scattered—like books randomly placed throughout the library

The Performance Impact: Range scans become expensive when Oracle has to jump between hundreds of different table blocks instead of reading consecutive blocks.

Let's Build a Real Example

Step 1: Create Our Test Environment
CREATE TABLE employees_test (
    employee_id NUMBER(6) PRIMARY KEY,
    first_name VARCHAR2(20),
    last_name VARCHAR2(25),
    email VARCHAR2(25),
    phone_number VARCHAR2(20),
    hire_date DATE,
    job_id VARCHAR2(10),
    salary NUMBER(8,2),
    commission_pct NUMBER(2,2),
    manager_id NUMBER(6),
    department_id NUMBER(4)
);

-- Populate with 100,000 employees
BEGIN
    FOR i IN 1..100000 LOOP
        INSERT INTO employees_test VALUES (
            i,                                    -- Sequential employee_id (good clustering)
            'FirstName' || i,
            'LastName' || i,
            'email' || i || '@example.com',
            '555-123-' || LPAD(i, 4, '0'),
            SYSDATE - DBMS_RANDOM.VALUE(1, 365*5),
            CASE MOD(i, 5) WHEN 0 THEN 'IT_PROG' ELSE 'SA_REP' END,
            ROUND(DBMS_RANDOM.VALUE(3000, 15000), 2),
            ROUND(DBMS_RANDOM.VALUE(0, 0.4), 2),
            CASE WHEN MOD(i, 100) = 0 THEN NULL 
                 ELSE ROUND(DBMS_RANDOM.VALUE(100, 200)) END,
            CASE MOD(i, 3) WHEN 0 THEN 90 
                           WHEN 1 THEN 60 
                           ELSE 30 END            -- Scattered department_id (poor clustering)
        );
    END LOOP;
    COMMIT;
END;
/

-- Create our test index
CREATE INDEX idx_emp_dept_id ON employees_test (department_id);
-- Gather fresh statistics
BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(
        ownname => USER,
        tabname => 'EMPLOYEES_TEST',
        cascade => TRUE
    );
END;
/

Step 2: The Moment of Truth
Now let's see what our indexes look like:
-- Check our table stats
SELECT table_name, blocks, num_rows  FROM dba_tables  WHERE table_name = 'EMPLOYEES_TEST';
TABLE_NAME                           BLOCKS   NUM_ROWS
-------------------------------- ---------- ----------
EMPLOYEES_TEST                         1467     100000

-- Check our index stats
SELECT index_name, table_name, CLUSTERING_FACTOR,blevel, leaf_blocks, num_rows, distinct_keys FROM user_indexes WHERE table_name = 'EMPLOYEES_TEST';
INDEX_NAME                       TABLE_NAME                       CLUSTERING_FACTOR     BLEVEL LEAF_BLOCKS   NUM_ROWS DISTINCT_KEYS
-------------------------------- -------------------------------- ----------------- ---------- ----------- ---------- -------------
SYS_C008470                      EMPLOYEES_TEST                                1464          1         187     100000        100000
IDX_EMP_DEPT_ID                  EMPLOYEES_TEST                                4392          1         196     100000             3

Decoding the Results
Good Guy:  Primary Key Index (SYS_C008470)
BLEVEL = 1:  Perfect! One hop to reach any employee ID
LEAF_BLOCKS = 187:  Reasonable size for 100,000 unique values
CLUSTERING_FACTOR = 1464:  Excellent! Very close to table blocks (1467)
Why This Rocks:  Employee IDs were inserted sequentially (1, 2, 3...), so the physical table order matches the index order. When you query `WHERE employee_id BETWEEN 1000 AND 1050`, Oracle reads consecutive table blocks—like reading pages in a book.
Troublemaker:  Department Index (IDX_EMP_DEPT_ID)
BLEVEL = 1: Same efficient navigation
LEAF_BLOCKS = 196: Slightly larger (non-unique values)
CLUSTERING_FACTOR = 4392: Uh-oh! Much higher than table blocks
Why This Hurts: Department IDs are scattered (90, 60, 30, 90, 60, 30...). A query like `WHERE department_id = 90` will find all matching rows in the index, but then jump all over the table to fetch them—like trying to read a book where every page is in a different room.

Performance Showdown
Let's see these differences in action with real explain plans:
Fast Query (Good Clustering)
SELECT * FROM employees_test WHERE employee_id BETWEEN 1000 AND 1050;
Explain Plan:
Plan hash value: 240947081
------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                |    52 |  5200 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| EMPLOYEES_TEST |    52 |  5200 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | SYS_C008470    |    52 |       |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
   2 - access("EMPLOYEE_ID">=1000 AND "EMPLOYEE_ID"<=1050)

Scorecard:
- Cost: 3 - Lightning fast
- INDEX RANGE SCAN - Oracle loves this index
- 52 rows retrieved efficiently

Slow Query (Poor Clustering)

SELECT * FROM employees_test  WHERE department_id = 90;
Explain Plan:
Plan hash value: 304096605
------------------------------------------------------------------------------------
| Id  | Operation         | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                | 33333 |  3255K|   400   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| EMPLOYEES_TEST | 33333 |  3255K|   400   (1)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
   1 - filter("DEPARTMENT_ID"=90)

Scorecard:
- Cost: 400 - 133x slower!
- TABLE ACCESS FULL - Oracle completely ignored the index!
- 33,333 rows processed with full table scan

Plot Twist:  Why Oracle Ignored the Index
Here's the fascinating part:  Oracle's optimizer looked at the department index and said, "No thanks, I'll do a full table scan instead."

Math Behind the Decision:
- Department 90:  ~33,333 rows (1/3 of 100,000)
- Clustering Factor:  4,392 (very high)
- Table Blocks:  1,467
Oracle calculated that using the index would require reading nearly 4,400 different table blocks randomly. A full table scan (1,467 blocks sequentially) was actually faster! 

Lesson: High clustering factors can make indexes so inefficient that the optimizer prefers full table scans. This is why monitoring CLUSTERING_FACTOR is crucial.
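
As a follow-up, you can scan your own schema for indexes whose clustering factor is drifting toward NUM_ROWS instead of staying near the table's block count. A rough heuristic query (the ratio is a rule of thumb, not an absolute threshold):

SELECT i.index_name,
       i.clustering_factor,
       t.blocks,
       t.num_rows,
       ROUND(i.clustering_factor / NULLIF(t.blocks, 0), 1) AS cf_to_blocks_ratio
FROM   user_indexes i
       JOIN user_tables t ON t.table_name = i.table_name
WHERE  t.table_name = 'EMPLOYEES_TEST'
ORDER  BY cf_to_blocks_ratio DESC;

A ratio near 1 behaves like SYS_C008470; a ratio several times higher, like IDX_EMP_DEPT_ID, is a candidate for exactly the full-scan behaviour we just saw.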