Saturday, 16 September 2023

SQL Server Supported Compatibility Levels


Product                        Database Engine version   Default compatibility level   Supported compatibility level values
Azure SQL Database             16                        150                           160, 150, 140, 130, 120, 110, 100
Azure SQL Managed Instance     16                        150                           160, 150, 140, 130, 120, 110, 100
SQL Server 2022 (16.x)         16                        160                           160, 150, 140, 130, 120, 110, 100
SQL Server 2019 (15.x)         15                        150                           150, 140, 130, 120, 110, 100
SQL Server 2017 (14.x)         14                        140                           140, 130, 120, 110, 100
SQL Server 2016 (13.x)         13                        130                           130, 120, 110, 100
SQL Server 2014 (12.x)         12                        120                           120, 110, 100
SQL Server 2012 (11.x)         11                        110                           110, 100, 90
SQL Server 2008 R2 (10.50.x)   10.5                      100                           100, 90, 80
SQL Server 2008 (10.0.x)       10                        100                           100, 90, 80
SQL Server 2005 (9.x)          9                         90                            90, 80
SQL Server 2000 (8.x)          8                         80                            80

Wednesday, 19 July 2023

CIDR: The Modern Way to Handle Subnets

What is CIDR?
CIDR (Classless Inter-Domain Routing) is a method for allocating IP addresses and routing that replaced the old "classful" system. It uses a slash notation to specify how many bits are used for the network portion.
Format: IP Address/Network Bits
Example: 192.168.1.0/24

How CIDR Notation Works
The Number After the Slash
The /24 tells us how many bits (from left) represent the network:
192.168.1.0/24
├── Network bits: 24 (first 24 bits)
└── Host bits: 8 (remaining 8 bits)

Binary view:
11000000.10101000.00000001.00000000
|----Network (24 bits)----|-Host-|
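This binary view is easy to reproduce in code. A small Python sketch (the `ip_to_binary` helper is just an illustration, not a standard function):

```python
# Hypothetical helper: render a dotted-quad IP as the binary view shown above.
def ip_to_binary(ip: str) -> str:
    return ".".join(f"{int(octet):08b}" for octet in ip.split("."))

print(ip_to_binary("192.168.1.0"))  # 11000000.10101000.00000001.00000000
```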

Common CIDR Examples
CIDR    Subnet Mask        Network Bits   Host Bits   Available IPs
/8      255.0.0.0          8              24          16,777,214
/16     255.255.0.0        16             16          65,534
/24     255.255.255.0      24             8           254
/25     255.255.255.128    25             7           126
/26     255.255.255.192    26             6           62
/27     255.255.255.224    27             5           30
/28     255.255.255.240    28             4           14
/30     255.255.255.252    30             2           2
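You can cross-check rows of this table with Python's standard ipaddress module (the 10.0.0.0 base address is just an arbitrary example):

```python
import ipaddress

# Cross-check a few rows of the CIDR table using the standard library.
for prefix in (8, 16, 24, 26, 30):
    net = ipaddress.ip_network(f"10.0.0.0/{prefix}")
    usable = net.num_addresses - 2  # minus the network and broadcast addresses
    print(f"/{prefix}: mask={net.netmask}, usable IPs={usable:,}")
```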

How CIDR Helps with Subnetting
1. Flexible Subnet Sizing
Instead of fixed Class A, B, C sizes, you can create any size you need:
Old way (Classful):
- Class C: 192.168.1.0 = exactly 254 hosts
- No flexibility

New way (CIDR):
- /25 = 126 hosts
- /26 = 62 hosts  
- /27 = 30 hosts
- /28 = 14 hosts

2. Efficient Address Usage
Example: You need 50 IP addresses
Without CIDR: Forced to use Class C (254 IPs) - waste 204 addresses
With CIDR: Use /26 (62 IPs) - waste only 12 addresses

3. Easy Subnet Calculation
Subnetting 192.168.1.0/24 into smaller subnets:
Original: 192.168.1.0/24 (254 hosts)

Split into /25:
├── 192.168.1.0/25   (126 hosts: .1-.126)
└── 192.168.1.128/25 (126 hosts: .129-.254)

Split into /26:
├── 192.168.1.0/26   (62 hosts: .1-.62)
├── 192.168.1.64/26  (62 hosts: .65-.126)
├── 192.168.1.128/26 (62 hosts: .129-.190)
└── 192.168.1.192/26 (62 hosts: .193-.254)
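The same splits can be generated with Python's standard ipaddress module rather than by hand:

```python
import ipaddress

parent = ipaddress.ip_network("192.168.1.0/24")

# Split the /24 into four /26 subnets, matching the diagram above.
for subnet in parent.subnets(new_prefix=26):
    hosts = list(subnet.hosts())  # usable addresses only
    print(f"{subnet}  ({len(hosts)} hosts: {hosts[0]}-{hosts[-1]})")
```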

4. Quick Binary Math
To find subnet info:
Network bits: First X bits (from CIDR)
Host bits: Remaining 32-X bits
Subnet size: 2^(host bits) - 2
Subnets possible: 2^(borrowed bits)

Practical CIDR Examples
Example 1: Office Network Planning

Company gets: 10.0.0.0/16

Departments needed:
- Sales (100 users): 10.0.1.0/25 (126 IPs)
- Engineering (200 users): 10.0.2.0/24 (254 IPs)  
- HR (20 users): 10.0.3.0/27 (30 IPs)
- Printers (10 devices): 10.0.4.0/28 (14 IPs)

Example 2: ISP Address Allocation
ISP has: 203.0.113.0/24

Customer allocations:
- Large business: 203.0.113.0/26 (62 IPs)
- Medium business: 203.0.113.64/27 (30 IPs)
- Small business: 203.0.113.96/28 (14 IPs)
- Home users: 203.0.113.112/28 to 203.0.113.240/28
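A quick way to sanity-check an allocation plan like this is to verify that every customer block fits inside the ISP's /24 and that no two blocks overlap. A Python sketch using the standard ipaddress module (the addresses are the ones from the example above):

```python
import ipaddress

block = ipaddress.ip_network("203.0.113.0/24")
allocations = [
    ipaddress.ip_network("203.0.113.0/26"),   # large business
    ipaddress.ip_network("203.0.113.64/27"),  # medium business
    ipaddress.ip_network("203.0.113.96/28"),  # small business
]

# Every allocation must sit inside the ISP block...
assert all(a.subnet_of(block) for a in allocations)
# ...and no two allocations may overlap.
assert not any(a.overlaps(b)
               for i, a in enumerate(allocations)
               for b in allocations[i + 1:])
print("allocations are disjoint and fit inside", block)
```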

CIDR vs Old Classful System
Old Way (Classful)
Class A: /8  - 16M addresses (usually too big)
Class B: /16 - 65K addresses (often too big) 
Class C: /24 - 254 addresses (often too small)

New Way (CIDR)
Any size from /1 to /32
Perfect fit for actual needs
No address waste

Quick CIDR Calculation Tricks
Finding Subnet Size
Formula: 2^(32-CIDR) - 2

/24: 2^(32-24) - 2 = 2^8 - 2 = 254 usable IPs
/26: 2^(32-26) - 2 = 2^6 - 2 = 62 usable IPs
/30: 2^(32-30) - 2 = 2^2 - 2 = 2 usable IPs

Finding Number of Subnets
To subnet /24 into /26:
Borrowed bits = 26 - 24 = 2
Number of subnets = 2^2 = 4 subnets
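Both tricks are one-liners in code. A small Python sketch (the helper names usable_hosts and subnet_count are just for illustration):

```python
# Illustrative helpers for the two calculation tricks above.
def usable_hosts(cidr: int) -> int:
    """Usable IPs in a subnet: 2^(32 - cidr) minus network and broadcast."""
    return 2 ** (32 - cidr) - 2

def subnet_count(parent_cidr: int, child_cidr: int) -> int:
    """Number of child subnets: 2^(borrowed bits)."""
    return 2 ** (child_cidr - parent_cidr)

print(usable_hosts(26))      # 62
print(subnet_count(24, 26))  # 4
```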

Real-World Benefits
Reduced Routing Tables - ISPs can aggregate routes
Flexible Allocation - Match network size to actual needs
Address Conservation - Use only what you need
Simplified Management - Consistent notation across all networks



Monday, 26 June 2023

Configuration options for SQL Plan Management (SPM)

The DBMS_SPM.CONFIGURE procedure in the DBMS_SPM package provides various configuration options for SQL Plan Management (SPM). It allows you to control different aspects of plan capture, plan selection, and plan retention.
Here are details and example commands for using DBMS_SPM.CONFIGURE:


1. AUTO_CAPTURE: Specifies whether automatic plan capture is enabled for the entire database.

-- Enable automatic plan capture
EXEC DBMS_SPM.CONFIGURE('AUTO_CAPTURE', 'TRUE');

-- Disable automatic plan capture
EXEC DBMS_SPM.CONFIGURE('AUTO_CAPTURE', 'FALSE');

2. AUTO_CAPTURE_PDB_STATS: Specifies whether automatic plan capture is enabled for pluggable databases (PDBs) individually.

-- Enable automatic plan capture for PDBs
EXEC DBMS_SPM.CONFIGURE('AUTO_CAPTURE_PDB_STATS', 'TRUE');

-- Disable automatic plan capture for PDBs
EXEC DBMS_SPM.CONFIGURE('AUTO_CAPTURE_PDB_STATS', 'FALSE');

3. AUTO_PURGE: Specifies whether to automatically purge plans that are no longer accepted or used.

-- Enable automatic plan purging
EXEC DBMS_SPM.CONFIGURE('AUTO_PURGE', 'TRUE');

-- Disable automatic plan purging
EXEC DBMS_SPM.CONFIGURE('AUTO_PURGE', 'FALSE');

4. AUTO_PURGE_CANCELED_PLANS: Specifies whether to automatically purge plans that have been canceled.

-- Enable automatic purging of canceled plans
EXEC DBMS_SPM.CONFIGURE('AUTO_PURGE_CANCELED_PLANS', 'TRUE');

-- Disable automatic purging of canceled plans
EXEC DBMS_SPM.CONFIGURE('AUTO_PURGE_CANCELED_PLANS', 'FALSE');

5. OPTIMIZER_MODE: Sets the optimizer mode to be used for selecting plans. Options are 'ALL_ROWS', 'FIRST_ROWS', or 'CHOOSE'.

-- Set optimizer mode to 'ALL_ROWS'
EXEC DBMS_SPM.CONFIGURE('OPTIMIZER_MODE', 'ALL_ROWS');

-- Set optimizer mode to 'FIRST_ROWS'
EXEC DBMS_SPM.CONFIGURE('OPTIMIZER_MODE', 'FIRST_ROWS');

-- Set optimizer mode to 'CHOOSE'
EXEC DBMS_SPM.CONFIGURE('OPTIMIZER_MODE', 'CHOOSE');

6. PLAN_RETENTION_ENABLED: Specifies whether plan retention is enabled.

-- Enable plan retention
EXEC DBMS_SPM.CONFIGURE('PLAN_RETENTION_ENABLED', 'TRUE');

-- Disable plan retention
EXEC DBMS_SPM.CONFIGURE('PLAN_RETENTION_ENABLED', 'FALSE');

7. PLAN_RETENTION_WEEKS: Sets the number of weeks that plans should be retained in the SQL plan baseline.

-- Set plan retention to 26 weeks
EXEC DBMS_SPM.CONFIGURE('PLAN_RETENTION_WEEKS', 26);

8. VERIFICATION_ENABLED: Specifies whether plan verification is enabled.

-- Enable plan verification
EXEC DBMS_SPM.CONFIGURE('VERIFICATION_ENABLED', 'TRUE');

-- Disable plan verification
EXEC DBMS_SPM.CONFIGURE('VERIFICATION_ENABLED', 'FALSE');

9. VERIFICATION_RATE: Sets the rate at which SQL execution plans are verified.

-- Set verification rate to 50%
EXEC DBMS_SPM.CONFIGURE('VERIFICATION_RATE', 50);

10. SPACE_BUDGET_PERCENT: Specifies the maximum percentage of the SYSAUX tablespace (which holds the SQL management base) that can be used for storing SQL plan baselines.

-- Set space usage to 10%
EXEC DBMS_SPM.CONFIGURE('SPACE_BUDGET_PERCENT', 10);

Wednesday, 14 June 2023

Move a LOBSEGMENT (Large Object Segment) to another tablespace in an Oracle database

To move a LOBSEGMENT (Large Object Segment) to another tablespace in an Oracle database, you can follow these steps:

 

Get LOB segment details:

SELECT table_name, column_name, segment_name, tablespace_name
FROM dba_lobs
WHERE owner = 'YOUR_SCHEMA' AND table_name = 'YOUR_TABLE';

 

Move to other tablespace:

ALTER TABLE your_schema.your_table MOVE LOB(column_name) STORE AS (TABLESPACE new_tablespace);

Replace 'your_schema' with the schema name, 'your_table' with the table name, 'column_name' with the name of the LOB column, and 'new_tablespace' with the name of the new tablespace where you want to move the LOB segment.

After the LOB segment has been moved, you can verify the change:

SELECT table_name, column_name, segment_name, tablespace_name
FROM dba_lobs
WHERE owner = 'YOUR_SCHEMA' AND table_name = 'YOUR_TABLE';

Ensure that the tablespace_name column shows the name of the new tablespace where the LOB segment resides.

 

Note: Moving a LOB segment is a resource-intensive operation, and it should be performed during a maintenance window or a period of low database activity to minimize impact. Additionally, take appropriate backups before making any changes to critical database objects.

 

Wednesday, 17 May 2023

Fetch First Rows feature in Oracle

Oracle provides two ways to limit the number of rows returned in a query result: the ROWNUM pseudocolumn, and (starting with Oracle 12c) the FETCH FIRST n ROWS ONLY clause. Here's how you can use them:

Using ROWNUM pseudocolumn:
SELECT *
FROM table_name
WHERE ROWNUM <= n;

In this example, replace table_name with the name of the table you're querying, and n with the desired number of rows you want to fetch.

Note: ROWNUM is assigned to rows before the ORDER BY clause is applied, so filtering on ROWNUM in the same query block as an ORDER BY can return the wrong rows. To fetch the top n rows by some ordering, put the ordered query in a subquery and apply the ROWNUM condition outside it:

SELECT *
FROM (SELECT * FROM table_name ORDER BY column_name)
WHERE ROWNUM <= n;

Using FETCH FIRST n ROWS ONLY clause (starting from Oracle 12c):
SELECT *
FROM table_name
ORDER BY column_name
FETCH FIRST n ROWS ONLY;

In this example, replace table_name with the name of the table you're querying, column_name with the column you want to order by, and n with the desired number of rows you want to fetch.

The FETCH FIRST clause allows you to specify an ORDER BY clause, ensuring consistent results when retrieving a limited number of rows.

Performance benefits of using the "FETCH FIRST":
Reduced Data Transfer: By limiting the number of rows fetched from the database, the "FETCH FIRST" feature reduces the amount of data transferred between the database server and the client application. This can significantly improve query performance, especially when dealing with large result sets.
 
Query Optimization: The "FETCH FIRST" feature allows the Oracle optimizer to optimize the query execution plan based on the specified row limit. It enables the optimizer to choose more efficient access paths, such as using an index scan or a fast full scan, instead of performing a full table scan when only a limited number of rows are required.
 
Faster Response Time: By fetching only the necessary rows, the "FETCH FIRST" feature can reduce the time it takes to process and return query results. This can lead to faster response times for queries, improving the overall user experience.
 
Efficient Pagination: When implementing pagination functionality, the "FETCH FIRST" feature can be particularly useful. It allows you to retrieve specific pages of data efficiently by combining the "FETCH FIRST" clause with appropriate offset values. This can enhance performance when displaying paginated results in applications.
 
Improved Scalability: By limiting the number of rows fetched, the "FETCH FIRST" feature helps in optimizing resource utilization. It reduces memory consumption and can improve the scalability of the application by efficiently handling large result sets.  
       

Thursday, 11 May 2023

Microservice architecture Vs monolithic architecture

Microservice architecture and monolithic architecture are two different approaches to software development.

Monolithic architecture is a traditional approach where the entire application is built as a single, self-contained unit. All the functionalities of the application are tightly integrated and dependent on each other. This approach can be simple to develop and deploy, but it can become complex and difficult to maintain as the application grows and changes over time.

 

On the other hand, microservice architecture is an approach where the application is broken down into smaller, independent services that communicate with each other through APIs. Each microservice is responsible for a specific business capability and can be developed, deployed, and scaled independently. This approach provides flexibility and scalability, and allows for faster innovation and release cycles. However, it can also add complexity in managing the communication and coordination between the microservices.


The main differences between microservice architecture and monolithic architecture are:

  • Monolithic architecture is a single, self-contained unit while microservice architecture is composed of smaller, independent services.
  • In monolithic architecture, all functionalities are tightly integrated and dependent on each other while in microservice architecture, each service is responsible for a specific business capability.
  • Monolithic architecture can be simpler to develop and deploy but can become complex and difficult to maintain while microservice architecture provides flexibility, scalability, and faster innovation but can add complexity in terms of managing communication and coordination between services.

Monday, 8 May 2023

ORA-00979: not a GROUP BY expression

ORA-00979 error in Oracle usually occurs when you try to execute a SQL statement that includes a GROUP BY clause, but one or more non-aggregated columns in the SELECT clause are not included in the GROUP BY clause.

Here's an example that demonstrates the issue:

SELECT dept_name, employee_name, AVG(salary) FROM employees GROUP BY dept_name;

This query calculates the average salary for each department in the employees table, but it includes the non-aggregated employee_name column in the SELECT clause without including it in the GROUP BY clause. This will result in the ORA-00979 error, because employee_name is not included in the GROUP BY clause and it's not part of an aggregate function.

To fix this error, you need to include all non-aggregated columns in the SELECT clause in the GROUP BY clause as well. Here's the corrected query:

SELECT dept_name, employee_name, AVG(salary) FROM employees GROUP BY dept_name, employee_name;

In this query, we include the employee_name column in the GROUP BY clause along with the dept_name column. This ensures that all non-aggregated columns in the SELECT clause are included in the GROUP BY clause. Note that this also changes the meaning of the query: the average is now calculated per (dept_name, employee_name) combination rather than per department. If you want the average per department, remove employee_name from the SELECT list instead.

Thursday, 4 May 2023

HANDLECOLLISIONS in GoldenGate

When data is replicated from a source database to a target database, conflicts can occur if the same data is modified in both databases simultaneously. For example, if a row is updated in the source database and the same row is also updated in the target database, a conflict arises. The "HANDLECOLLISIONS" parameter allows you to specify how these conflicts should be handled.

The "HANDLECOLLISIONS" parameter can take various values to determine the resolution strategy for conflicts. Some common values include:

ABORT: This value causes Oracle GoldenGate to abort the Replicat process when a conflict is encountered. This might be appropriate if you want to manually address conflicts before continuing replication.

SKIP: This value instructs Oracle GoldenGate to skip the conflicting transaction and continue replication with subsequent transactions.

OVERWRITE: This value allows the target transaction to overwrite the source transaction when a conflict occurs.

WARNING: This value generates a warning message and continues replication with one of the transactions.

DISCARD: This value discards the conflicting transaction without applying it to the target.

FAILONCOLLISION: This value causes Replicat to fail when a collision is encountered.

The appropriate value to use depends on your business logic and the importance of maintaining data integrity. You can set the "HANDLECOLLISIONS" parameter in your Oracle GoldenGate configuration file for the Replicat process. Here's an example:

HANDLECOLLISIONS OVERWRITE;

 

Sunday, 30 April 2023

Best practices for improving delete performance in Oracle

Here are some ways to improve delete query performance in Oracle:

1. DELETE statement with a WHERE clause: A DELETE statement with a WHERE clause is the basic way to delete rows from a table. The WHERE clause specifies the conditions a row must meet to be deleted, which limits the number of rows that have to be deleted.


2. Batch processing: Instead of deleting all records in one large transaction, consider deleting records in smaller batches using a loop, committing after each batch (for example, every 10,000 rows). This can help reduce the amount of undo and redo held by any single transaction and improve performance.


3. Use the TRUNCATE statement: If you need to delete all records from a table, consider using the TRUNCATE statement instead of DELETE. TRUNCATE is much faster because it generates minimal undo and redo, and it resets the high-water mark of the table. Keep in mind that TRUNCATE is DDL: it commits implicitly and cannot be rolled back.


4. Disable indexes and constraints: If you have indexes or constraints on the table you are deleting from, consider disabling them temporarily during the delete operation. This can improve performance because the indexes and constraints do not need to be maintained during the delete. Just remember to re-enable (and rebuild or validate) them once the delete operation is complete.


5. Partitioning: If the table is partitioned and the rows to remove align with partition boundaries, use ALTER TABLE ... DROP PARTITION or TRUNCATE PARTITION instead of deleting row by row. This can drastically reduce the amount of work required and improve performance.


6. NOLOGGING alternatives: Conventional DML such as DELETE is always fully logged, so NOLOGGING does not speed up a DELETE directly. If you do not need the deleted data to be recoverable, consider a direct-path rewrite instead: create a new table with CREATE TABLE ... AS SELECT (optionally NOLOGGING) containing only the rows to keep, then swap it for the original. This avoids most undo and redo and can be far faster than a large DELETE.


7. Parallel processing: If your system has multiple CPUs, consider using parallel DML (ALTER SESSION ENABLE PARALLEL DML) to delete the data. This divides the workload among multiple processes and can improve performance.

By following these best practices, you should be able to improve delete performance in Oracle. However, it is important to test the performance of the delete operation after implementing any of these optimizations to ensure that it is improving performance as expected.