
Sunday, 6 April 2025

Log Generation Rate in Azure SQL Database Hyperscale Pool

What is Log Generation Rate?
Log generation rate refers to the speed at which transaction logs are produced in a database. In Hyperscale Pool, log generation is closely monitored and regulated to prevent overloading the system. Azure implements log rate governance to ensure that log generation stays within defined limits, keeping the system stable and performing efficiently.
Log Rate Governance in Hyperscale
By default, Hyperscale databases have a log generation limit of 100 MB/s, irrespective of the compute size. When all log consumers are keeping up, workloads can sustain this full rate. The cap is designed to ensure that logs are consistently processed and replicated without overwhelming system resources.
However, there may be situations where Azure needs to temporarily reduce the log generation rate. This happens when a secondary replica or page server falls behind in applying the transaction logs. The system will then throttle the log generation rate to allow the lagging components to catch up, ensuring the overall stability of the database.
When Does Log Generation Rate Get Reduced?
Log generation rate may be reduced for several reasons:
    Delayed log consumption by a page server or replica.
    A geo-secondary replica might be lagging in applying logs.
    Slow database checkpointing could delay log processing on the page server.
    Migration or reverse migration from Hyperscale to a non-Hyperscale database can also cause temporary delays in log consumption.
Monitoring Log Generation Rate with sys.dm_hs_database_log_rate
Azure provides the sys.dm_hs_database_log_rate dynamic management function (DMF) to monitor and troubleshoot log generation rates in Hyperscale. This function returns detailed information on which components are limiting the log generation rate, including:
    Current log rate limit
    Catch-up rate of components (bytes per second)
    Which component is limiting the rate and how far behind it is
Key Columns in the DMF:
    current_max_log_rate_bps: The maximum allowed log generation rate, in bytes per second.
    catchup_rate_bps: The rate at which lagging components are catching up, in bytes per second.
    catchup_distance_bytes: The amount of log data a lagging component must still process to catch up.
    role_desc: The role of the component limiting the log rate, such as a page server, replica, or geo-replica.
This tool helps you quickly identify any components causing delays and allows you to take corrective actions if needed.
How to Check Log Generation Rate in Your Database
To check the log generation rate for a specific database, use the following query:
SELECT current_max_log_rate_bps, role_desc, catchup_rate_bps, catchup_distance_bytes
FROM sys.dm_hs_database_log_rate(DB_ID(N'YourDatabaseName'));

For databases within an elastic pool, you can use NULL to get results for all databases in the pool:
SELECT current_max_log_rate_bps, role_desc, catchup_rate_bps, catchup_distance_bytes
FROM sys.dm_hs_database_log_rate(NULL);

Wait types appear in sys.dm_os_wait_stats when the log rate is reduced:
Wait type                     Reason
RBIO_RG_STORAGE               Delayed log consumption by a page server
RBIO_RG_DESTAGE               Delayed log consumption by the long-term log storage
RBIO_RG_REPLICA               Delayed log consumption by an HA secondary replica or a named replica
RBIO_RG_GEOREPLICA            Delayed log consumption by a geo-secondary replica
RBIO_RG_LOCALDESTAGE          Delayed log consumption by the log service
RBIO_RG_STORAGE_CHECKPOINT    Delayed log consumption by a page server due to a slow database checkpoint
RBIO_RG_MIGRATION_TARGET      Delayed log consumption by the non-Hyperscale database during reverse migration
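
To see whether any of these throttling waits are actually accumulating, you can query the wait statistics directly. A minimal sketch, using the database-scoped sys.dm_db_wait_stats view available in Azure SQL Database:

SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       max_wait_time_ms
FROM sys.dm_db_wait_stats
WHERE wait_type LIKE 'RBIO_RG%'  -- log rate governance waits only
ORDER BY wait_time_ms DESC;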

Monday, 24 March 2025

Common Blocking Scenarios in Azure SQL Database: Causes & Resolutions

Blocking happens when one session (SPID) holds a lock on a resource, preventing another session from accessing it. Unlike deadlocks, where two or more processes are stuck indefinitely, blocking can eventually resolve—but it can still lead to performance bottlenecks.
Common Blocking Scenarios & Their Resolutions
Scenario   Wait Type   Open Transactions   Status     Resolves?
1          NOT NULL    ≥ 0                 Runnable   ✅ Yes, when the query finishes.
2          NULL        > 0                 Sleeping   ❌ No, but the SPID can be killed.
3          NULL        ≥ 0                 Runnable   ❌ No; resolves only when the client fetches all rows or closes the connection. Killing the SPID may take up to 30 seconds.
4          Varies      ≥ 0                 Runnable   ❌ No; resolves only when the client cancels its queries or closes its connections. Killing the SPIDs may take up to 30 seconds.
5          NULL        > 0                 Rollback   ✅ Yes, once the rollback finishes.
6          NULL        > 0                 Sleeping   ⏳ Eventually; the connection breaks once the server detects the orphaned (inactive) client.

How to Identify Blocking in Azure SQL Database
1. Identify Blocked and Blocking Sessions
SELECT blocking_session_id, session_id, wait_type, wait_time, wait_resource  
FROM sys.dm_exec_requests  
WHERE blocking_session_id <> 0;
2. Check Open Transactions
SELECT session_id, open_transaction_count, status  
FROM sys.dm_exec_sessions  
WHERE open_transaction_count > 0;
3. Analyze Query Execution Details
SELECT r.session_id, s.host_name, s.program_name, r.command, r.wait_type, r.wait_time  
FROM sys.dm_exec_requests r  
JOIN sys.dm_exec_sessions s ON r.session_id = s.session_id  
WHERE r.blocking_session_id <> 0;
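The queries above show who is blocked; it is often more useful to find the head of the blocking chain, i.e., a session that blocks others but is not itself blocked. A sketch:

SELECT s.session_id, s.status, s.host_name, s.program_name, s.open_transaction_count
FROM sys.dm_exec_sessions s
WHERE s.session_id IN (SELECT blocking_session_id
                       FROM sys.dm_exec_requests
                       WHERE blocking_session_id <> 0)      -- blocks someone
  AND s.session_id NOT IN (SELECT session_id
                           FROM sys.dm_exec_requests
                           WHERE blocking_session_id <> 0); -- is not blocked itself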

How to Resolve Blocking in Azure SQL Database
✅ Scenarios that resolve automatically
    Scenario 1: Query completes, releasing locks.
    Scenario 5: Rollback operation finishes.
    Scenario 6: The server eventually detects the orphaned connection and closes it.

❌ Scenarios requiring manual intervention
If blocking does not resolve, consider the following approaches:
1. Kill the Blocking SPID
If a transaction is stuck, you can terminate it:
KILL <session_id>;
Use this cautiously, as it may cause rollbacks.
2. Optimize Long-Running Queries
    Index Optimization: Ensure proper indexing to reduce query execution time.
    Query Tuning: Review actual execution plans (e.g., via sys.dm_exec_query_plan or Query Store) to find and fix expensive operators.
    Batch Processing: Process data in smaller batches to prevent long locks.
3. Handle Open Transactions Properly
    Regularly check sys.dm_tran_active_transactions for long-running transactions (see the sketch after this list).
    Ensure all transactions explicitly COMMIT or ROLLBACK when completed.
4. Improve Connection Management
    Ensure clients properly fetch all rows or close connections.
    Avoid unnecessary long-running transactions that hold locks.
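
For the transaction check in point 3, a minimal sketch that correlates active transactions with their sessions so you can see how long each has been open:

SELECT st.session_id,
       t.transaction_id,
       t.transaction_begin_time,
       DATEDIFF(SECOND, t.transaction_begin_time, GETDATE()) AS age_seconds
FROM sys.dm_tran_active_transactions t
JOIN sys.dm_tran_session_transactions st
    ON t.transaction_id = st.transaction_id
ORDER BY t.transaction_begin_time;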

Thursday, 13 March 2025

How to Identify MAXDOP Value for Running/Completed Queries

 To find the MAXDOP (Maximum Degree of Parallelism) used by running queries in SQL Server, you can use Dynamic Management Views (DMVs) such as sys.dm_exec_requests and sys.dm_exec_query_profiles. These views provide details about query execution, including parallelism levels.
1. Checking MAXDOP for Running Queries
SELECT  
    r.session_id,  
    r.request_id,  
    r.start_time,  
    r.status,  
    r.cpu_time,  
    r.total_elapsed_time,  
    r.logical_reads,  
    r.writes,  
    r.dop AS MAXDOP,  -- Degree of Parallelism
    st.text AS sql_text  
FROM sys.dm_exec_requests r  
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) st  
WHERE r.dop > 1  -- Filtering only parallel queries
ORDER BY r.start_time DESC;

Explanation:
    r.dop: Shows the degree of parallelism (i.e., the number of schedulers/worker threads used for execution).
    r.session_id: Identifies the session running the query.
    r.status: Shows the execution status (e.g., running, suspended).
    st.text: Displays the actual SQL query text.
    Note: If dop = 1, the query is running serially without parallelism.
2. Checking MAXDOP for Completed Queries
SELECT  
    qs.execution_count,  
    qs.total_worker_time / qs.execution_count AS avg_worker_time,  
    qs.max_dop,  -- MAXDOP used
    st.text AS sql_text  
FROM sys.dm_exec_query_stats qs  
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st  
ORDER BY qs.total_worker_time DESC;

3. Checking MAXDOP for Running Query Execution Plans
SELECT  
    er.session_id,  
    qp.query_plan  
FROM sys.dm_exec_requests er  
CROSS APPLY sys.dm_exec_query_plan(er.plan_handle) qp  
WHERE er.dop > 1;

Look for Parallelism (Gather Streams) in the query plan XML to confirm parallel execution.
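
To check for the Parallelism operator programmatically instead of inspecting the XML by eye, an XQuery exist() test against the showplan namespace can be used; a sketch:

SELECT er.session_id, qp.query_plan
FROM sys.dm_exec_requests er
CROSS APPLY sys.dm_exec_query_plan(er.plan_handle) qp
WHERE qp.query_plan.exist('
    declare namespace p="http://schemas.microsoft.com/sqlserver/2004/07/showplan";
    //p:RelOp[@PhysicalOp="Parallelism"]') = 1;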

4. Checking MAXDOP Setting at Server and Database Level
To check the instance-wide setting on SQL Server:
EXEC sp_configure 'show advanced options', 1;  
RECONFIGURE;  
EXEC sp_configure 'max degree of parallelism';

To check the database-level MAXDOP setting in Azure SQL Database:
SELECT *  
FROM sys.database_scoped_configurations  
WHERE name = 'MAXDOP';

5. Checking MAXDOP for Index Operations

SELECT  
    r.session_id,  
    r.command,  
    r.dop,  
    st.text  
FROM sys.dm_exec_requests r  
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) st  
WHERE r.command LIKE '%INDEX%';

Wednesday, 12 March 2025

Intelligent Query Processing (IQP) in SQL Databases

Efficient query performance is crucial for modern applications, as databases handle massive amounts of data. Traditionally, query optimization relied on static cost-based estimations, which sometimes led to suboptimal execution plans due to incorrect assumptions.
To address this, modern databases—particularly Microsoft SQL Server—have introduced Intelligent Query Processing (IQP). IQP enhances query execution by automatically adapting, optimizing, and learning from past executions. This minimizes performance issues without requiring code changes.
What is Intelligent Query Processing (IQP)?
Intelligent Query Processing (IQP) is a set of advanced query optimization features in SQL Server (starting from SQL Server 2017 and significantly expanded in SQL Server 2019 and later).
IQP enhances query performance dynamically by making real-time adjustments based on execution statistics, feedback loops, and AI-driven techniques.

How is IQP different from Traditional Query Processing?

Aspect                       Traditional Query Processing   Intelligent Query Processing (IQP)
Optimization stage           Static, before execution       Dynamic, adjusts during execution
Query plan adjustments       Based on fixed statistics      Adapts based on real-time data
Handling plan regression     Requires manual intervention   Automatically detects and corrects
Performance tuning           DBA-driven tuning required     Minimal or no code changes needed
Machine learning influence   None                           Uses feedback loops and AI

Why Do We Need Intelligent Query Processing?
Traditional query optimization relies on cardinality estimation—predicting the number of rows a query will process. However, real-world queries often face:
✅ Bad Cardinality Estimates – Outdated statistics or complex predicates lead to poor execution plans.
✅ Query Plan Regressions – A once-efficient query suddenly slows down due to a bad plan.
✅ Memory Allocation Issues – Queries either over-allocate (wasting resources) or under-allocate (causing spills to disk).
✅ Suboptimal Join Strategies – Poor join selection (Nested Loop instead of Hash Join) causes performance degradation.
IQP fixes these problems automatically, reducing the need for manual performance tuning.


🚀 Key Features of Intelligent Query Processing
IQP introduces a range of powerful enhancements that improve query performance dynamically. Let’s explore some of its most impactful features.

1️⃣ Batch Mode on Rowstore
📌 What it does:
Originally available only for Columnstore indexes, Batch Mode Execution improves the performance of queries running on rowstore tables (traditional tables with B-tree indexes).
📈 Benefits:
    Uses vectorized execution, reducing CPU usage.
    Drastically improves performance for aggregations, joins, and large scans.
    No changes needed—SQL Server automatically enables it when beneficial.
💡 Example:
SELECT CustomerID, COUNT(*)
FROM Sales.Orders
GROUP BY CustomerID;
Without batch mode, this query processes one row at a time. With batch mode, SQL Server processes thousands of rows at once, leading to faster execution.
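Batch mode on rowstore requires database compatibility level 150 or higher, so that is the first thing to verify if it never kicks in. A sketch, assuming a database named YourDatabase:

SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'YourDatabase';

ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 150;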
2️⃣ Adaptive Joins
📌 What it does:
Instead of selecting a Nested Loop Join, Hash Join, or Merge Join at compile time, Adaptive Joins allow SQL Server to switch the join strategy dynamically at runtime.
📈 Benefits:
    Prevents bad join choices due to incorrect row estimates.
    Ensures optimal join selection for varying input sizes.
💡 Example:
If SQL Server expects 100 rows but actually gets 10 million rows, it will switch from a Nested Loop Join to a Hash Join automatically.
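Adaptive joins are enabled by default at compatibility level 140 and above, and can be controlled at the database level or per query. A sketch of both knobs, reusing the Sales.Orders example from above:

-- Database-level switch
ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ADAPTIVE_JOINS = OFF;

-- Per-query opt-out, leaving the database setting untouched
SELECT CustomerID, COUNT(*)
FROM Sales.Orders
GROUP BY CustomerID
OPTION (USE HINT('DISABLE_BATCH_MODE_ADAPTIVE_JOINS'));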
3️⃣ Adaptive Memory Grants
📌 What it does:
Allocates just the right amount of memory for query execution instead of over- or under-allocating.
📈 Benefits:
    Prevents out-of-memory issues for large queries.
    Reduces spilling to tempdb, which slows down execution.
💡 Example:
A complex report query initially requests 500MB but actually needs 5GB. SQL Server dynamically adjusts memory allocation for future executions.
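Memory grant feedback is likewise controlled per database: batch mode feedback arrived in SQL Server 2017 and row mode feedback in SQL Server 2019. A sketch of the switches:

ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_MEMORY_GRANT_FEEDBACK = ON;
ALTER DATABASE SCOPED CONFIGURATION SET ROW_MODE_MEMORY_GRANT_FEEDBACK = ON;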
4️⃣ Interleaved Execution for Multi-Statement Table-Valued Functions (MSTVFs)
📌 What it does:
Traditional table-valued functions (TVFs) always assumed fixed row estimates. This often led to poor query plans.
With Interleaved Execution, SQL Server delays optimization until runtime to get an accurate row estimate.
📈 Benefits:
    Prevents underestimating or overestimating TVF outputs.
    Optimizes execution plans based on real row counts.
💡 Example:
SELECT * FROM dbo.GetCustomerOrders(@CustomerID);
Before IQP, SQL Server guessed a default row count. Now, it waits until the function runs and then optimizes the query plan dynamically.
5️⃣ Table Variable Deferred Compilation
📌 What it does:
Table variables previously used fixed row estimates, often leading to poor execution plans. IQP defers their compilation until runtime, allowing SQL Server to optimize based on actual data size.
📈 Benefits:
    Improves performance of queries using table variables.
    Prevents incorrect join and index choices.
💡 Example:
DECLARE @TempTable TABLE (ID INT, Value VARCHAR(50));  
INSERT INTO @TempTable SELECT ID, Value FROM LargeTable;  
SELECT * FROM @TempTable t JOIN AnotherTable a ON t.ID = a.ID;

SQL Server waits until the actual row count is known before optimizing the execution plan.


SQL Server Extended Events: Monitoring Queries Running Longer Than X Minutes

What Are Extended Events in SQL Server?
Extended Events provide a flexible and lightweight framework to capture detailed performance data in SQL Server. They help in diagnosing slow-running queries, deadlocks, waits, and other issues affecting database performance.
Why Use Extended Events Instead of SQL Profiler?
Low Overhead: Uses fewer system resources.
More Powerful: Captures granular event data.
Better Filtering: Allows precise filtering on execution time, database, users, etc.
Replaces SQL Trace/Profiler: Profiler is deprecated in newer SQL Server versions.

Step-by-Step: Configuring Extended Events for Queries Running More Than 5 Minutes
1. Create an Extended Events Session
We will create an Extended Events session to capture queries that take longer than 300 seconds (5 minutes) to execute.
Using SSMS GUI:
    Open SQL Server Management Studio (SSMS).
    Expand Management > Extended Events > Sessions.
    Right-click Sessions and choose New Session....
    Provide a name, e.g., Long_Running_Queries.
    Under Events, click "Add Event", search for sql_statement_completed, and add it.
    Under the Global Fields (Actions) tab, select:
        sql_text (to capture the query text)
        session_id (to track the session)
        database_id (to identify the database)
    Apply a Filter (Predicate):
        Click Configure, then Filter (Predicate).
        Select duration, set it to >= 300000000 (300 seconds in microseconds).
    Configure Data Storage:
        Choose Event File as the target.
        Specify a file path for saving captured events.
    Click OK, then right-click the session and select Start Session.

Using T-SQL:
Alternatively, use the following T-SQL script to create the session:

CREATE EVENT SESSION [Long_Running_Queries]
ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.session_id, sqlserver.database_id)
    WHERE duration >= 300000000  -- 300 seconds (5 minutes) in microseconds
)
ADD TARGET package0.event_file (
    SET filename = N'C:\Temp\LongRunningQueries.xel', max_file_size = (50)  -- size in MB
)
WITH (STARTUP_STATE = ON);
GO

2. Viewing and Analyzing the Captured Events
Using SSMS:
    Expand Management > Extended Events > Sessions.
    Right-click your session (Long_Running_Queries) and choose Watch Live Data.
    Execute long-running queries and monitor captured events in real-time.


Using T-SQL to Read the Event File:
To analyze captured events from the event file:
SELECT  
    event_data.value('(event/@name)[1]', 'VARCHAR(100)') AS event_name,
    event_data.value('(event/action[@name="sql_text"]/value)[1]', 'NVARCHAR(MAX)') AS sql_text,
    event_data.value('(event/data[@name="duration"]/value)[1]', 'BIGINT') / 1000000 AS duration_seconds
FROM  
(  
    SELECT CAST(event_data AS XML) AS event_data  
    FROM sys.fn_xe_file_target_read_file('C:\Temp\LongRunningQueries*.xel', NULL, NULL, NULL)  
) AS xevents  
ORDER BY duration_seconds DESC;

To stop the session:
ALTER EVENT SESSION [Long_Running_Queries] ON SERVER STATE = STOP;

To drop (delete) the session:
DROP EVENT SESSION [Long_Running_Queries] ON SERVER;
 

Monday, 24 February 2025

MAXDOP in SQL Server and Azure SQL Database

MAXDOP (Maximum Degree of Parallelism) is a crucial setting in SQL Server and Azure SQL Database that controls the level of intra-query parallelism. By adjusting MAXDOP, database administrators can optimize query execution speed while managing CPU resource utilization.
In Azure SQL Database, the default MAXDOP is set to 8 for new single databases and elastic pool databases. This setting was introduced in September 2020, based on years of telemetry, to prevent excessive parallelism issues while ensuring good performance. Before this, the default was MAXDOP = 0, allowing SQL Server to use all available logical processors.
How MAXDOP Works?
When a query is executed, SQL Server determines whether to use parallelism. If parallelism is enabled, multiple CPU cores work together to process different parts of the query, often improving execution time. However, excessive parallelism can overload the CPU, leading to contention and degraded performance for other queries.
The following table summarizes the behavior of different MAXDOP settings:
MAXDOP Value   Behavior
1              Forces single-threaded execution (no parallelism).
>1             Allows parallel execution, limited to the smaller of MAXDOP and the total number of logical processors.
0              Lets SQL Server use up to 64 logical processors for parallel execution (or fewer, depending on system configuration).

Note: Each query executes with at least one scheduler and one worker thread. A parallel query can use multiple schedulers and threads, sometimes exceeding the specified MAXDOP value.
Considerations for Configuring MAXDOP
1. Changing MAXDOP in Azure SQL Database
In Azure SQL Database, you can modify MAXDOP using:
    Query-level configuration: By adding the OPTION (MAXDOP N) hint to specific queries.
    Database-level configuration: Using the ALTER DATABASE SCOPED CONFIGURATION statement.
2. Impact on Performance
    Too Low (MAXDOP = 1) → Queries run sequentially, which may slow down execution, especially for large, complex queries.
    Too High (MAXDOP > Optimal Value) → Excessive CPU consumption, leading to performance issues for concurrent workloads.
    Balanced Setting (Recommended MAXDOP) → Optimizes query execution without overwhelming system resources.
3. Index Operations and MAXDOP
Operations such as index creation, rebuilds, and drops can be CPU-intensive. You can override the database-level MAXDOP setting for index operations by specifying the MAXDOP option in CREATE INDEX or ALTER INDEX statements.
Example:
CREATE INDEX IX_Customer ON Customers (LastName) WITH (MAXDOP = 4);
4. Additional Parallel Operations
MAXDOP also affects parallel execution of:
    DBCC CHECKTABLE
    DBCC CHECKDB
    DBCC CHECKFILEGROUP

These operations may consume excessive CPU if MAXDOP is too high.
Best Practices and Recommendations
1. Avoid MAXDOP = 0
Although MAXDOP = 0 allows full CPU utilization, it can lead to excessive parallelism, starving other queries of resources. This is especially critical in Azure SQL Database, where resource governance is stricter.
2. Consider Workload-Specific MAXDOP Settings
Different workloads may benefit from different MAXDOP settings:
    OLTP Workloads (high concurrency, short queries) → Lower MAXDOP (e.g., 1-4).
    OLAP/Data Warehousing (complex queries, large datasets) → Higher MAXDOP (e.g., 8+).
3. Test Before Modifying MAXDOP
    Load test the workload with realistic concurrent queries before changing MAXDOP.
    Monitor CPU usage, query execution time, and worker thread contention.
4. Configure MAXDOP Independently for Replicas
For read scale-out, geo-replication, and Hyperscale replicas, MAXDOP can be set independently for primary and secondary replicas, allowing better optimization for read-write vs. read-only workloads.
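For example, a primary replica serving OLTP traffic and its read-only replicas serving reports can be given different values (a sketch; the numbers are illustrative):

-- Primary replica: keep parallelism modest for high-concurrency OLTP
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;

-- Secondary (read-only) replicas: allow more parallelism for reporting reads
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP = 8;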
Modifying MAXDOP in SQL Server and Azure SQL Database
1. Changing MAXDOP at the Database Level
To change MAXDOP for an entire Azure SQL Database, use:
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;
2. Setting MAXDOP for Specific Queries
To override the database setting at the query level, use:
SELECT * FROM Sales OPTION (MAXDOP 2);
3. Setting MAXDOP for Index Operations
ALTER INDEX IX_Customer ON Customers REBUILD WITH (MAXDOP = 4);

Monday, 20 May 2024

Find SQL Server Version, Product and Host Details

Check various SQL Server instance-related properties using the SERVERPROPERTY function:

SELECT
 SERVERPROPERTY('MachineName') AS ComputerName,
 SERVERPROPERTY('ServerName') AS InstanceName,
 SERVERPROPERTY('Edition') AS Edition,
 SERVERPROPERTY('ProductVersion') AS ProductVersion,
 SERVERPROPERTY('ProductLevel') AS ProductLevel;
GO
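
For host operating system details (the "Host" part of the title), sys.dm_os_host_info (SQL Server 2017 and later) is one option; a sketch:

SELECT host_platform,      -- e.g., Windows or Linux
       host_distribution,  -- OS distribution name
       host_release        -- OS version/release
FROM sys.dm_os_host_info;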


Check Whether TDE Is Enabled on SQL Server

We can check whether TDE is enabled using SQL Server Management Studio:

Log in to the server from SSMS
Expand the Databases section
Right-click the database and choose Properties
Open the Options page and check the Encryption Enabled setting


Check if TDE is enabled using a query:

SELECT
DB_NAME(database_id) AS DatabaseName,
encryption_state,
CASE encryption_state
WHEN 0 THEN 'No Database Encryption Key Present'
WHEN 1 THEN 'Unencrypted'
WHEN 2 THEN 'Encryption In Progress'
WHEN 3 THEN 'Encrypted'
WHEN 4 THEN 'Key Change In Progress'
WHEN 5 THEN 'Decryption In Progress'
ELSE 'Unknown'
END AS EncryptionStatus
FROM sys.dm_database_encryption_keys;