
AL updates and locking, Part II - Optimized locking in Azure SQL

In my previous post, AL updates and locking, I gave a high-level introduction to the SQL Server locking model. That post was inspired by the changes in the Azure SQL locking architecture, which were the target of my tests. Now, after the detour into the lock escalation overview, I am ready to run a few tests on an Azure instance and match the results with the Business Central experience.


Optimized locking in Azure SQL


In this test, I am taking a step closer to the ultimate goal of my locking experiments and running the same queries on an Azure SQL instance with Optimized Locking enabled.

Long story short, with this new locking model, the SQL Server engine does not hold row and page locks for the whole duration of the transaction. Instead, it places a lock on the transaction ID (TID), which marks all rows modified by the active transaction. This way, there is only one lock object persisting throughout the transaction lifetime. Although the database engine still acquires a lock on the row being modified, as well as on the page containing this row, these locks are released immediately after updating the data, and the total number of locks never reaches the escalation threshold.
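To see this in action, we can inspect the locks held by the session while the transaction is still open. This is a minimal sketch using sys.dm_tran_locks; with optimized locking enabled, it should show a single persistent X lock on a resource of type XACT, while KEY and PAGE locks appear only transiently:

-- Run inside the open transaction (before the ROLLBACK).
-- With optimized locking, expect one X lock with resource_type = 'XACT';
-- KEY and PAGE locks show up only for the row currently being updated.
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;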

This model is similar to the locking architecture used in the Oracle Database, which keeps lock information in the data block containing the record itself and never escalates locks.


At the time of writing, optimized locking is enabled only in 5 Azure regions. Over time, it will be rolled out to other regions as well, but so far, not every Azure SQL database can benefit from the new feature. The list of regions where the new locking model is available is published in the Optimized Locking article on Microsoft Learn.

You can also run a simple query to check whether optimized locking is enabled for your database (it returns 1 when the feature is on):

SELECT DATABASEPROPERTYEX('BCDEMO', 'IsOptimizedLockingOn');

The same article describes the prerequisites for a database to be able to use optimized locking:


Optimized locking requires accelerated database recovery (ADR) to be enabled on the database.
For the most benefit from optimized locking, read committed snapshot isolation (RCSI) should be enabled for the database.

It also confirms that both options are enabled by default in Azure SQL, so we don't have to worry. I checked and reconfirmed: indeed, both prerequisites are satisfied.
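For reference, both settings can be verified in sys.databases with a query along these lines (substitute your own database name):

SELECT is_accelerated_database_recovery_on, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'BCDEMO';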


The Azure SQL instance which I used in the test is deployed in the UK South region, which is listed as supported in the Optimized Locking article.

A sandbox SaaS instance could not work for two reasons. First of all, this kind of low-level test requires access to SQL system management views, which are completely hidden from developers in a SaaS environment. And the second reason is that the UK sandbox environment which I created was deployed in the UK West region. The Cardiff datacenter is not one of those where optimized locking is enabled, so it could not work anyway.


So, in the end, this was my testing configuration:

  • Business Central 21 on-prem Docker image

  • Azure SQL instance in the UK South region

  • Database from the container uploaded to Azure SQL

  • BC service running in the container reconfigured to connect to the Azure database


Updates with optimized locking


Before switching to AL statements, I am running the same SQL queries as in my local Docker SQL Server instance - the same table MyTestTable (see the table definition in the previous post, AL updates and locking), with 200 000 records, now taken to the cloud.


In Azure SQL, it does not matter whether I update all rows in a single statement:


BEGIN TRANSACTION
	UPDATE [MyTestTable] SET [Description] = 'abcd'

	WAITFOR DELAY '00:00:30'
ROLLBACK

...or split the transaction into multiple smaller updates:


DECLARE @I int = 1, @Count int = (SELECT COUNT(*) FROM MyTestTable)
BEGIN TRANSACTION
	WHILE @I <= @Count
	BEGIN
		UPDATE MyTestTable SET [Description] = 'abcd'
		    WHERE No_ BETWEEN @I AND @I + 999
		SET @I = @I + 1000
	END
	WAITFOR DELAY '00:00:30'
ROLLBACK

sys.dm_db_index_operational_stats and sys.dm_tran_locks demonstrate the same statistics in both cases - unlike SQL Server on-premises, where the single statement triggered an escalation, whilst the multiple smaller updates did not.
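For reference, this is roughly how I read the lock statistics - a sketch of the sys.dm_db_index_operational_stats query, where the two promotion columns count escalation attempts and successful escalations:

SELECT row_lock_count, page_lock_count,
       index_lock_promotion_attempt_count, index_lock_promotion_count
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('MyTestTable'), NULL, NULL);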


There is an exclusive lock on a resource of type XACT held throughout the transaction, and no escalation attempts. The X lock on a key and the IX lock on a page are released immediately once the row is updated, so at any moment during the update, the transaction holds only one row lock and one page lock.

This is the optimized SQL locking behaviour - the database engine does not keep multiple atomic locks, saving server resources. At the same time, there is no need for escalation, so rows not updated by the current transaction are never locked, which reduces the risk of update conflicts and deadlocks.


How is this related to Business Central?


After completing the introduction and the series of tests illustrating the lock escalation principles, I am going to relate these principles to Business Central, repeating the tests from an AL application.


Similar to the SQL update queries, I'm going to run an update of a large recordset, and do another test updating records in a loop. Here, though, I'm adjusting the tests to AL update patterns - usually we either use a loop to call Modify on each record, or use ModifyAll to run an update on a filtered subset in one go.

Let's start with a loop, modifying one record at a time.


trigger OnAction()
var
    MyTestTable: Record MyTestTable;
    I: Integer;
begin
    MyTestTable.FindSet();

    for I := 1 to MyTestTable.Count() do begin
        MyTestTable.Description := '0' + MyTestTable.Description;
        MyTestTable.Modify();
        MyTestTable.Next();
    end;

    Sleep(30000); // Keep the transaction open so that the locks can be inspected
    Error('');    // Roll the transaction back without showing an error message
end;

First, I'm running the statement on a server instance connected to an on-prem SQL Server instance. Unlike the query executed from SQL Server Management Studio, BC activity leaves a more visible trace: the number of locked rows and pages reported by sys.dm_db_index_operational_stats is much higher this time. This is an interesting observation in itself, worth another deep dive into the communication between BC and SQL, but the important thing right now is that despite the high number of locks, which are not released until the transaction is rolled back, the DB engine did not escalate the lock.


When executed on Azure SQL, the result looks the same.


ModifyAll in BC - with and without optimized locking


For a test addressing a batch modification in a single statement, I will mimic the SQL query I ran in the first set of tests on my local SQL Server instance:

UPDATE [MyTestTable] SET [Description] = 'abcd' WHERE No_ <= 6100

In AL code, this query can be translated to a SetRange or SetFilter with a ModifyAll statement.

MyTestTable.SetRange("No.", 1, 6100);
MyTestTable.ModifyAll(Description, 'abcd');

Again, at first I run this code on a container with a local database.

The exclusive lock is escalated to the table level - just as expected.
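The escalated lock is easy to spot in sys.dm_tran_locks - a sketch, assuming the session ID of the BC connection is known: after the escalation, a single OBJECT lock in X mode replaces the individual KEY and PAGE locks.

SELECT resource_type, request_mode, COUNT(*) AS locks_held
FROM sys.dm_tran_locks
WHERE request_session_id = 53 -- replace with the actual session ID of the BC connection
GROUP BY resource_type, request_mode;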

And now I re-run the statement on the Azure SQL database.



This time, the result does look unexpected, because the lock still gets escalated despite the previous demonstrations of the optimized locking capabilities. Although Azure SQL places a partitioned lock (not to be confused with a partition lock - the table does not get automatically partitioned), it is still locking the whole table.


What happens


If we refer back to the optimized locking overview, we can find one more condition for the database engine to use the TID lock:

When using stricter isolation levels like repeatable read or serializable, the Database Engine is forced to hold row and page locks until the end of the transaction, for both readers and writers, resulting in increased blocking and lock memory.

The new locking model works with the looser transaction isolation levels - read uncommitted and read committed - but as the level grows to the stricter repeatable read or serializable, the engine has to fall back to traditional locking. Why is this important for BC, you may ask. BC runs its transactions under the repeatable read isolation level, thus not allowing Azure SQL to use the full force of its new locking engine.
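This is easy to verify from sys.dm_exec_sessions, which exposes the isolation level of every active session; for a BC connection, a query like this should return 3:

SELECT transaction_isolation_level -- 1 = read uncommitted, 2 = read committed,
FROM sys.dm_exec_sessions          -- 3 = repeatable read, 4 = serializable, 5 = snapshot
WHERE session_id = 53;             -- replace with the actual session ID of the BC connection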

So the proper way of imitating BC queries in SQL Server Management Studio would be to run them in repeatable read transactions.


SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
GO
BEGIN TRANSACTION
	UPDATE [MyTestTable] SET [Description] = 'abcd' WHERE No_ <= 7000
	
	WAITFOR DELAY '00:00:30'
ROLLBACK

Once the isolation level of the test query is aligned with BC, Azure SQL demonstrates the same locking pattern, escalating the lock in the scenarios where it previously held a TID lock without escalation.


Will BC22 locking control make it work?


The release plan for Business Central v.22 (2023 release wave 1) includes a new property on record variables that allows developers to control the locking isolation for table reads.


The new transaction isolation control is a powerful tool for query optimisation, but in the context of this overview of BC updates, it is still not going to prevent escalations and enable Azure SQL to use optimized locking. The new property is called ReadIsolation and, according to the release plan note,


AL developers can now explicitly control the database isolation level on individual reads on a record instance.

So, this property only applies to database reads and can't change the overall transaction isolation for the BC session.

What does this mean on the SQL query level? We know very well that, by default, AL FindSet/FindFirst/FindLast statements add a table hint with(readuncommitted) to the queries they issue to the SQL database. The ReadIsolation property will allow developers to change this hint and force the selected isolation level on the table. But this does not affect updates, which are still executed under the repeatable read isolation imposed on the session.
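On the SQL side, the difference is only the table hint in the generated SELECT. A simplified illustration of what this could look like (the statements BC actually emits are parameterized and fully qualified, so treat this as a sketch):

-- Default AL read behaviour (FindSet and friends):
SELECT * FROM [MyTestTable] WITH (READUNCOMMITTED) WHERE [No_] <= 6100;
-- With ReadIsolation set to ReadCommitted on the record variable:
SELECT * FROM [MyTestTable] WITH (READCOMMITTED) WHERE [No_] <= 6100;
-- Updates are unaffected and still run under the session's repeatable read isolation.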

This is a preliminary conclusion, though, and of course the isolation control in BC 22 needs good testing and another deep dive post with a detailed review, once the April release is out.
