Continuing my performance testing of queries on extended Business Central tables, today I want to look at the same dataset from a slightly different angle. If you read my previous post, you may remember the conclusion from the first test run: partial records are not a silver bullet that solves every performance problem in the application, but applying SetLoadFields on record variables can definitely help protect the application from performance degradation when an application table is extended. In this post, I want to highlight another situation where a significant performance gain can be achieved through partial records - and this works for an extension as well as for the extended application.
Selecting the full record with filters on extension fields
To start, I'd like to repeat the series of tests I described in my previous post, but with one modification to the test scenario - now I will apply filters on the extension fields rather than the base table fields I used in the previous episode.
Since this is a completely new series of tests that cannot be directly compared to the previous one, I took the liberty of adding another field to the table extension. This is a Date field which I, following my unsophisticated naming conventions, called My Date.
tableextension 50100 "Det. Cust. Ledg. Entry" extends "Detailed Cust. Ledg. Entry"
{
    fields
    {
        field(50000; "My Amount"; Decimal)
        {
        }
        field(50001; "My Code"; Code[20])
        {
        }
        field(50002; "My Date"; Date)
        {
        }
    }
}
I put this field in the extension just to make it easier to generate an evenly distributed data set and filter the result. The values assigned to the new field follow the same rules (and are basically generated by the same procedure) as the Posting Date in earlier examples. Dates are based on the work date and can take values from WorkDate - 45 days to WorkDate + 45 days.
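The generator itself is not important for the tests, but for completeness, a value in this range can be produced with a couple of lines of AL. The procedure below is only an illustrative sketch - the name GenerateMyDate and the Random-based approach are my illustration here, not the exact code behind the dataset:

local procedure GenerateMyDate(): Date
var
    OffsetDays: Integer;
begin
    // Random(91) returns a value from 1 to 91, so the offset is evenly distributed between -45 and 45
    OffsetDays := Random(91) - 46;
    // Shift the work date by the offset, producing date formulas like '<-45D>' .. '<45D>'
    exit(CalcDate(StrSubstNo('<%1D>', OffsetDays), WorkDate()));
end;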
Filtering on a whole month, as I did in previous tests, yields around one third of the whole dataset if no other filters are applied. This amounts to over 300 000 records - a number on a different scale compared to previous results. To keep the record count at the same order of magnitude, I am limiting the date range to a single day, which brings the resulting recordset down to around 10 000 rows.
local procedure SelectDetCustLedgEntryAllFieldsFilterOnExtension()
var
    DetCustLedgEntry: Record "Detailed Cust. Ledg. Entry";
begin
    DetCustLedgEntry.SetRange("My Date", WorkDate());
    DetCustLedgEntry.FindSet();
end;
The execution plan for the query generated from this code suggests a clustered index scan on the filtered table (the extension one) and an index seek on the base table.
Although index seeks are usually considered faster and preferable to scans, this particular seek turns out to be rather slow.
When the query is executed, server statistics show that both operations are quite heavy, and the index seek is not much lighter than the full scan of the extension.
(10827 rows affected)
Table 'SLFTest$Detailed Cust_ Ledg_ Entry$437dbf0e-84ff-417a-965d-ed2bb9650972'.
Scan count 0, logical reads 55563, physical reads 1, page server reads 0,
read-ahead reads 10449, page server read-ahead reads 0...
Table 'SLFTest$Detailed Cust_ Ledg_ Entry$1554f031-75cc-44eb-933a-4c284c07b96b'.
Scan count 1, logical reads 18836, physical reads 3, page server reads 0,
read-ahead reads 18832, page server read-ahead reads 0...
SQL Server Execution Times:
CPU time = 250 ms, elapsed time = 888 ms.
Table allocation shows that the Detailed Cust. Ledg. Entry table with 1 000 000 records takes 210 086 pages, so the roughly 10 000 pages read here are a relatively small fraction (under 5%) of the total table size, but the overall query run time is still high due to the large number of physical page reads.
SetLoadFields on the extension only
Now I will start limiting the list of loaded fields, while keeping the same filter on the My Date field. The first test with the partial record and the filter on the extension table retrieves only the extension fields. Or at least, this is what it looks like from the AL code perspective.
local procedure SelectDetCustLedgEntryExtFieldsFilterOnExtension()
var
    DetCustLedgEntry: Record "Detailed Cust. Ledg. Entry";
begin
    DetCustLedgEntry.SetRange("My Date", WorkDate());
    DetCustLedgEntry.SetLoadFields("My Amount", "My Code", "My Date");
    DetCustLedgEntry.FindSet();
end;
In fact, as we know, the fields included in the SetLoadFields parameters are not the only ones loaded into the record variable. Primary key fields and system fields (timestamp, SystemId, SystemCreatedAt, etc.) are always included in the query, so the base table and the extension still must be joined.
SELECT
"379"."timestamp",
"379"."Entry No_",
"379_e1"."My Amount",
"379_e1"."My Code",
"379_e1"."My Date",
"379"."$systemId",
"379"."$systemCreatedAt",
"379"."$systemCreatedBy",
"379"."$systemModifiedAt",
"379"."$systemModifiedBy"
FROM "CRONUS".dbo."SLFTest$Detailed Cust_ Ledg_ Entry$437dbf0e-84ff-417a-965d-ed2bb9650972" "379"
WITH(READUNCOMMITTED)
JOIN "CRONUS".dbo."SLFTest$Detailed Cust_ Ledg_ Entry$1554f031-75cc-44eb-933a-4c284c07b96b" "379_e1"
WITH(READUNCOMMITTED)
ON ("379"."Entry No_" = "379_e1"."Entry No_")
WHERE ("379_e1"."My Date"='2025-01-23 00:00:00')
ORDER BY "Entry No_" ASC OPTION(FAST 50)
In this case, SetLoadFields does not affect the number of reads - neither physical nor logical, since SQL Server still has to read the same data pages. The total execution time seems to be somewhat lower, but a reliable statistical analysis will require much more than a single run. Server statistics still show the same numbers as in the previous run. I will skip the statistics here - all results can be found in the consolidated table at the end of the article.
SetLoadFields on Base Application fields
For the following test, I am moving the selected fields from the extension table to the base application, while still keeping the filter on the My Date field added by the extension.
local procedure SelectDetCustLedgEntryBaseAppFieldsFilterOnExtension()
var
    DetCustLedgEntry: Record "Detailed Cust. Ledg. Entry";
begin
    DetCustLedgEntry.SetRange("My Date", WorkDate());
    DetCustLedgEntry.SetLoadFields("Document Type", "Document No.");
    DetCustLedgEntry.FindSet();
end;
And here is the SQL query that will be executed to retrieve the data for the FindSet.
SELECT
"379"."timestamp",
"379"."Entry No_",
"379"."Document Type",
"379"."Document No_",
"379_e1"."My Date",
"379"."$systemId",
"379"."$systemCreatedAt",
"379"."$systemCreatedBy",
"379"."$systemModifiedAt",
"379"."$systemModifiedBy"
FROM "CRONUS".dbo."SLFTest$Detailed Cust_ Ledg_ Entry$437dbf0e-84ff-417a-965d-ed2bb9650972" "379"
WITH(READUNCOMMITTED)
JOIN "CRONUS".dbo."SLFTest$Detailed Cust_ Ledg_ Entry$1554f031-75cc-44eb-933a-4c284c07b96b" "379_e1"
WITH(READUNCOMMITTED)
ON ("379"."Entry No_" = "379_e1"."Entry No_")
WHERE ("379_e1"."My Date"='2025-01-23 00:00:00')
ORDER BY "Entry No_" ASC OPTION(FAST 50)
Neither the SQL query nor the execution plan springs any surprises. Although the partial record is limited to base application fields, the filter applied on an extension field still results in a join query. In my previous post, I showed that a SetLoadFields call that includes only base app fields can protect application performance by eliminating joins on extension tables. But this does not work if any filters are set on extension fields - the join cannot be avoided in this situation.
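For comparison, the join-free scenario from the previous post would look like the sketch below: both the filter and the loaded fields belong to the base table, so the generated SQL statement touches only the base table. The procedure name here is just an illustration, not one of today's measured tests.

local procedure SelectDetCustLedgEntryBaseAppFieldsFilterOnBaseApp()
var
    DetCustLedgEntry: Record "Detailed Cust. Ledg. Entry";
begin
    // Both the filter and the loaded fields come from the base table,
    // so no join with the extension table is required
    DetCustLedgEntry.SetRange("Customer No.", '10000');
    DetCustLedgEntry.SetLoadFields("Document Type", "Document No.");
    DetCustLedgEntry.FindSet();
end;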
Back to today's test with the filter on the extension field: the SQL query optimizer generates the same execution plan, requiring a clustered index scan on the extension table, but now it suggests creating a new index on this table:
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[SLFTest$Detailed Cust_ Ledg_ Entry$1554f031-75cc-44eb-933a-4c284c07b96b]
([My Date])
Let's try and do this. I modify my table extension and add the recommended index.
keys
{
    key(MyDate; "My Date") { }
}
With this index implemented as the optimizer suggested, the query execution plan changes from the clustered index scan to a non-clustered index seek on the newly created index. The number of physical reads on the extension table drops significantly, from nearly 19 000 pages to just 28.
But despite this considerable difference in data access operations, the query execution time does not change dramatically - it goes down from around 660 ms to approximately 550 ms with a cold cache and still fluctuates around 140 ms after the warm-up, showing no improvement compared to the unindexed table.
This table summarizes all the test runs described above.
We can see that all executions require the same table scans and the same number of page reads (except the last case with the non-clustered index seek). Query time is slightly lower in all cases where SetLoadFields is used, compared to selecting the full record. Surprisingly, the index seek is not a breakthrough either - the query still takes more or less the same time to run.
Filters on both base and extension tables
And the last test I want to run today stands a little apart from the rest, because now I am going to change the filters on the recordset, so the results cannot be directly compared to the previous test runs. But this test is needed to demonstrate another situation where partial records can be especially beneficial for performance.
For this one, I apply filters on fields belonging to both the base application table and the extension table. With this query shape, SQL Server cannot scan one of the tables and then use the output as the seek predicate for the other table. Now it has to produce an execution plan that actually searches for the required records in both tables before joining them.
local procedure SelectDetCustLedgEntryBaseAppFieldsFilterBothTables()
var
    DetCustLedgEntry: Record "Detailed Cust. Ledg. Entry";
begin
    DetCustLedgEntry.SetRange("Customer No.", '10000');
    DetCustLedgEntry.SetRange(
        "My Date",
        CalcDate('<-CM>', WorkDate()), CalcDate('<CM>', WorkDate()));
    DetCustLedgEntry.SetLoadFields("Document Type", "Document No.");
    DetCustLedgEntry.FindSet();
end;
This is the AL data access, and here comes the SQL query produced from the FindSet with its respective filters.
SELECT
"379"."timestamp",
"379"."Entry No_",
"379"."Entry Type",
"379"."Document Type",
"379"."Document No_",
"379"."Customer No_",
"379_e1"."My Date",
"379"."$systemId",
"379"."$systemCreatedAt",
"379"."$systemCreatedBy",
"379"."$systemModifiedAt",
"379"."$systemModifiedBy"
FROM "CRONUS".dbo."SLFTest$Detailed Cust_ Ledg_ Entry$437dbf0e-84ff-417a-965d-ed2bb9650972" "379"
WITH(READUNCOMMITTED)
JOIN "CRONUS".dbo."SLFTest$Detailed Cust_ Ledg_ Entry$1554f031-75cc-44eb-933a-4c284c07b96b" "379_e1"
WITH(READUNCOMMITTED)
ON ("379"."Entry No_" = "379_e1"."Entry No_")
WHERE (
"379"."Customer No_"='10000' AND
"379_e1"."My Date">='2025-01-01 00:00:00' AND
"379_e1"."My Date"<='2025-01-31 00:00:00')
ORDER BY "Entry No_" ASC OPTION(FAST 50)
The query returns 4926 rows, scanning both tables, as expected. It takes 98 797 physical page reads on the base table and 18 835 on the extension.
And like in the previous case, the query optimizer suggests an index that can improve the query performance.
Missing index details according to the query optimizer:
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[SLFTest$Detailed Cust_ Ledg_ Entry$437dbf0e-84ff-417a-965d-ed2bb9650972]
([Customer No_])
INCLUDE
([timestamp],
[Entry Type],
[Document Type],
[Document No_],
[$systemId],
[$systemCreatedAt],
[$systemCreatedBy],
[$systemModifiedAt],
[$systemModifiedBy])
What stands out in this index, though, is the INCLUDE clause.
Covering index
Now we are approaching the subject of a covering index. A covering index is always defined in relation to a specific query: it is an index that includes all columns selected by the query, so that the database engine can extract all necessary data directly from the non-clustered index without referring to the clustered index (the data pages of the table itself). The index suggested by the optimizer above is a covering index for the query defined earlier. It includes the fields Document Type and Document No. explicitly requested by SetLoadFields, but it also contains the system fields implicitly added by the Business Central data access provider.
Let's go and add the new index.
keys
{
    key(CustomerNo; "Customer No.")
    {
        IncludedFields =
            SystemRowVersion,
            SystemId,
            SystemCreatedAt,
            SystemCreatedBy,
            SystemModifiedAt,
            SystemModifiedBy,
            "Entry Type",
            "Document Type",
            "Document No.";
    }
}
When the query is run again with the new index in place, we can see that it does indeed make a difference: the number of reads on the base table drops sharply. With the new index, SQL Server needs only 232 page reads to select all required data from the base table. The total execution time for the query with the covering index goes down as well.
Conclusion
I will repeat the conclusion I already made in the previous post: the use of SetLoadFields in AL code will not solve all performance problems in the application, but there are certain situations where partial records can significantly improve performance or prevent possible performance issues caused by extensions. My previous post, "Who benefits from partial records", positions partial records as a protection against the performance impact of table extensions. Now I have shown how SetLoadFields can be used in combination with a covering index to achieve a performance improvement.
This approach is only practical for a limited number of the most expensive queries: limit the fields being selected and declare an index that includes all required fields. Remember that the following system fields are always implicitly included in the query and must be present in the index as well (the sketch after the list shows the complete pattern):
SystemRowVersion
SystemId
SystemCreatedAt
SystemCreatedBy
SystemModifiedAt
SystemModifiedBy
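To put the whole pattern in one place, here is a condensed sketch on a hypothetical table - the object IDs, the table name My Entry, and the field selection are made up for illustration, but the structure is the same as in the Detailed Cust. Ledg. Entry example above:

table 50110 "My Entry"
{
    DataClassification = CustomerContent;

    fields
    {
        field(1; "Entry No."; Integer) { }
        field(2; "Customer No."; Code[20]) { }
        field(3; "Document No."; Code[20]) { }
        field(4; Amount; Decimal) { }
    }

    keys
    {
        key(PK; "Entry No.") { Clustered = true; }
        key(CustomerNo; "Customer No.")
        {
            // Covers the partial record below: the explicitly loaded field
            // plus the system fields the platform always adds to the query
            IncludedFields =
                SystemRowVersion, SystemId,
                SystemCreatedAt, SystemCreatedBy,
                SystemModifiedAt, SystemModifiedBy,
                "Document No.";
        }
    }
}

local procedure FindEntriesForCustomer()
var
    MyEntry: Record "My Entry";
begin
    // The filter field is the index key and the loaded field is included in the index,
    // so the query can be served from the covering index alone
    MyEntry.SetRange("Customer No.", '10000');
    MyEntry.SetLoadFields("Document No.");
    if MyEntry.FindSet() then;
end;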