
Columnstore Indexes part 93 (“Batch Mode Adaptive Memory Grant Feedback”)


Continuation from the previous 92 parts, the whole series can be found at http://www.nikoport.com/columnstore/ .

Given that the Batch Execution Mode is currently an exclusive feature of the Columnstore Indexes, and that the new developments in the query optimisation area mainly target the In-Memory & Columnstore features, in SQL Server vNext (post 2016) we shall have some incredible automatic Query Processing optimisations that, in the first phase, will work exclusively for the Batch Execution Mode.

The project name of this improvement for SQL Server vNext & Azure SQL Database is “Adaptive Query Execution”.

The first 3 announced features of the Adaptive Query Execution are Interleaved Execution, Batch Mode Adaptive Join & Batch Mode Adaptive Memory Grant Feedback. In this blog post I will focus on the available and accessible information on the last one, the Batch Mode Adaptive Memory Grant Feedback; the other 2 features will be taken to the test as soon as they become available.

As already described by Joe Sack in Introducing Batch Mode Adaptive Memory Grant Feedback, the idea behind this feature is to improve the memory grants for the 2nd and subsequent executions of the same query by adjusting the grant sizes (the total grant as well as the memory fractions of the individual iterators).

This adjustment might take place based on the inadequacy of the estimated number of rows (which is based on the statistics available at the time of the execution plan generation) relative to the actual number of rows that the iterator/query is processing.

The 2 possible adjustment scenarios are:

when the estimated number of rows is too high and the memory is granted to the query even though the query itself will not use it;

when the estimated number of rows is too low and memory-consuming operations such as hashing or sorting will not have enough space to fit the complete data sets, thus making them spill to TempDB (temporarily storing the data while doing the work, because of the lack of memory available to the query).
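Both scenarios can be observed on a live system while queries are executing. The diagnostic query below is my own sketch (not part of the feature itself) and uses the sys.dm_exec_query_memory_grants DMV to compare what was granted with what is actually being used:

```sql
-- My own diagnostic sketch: compare granted vs. actually used memory
-- for currently executing queries with a memory grant.
SELECT mg.session_id,
       mg.requested_memory_kb,
       mg.granted_memory_kb,
       mg.used_memory_kb,
       mg.max_used_memory_kb,
       mg.ideal_memory_kb,
       t.[text]
    FROM sys.dm_exec_query_memory_grants mg
        CROSS APPLY sys.dm_exec_sql_text(mg.[sql_handle]) t
    WHERE mg.granted_memory_kb > 2 * mg.max_used_memory_kb  -- candidate for scenario 1 (over-estimation)
       OR mg.used_memory_kb = mg.granted_memory_kb;         -- candidate for scenario 2 (grant fully consumed)
```

The two filter predicates are simply my heuristics for spotting candidates for each scenario; spills themselves are better confirmed through the hash/sort warnings in the actual execution plan.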

Correcting the cardinality estimates that were compiled at the time of the execution plan generation is the ultimate goal of the Batch Mode Adaptive Memory Grant Feedback.

The idea is not to change the generated execution plan or to create a new one, but to adjust the size of the memory grants, making the query run faster and more efficiently. In the 1st scenario the actual memory grant will be lowered, allowing more queries to execute at the same time (a good question here is whether a query can pass through a lower gateway if the memory grant is lowered that much),

while in the 2nd scenario the overall performance of the query shall improve because, with more memory to work with, the query will spill (write and read) to TempDB less and thus finish faster.

The Batch Mode Adaptive Memory Grant Feedback will be triggered only if the granted memory is more than 2 times bigger than the actually used memory, and only for queries with memory grants bigger than 1 MB. Should there not be enough memory, the Extended Event spilling_report_to_memory_grant_feedback will provide a starting point for the Query Optimiser.
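To catch that spilling report, one can set up an Extended Events session. The sketch below is my own (the session name is arbitrary) and assumes the event lives in the sqlserver package of the vNext build, as the other query execution events do:

```sql
-- My own sketch: capture the memory grant feedback spilling report
-- mentioned above. The session name is my arbitrary choice.
CREATE EVENT SESSION [MemoryGrantFeedbackSpills] ON SERVER
    ADD EVENT sqlserver.spilling_report_to_memory_grant_feedback
    ADD TARGET package0.ring_buffer
    WITH ( STARTUP_STATE = OFF );
GO

ALTER EVENT SESSION [MemoryGrantFeedbackSpills] ON SERVER
    STATE = START;
```

The ring_buffer target keeps the captured events in memory, which is enough for an interactive test; for longer observation an event_file target would be the more typical choice.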

This sounds like a very exciting feature, but how does it look? How does it work in practice, and how can we determine whether our query was adapted (adjusted) or not?

For the tests, I will use 2 different Virtual Machines with SQL Server 2016 Service Pack 1 and the currently available SQL Server vNext CTP Version 1.1.

As in most blog posts before, the free ContosoRetailDW sample database from Microsoft will serve as the base for the examples.

In the script below, I am restoring a copy of this database from the backup located in the C:\Install folder, updating its compatibility level to 140 (vNext) or 130 (SQL 2016 SP1 — you will need to update this part of the script yourself), and removing the Primary Key Clustered Index on the FactOnlineSales table before creating a Clustered Columnstore Index on it:

USE [master]

ALTER DATABASE ContosoRetailDW
    SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

RESTORE DATABASE [ContosoRetailDW]
    FROM DISK = N'C:\Install\ContosoRetailDW.bak'
    WITH FILE = 1,
        MOVE N'ContosoRetailDW2.0' TO N'C:\Data\ContosoRetailDW.mdf',
        MOVE N'ContosoRetailDW2.0_log' TO N'C:\Data\ContosoRetailDW.ldf',
        NOUNLOAD, STATS = 5;

ALTER DATABASE ContosoRetailDW
    SET MULTI_USER;
GO

ALTER DATABASE [ContosoRetailDW] SET COMPATIBILITY_LEVEL = 140
GO

ALTER DATABASE [ContosoRetailDW]
    MODIFY FILE ( NAME = N'ContosoRetailDW2.0', SIZE = 2000000KB, FILEGROWTH = 128000KB )
GO
ALTER DATABASE [ContosoRetailDW]
    MODIFY FILE ( NAME = N'ContosoRetailDW2.0_log', SIZE = 400000KB, FILEGROWTH = 256000KB )
GO

USE ContosoRetailDW;

-- Drop the Primary Key and the Clustered Index with it
ALTER TABLE dbo.[FactOnlineSales]
    DROP CONSTRAINT [PK_FactOnlineSales_SalesKey];

-- Create the Clustered Columnstore Index
CREATE CLUSTERED COLUMNSTORE INDEX PK_FactOnlineSales
    ON dbo.FactOnlineSales;

I have taken a slightly modified version of one of the traditional test queries that I use in my workshops and ran it against our star-schema database:

SELECT prod.ProductName, SUM(sales.SalesAmount)
    FROM dbo.FactOnlineSales sales
        INNER JOIN dbo.DimProduct prod
            ON sales.ProductKey = prod.ProductKey
        INNER JOIN dbo.DimCurrency cur
            ON sales.CurrencyKey = cur.CurrencyKey
        INNER JOIN dbo.DimPromotion prom
            ON sales.PromotionKey = prom.PromotionKey
    WHERE cur.CurrencyName = 'USD'
        AND prom.EndDate >= '2004-01-01'
    GROUP BY prod.ProductName;

Below you will find the execution plan for the query, which looks exactly the same for SQL Server 2016 Service Pack 1 and for SQL Server vNext:


[Figure: identical execution plans for the test query on SQL Server 2016 SP1 and SQL Server vNext]
The original memory grant for both SQL Server versions is 40196 KB, and the memory fractions of the individual iterators are equal, showing that for the first execution of the query SQL Server vNext currently brings no significant changes to the query estimation for this specific scenario.

But what about the consequent executions of the same query? The promise of the Batch Mode Adaptive Memory Grant Feedback is that not the first, but the second and further executions of the query might show changes to the total memory grant and to some iterator memory fractions.
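One way to verify whether the grant actually changed between executions, without re-capturing the actual plan every time, is my own sketch below: it uses the memory grant columns that SQL Server 2016 added to sys.dm_exec_query_stats. If min_grant_kb and max_grant_kb diverge for the same cached plan, the grant was adjusted between executions (the LIKE filter on the query text is my assumption for locating our test query):

```sql
-- My own sketch: track how the memory grant of the cached test query
-- evolves across executions, using the grant columns of sys.dm_exec_query_stats.
SELECT qs.execution_count,
       qs.last_grant_kb,
       qs.min_grant_kb,
       qs.max_grant_kb,
       qs.last_used_grant_kb
    FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.[sql_handle]) st
    WHERE st.[text] LIKE '%FactOnlineSales%DimPromotion%'
        AND st.[text] NOT LIKE '%dm_exec_query_stats%';  -- exclude this diagnostic query itself
```

This keeps the measurement outside the query under test, so the grant numbers are not influenced by the act of observing them.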

Let’s execute the same query for the second time — or, even better, let’s execute the following script, which will clear the plan cache (DBCC FREEPROCCACHE), recompile the original query (OPTION (RECOMPILE)) and then execute the same query again, without recompilation, in its original form.

DBCC FREEPROCCACHE;

SELECT prod.ProductName, SUM(sales.SalesAmount)
    FROM dbo.FactOnlineSales sales
        INNER JOIN dbo.DimProduct prod
            ON sales.ProductKey = prod.ProductKey
        INNER JOIN dbo.DimCurrency cur
            ON sales.CurrencyKey = cur.CurrencyKey
        INNER JOIN dbo.DimPromotion prom
            ON sales.PromotionKey = prom.PromotionKey
    WHERE cur.CurrencyName = 'USD'
        AND prom.EndDate >= '2004-01-01'
    GROUP BY prod.ProductName
    OPTION ( RECOMPILE );

SELECT prod.ProductName, SUM(sales.SalesAmount)
    FROM dbo.FactOnlineSales sales
        INNER JOIN dbo.DimProduct prod
            ON sales.ProductKey = prod.ProductKey
        INNER JOIN dbo.DimCurrency cur
            ON sales.CurrencyKey = cur.CurrencyKey
        INNER JOIN dbo.DimProm
