
#0380 SQL Server Basics Specify Scale and Precision when defining Decim ...


I had an interesting conversation during a code review that I was asked to conduct for a simple query that a team had written to support their project monitoring. The team specialized in quality assurance and had minimal development experience. They had used variables of decimal data types in their script, but the variables were declared without any precision or scale. When I gave a review comment on the declaration of the variables, I was asked the question:

Does it make a difference if we do not specify scale and precision when defining variables of the decimal or numeric data types?

I often look forward to such encounters for two reasons:

- Answering their questions reinforces my own concepts and learning.
- It helps me contribute to the overall community by writing a blog post about the experience.

When the question was asked, I honestly admitted that I did not have a specific answer other than that it is the best practice to do so from a long-term maintainability standpoint. After lunch, I put together a small test, which I showed the team and will be presenting today.

The Problem

In the script below, I take a decimal variable (declared without a fixed scale or precision) with the value 20.16, multiply it by an integer constant (100), and then by a decimal constant (100.0). If one uses a calculator, the expected result in both cases is:

20.16 * 100 = 2016
20.16 * 100.0 = 2016

Expected results when we multiply a Decimal with another number using the calculator

However, when we perform the same test via SQL Server, we are in for a surprise:

DECLARE @dVal1 DECIMAL = 20.16;
SELECT (@dVal1 * 100)   AS DecimalMultipliedByAnInteger,
       (@dVal1 * 100.0) AS DecimalMultipliedByADecimal;
GO

As can be seen from the results below, we do not get the expected results; instead, we find that the decimal value was rounded off before the multiplication took place.



Although the test input value is declared as a decimal, the result appears to be based only on the integer part of the input, not its fractional part.
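A quick way to see where the rounding happens is to inspect the variable itself. The following is a minimal sketch (assuming any recent SQL Server instance to run it against); it shows that the value is already rounded at assignment, before any multiplication takes place:

-- The value is rounded when it is assigned to the variable,
-- not when the multiplication is performed.
DECLARE @dVal1 DECIMAL = 20.16;
SELECT @dVal1 AS StoredValue; -- Returns 20, not 20.16
GO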

Root Cause

The reason behind this behaviour is hidden in the following lines of the SQL Server documentation on MSDN (formerly known as "Books Online") for the decimal and numeric data types, available here: https://msdn.microsoft.com/en-us/library/ms187746.aspx .

…s (scale)

The number of decimal digits that will be stored to the right of the decimal point….Scale can be specified only if precision is specified. The default scale is 0…

The real reason, however, lies a few lines further down, under "Converting decimal and numeric Data", where rounding is described:

Converting decimal and numeric Data

…By default, SQL Server uses rounding when converting a number to a decimal or numeric value with a lower precision and scale….

What SQL Server appears to be doing here is that when a variable of the DECIMAL data type is declared without a precision and scale, the defaults are applied: a precision of 18 and a scale of zero (0). Hence, the test value of 20.16 is rounded to the nearest integer, 20.
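We can ask SQL Server directly for the precision and scale it assigned to the variable. The sketch below uses the built-in SQL_VARIANT_PROPERTY function for this purpose:

-- Inspect the precision and scale that SQL Server actually
-- assigned to a DECIMAL variable declared without either.
DECLARE @dVal1 DECIMAL = 20.16;
SELECT SQL_VARIANT_PROPERTY(@dVal1, 'Precision') AS AssignedPrecision, -- 18 (the default)
       SQL_VARIANT_PROPERTY(@dVal1, 'Scale')     AS AssignedScale;     -- 0 (the default)
GO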

To confirm that rounding is indeed taking place, I swapped the digits in the input value from 20.16 to 20.61 and re-ran the same test.

DECLARE @dVal1 DECIMAL = 20.61;
SELECT (@dVal1 * 100)   AS DecimalMultipliedByAnInteger,
       (@dVal1 * 100.0) AS DecimalMultipliedByADecimal;
GO

Now, the result was 2100 instead of 2000, because the input test value of 20.61 was rounded to 21 before the multiplication took place.



Because the test input value was declared as a decimal without precision and scale, rounding took place, producing an unexpected result.

By this time, my audience was awestruck as they realized the impact this behaviour would have had on their project monitoring numbers.

The Summary: A Best Practice

We can summarize the learning into a single sentence:

It is a best practice for ensuring data quality to always specify a precision and scale when working with variables of the numeric or decimal data types.

To confirm, here’s a version of the same test we saw earlier. The only difference is that, this time, we have explicitly specified the precision and scale on our input value.

DECLARE @dVal1 DECIMAL(19,4) = 20.16;
SELECT (@dVal1 * 100)   AS DecimalMultipliedByAnInteger,
       (@dVal1 * 100.0) AS DecimalMultipliedByADecimal;
GO

When we look at the results, we see that the output is exactly what we wanted to see, i.e. 2016.



Because the test input value was declared as a decimal with precision and scale, no rounding took place and we got the expected result.
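As an aside, the precision and scale of the result of a decimal multiplication follow documented rules: the result precision is p1 + p2 + 1 and the result scale is s1 + s2. The sketch below makes this visible, assuming the literal 100.0 is typed as DECIMAL(4,1), which is how SQL Server types such literals:

-- Inspect the type of the multiplication result.
-- DECIMAL(19,4) * DECIMAL(4,1) yields precision 19 + 4 + 1 = 24
-- and scale 4 + 1 = 5, well within the maximum precision of 38.
DECLARE @dVal1 DECIMAL(19,4) = 20.16;
SELECT SQL_VARIANT_PROPERTY(@dVal1 * 100.0, 'Precision') AS ResultPrecision, -- 24
       SQL_VARIANT_PROPERTY(@dVal1 * 100.0, 'Scale')     AS ResultScale;     -- 5
GO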

Further Reading

decimal and numeric data types [MSDN Documentation]: https://msdn.microsoft.com/en-us/library/ms187746.aspx

Until we meet next time,

Be courteous. Drive responsibly.
