
Enhancements To Polybase In SQL Server 2019


Rajendra Gupta has a multi-part series on PolyBase enhancements in SQL Server 2019. Part one covers installation of SQL Server 2019 and Azure Data Studio:

You need to install Oracle JRE 7 update 51 or higher to install PolyBase. If it is not installed, you will get the error message below while checking the installation rules.

To fix this error, go to 'Java SE Runtime Environment 8 Downloads' and download Java SE Runtime Environment 8u191E. Double-click the setup file to install it.

Part two shows us how to install Oracle Express Edition and query it via SQL Server:

As discussed, the requirements so far for accessing an Oracle database using PolyBase with Azure Data Studio are: SQL Server 2019 preview 4, Azure Data Studio with the SQL Server 2019 extension, an Oracle data source, and the PolyBase services running alongside the SQL Server database services.

Part three is forthcoming, as Rajendra mentions at the end of part 2.
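For orientation, here is a minimal sketch (mine, not Rajendra's code) of what the finished PolyBase setup for an Oracle source looks like in SQL Server 2019; the host name, credential, and the [XE].[HR].[EMPLOYEES] table and its columns are all placeholder assumptions:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongP@ssw0rd!';   -- required once per database

CREATE DATABASE SCOPED CREDENTIAL OracleCredential
    WITH IDENTITY = 'oracle_user', SECRET = 'oracle_password';

-- Placeholder Oracle host and port
CREATE EXTERNAL DATA SOURCE OracleInstance
    WITH (LOCATION = 'oracle://oraclehost:1521', CREDENTIAL = OracleCredential);

-- Column list must match the remote Oracle table (placeholder definition here)
CREATE EXTERNAL TABLE dbo.OracleEmployees
(
    EmployeeId   INT,
    EmployeeName NVARCHAR(100)
)
WITH (LOCATION = '[XE].[HR].[EMPLOYEES]', DATA_SOURCE = OracleInstance);

SELECT TOP (10) * FROM dbo.OracleEmployees;   -- queried like a local table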


Machine Learning Services - Configuring R Services in SQL Server


The R language is one of the most popular languages for data science, machine learning, and computational statistics, and there are several IDEs that allow seamless R development. Owing to the growing popularity of the language, Microsoft has included R services in SQL Server from SQL Server 2016 onwards. In this article, we will briefly review how to integrate R with SQL Server 2017: we will walk through the installation process and execute some basic R commands in SQL Server 2017.

Environment Setup

To run R scripts in SQL Server, you have to install Machine Learning Services, which can be done in two ways: you can add the services to an existing SQL Server installation, or you can include them in a fresh installation of SQL Server. In this article we take the second approach and download a new installation of SQL Server 2017 with Machine Learning Services enabled. To do so, follow these steps:

Go to the SQL Server 2017 download link and select the Developer edition of SQL Server for download, as shown below:



Once the download is complete, open the downloaded executable file. You should see the following options:



Machine Learning Services is an optional feature that is not installed by default with SQL Server. To install it manually, click the Custom installation option from the three options shown in the screenshot above.

A new window will appear where you have to specify the installation path.



Specify the installation path and click the "Install" button. The download will take some time before the installation window appears.

From the window that appears, select the "Installation" option on the left. You will see several options on the right; select the first one, which reads "New SQL Server stand-alone installation or add features to an existing installation". This is shown in the following figure:


Select the free "Developer" edition in the window that appears and click the "Next" button. Accept the License Agreement and click "Next" again. Walk through each step until you reach the "Feature Selection" window.

In the Feature Selection window, select "Database Engine Services." Under it you should see the "Machine Learning Services (In-Database)" option, which in turn contains R and Python options. Select both R and Python, as shown below:



Click the "Next" button.

Give a name to your SQL Server instance in the window that appears, or keep the default name, and then click the "Next" button.



Walk through each step until you reach the “Database Engine Configuration” option as shown below:



Here you can click the "Add Current User" button to add yourself as a database administrator, then click the "Next" button.

A window will appear prompting you to give consent to install “Microsoft R Open” as shown below:



Click the "Accept" button and then click "Next".

Repeat the previous step to give consent for installing the Python services. The "Feature Configuration Rules" window will appear; click "Next".

Finally, in the "Ready to Install" window, click the "Install" button as shown below:



Depending on your processor speed and internet connection, the installation process can take some time. Once the installation is complete, you should see the following window:



If you see the above window, the installation is successful.

Enabling Machine Learning Services

In the previous section, we installed the machine learning services required to run R scripts in SQL Server. However, the services are not enabled by default.

To enable the machine learning services, go to SQL Server Management Studio. If you have not already installed SQL Server Management Studio, you can download it from this link .

In the SQL Server Management Studio, open a new query window and type the following script:

EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE WITH OVERRIDE;

The script above enables execution of any external scripts in SQL Server. If the above script executes successfully, you should see the following message.

Configuration option ‘external scripts enabled’ changed from 0 to 1. Run the RECONFIGURE statement to install.
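Before restarting (next step), you can confirm that the new setting took effect by re-running sp_configure; this quick verification is my addition rather than part of the original article:

-- run_value should show 1 once RECONFIGURE has been executed
EXEC sp_configure 'external scripts enabled';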

Before R scripts can be executed, we need to restart SQL Server. To do so, open SQL Server Configuration Manager from the Windows Start menu. From the options on the left, select "SQL Server Services". You will see a list of all the SQL Server instances running on your system, as shown below:



Right-click the SQL Server instance that you installed along with Machine Learning Services and click "Restart".

Executing R Scripts

We have installed and enabled the services that are required to run R scripts in SQL Server. Now is the time to run our R script in SQL Server. Execute the following script:

EXEC sp_execute_external_script
    @language = N'R',
    @script = N'print("Welcome to R in SQL Server")';
GO

In the first line, we call the "sp_execute_external_script" stored procedure; as parameters we pass it the "language" that the script is written in and the actual "script". Notice we passed N'R' as the language. The script simply prints a message on the screen. In the console window, you should see the following output when the script is executed:

STDOUT message(s) from external script:[1] “Welcome to R in SQL Server”

If the corresponding services are installed, the process for running any external script remains the same.
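Beyond printing messages, the same procedure can exchange result sets between T-SQL and R. The sketch below is mine, not from the article; the column name and the literal value are invented for illustration. It passes a query to R via @input_data_1 and returns a data frame through OutputDataSet:

EXEC sp_execute_external_script
    @language = N'R',
    @script = N'
        # InputDataSet holds the rows from @input_data_1; return a data frame via OutputDataSet
        OutputDataSet <- data.frame(DoubledValue = InputDataSet$SomeValue * 2)
    ',
    @input_data_1 = N'SELECT SomeValue = 21'
WITH RESULT SETS ((DoubledValue INT));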

Executing Python Scripts

During the installation of machine learning services, we also selected Python. Let’s modify our script to see how Python can be executed inside SQL Server.

Execute the following script:

EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'print("Welcome to Python in SQL Server")';
GO

As you can see, the only things we changed here are the language and the text inside the string (the latter being optional). The output looks like this:

STDOUT message(s) from external script:Welcome to Python in SQL Server

Conclusion

In this article we saw how to configure SQL Server to run R scripts, along with the changes we need to make during installation to enable the Machine Learning Services that are required to run R in SQL Server. Finally, we ran a simple R script to print text on the screen. By running a Python script, we also showed that the process of running external scripts in SQL Server remains the same regardless of the language.

SQL Server error log file, missing certain dates


I have SQL Server 2005 and I noticed some missing dates from my SQL server error log files.

For example, I am missing dates from 13-21 Jan 2011. Where are they? I have all the dates before and after.

I checked the Windows Security event log and saw that there was activity on 19 Jan, and I know they were connecting to SQL Server, but the SQL Server error log didn't record it, or did it?

The error log files are stamped with the date the SQL Server service started; entries keep being added to the same file rather than a new file being created for each date.

Check here http://msdn.microsoft.com/en-us/library/ms187885.aspx

A new error log is created each time an instance of SQL Server is started, although the sp_cycle_errorlog system stored procedure can be used to cycle the error log files without having to restart the instance of SQL Server.
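As a practical aside (mine, not part of the original thread), you can start a new error log on demand without a restart, and read older logs, with the following commands; the search string in the second call is just an example filter:

-- Close the current error log and start a new one without restarting the instance
EXEC sp_cycle_errorlog;

-- Read archived error log #1 (0 = current log), filtering on a keyword
EXEC xp_readerrorlog 1, 1, N'Logging SQL Server messages';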

Script to Create and Update Missing SQL Server Columnstore Indexes

Problem

SQL Server columnstore indexes are helpful when it comes to retrieving large volumes of data. However, there are instances where columns are added to the table after the columnstore index was created. This means the new columns are missing in the columnstore index. Also, there can be cases where new tables are added and a columnstore index was not created. In this tip we will look at how we can identify these cases and create scripts to create the index.

Solution

A columnstore index is an index that was designed mainly for improving the query performance for workloads with very large amounts of data (e.g. reading data warehouse fact tables and processing OLAP cubes). This type of index stores the index data in a column-based format rather than row based as is the case with traditional indexes.

Columnstore indexes provide a very high level of compression, up to 10x, due to the fact that the data across columns is usually very similar and will compress quite well. The second reason to use columnstore indexes is to improve performance of queries.

Columnstore indexes were introduced with SQL Server 2012 as non-clustered columnstore indexes. Also, in the SQL Server 2012 version, data cannot be modified after the columnstore index was created. With later versions of SQL Server we now have more options when using columnstore indexes.

Typically, all the columns are added to the columnstore index. However, there can be cases where columns are added to the table after the columnstore index is created. Also, there are cases where some tables are added later and a columnstore does not exist.

In this tip, we will show how to add columns to a non-clustered columnstore index.
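The tip's script below handles tables that already have a non-clustered columnstore index. For the other case mentioned above, tables added later with no columnstore index at all, a hedged detection query along these lines (reusing the same 'fact' naming convention, which you may need to adapt) can list the gaps:

SELECT t.name AS TableName
FROM sys.tables AS t
WHERE t.name LIKE 'fact%'
  AND t.name NOT LIKE '%OLD%'
  AND t.name NOT LIKE '%BACK%'
  AND NOT EXISTS (
        SELECT 1
        FROM sys.indexes AS i
        WHERE i.object_id = t.object_id
          AND i.type_desc LIKE '%COLUMNSTORE%'   -- catches clustered and non-clustered columnstore
      );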

Missing Columns in a SQL Server Columnstore Index

Let's look at this SQL Server table as an example.



In the above table, only five columns are included in the columnstore index and there are many other columns that could be included in the non-clustered columnstore index.

The following query will generate scripts to create indexes. Depending on the filters you use at the end of the query, it will generate the create index statements for those tables.

SELECT DISTINCT 'CREATE NONCLUSTERED COLUMNSTORE INDEX [' + i.NAME + '] ON dbo.' + tbl.NAME + ' (' + IndexColumns.IndexColumnList + ') WITH (DROP_EXISTING = ON) '
FROM sys.tables AS tbl
INNER JOIN sys.indexes AS i
ON (
i.index_id > 0
AND i.is_hypothetical = 0
)
AND (i.object_id = tbl.object_id)
INNER JOIN sys.index_columns AS ic
ON (
ic.column_id > 0
AND (
ic.key_ordinal > 0
OR ic.partition_ordinal = 0
OR ic.is_included_column != 0
)
)
AND (
ic.index_id = CAST(i.index_id AS INT)
AND ic.object_id = i.object_id
)
INNER JOIN (
SELECT object_id,
(
STUFF((
SELECT ',' + NAME
FROM sys.columns
WHERE object_id = C.object_id
FOR XML PATH(''),
TYPE
).value('.', 'NVARCHAR(MAX)'), 1, 1, '')
) AS IndexColumnList
FROM sys.columns AS C
GROUP BY C.object_id
) AS IndexColumns
ON IndexColumns.object_id = ic.object_id
WHERE
tbl.NAME LIKE 'fact%'
AND tbl.NAME NOT LIKE '%OLD%'
AND tbl.NAME NOT LIKE '%BACK%'
AND i.type_desc LIKE '%NONCLUSTERED COLUMNSTORE%'

In the above query, tables that start with "fact" with non-clustered columnstore indexes are selected. The tables are identified by the table prefix “fact”. You might have different ways of identifying the tables. Also, in any environment you may have tables with OLD, BACK, etc. in the name, so those are filtered out as well.

The WITH (DROP_EXISTING = ON) option is added to the index, so the index does not need to be dropped separately.

When I ran the above script, it generated the following create index script:

CREATE NONCLUSTERED COLUMNSTORE INDEX [NonClusteredColumnStoreIndex-20181003-234642] ON dbo.FactCallCenter (
FactCallCenterID,
DateKey,
WageType,
Shift,
LevelOneOperators,
LevelTwoOperators,
TotalOperators,
Calls,
AutomaticResponses,
Orders,
IssuesRaised,
AverageTimePerIssue,
ServiceGrade,
DATE
)
WITH (DROP_EXISTING = ON)

After executing the create index script, the remaining columns were added to the non-clustered columnstore index. Unlike in a standard row-based index, the column order does not matter for columnstore indexes.


Next Steps

Test this out on your test servers to see if this generates the needed scripts for your environment. Learn more about SQL Server Columnstore Indexes.

Last Update: 2018-10-26


About the author
Dinesh Asanka is a 10 time Data Platform MVP and frequent speaker at local and international conferences with more than 12 years of database experience. View all my tips

Related Resources

More SQL Server DBA Tips...

Batch Mode For Row Store: Does It Fix Parameter Sniffing?

Snorting The Future

SQL Server 2019 introduced batch mode over row store, which allows batch mode processing to kick in on queries when the optimizer deems it cost effective, and also opens row store queries up to the possibility of Adaptive Joins and Memory Grant Feedback.

These optimizer tricks have the potential to help with parameter sniffing, since the optimizer can change its mind about join strategies at run time, and adjust memory grant issues between query executions.

But of course, the plan that compiles initially has to qualify to begin with. In a way, that just makes parameter sniffing even more frustrating.
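As a side note (mine, not Erik's): batch mode on rowstore only becomes available when the database runs under compatibility level 150, so a quick check and opt-in looks like this; the database name is a placeholder:

-- Check the current compatibility level
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'StackOverflow';

-- Opt in to SQL Server 2019 behavior, which enables batch mode on rowstore
ALTER DATABASE StackOverflow SET COMPATIBILITY_LEVEL = 150;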

I Hate Graphic Tees

But I like this demo. It’s got some neat stuff going on in the plan, and that neat stuff changes depending on what you look for.

I also like it because it returns a small number of rows overall. I’ve gotten complaints in the past that queries that return lots of rows are unrealistic.

Moving on.

Here's The Procedure

CREATE OR ALTER PROCEDURE dbo.DisplayNameSearch (@DisplayName NVARCHAR(40))
AS
BEGIN
    SELECT TOP 1000
           u.Id,
           u.DisplayName,
           SUM(u.Reputation) AS Reputation,
           SUM(CASE WHEN p.PostTypeId = 1 THEN p.Score END) AS QuestionsScore,
           SUM(CASE WHEN p.PostTypeId = 2 THEN p.Score END) AS AnswersScore
    FROM dbo.Users AS u
    JOIN dbo.Posts AS p
        ON p.OwnerUserId = u.Id
    WHERE u.DisplayName LIKE @DisplayName
      AND EXISTS (SELECT 1/0 FROM dbo.Badges AS b WHERE b.UserId = p.OwnerUserId)
    GROUP BY u.Id, u.DisplayName
    ORDER BY Reputation DESC
END;
GO

Here's The Indexes

CREATE INDEX ix_posts_helper ON dbo.Posts (OwnerUserId, Score) WITH (SORT_IN_TEMPDB = ON);

CREATE INDEX ix_users_helper ON dbo.Users (DisplayName, Reputation) INCLUDE (PostTypeId) WITH (SORT_IN_TEMPDB = ON);

CREATE INDEX ix_badges_helper ON dbo.Badges (UserId) WITH (SORT_IN_TEMPDB = ON);

Parameter Sniffing Problem

My favorite user in the Users table is Eggs McLaren. If I wanted to find Eggs, and users like Eggs, I could run my proc like this:

EXEC dbo.DisplayNameSearch @DisplayName = N'Eggs%'

It finishes pretty quickly.

Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Posts'. Scan count 11, logical reads 48, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Badges'. Scan count 17, logical reads 72, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Users'. Scan count 1, logical reads 5, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times: CPU time = 0 ms, elapsed time = 2 ms.

The plan for it is what you’d expect for a small number of rows. Nested Loops. Kinda boring. Though it has a Batch Mode operator way at the end, for some reason.



Flight of the Navigator

I say “for some reason” because I’m not sure why batch mode is a good option for one batch of 9 rows. It might just be in there as a safeguard for memory adjustments.

But hey, I’m just a bouncer.

If the next thing we look at is for users who didn’t register a proper handle on the site, we can run this query:

EXEC dbo.DisplayNameSearch @DisplayName = N'user[1-2]%'

We might even find this guy :



Hello, new friend!

The plan doesn’t change a whole lot, except that now we have a couple spills, and they’re not even that bad. If we run the users query a couple more times, the memory grant stuff will iron out. Kind of. We’ll look at that in a minute.



Tripped, fell

The metrics for this one are way different, though. We do a ton more reads, because we get stuck in Nested Loops hell processing way more rows with them.

Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 6249, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Posts'. Scan count 200450, logical reads 641628, physical reads 0, read-ahead reads 2, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Badges'. Scan count 507356, logical reads 1619439, physical reads 0, read-ahead reads 1, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Users'. Scan count 1, logical reads 2526, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times: CPU time = 3593 ms, elapsed time = 5051 ms.

Five Seconds Doesn't Seem Bad

To most people, it wouldn’t be, but it depends on your expectations, SLAs, and of course, which set of parameters more closely resembles typical user results.

Especially because, if we recompile and run it for users first, we do much better. Far fewer reads, and we trade 30ms of CPU time for about 3.5 seconds of elapsed time.

Table 'Users'. Scan count 1, logical reads 2526, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Badges'. Scan count 5, logical reads 43941, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Posts'. Scan count 5, logical reads 105873, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times: CPU time = 3626 ms, elapsed time = 1601 ms.

Even Eggs is in okay shape using the ‘big’ plan. Yes, metrics are up a bit compared to the small plan, but I still consider a quarter of a second pretty fast, especially since we do so much better with the users plan.

Table 'Users'. Scan count 1, logical reads 5, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Badges'. Scan count 5, logical reads 43941, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Posts'. Scan count 17, logical reads 94, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times: CPU time = 985 ms, elapsed time = 261 ms.

Most of the reason for

Build a standard import and export process with SQL Server 2017 and R


By: Gauri Mahajan || Related Tips: More > SQL Server 2017

Problem

In any organization, data is stored in various locations and in several formats. Importing and exporting files and data is a very routine task in SQL Server. Developers use different mechanisms and tools, as well as customized code, to import and export different formats of data programmatically into and out of SQL Server, either temporarily or permanently. Typically, ETL tools are employed for this purpose.

But during the development phase, developers may not have ETL packages in place for importing and exporting data. Therefore, at times, they resort to manual methods or develop custom code. Maintaining different codebases to import/export data from various file formats requires more resources for developing, maintaining and operating the code. A better solution would be a common data gateway in the form of a library that facilitates import/export of data into SQL Server and hides the underlying implementation complexity for each type of data source.

Solution

The rio package in R-services integrated with SQL Server 2017 provides an ability to address the problem. We will demonstrate how simple it is to work with various data formats with R services in T-SQL in this tip.

R offers a powerful set of open source libraries, and 'rio' is one of them. rio simplifies the process of importing and exporting data into and out of R in SQL Server. It allows almost all common data formats to be read or exported with the same function. Besides making import-export easy, it also lets R be used as a simple data parsing and conversion utility.

Before we begin, ensure the system has the required software components, such as SQL Server 2017, SSMS, and SQL Server R Services (In-Database). Subsequently, to make sure rio is fully functional, we need to install the package. If you are new to R and installing R packages on SQL Server, consider following this link on how to install packages on R Server.

Once the package is installed, open a new query window in SSMS and execute the code below. Let's go over the steps of the script.

EXECUTE sp_execute_external_script
@language = N'R',
@script = N'
library(rio)
print (packageVersion("rio"))
'

sp_execute_external_script is the stored procedure that executes external scripts (R in this case) inside the database engine.

@language indicates the script language, which is R. @script specifies the R script that will be executed and its data type is nvarchar(max).

The library function loads the rio package, and the "packageVersion" function returns the version of the rio package, which is printed using the print function.

Successful execution of the code validates that the package has been installed correctly and is accessible to SQL Server.



Since the required environment has been set up, let's go ahead and see how it works.

Import the Data into SQL Server

One of the most popular file formats is the flat file format: CSV, TSV, delimited, etc. Let's get started with the CSV file format. You can download a sample CSV file from here, or feel free to use any CSV file that you have created or already have on your system.

Below is the CSV file that we will be using in this tip. It is named CSVFile.csv and is saved in directory C:\temp\CSVFile.csv.



Once the CSV file is ready, execute the below code in SSMS.

EXECUTE sp_execute_external_script
@language = N'R',
@script = N'
library(rio)
csvfile <- import("C:\\temp\\CSVFile.csv")
print(csvfile)
'

In this code, apart from referencing the rio package, we use the 'import' function to read the CSV file and then print it in the output section using the print function.

Note: We need to use '\\' instead of '\' when providing the location of the CSV file, because '\' alone is treated as an escape character.


Export Data from SQL Server

Now let’s try to output the same CSV data that we read in above into a better format than just printing it on the output console. In the below code, we are defining the output schema by using the 'WITH RESULT SETS' option and emitting output from the R-Script to T-SQL code by assigning the imported data to OutputDataSet data frame.

EXECUTE sp_execute_external_script
@language = N'R',
@script = N'
library(rio)
CSVFile <- import("C:\\temp\\CSVFile.csv")
OutputDataSet <- CSVFile
'
WITH RESULT SETS((SepalLength numeric, SepalWidth numeric, PetalLength numeric,
PetalWidth numeric, Species nvarchar(10)))
Import an Excel File

Let's switch to another popular file format: Excel. We can read an Excel file the same way as a CSV file. Execute the code below to import the Excel file; the data can be seen in the result window. The Excel file is named ExcelFile.xlsx and is saved as C:\temp\ExcelFile.xlsx on the system. You can use any sample Excel data or download the file from here.

EXECUTE sp_execute_external_script
@language = N'R',
@script = N'
library(rio)
excelfile <- import("C:\\temp\\ExcelFile.xlsx")
print(excelfile)
'
Import JSON File

Now let's try to work with one of the leading formats for storing and exchanging data over the web: JSON. You can refer to any sample JSON file or download one from here. Below is the JSON data file that we will be referring to in this tip.



Once the code below is executed, it will import the JSON data and its structure, and the JSON data can be accessed using JSONFile$columns.

EXECUTE sp_execute_external_script
@language = N'R',
@script = N'
library(rio)
JSONFile <- import("C:\\temp\\JSONFile.JSON")
print(JSONFile$columns)
'
Import Data from a GZip File

The rio package not only reads uncompressed files, it also allows direct import from compressed (gzip) files. Zipped files make data storage and transfer more efficient and faster by keeping related files together.

You can take any sample CSV file and zip it in the format <filename>.csv.gz, or download one from here. Execute the code below to import the zipped file in R.

EXECUTE sp_execute_external_script
@language = N'R',
@script = N'
library(rio)
gzipFile <- import("C:\\temp\\CSVFile.csv.gz")
print(gzipFile)
'
These were a few examples of h

Audit Information not getting retrieved using SSIS/KingswaySoft Adapter


A few days back I wrote about how we can use the CDS/CRM Source component of the KingswaySoft adapter to get audit information.

https://nishantrana.me/2018/10/08/using-kingswaysofts-cds-crm-source-component-to-get-audit-information-in-dynamics-365-ce-ssis/

Recently, while writing a package to get audit details for one of the entities, we realized that no records were being retrieved and we were not getting any error or exception either. The package just kept running.



Setting a timeout in the CRM Connection Manager also didn't stop the package; it kept running without retrieving the records.



After spending a good amount of time, we realized the silly mistake that we had made. We were running the package against the entity for which Audit was not enabled.

Thanks to KingswaySoft Support team for their relentless support.

Ideally, we should have been more careful before running the package against that entity. It would also be helpful if the tool could check whether audit is enabled for an entity before beginning execution, and if it is not, inform the user or at least return 0 rows and complete successfully.

Hope it helps..

Using Table-Valued Parameters In SQL Server


Ben Richardson has a post showing how to create user-defined table types and pass them into stored procedures:

Table-valued parameters were introduced in SQL Server 2008. Before that, there were limited options to pass tabular data to stored procedures. Most developers used one of the following methods:

Represent data in multiple columns and rows as a series of parameters. However, the maximum number of parameters that can be passed to a SQL Server stored procedure is 2,100, so in the case of a large table this method could not be used. Furthermore, preprocessing is required on the server side to format the individual parameters into tabular form.

Create multiple SQL statements that can affect multiple rows, such as UPDATE. The statements can be sent to the server individually or in batched form, but even if they are sent in batched form, the statements are executed individually on the server.

Use delimited strings or XML documents to bundle data from multiple rows and columns and then pass these text values to parameterized SQL statements or stored procedures. The drawback of this approach was that you needed to validate the data structure in order to unbundle the values.

The .NET framework then makes it easy to pass in an IEnumerable as a table-valued parameter.
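For reference, a minimal T-SQL version of the pattern Ben describes looks like this; the type, procedure, and column names are illustrative (not taken from his post), and it assumes a dbo.OrderLines target table exists:

-- 1. Define a user-defined table type
CREATE TYPE dbo.OrderLineType AS TABLE
(
    ProductId INT NOT NULL,
    Quantity  INT NOT NULL
);
GO

-- 2. Accept it as a READONLY parameter in a stored procedure
CREATE PROCEDURE dbo.InsertOrderLines
    @OrderLines dbo.OrderLineType READONLY
AS
BEGIN
    INSERT INTO dbo.OrderLines (ProductId, Quantity)
    SELECT ProductId, Quantity FROM @OrderLines;
END;
GO

-- 3. Fill a variable of that type and pass it in
DECLARE @Lines dbo.OrderLineType;
INSERT INTO @Lines VALUES (1, 5), (2, 3);
EXEC dbo.InsertOrderLines @OrderLines = @Lines;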


Deploying Package to SQL Server Integration Services Catalog (SSISDB) from Visua ...


Deploying packages to SQL Server from SSDT is straightforward. We can deploy either the whole project or an individual SSIS package, i.e. Project Deployment or Package Deployment (available from SQL Server 2016 onwards).

Here we will see the package deployment.

Right-click the package that we would like to deploy and select Deploy Package.



This opens the Integration Services Deployment Wizard



Click Next, specify the server name and the credentials to connect to SQL Server, and click Connect.



For the Path, specify an existing project or create a new project in SSISDB where we would like to deploy our packages.



Review the details and if all the information is correct, click on Deploy.



The result page would show the status of the deployment.



Inside Integration Services Catalogs we can see our package deployed.



Another option for deploying the package is the Deploy Project option on the project inside SQL Server Management Studio's SSISDB node.



It will open the Integration Service Deployment Wizard.



And after selecting the Package folder we can click on Next and follow the wizard as we had done earlier to deploy the package.

We can also run the wizard directly from Windows Explorer or the command prompt using IsDeploymentWizard.exe.



Through a stored procedure:

catalog.deploy_packages

[catalog].[deploy_packages] [ @folder_name = ] folder_name, [ @project_name = ] project_name, [ @packages_table = ] packages_table, [ @operation_id = ] operation_id OUTPUT
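A rough illustration of the call shape follows; this is my sketch, not from the original post. It assumes the SSISDB table type [catalog].[Package_Table_Type] exposes packagename and packagedata columns (verify against your own SSISDB), and the folder, project, and file path are placeholders:

USE SSISDB;
GO
DECLARE @packages [catalog].[Package_Table_Type];   -- assumed columns: packagename, packagedata
DECLARE @operation_id BIGINT;

-- Load the .dtsx contents as a binary blob; path and package name are placeholders
INSERT INTO @packages (packagename, packagedata)
SELECT N'Package.dtsx', BulkColumn
FROM OPENROWSET(BULK N'C:\SSIS\Package.dtsx', SINGLE_BLOB) AS pkg;

EXEC [catalog].[deploy_packages]
    @folder_name    = N'MyFolder',
    @project_name   = N'MyProject',
    @packages_table = @packages,
    @operation_id   = @operation_id OUTPUT;

SELECT @operation_id AS operation_id;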

Through Management Object Model API.

Hope it helps ..

Slow Connections with Sql Server



Argh... I just fought with a small issue where connections to SQL Server were very slow on a new development box. Every time I make a new SQL connection there's a 2 second delay for the connection to occur. It's not only the first request, but every connection request, including what should otherwise be pooled connections.

As you might imagine in Web applications that's a major problem even on a dev machine - it makes for some excruciatingly slow Web requests.

This is a local dev machine and I have a local SQL Server Developer installed.

TCP/IP is Disabled By Default

It turns out that a new SQL Server installation does not enable TCP/IP by default. If you connect to the server it uses Named Pipes, which on this machine at least results in very slow connection times, with what looks like a 2 second delay for every connection.

I was able to duplicate this behavior on two other machines when explicitly disabling TCP/IP and connections appear to take about the same amount of time which is somewhere around 2 seconds.

Enabling TCP/IP solved the problem and made local connections as fast as they should be: Nearly instant.

Enabling TCP/IP

To enable TCP/IP we'll need to set the protocols in the SQL Server Configuration Manager. With recent versions of SQL Server it looks like Microsoft is no longer installing shortcuts for the SQL Server Configuration Manager, so I had to launch it using the MMC and adding the snap-in:

Start mmc.exe
Add the Sql Server Configuration snap-in (ctrl-m -> Sql Server)

Once you're there navigate to the Sql Server Network configuration and enable TCP/IP:



Just for good measure I also turned off the other two protocols, but that's optional: once TCP/IP is enabled it is the preferred protocol, so connections stay fast even if the other two are left on.
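To confirm which protocol a given connection actually ended up using (my addition, not from the original post), you can ask SQL Server directly:

-- Shows TCP, Named pipe, or Shared memory for the current session
SELECT net_transport, auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;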

Why are Named Pipe Connections so slow?

I'm pretty sure that I've used Named Pipes in the past and didn't see this type of slow connection times, but I've verified this on several machines so it's not just a fluke with my new dev box. Each machine I've removed TCP/IP from takes about 2 seconds to connect with either Named Pipes or Shared Memory, while with TCP/IP Connections enabled connections are nearly instant on a local machine. As well it should be.

Anybody have any ideas why Named Pipes are so slow for SQL connections?

Summary

I'm writing this up because I know I'll run into this again next time I install a dev machine or even a new server and hopefully by then I'll remember that I wrote this blog post :smiley:. Maybe it'll help you find the issue this way as well.

SQL Server Execution Plan Operators Part 2


In the previous article, we talked about the first set of operators you may encounter when working with SQL Server execution plans. We described the Non-Clustered Index Seek, Table Scan, Clustered Index Scan, and Clustered Index Seek operators. In this article, we will discuss the second set of these SQL Server execution plan operators.

Let us first create the below table and fill it with 3K records to use it in the examples of this article. The table can be created and filled with data using the T-SQL script below:

CREATE TABLE ExPlanOperator_P2
(
    ID INT IDENTITY (1,1),
    EmpFirst_Name VARCHAR(50),
    EmpLast_name VARCHAR(50),
    EmpAddress VARCHAR(MAX),
    EmpPhoneNum VARCHAR(50)
)
GO
INSERT INTO ExPlanOperator_P2 VALUES ('AB','BA','CB','123123')
GO 1000
INSERT INTO ExPlanOperator_P2 VALUES ('DA','EB','FC','456456')
GO 1000
INSERT INTO ExPlanOperator_P2 VALUES ('DC','EA','FB','789789')
GO 1000

SQL Server RID Lookup Operator

Assume that we have a Non-Clustered index on the EmpFirst_Name column of the ExPlanOperator_P2 table, that is created using the CREATE INDEX T-SQL statement below:

CREATE INDEX IX_ExPlanOperator_P2_EmpFirst_Name ON ExPlanOperator_P2 (EmpFirst_Name)

If you try to run the SELECT statement below to retrieve information about all employees with a specific EmpFirst_Name value, after including the Actual SQL Server execution plan of that query:

SELECT * FROM ExPlanOperator_P2 WHERE EmpFirst_Name = 'BB'

Checking the SQL Server explain plan generated after executing the query, you will see that the SQL Server Query Optimizer will use the Non-Clustered index to seek for all the employees with the EmpFirst_Name values equal to ‘BB’, without the need to scan the overall table for these values. On the other hand, the SQL Server Engine will not be able to retrieve all the requested values from that Non-Clustered index, as the query requests all columns for that filtered records. Recall that the Non-Clustered index contains only the key column values and a pointer to the rest of the columns for that key in the base table.

To dive deeply into the Non-Clustered index subject, see the article Designing effective SQL Server non-clustered indexes .

Due to the fact that this table contains no Clustered index, the table will be considered a heap table that has no criteria to sort its pages and sort the data within the pages.

For more information about the heap table structure, check SQL Server table structure overview .

Because of this, the SQL Server Engine will use the pointers from the Non-Clustered index, that points to the location of the rest of the columns on the base table, to locate and retrieve the rest of columns from the underlying table, using a Nested Loops operator to join the Index Seek data with the set of data retrieved from the RID Lookup operator, also known as a Row Identifier operator, as shown in the SQL Server execution plan below:



A RID is a row locator that includes information about the location of a record, such as the database file, the page, and the slot number, which help identify the location of the row quickly. If you hover over the RID Lookup in the generated SQL Server execution plan to view the tooltip of that operator, you will see in the Output List the columns that are requested by the query and returned by this operator, as these columns are not located in the Non-Clustered index, as shown below:



If you look at the RID Lookup operator in the SQL Server execution plan, you will see that the cost of that operator is high relative to the overall cost of the plan, 50% in our example. This is due to the additional I/O overhead of performing two different operations instead of a single one, then combining them with a Nested Loops operation. This overhead can be neglected when processing a small number of rows, but if you are working with a huge number of records it is better to tune the query, either by rewriting it to limit the retrieved columns or by creating a covering index for it. If the RID Lookup is eliminated by creating a covering index, the Nested Loops operator is no longer needed in this SQL Server execution plan. To dig deeply into the covering index concept, check Working with different SQL Server indexes types.
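For completeness, a covering index for this particular query on the heap version of the table could look like the statement below. This mirrors the index the article creates later for the clustered case and is offered here as an illustration rather than as part of the original text at this point:

-- Include every column the SELECT * needs so the RID Lookup (and Nested Loops) disappear
CREATE INDEX IX_ExPlanOperator_P2_EmpFirst_Name
    ON ExPlanOperator_P2 (EmpFirst_Name)
    INCLUDE (ID, EmpLast_name, EmpAddress, EmpPhoneNum)
    WITH (DROP_EXISTING = ON);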

SQL Server Key Lookup Operator

The Key Lookup operator is the clustered equivalent of the RID Lookup operator described in the previous section. Assume that we have the below Clustered index created on the ID column of the ExPlanOperator_P2 table, using the CREATE INDEX T-SQL statement below:

CREATE CLUSTERED INDEX IX_ExPlanOperator_P2_ID on ExPlanOperator_P2 (ID)

If you try to run the previous SELECT statement that retrieves information about all employees with a specific EmpFirst_Name value, after including the Actual SQL Server execution plan of that query:

SELECT * FROM ExPlanOperator_P2 WHERE EmpFirst_Name = 'BB'

Checking the SQL Server execution plan generated after executing the query, you will notice that the SQL Server Engine performed a seek operation on the Non-Clustered index to retrieve all the employees with EmpFirst_Name values equal to 'BB'. But again, not all the columns can be retrieved from that Non-Clustered index. Therefore, the SQL Server Engine will use the pointers stored in that Non-Clustered index, which point to the rest of the columns in the underlying table. Because this table is a clustered table, with a Clustered index that sorts the table's data, the Non-Clustered index pointers point to the Clustered index rather than to the underlying heap. The rest of the columns are retrieved using a Nested Loops operator to join the Index Seek data with the data returned by the Key Lookup operator, as the SQL Server Engine is not able to retrieve the rows in a single operation. In other words, the SQL Server Engine uses the clustered index key values stored in the Non-Clustered index as a reference to look up the data stored in the Clustered index, as shown in the SQL Server execution plan below:



Similar to the RID Lookup operator, the Key Lookup is very expensive as it requires additional I/O overhead, depending on the number of records. In addition, the Key Lookup operator is an indicator that a covering or included index is required and may enhance the performance of that query by eliminating the need of the Key Lookup and Nested Loops operators. For example, if we include all the required columns in the existing Non-Clustered index, using the CREATE INDEX T-SQL statement below:

CREATE INDEX IX_ExPlanOperator_P2_EmpFirst_Name ON ExPlanOperator_P2 (EmpFirst_Name)
INCLUDE (ID, EmpLast_name, EmpAddress, EmpPhoneNum)
WITH (DROP_EXISTING = ON)

Next, run the previous SELECT statement with the Actual Execution Plan included. You will see that the Key Lookup and the Nested Loops operators are no longer used, as the SQL Server Engine retrieves all the requested data by seeking the Non-Clustered index, as shown below.

Insert data and if already inserted then update to sql

Insert or update T-SQL

I have a question regarding performance of SQL Server. Suppose I have a table persons with the following columns: id, name, surname. Now, I want to insert a new row in this table. The rule is the following: If id is not present in the table, then ins

Solutions for INSERT OR UPDATE on SQL Server

Assume a table structure of MyTable(KEY, datafield1, datafield2...). Often I want to either update an existing record, or insert a new record if it doesn't exist. Essentially: IF (key exists) run update command ELSE run insert command What's the best
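For the two questions above, the standard T-SQL patterns are an explicit existence check or a single MERGE statement. A generic sketch follows, reusing the persons(id, name, surname) table from the first question; the @id, @name, and @surname variables are assumed to be declared or passed as parameters:

-- Pattern 1: explicit existence check
IF EXISTS (SELECT 1 FROM dbo.persons WHERE id = @id)
    UPDATE dbo.persons SET name = @name, surname = @surname WHERE id = @id;
ELSE
    INSERT INTO dbo.persons (id, name, surname) VALUES (@id, @name, @surname);

-- Pattern 2: MERGE (one statement; read the documented caveats before relying on it)
MERGE dbo.persons AS target
USING (SELECT @id AS id, @name AS name, @surname AS surname) AS source
    ON target.id = source.id
WHEN MATCHED THEN
    UPDATE SET name = source.name, surname = source.surname
WHEN NOT MATCHED THEN
    INSERT (id, name, surname) VALUES (source.id, source.name, source.surname);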

Is the insertion or update of SQL Server timestamp column explicitly possible?

Is there any way to provide an explicit value for time stamp column in a table in SQL server? I am aware it is not datetime column but I want to know whether there is any way to insert or update it explicitly.You cannot insert/update to timestamp col

Conversion failed when converting the date and / or time of the string to SQL

I have the following columns in my table: year decimal(4,0) month decimal(2,0) day decimal(2,0) and I am trying to convert them as below: SELECT (CAST (CAST(year AS varchar(4)) +CAST(month AS varchar(2)) +CAST(day AS varchar(2) ) AS date ) ) AS xDate

Trigger to select data from another table, and then update them in the current SQL Server table

Please help me, I have a simple case that makes me little crazy, I have a table called PRODUCTS here is the structure create table PRODUCT ( ID_PRODUCT CHAR(5) primary key not null, NAME_PRODUCT CHAR(30), PRICE_PRODUCT integer ) then the second table

Triggering mysql that triggers on INSERT or UPDATE?

Is there a way to create MySQL trigger which triggers on either UPDATE or INSERT? Something like CREATE TRIGGER t_apps_affected BEFORE INSERT OR UPDATE ... Obviously, the above don't work. So, any workarounds without creating two separate triggers? I

Compare date and; Time class in another JAVA class

so earlier this year I was given an assignment at university. The assignment was to create a Car park management system by using OOP procedures. For example, we learnt how to use inheritance, abstract classes and instances. I have already completed a

How to get the default value for the date and time of Formtastic?

I am using Formtastic 1.2.3. I want the current date and time already to be selected when the form is loaded. I tried many combinations with :default or :selected, but none worked. Even from the github page of Formtastic I can't get information on th

Terminology: how to say that a program has more data than instructions when the notion of data and instruction can not be separated?

This is a question about terminology. I use ARM as an example since it's the only assembly I'm familiar with but am looking for more general answers. Basically I'm having trouble distinguishing between an instruction, data and opcodes. I red that GPU

How to enter the date and time in the log file

I have one daemon written in C. I am logging the events in a log file, but now I want to add date and time while writing event to log file. How can I achieve that? Current log file:- Event one occurred: result: Event two occurred: result: I want the

Concatenation of date and time fields

I have a table invoices with this fields: invDate -> a date field invTime -> a time field I need to do querys like SELECT top 10 * from invoices WHERE DATETIME(invDate+invTime) BETWEEN DATETIME('2013-12-17 17:58') AND DATETIME() or something like th

comparing data from two different mysql tables inserts new data and updates the data that does not match

i'm trying to compare data from different tables, insert new data, and update the data that don't match. Example: I have table1 ------------------------------------ | ITEMNO | DESCRIPTION | FORSALE | ------------------------------------ | 123456 | De

Insert in temporary table and then update

I use a temp table to insert data that will be later on updated. For example: SELECT Name, Address, '' as LaterTobeUpdateField INTO #MyTempTable FROM OriginalTable Then I update the temp table UPDATE #MyTempTable SET LaterTobeUpdateField = 'new text'

Insert and if there is already update the SQL columns

INSERT INTO users (firstname, lastname, email, mobile) VALUES ('Karem', 'Parem', '[emailprotected]', '123456789'); This is what I would like to do, but if a row with the same email [emailprotected] already exists, then it should update that row with these val

Capturing the SQL statement behind "The query processor ran out of internal resources and could not produce a query plan"


I recently received an alert email from a SQL Server database server. The alert contents were as follows:

DATE/TIME:10/23/2018 4:30:26 PM

DESCRIPTION: The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions. Please simplify the query. If you believe you have received this message in error, contact Customer Support Services for more information.

COMMENT: (None)

JOB RUN: (None)

Regarding the error "8623: The query processor ran out of internal resources and could not produce a query plan", this article does not analyze its root cause or the solution; it only shows how to capture the SQL statement that produces the error. When the error occurs, the offending SQL statement is not written to the error log, and without locating the specific statement it is very hard to resolve the issue. So the prerequisite for solving the problem is to locate the SQL statement first. We can do that in two ways: with Extended Events or with a server-side trace.

Capturing with Extended Events

In the script below, you only need to adjust the values of the filename and metadatafile parameters to your environment; it creates an Extended Events session named overly_complex_queries:

CREATE EVENT SESSION overly_complex_queries ON SERVER
ADD EVENT sqlserver.error_reported
(
    ACTION (sqlserver.sql_text, sqlserver.tsql_stack, sqlserver.database_id, sqlserver.username)
    WHERE ([severity] = 16 AND [error_number] = 8623)
)
ADD TARGET package0.asynchronous_file_target
(
    SET filename = 'D:\DB_BACKUP\overly_complex_queries.xel',
        metadatafile = 'D:\DB_BACKUP\overly_complex_queries.xem',
        max_file_size = 10,
        max_rollover_files = 5
)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS)
GO

-- Start the session
ALTER EVENT SESSION overly_complex_queries ON SERVER
STATE = START
GO

Then we verify it using a test script found online. Executing that script raises the "8623: The query processor ran out of internal resources and could not produce a query plan" error, as shown below:



Select the Extended Events session overly_complex_queries, right-click it and choose "Watch Live Data" to see which SQL statement raised the error (sql_text). You can also use the "View Target Data" option to view all the captured data.



Note: this Extended Events session only runs on SQL Server 2012 and later. If you deploy it on a SQL Server 2008 version, you will get the following errors:

Msg 25706, Level 16, State 8, Line 1

The event attribute or predicate source, "error_number", could not be found.

Msg 15151, Level 16, State 1, Line 18

Cannot alter the event session 'overly_complex_queries', because it does not exist or you do not have permission.

Capturing with a Server-Side Trace

As shown above, this particular database server happens to run SQL Server 2008 R2, so we can only use a server-side trace to capture the SQL statement behind this error. The script to set up the server-side trace is as follows (adjust the parameters to your environment):

-- Declare variables
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
set @maxfilesize = 1024

-- Initialize the trace
exec @rc = sp_trace_create @TraceID output, 0, N'D:\SQLScript\trace_error_8623', @maxfilesize, NULL
-- D:\SQLScript\trace_error_8623 is the trace file name (change as needed); SQL Server automatically appends the .trc extension
if (@rc != 0) goto error

-- Set the trace events
declare @on bit
set @on = 1
-- trace_event_id = 13 is SQL:BatchStarting; trace_event_id = 22 is ErrorLog
exec sp_trace_setevent @TraceID, 13, 1, @on
exec sp_trace_setevent @TraceID, 13, 3, @on
exec sp_trace_setevent @TraceID, 13, 6, @on
exec sp_trace_setevent @TraceID, 13, 7, @on
exec sp_trace_setevent @TraceID, 13, 8, @on
exec sp_trace_setevent @TraceID, 13, 11, @on
exec sp_trace_setevent @TraceID, 13, 12, @on
exec sp_trace_setevent @TraceID, 13, 14, @on
exec sp_trace_setevent @TraceID, 13, 15, @on
exec sp_trace_setevent @TraceID, 13, 35, @on
exec sp_trace_setevent @TraceID, 13, 63, @on
exec sp_trace_setevent @TraceID, 22, 1, @on
exec sp_trace_setevent @TraceID, 22, 3, @on
exec sp_trace_setevent @TraceID, 22, 6, @on
exec sp_trace_setevent @TraceID, 22, 7, @on
exec sp_trace_setevent @TraceID, 22, 8, @on
exec sp_trace_setevent @TraceID, 22, 12, @on
exec sp_trace_setevent @TraceID, 22, 11, @on
exec sp_trace_setevent @TraceID, 22, 14, @on
exec sp_trace_setevent @TraceID, 22, 14, @on
exec sp_trace_setevent @TraceID, 22, 35, @on
exec sp_trace_setevent @TraceID, 22, 63, @on

-- Start the trace
exec sp_trace_setstatus @TraceID, 1

-- Record the trace ID for later use
select TraceID = @TraceID
goto finish

error:
select ErrorCode = @rc

finish:
GO

The SQL above creates a server-side trace and returns the corresponding trace ID, as shown below:



Note: we capture the SQL:BatchStarting event (trace_event_id = 13) because events such as ErrorLog (trace_event_id = 22) cannot capture the corresponding SQL (the related trace columns do not contain the SQL statement, and I have not found a good workaround yet). The downside is that a large number of unrelated SQL statements are captured as well.

After testing, you can open D:\SQLScript\trace_error_8623.trc with SQL Profiler to find the error message and the corresponding SQL statement (look at the statements around that point in time; it is usually the first SQL statement after the error message, but some judgment is needed), as shown in the screenshot below:



You can also query the trace file with a script, as shown below. Again, you need to judge and locate the SQL statement yourself; it is usually the statement immediately following the "8623: The query processor ran out of internal resources and could not produce a query plan" error.

SELECT StartTime, EndTime, TextData, ApplicationName, SPID, Duration, LoginName
FROM ::fn_trace_gettable(N'D:\SQLScript\trace_error_8623.trc', DEFAULT)
WHERE SPID = 64
ORDER BY StartTime

References:

https://www.mssqltips.com/sqlservertip/5279/sql-server-error-query-processor-ran-out-of-internal-resources-and-could-not-produce-a-query-plan/

https://www.mssqltips.com/sqlservertip/1035/sql-server-performance-statistics-using-a-server-side-trace/

Run more than 100 SSIS packages in parallel from a parent package


I have 100+ child packages and I need to run them in parallel from a parent package. For this I will have to create 100+ Execute Package tasks and then 100+ file connections. This doesn't look appealing to me, and it is repetitive and error prone. Is there any other way to do this? Keep two things in mind:

Child package Execution should be in parallel (so no For loop and stuffs)

I am using CheckPoint based restart-ability and hence need control flow items at compile time (no script component based solutions too)

UPDATE: Even if you have massive hardware, Windows limits the number of concurrent tasks you can start simultaneously due to an inherent design issue. Though I achieved parallel execution using jobs, I had to limit it to 25 parallel packages at a time to avoid random failures due to the Windows issue.

Does it have to be file connections? Have you looked at the option of storing the packages in the SSIS package store and referencing them from there?

You would still have your 100+ components, but not your 100+ file connections.

Interesting Stuff - Week 43


Throughout the week, I read a lot of blog posts, articles, and so forth that have to do with things that interest me:

data science, data in general, distributed computing, SQL Server, transactions (both db as well as non db), and other "stuff"

This blog-post is the “roundup” of the things that have been most interesting to me, for the week just ending.

Data Science

Announcing automated ML capability in Azure Machine Learning. Somehow I must have missed this post announcing Azure Automated Machine Learning. What is it? Well, it is a way for the Azure Machine Learning Service to automatically pick an algorithm for you and generate a model from it. It sounds really interesting, and this is something I need to take a look at.

Streaming

That’s a Wrap! Kafka Summit San Francisco 2018 Roundup . The San Francisco Kafka Summit ran October 16 - 17, and this blog post is a summary of the conference. It also has links to some interesting sessions, and out of those, these are my three favorites:

Zen and the Art of Streaming Joins - The What, When and Why.
Kafka Security 101 and Real-World Tips.
Breaking Down a SQL Monolith with Change Tracking, Kafka and KStreams/KSQL.

Stateful Stream Processing: Apache Flink State Backends . This post explores stateful stream processing and more precisely the different state backends available in Apache Flink. It presents the 3 state backends of Apache Flink, their limitations and when to use each of them depending on case-specific requirements.

Apache Kafka on Kubernetes: Could you? Should you? This is a post discussing whether you should run Kafka on Kubernetes or not. At work, we are in the process of rolling out our first Kafka deployments (not on Kubernetes), and this post definitely gives us "food for thought".

WIND (What Is Niels Doing)

Yeah, I kind of ask myself that question as well (what am I doing): at the moment I am having a hard time getting any blog posts out, as I am busy at work as well as trying to get to grips with SQL Server 2019 Big Data Clusters. I hope to be able to publish something in a week or two's time.

In the meantime, if you are interested in SQL Server 2019 , go and have a read at these two posts:

What is New in SQL Server 2019 Public Preview.
SQL Server 2019 for Linux in Docker on Windows.

~ Finally

That’s all for this week. I hope you enjoy what I did put together. If you have ideas for what to cover, please comment on this post or ping me.

Blog Feed: To automatically receive more posts like this, please subscribe to my RSS/Atom feed in your feed reader!



[Video] Office Hours 2018/10/24 (With Transcriptions)


This week, Brent, Tara, and Richie discuss increasing a table’s varchar field from 1 to 2, predicting how long CHECKDB should take, SQL Server configuration options, reporting, reducing high waits, analysis services for SQL Azure, advice to first time PASS attendees, compatibility levels, cardinality estimator, PowerShell, and Tara’s hairdo.

Here’s the video on YouTube:

You can register to attend next week’s Office Hours , or subscribe to our podcast to listen on the go.

If you prefer to listen to the audio:

Podcast: Play in new window | Download

Enjoy the Podcast?

Don’t miss an episode, subscribe via iTunes , Stitcher or RSS .

Leave us a review in iTunes

Office Hours Webcast 2018-10-24 Should I use SSIS to manage AG backups?

Brent Ozar: Let’s see, Josh asks, “We have a suggestion for backup jobs to use an SSIS package to move the jobs around to whatever node is the primary in an Availability Group. Have you seen this setup and can it work well?”

Tara Kizer: I haven’t seen that for the Availability Groups that I’ve supported. We had two different methods at the three companies where I had Availability Groups. But at the last one, we just had a server that would run the backups and it would just point to the listener name. So all of our backups, index maintenance, anything that needed to connect to a specific replica, we would just put the jobs on that server instead and it would be a sqlcmd job that would point to the listener name. And we did that for the backups because we wanted the backups to be on the primary replica. I don’t necessarily agree with offloading such a critical task.

Brent Ozar: And why not?

Tara Kizer: Well, even on a synchronous replica, it’s not completely up to date. So that’s just not a good idea if your RPO goal is really low.

Brent Ozar: Between that advice too, and if you think about an SSIS package that would move stuff around, there’s going to be a delay there. Like, your backups won’t be up to date until the SSIS package moves a job around. And if that job process breaks, you’re not getting backups. I’d be like, I can’t unsubscribe fast enough from that plan. What I would do there instead is Ola Hallengren’s stuff. Ola Hallengren’s backup scripts will automatically run on every replica and just backup wherever you tell it to; like, if you want to prefer a secondary or if you want to prefer the primary. Configuration’s not super simple on Always On Availability Groups, but it works.
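For what it’s worth, the way those scripts decide which replica should take the backup comes down to SQL Server’s preferred-replica function; a minimal sketch (the database name and share path are placeholders):

-- Runs on every replica; only the preferred backup replica does the work
IF sys.fn_hadr_backup_is_preferred_replica(N'MyAGDatabase') = 1
BEGIN
    BACKUP DATABASE MyAGDatabase
    TO DISK = N'\\backupserver\backups\MyAGDatabase.bak'
    WITH COMPRESSION, CHECKSUM, COPY_ONLY; -- full backups taken on a secondary must be COPY_ONLY
END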

I need to go from VARCHAR(1) to VARCHAR(2)…

Brent Ozar: Pablo says, “Hello, friends…” Hello Pablo, “I have an 800GB table that needs to increase the size of a varchar field from one to two. What way would you recommend doing that to reduce impact?” Isn’t that an instant change?

Tara Kizer: Yeah, I was going to say, I don’t know that this is going to take a while. Run that on a test box.

Brent Ozar: Yeah, I’m pretty sure that that’s going to be instant.

Tara Kizer: Yeah, and if it isn’t, I would probably just move the column to another table. I would cheat because your table’s just so big. Remove the column I realize that there’s a huge impact there, but…

Brent Ozar: Somebody had a really good post just recently about a switching-cups way to do it. What you do is add a new column with the new data type that you want, but with a different name. You put in a trigger so that whenever an insert or update happens, it copies the old column’s value into the new column. Then you go through and roll through in batches of like 4000 rows at a time, gradually updating the new column. Then you switch column names.
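Roughly, that switching-cups pattern looks like the following; this is only a sketch, with made-up table, column, and trigger names, and it assumes an Id key column exists:

-- 1. Add the new, wider column under a temporary name
ALTER TABLE dbo.BigTable ADD ValueNew VARCHAR(2) NULL;
GO
-- 2. A trigger keeps the new column in sync while the backfill runs
CREATE TRIGGER dbo.trg_BigTable_SyncValue ON dbo.BigTable
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET    t.ValueNew = i.Value
    FROM   dbo.BigTable t
    JOIN   inserted i ON i.Id = t.Id;
END;
GO
-- 3. Backfill in small batches to limit locking and log growth
WHILE 1 = 1
BEGIN
    UPDATE TOP (4000) dbo.BigTable
    SET    ValueNew = Value
    WHERE  ValueNew IS NULL AND Value IS NOT NULL;
    IF @@ROWCOUNT = 0 BREAK;
END;
GO
-- 4. Swap the names, then drop the trigger and the old column
EXEC sp_rename 'dbo.BigTable.Value', 'ValueOld', 'COLUMN';
EXEC sp_rename 'dbo.BigTable.ValueNew', 'Value', 'COLUMN';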

Tara Kizer: Oh, I like it. [crosstalk] I mean, it’s 800GB. I mean, we don’t even know the row count, because maybe it’s 800GB because of data, you know, the size of a row.

Brent Ozar: Yeah, let’s try it and see. I’m just going to grab one of the tables out of the Stack Overflow database… oh, you know what I’ll do, find out if Votes has a varchar in it. No, it doesn’t. Posts has a varchar, and Posts has a decent number of rows in it. I’m going to change Title from nvarchar(250) to nvarchar(251). Alter column, and then I’m going to freeze because I get a spinning beach ball. Oh, that’s excellent, c-c-c-c-column. It’s like some kind of pop song.

What did I say I was going to do? Title nvarchar 251, and let’s see if that happens instantly. So the next thing I’m going to do is switch into the right database, then I’m going to run it. There you go. Yeah, that’s a metadata change. It should happen instantly, regardless of the number of rows in the table. Love it when we can give people good news instead of bad news.

Richie Rump: It doesn’t happen often.

Brent Ozar: It does not happen often.
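In script form, the test Brent ran amounts to something like this against the Stack Overflow sample database, where Title is a nullable nvarchar(250); widening an (n)varchar column is a metadata-only change, so it returns almost instantly regardless of row count:

USE StackOverflow;
GO
ALTER TABLE dbo.Posts ALTER COLUMN Title NVARCHAR(251) NULL;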

How long will CHECKDB take?

Brent Ozar: So next up, Brian says Brian, this is a great question, “Is there a good way to predict how long CHECKDB should take? I’ve tried to space out my various jobs, but sometimes they step on each other’s toes with my jobs running into business hours on the next morning.” Any ideas, Tara, Richie?

Tara Kizer: I mean, you know, CHECKDB, I don’t even like to run that on a production server. I offload that task to another box which, obviously, you have to license. But I like a box that maybe doesn’t have a lot of hardware but can churn through running CHECKDB on all the databases. Eventually it completes, and maybe it’s doing the work for more than one server, but it might have fewer cores for licensing reasons, because you do need to license it.

Richie Rump: I mean, do you need to license that?

Tara Kizer: You do.

Richie Rump: Wouldn’t the development work?

Tara Kizer: No, because you’re offloading a production task. That’s the key there.

Richie Rump: Semantics.

Brent Ozar: If you just happened to be restoring it every night into your development environment and you wanted to make sure that your development environment was okay, that’s totally still not legal, but there we go.
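The offload pattern Tara describes is essentially restore-then-check; a bare-bones sketch, where the database name and backup path are placeholders:

-- On the offload server: restore the most recent backup, then check it
RESTORE DATABASE MyDatabase
FROM DISK = N'\\backupserver\backups\MyDatabase.bak'
WITH REPLACE, STATS = 10;

DBCC CHECKDB (MyDatabase) WITH NO_INFOMSGS, ALL_ERRORMSGS;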

Brent Ozar: Marci says, “Hey, did you get my question?” Yes.

What should my setup defaults be?

Brent Ozar: Jason says, “When you helped us configure our SQL Server 2012 database a few years ago, you recommended that we start with MAXDOP 8, cost threshold at 50, and eight tempdb data files and then tweak from there. We’re now switching to 2016. Do you recommend we start with those same settings?”

Tara Kizer: Yeah.

Brent Ozar: Assuming you’re going to the same size or bigger of a box. Heaven forbid you’re going to a two core box or something like that.

Tara Kizer: I don’t know if this is in our set of checklists. I don’t think it is. But on 2014 and
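For reference, the starting-point settings Jason mentions are applied with sp_configure plus extra tempdb data files; this is a sketch of those defaults, not a tuning recommendation:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 50;
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
-- plus eight equally sized tempdb data files, added with
-- ALTER DATABASE tempdb ADD FILE / MODIFY FILE as appropriate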

Microsoft SQL Server Containers

I’m a Database Administrator, why am I learning about containers?

So, it seems containers are here to stay. Before the release of SQL Server 2017, I really hadn’t paid attention to this space. The thinking was I’d never need to know about any OS except Windows Server. This all changed when Microsoft decided to release SQL Server on Linux. I began to panic. I’ve intentionally focused on the Windows platform, and of course SQL Server, over the past 11 years.

Resistance is futile

Instead of waiting for SQL Server on Linux (or containers) to become widely adopted, I decided to begin learning both Linux and containers. I’ll be creating a series of blog posts as I start down this journey. Each will be related to SQL Server, docker , and Linux.

What is a container?

Docker describes a container as being “a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.” They are lightweight and do not require their own OS. For a full description, see https://dockr.ly/2PYAbPy .

SQL Server and Containers

Microsoft supports running SQL Server within Linux containers beginning with SQL Server 2017. There are several use cases for running SQL Server within containers (a major one being CI/CD for dev and QA environments). When you add on container orchestration platforms, such as Kubernetes, things get really interesting (the ability to move containers to new hosts, scale up, scale down, etc.). As I progress through this path I’ll be setting up an OpenShift ( OKD ) cluster.

Getting started

I’m using Red Hat Enterprise Linux 7.5. If you choose to use this distro, start with a Red Hat Developer subscription. For more details, see https://developers.redhat.com/.

Proxmox is the hypervisor used within my lab environment. Each of the commands listed below is run while logged in as root for simplicity (I know, this isn’t best practice).

Set up a Red Hat Developer subscription.
Download RHEL.
Create a VM, or install on bare metal, using the RHEL iso.
Configure networking and the hostname. I used the nmtui utility to do this. See here for details.
Add the server to your subscription. This allows you to utilize Red Hat repositories, pull updates, etc. using yum. As root, or sudo, run the following:
# subscription-manager register \
    --username DevSubscriptionUserName \
    --password DevSubscriptionPassword --auto-attach
Once registered, pull down the latest updates using yum. (What is yum? https://access.redhat.com/solutions/9934 )
# yum -y update
Install Docker.
# yum install yum-utils -y
# yum-config-manager --enable rhel-7-server-extras-rpms
# yum -y install docker
Start the docker daemon, set it to run automatically on boot, and check the status of docker.
# systemctl start docker.service
# systemctl enable docker.service
# systemctl status docker.service
Check the docker version.
# docker version
Example output from docker version:
Microsoft SQL Server Containers
Docker is installed, now what?

With docker installed we can pull an image of SQL Server using “docker pull.” We’ll grab both a SQL Server 2017 image and SQL Server 2019 (CTP 2.0).

SQL Server 2017 (dev edition):
# docker pull mcr.microsoft.com/mssql/server:2017-latest
Check the available docker images:
# docker images
Microsoft SQL Server Containers
Create a container using the 2017-latest image:
# docker run \
    -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=YourStrongP@ssword' \
    -p 1433:1433 --name mssql2017 \
    -d mcr.microsoft.com/mssql/server:2017-latest
Verify the container is running by entering “docker ps” at the prompt and pressing enter. You should see the following information returned:
CONTAINER_ID: Unique ID
IMAGE: Image used to create the container
COMMAND: Command running within the container
CREATED: Time the container was created
STATUS: Current container status
PORTS: Defined by the -p argument used within the docker run command. We’ve defined the port as 1433:1433 (host is listening on port 1433 and passing traffic to port 1433 within the container).
NAMES: Container name
Connect to the SQL Server instance using SSMS, Azure Data Studio, or sqlcmd. You’ll need to connect to the host IP over port 1433 for this example.
Microsoft SQL Server Containers
SQL Server 2019 CTP 2.0 (RHEL image):
# docker pull mcr.microsoft.com/mssql/rhel/server:vNext-CTP2.0
Check the available docker images:
# docker images
Microsoft SQL Server Containers
Create a container using the vNext-CTP2.0 image:
# docker run \
    -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=YourStrongP@ssword' \
    -p 5000:1433 --name mssql2019 \
    -d mcr.microsoft.com/mssql/rhel/server:vNext-CTP2.0
Verify the container is running by entering “docker ps” at the prompt and pressing enter. You should see the following information returned:
CONTAINER_ID: Unique ID
IMAGE: Image used to create the container
COMMAND: Command running within the container
CREATED: Time the container was created
STATUS: Current container status
PORTS: Defined by the -p argument used within the docker run command. We’ve defined the port as 5000:1433 (host is listening on port 5000 and passing traffic to port 1433 within the container).
NAMES: Container name
Connect to the SQL Server instance using SSMS, Azure Data Studio, or sqlcmd. You’ll need to connect to the host IP over port 5000 for this example.
Microsoft SQL Server Containers
Docker Logs

docker logs fetches the logs available within a particular container. Very useful if you can’t connect via SSMS and you need to review the SQL Server error log.

# docker logs mssql2017

Cleanup

To clean up our host we’ll stop each container, remove the containers, and then remove the images. One thing to note: once you remove a container, all associated data is also removed. By default, a container does NOT persist data if it is removed (docker rm). I’ll revisit this in an upcoming post in which we can ensure any databases created are persisted in the event a container is removed.

# docker stop mssql2017
# docker stop mssql2019
# docker rm mssql2017
# docker rm mssql2019
# docker rmi mcr.microsoft.com/mssql/server:2017-latest
# docker rmi mcr.microsoft.com/mssql/rhel/server:vNext-CTP2.0
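As a preview of that persistence topic: mapping a host directory (or a named volume) onto /var/opt/mssql keeps the system and user databases around even after the container is removed. A sketch with an example host path (on RHEL you may need to append :Z to the volume for SELinux labeling):

# docker run \
    -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=YourStrongP@ssword' \
    -p 1433:1433 --name mssql2017data \
    -v /var/mssql/data:/var/opt/mssql \
    -d mcr.microsoft.com/mssql/server:2017-latest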

Solved Microsoft.SqlServer.Dts.Pipeline.ComponentVersionMismatchException: Th ...


While trying to run one of our SSIS packages from a SQL Server Agent job, a package which had a Script Component in it, we got the error below. It was running fine within SSDT on our dev machine. In fact, the other packages deployed to SSISDB were also running fine, the ones which were not using the Script Component.


Solved   Microsoft.SqlServer.Dts.Pipeline.ComponentVersionMismatchException: Th ...

To fix it, we updated the TargetServerVersion in the SSDT project properties to match the version of the SQL Server where we were deploying the package.


Solved   Microsoft.SqlServer.Dts.Pipeline.ComponentVersionMismatchException: Th ...

To find the SQL Server version.


Solved   Microsoft.SqlServer.Dts.Pipeline.ComponentVersionMismatchException: Th ...
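The query behind that screenshot is presumably something along these lines:

SELECT @@VERSION;
-- or, for just the build and servicing level:
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel;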

And deployed only that package instead of the project in SSISDB.

However, running the package from within SQL Server again threw the same error. We tried a few other things, like opening the same package in SSDT on the server and then deploying that particular package from there. We also tried deleting the existing Script Component in the package and using the Script Component available in the toolbox in SSDT on the server.

The package clearly showed the difference in the version for the Script Component. However, deploying that single updated package again gave the same mismatch exception.


Solved   Microsoft.SqlServer.Dts.Pipeline.ComponentVersionMismatchException: Th ...
Solved   Microsoft.SqlServer.Dts.Pipeline.ComponentVersionMismatchException: Th ...

Eventually, we deployed the whole project (after updating the Target SQL Server version to the server’s SQL version), and not the individual package. It ran successfully this time.

So basically we need to make sure our target version is correct and deploy the entire Project to fix this issue in our case.

Hope it helps..

Is there a maximum list size for the T-SQL IN operator?


I am tasked with the following: the customer wants to read a number (possibly thousands) of values from a CSV file. I have to use these values in a SQL SELECT statement like this:

SELECT * FROM myTable WHERE myTable.Value IN (csvValue1, csvValue2, ..., csvValueN)

The question is: will this work for an arbitrary number of CSV values, and will it perform well for a large number of values?

I will need to save the SQL as a string internally in my C# application for later use (if that makes a difference for alternative solutions).

You really don't want to do that. It would be better to dump those values into an indexed table and use IN with a subquery (which is typically implemented as a semi join; more info ) rather than against the long list of string literals (which is typically implemented as a series of OR operations).
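A sketch of that approach; the table, column, and file names are invented for illustration, and it assumes the CSV values are unique (use a plain index instead of the primary key otherwise):

-- Load the CSV values into an indexed temp table once...
CREATE TABLE #CsvValues (Value VARCHAR(100) PRIMARY KEY);

BULK INSERT #CsvValues
FROM 'C:\data\values.csv'
WITH (ROWTERMINATOR = '\n');

-- ...then let IN run as a subquery (a semi join) instead of
-- thousands of OR comparisons against literals
SELECT *
FROM   myTable
WHERE  myTable.Value IN (SELECT Value FROM #CsvValues);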

From Books Online (BOL):

Including an extremely large number of values (many thousands) in an IN clause can consume resources and return errors 8623 or 8632. To work around this problem, store the items in the IN list in a table.

Error 8623: The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that reference a very large number of tables or partitions. Please simplify the query. If you believe you have received this message in error, contact Customer Support Services for more information.

Error 8632: Internal error: An expression services limit has been reached. Please look for potentially complex expressions in your query, and try to simplify them.
