
sql-server SQL Server: Multiple Table Joins With a WHERE Clause


I'm using SQL Server and I'm having trouble getting the results I want from a SELECT query. I've tried joining in different orders and using subqueries, but nothing works the way I want. Take this example of software applications with different version levels that may be installed on people's computers.

I need to apply an additional WHERE filter, but for some reason I can't get the results I want.

Maybe I'm looking at my data wrong; I'm not quite sure why I can't get this to work.

Application table

ID  Name
1   Word
2   Excel
3   Powerpoint

Software table (contains version information for the different applications)

ID  ApplicationID  Version
1   1              2003
2   1              2007
3   2              2003
4   2              2007
5   3              2003
6   3              2007

Software_Computer junction table

ID  SoftwareID  ComputerID
1   1           1
2   4           1
3   2           2
4   5           2

Computer table

ID  ComputerName
1   Name1
2   Name2

I want a query I can run that selects a specific computer and shows which applications and software versions it has, but I also want it to show which applications it does not have (Version would be NULL, since the computer doesn't have that software installed).

SELECT Computer.ComputerName, Application.Name, Software.Version
FROM Computer
JOIN Software_Computer ON Computer.ID = Software_Computer.ComputerID
JOIN Software ON Software_Computer.SoftwareID = Software.ID
RIGHT JOIN Application ON Application.ID = Software.ApplicationID
WHERE Computer.ID = 1

I want the following result set:

ComputerName  Name        Version
Name1         Word        2003
Name1         Excel       2007
Name1         Powerpoint  NULL

But I only get:

ComputerName  Name   Version
Name1         Word   2003
Name1         Excel  2007

I thought the RIGHT JOIN would include all rows from the Application table, even those unrelated to the computer. What am I missing or doing wrong?

When you use a LEFT JOIN or RIGHT JOIN, it makes a difference whether you put the filter in the WHERE clause or in the JOIN condition.

See this answer I wrote a while ago to a similar question:

What is the difference in these two queries as getting two different result set?

In short:

> If you put it in the WHERE clause (as you did), rows unrelated to that computer are filtered out entirely

> If you put it in the JOIN condition, rows unrelated to the computer still appear in the query results, just with NULL values

> which is what you want
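
To make that concrete, here is a sketch of one possible rewrite (table and column names taken from the question). The application and computer matching is moved into the join condition, and Computer stays on the preserved side of the outer join, so every application survives, with a NULL version where nothing is installed:

SELECT Computer.ComputerName, Application.Name, Software.Version
FROM Application
CROSS JOIN Computer
LEFT JOIN Software_Computer
    INNER JOIN Software
        ON Software.ID = Software_Computer.SoftwareID
    ON Software_Computer.ComputerID = Computer.ID
   AND Software.ApplicationID = Application.ID
WHERE Computer.ID = 1

With the sample data above, this returns Name1/Word/2003, Name1/Excel/2007 and Name1/Powerpoint/NULL. The WHERE filter on Computer.ID is safe here because Computer sits on the preserved side of the LEFT JOIN; the conditions that must live in the ON clause are the ones against the NULL-extended side.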

http://stackoverflow.com/questions/8758223/sql-server-multiple-table-joins-with-a-where-clause


An Incompatible SQL Server Version Was Detected

Database Projects: Helping Find Obsolete References

Kevin Feasel

2017-11-15

SQL Server Data Tools

Jan Mulkens explains some of those “unresolved reference” warnings in SQL Server Data Tools database projects: if you’re developing databases in SSDT, like you should, you’re probably getting a lot of build warnings. One of the warnings you’ll see the most often is the “unresolved reference”. Usually you solve these by adding either the master, […] Read More

CI With SQL Server And Jenkins

Kevin Feasel

2017-04-26

Deployment , Powershell , Source Control , SQL Server Data Tools

Chris Adkin shows how to auto-deploy SQL Server Data Tools projects to a SQL Server instance using Jenkins: The aim of this blog post is twofold, it is to explain how: A “self building pipeline” for the deployment of a SQL Server Data Tools project can be implemented using open source tools. A build pipeline can be […] Read More

Installing SQL Server 2017 on RedHat 7.4


Copyright notice: this is an original article by Buddy Yuan and may not be reproduced without permission. Original address: REDHAT7.4上安装SQL SERVER 2017

Today I tested installing SQL Server 2017 on RedHat 7.4. The whole installation process is not complicated; the detailed steps follow.

1. Check CPU and memory. You need at least 2 CPU cores, more than 2 GB of memory, and more than 6 GB of disk space, as shown below:


[root@10 ~]# grep -i --color "model name" /proc/cpuinfo
model name : Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
model name : Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
[root@10 ~]# grep -i --color "MemTotal" /proc/meminfo
MemTotal: 4046612 kB
[root@10 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 42G 4.0G 38G 10% /
devtmpfs 2.0G 0 2.0G 0% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 2.0G 9.5M 2.0G 1% /run
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/sda1 497M 153M 344M 31% /boot
tmpfs 396M 4.0K 396M 1% /run/user/992
tmpfs 396M 48K 396M 1% /run/user/1000
tmpfs 396M 0 396M 0% /run/user/0

2. Download the Microsoft SQL Server 2017 Red Hat repository configuration file:

[root@10 ~]# sudo curl -o /etc/yum.repos.d/mssql-server.repo https://packages.microsoft.com/config/rhel/7/mssql-server-2017.repo
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 232 100 232 0 0 778 0 --:--:-- --:--:-- --:--:-- 781

3. Run yum to install SQL Server:

[root@10 ~]# yum install -y mssql-server
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
packages-microsoft-com-mssql-server-2017 | 2.9 kB 00:00:00
packages-microsoft-com-mssql-server-2017/primary_db | 16 kB 00:00:00
Resolving Dependencies
--> Running transaction check
---> Package mssql-server.x86_64 0:14.0.3038.14-2 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================================================================================================================================================================================================
Package Arch Version Repository Size
================================================================================================================================================================================================================================================================
Installing:
mssql-server x86_64 14.0.3038.14-2 packages-microsoft-com-mssql-server-2017 169 M
Transaction Summary
================================================================================================================================================================================================================================================================
Install 1 Package
Total download size: 169 M
Installed size: 169 M
Downloading packages:
warning: /var/cache/yum/x86_64/7Server/packages-microsoft-com-mssql-server-2017/packages/mssql-server-14.0.3038.14-2.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID be1229cf: NOKEY====================================== ] 5.7 MB/s | 168 MB 00:00:00 ETA
Public key for mssql-server-14.0.3038.14-2.x86_64.rpm is not installed
mssql-server-14.0.3038.14-2.x86_64.rpm | 169 MB 00:00:30
Retrieving key from https://packages.microsoft.com/keys/microsoft.asc
Importing GPG key 0xBE1229CF:
Userid : "Microsoft (Release signing) <gpgsecurity@microsoft.com>"
Fingerprint: bc52 8686 b50d 79e3 39d3 721c eb3e 94ad be12 29cf
From : https://packages.microsoft.com/keys/microsoft.asc
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : mssql-server-14.0.3038.14-2.x86_64 1/1
+--------------------------------------------------------------+
Please run 'sudo /opt/mssql/bin/mssql-conf setup'
to complete the setup of Microsoft SQL Server
+--------------------------------------------------------------+
SQL Server needs to be restarted in order to apply this setting. Please run
'systemctl restart mssql-server.service'.
Verifying : mssql-server-14.0.3038.14-2.x86_64 1/1
Installed:
mssql-server.x86_64 0:14.0.3038.14-2

4. Run the SQL Server configuration script (/opt/mssql/bin/mssql-conf) and choose the edition to install. Since this is a test, I chose the Developer edition. You also need to set the SA password; it must be a strong password of at least 8 characters, including uppercase and lowercase letters, digits, and/or non-alphanumeric symbols.

[root@10 bin]# pwd
/opt/mssql/bin
[root@10 bin]# ./mssql-conf setup
Choose an edition of SQL Server:
1) Evaluation (free, no production use rights, 180-day limit)
2) Developer (free, no production use rights)
3) Express (free)
4) Web (PAID)
5) Standard (PAID)
6) Enterprise (PAID)
7) Enterprise Core (PAID)
8) I bought a license through a retail sales channel and have a product key to enter.
Details about editions can be found at
https://go.microsoft.com/fwlink/?LinkId=852748&clcid=0x409
Use of PAID editions of this software requires separate licensing through a
Microsoft Volume Licensing program.
By choosing a PAID edition, you are verifying that you have the appropriate
number of licenses in place to install and run this software.
Enter your edition(1-8): 2
The license terms for this product can be found in
/usr/share/doc/mssql-server or downloaded from:
https://go.microsoft.com/fwlink/?LinkId=855862&clcid=0x409
The privacy statement can be viewed at:
https://go.microsoft.com/fwlink/?LinkId=853010&clcid=0x409
Do you accept the license terms? [Yes/No]:Yes
Enter the SQL Server system administrator password:
The specified password contains an invalid character. Valid characters include uppercase letters, lowercase letters, numbers, symbols, punctuation marks, and unicode characters that are categorized as alphabetic but are not uppercase or lowercase.
Enter the SQL Server system administrator password:
Confirm the SQL Server system administrator password:
Configuring SQL Server...
ForceFlush is enabled for this instance.
ForceFlush feature is enabled for log durability.
Created symlink from /etc/systemd/system/multi-user.target.wants/mssql-server.service to /usr/lib/systemd/system/mssql-server.service.
Setup has completed successfully. SQL Server is now starting.

5. Check the status of the mssql-server service; it is now running.

systemctl status mssql-server
[root@10 bin]# systemctl status mssql-server
● mssql-server.service - Microsoft SQL Server Database Engine
Loaded: loaded (/usr/lib/systemd/system/mssql-server.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-10-15 23:02:15 CST; 3min 27s ago
Docs: https://docs.microsoft.com/en-us/sql/linux
Main PID: 4375 (sqlservr)
CGroup: /system.slice/mssql-server.service
├─4375 /opt/mssql/bin/sqlservr
└─4414 /opt/mssql/bin/sqlservr
Oct 15 23:02:18 10.0.2.15 sqlservr[4375]: 2018-10-15 23:02:18.81 spid11s Polybase feature disabled.
Oct 15 23:02:18 10.0.2.15 sqlservr[4375]: 2018-10-15 23:02:18.82 spid11s Clearing tempdb database.
Oct 15 23:02:18 10.0.2.15 sqlservr[4375]: 2018-10-15 23:02:18.84 spid6s 8 transactions rolled forward in database 'msdb' (4:0). This is an informational message only. No user action is required.
Oct 15 23:02:18 10.0.2.15 sqlservr[4375]: 2018-10-15 23:02:18.90 spid6s 0 transactions rolled back in database 'msdb' (4:0). This is an informational message only. No user action is required.
Oct 15 23:02:19 10.0.2.15 sqlservr[4375]: 2018-10-15 23:02:19.13 spid11s Starting up database 'tempdb'.
Oct 15 23:02:19 10.0.2.15 sqlservr[4375]: 2018-10-15 23:02:19.34 spid11s The tempdb database has 1 data file(s).
Oct 15 23:02:19 10.0.2.15 sqlservr[4375]: 2018-10-15 23:02:19.36 spid22s The Service Broker endpoint is in disabled or stopped state.
Oct 15 23:02:19 10.0.2.15 sqlservr[4375]: 2018-10-15 23:02:19.36 spid22s The Database Mirroring endpoint is in disabled or stopped state.
Oct 15 23:02:19 10.0.2.15 sqlservr[4375]: 2018-10-15 23:02:19.40 spid22s Service Broker manager has started.
Oct 15 23:02:19 10.0.2.15 sqlservr[4375]: 2018-10-15 23:02:19.41 spid6s Recovery is complete. This is an informational message only. No user action is required.

6. By default, SQL Server accepts connections on port 1433. Check the firewall status; if the firewall is running, you need to open port 1433 in it to allow connections, or simply turn the Linux firewall off. Here I just turned the firewall off.

[root@10 bin]# firewall-cmd --state
running
[root@10 bin]# systemctl stop firewalld.service
[root@10 bin]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

7. Next, install the client tools such as sqlcmd and bcp. As with the server installation, first download the repository configuration file.

[root@10 bin]# sudo curl -o /etc/yum.repos.d/msprod.repo https://packages.microsoft.com/config/rhel/7/prod.repo
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 193 100 193 0 0 682 0 --:--:-- --:--:-- --:--:-- 681

Next, remove any old versions of the mssql tools, if present.

[root@10 bin]# yum remove unixODBC-utf16 unixODBC-utf16-devel
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
No Match for argument: unixODBC-utf16
No Match for argument: unixODBC-utf16-devel
No Packages marked for removal

After the removal, install the new tools. The following problem occurred during installation:

[root@10 bin]# yum install -y mssql-tools unixODBC-devel
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
packages-microsoft-com-prod | 2.9 kB 00:00:00
packages-microsoft-com-prod/primary_db | 144 kB 00:00:00
No package unixODBC-devel available.
Resolving Dependencies
--> Running transaction check
---> Package mssql-tools.x86_64 0:17.2.0.2-1 will be installed
--> Processing Dependency: msodbcsql17 < 17.3.0.0 for package: mssql-tools-17.2.0.2-1.x86_64 --> Processing Dependency: msodbcsql17 >= 17.2.0.0 for package: mssql-tools-17.2.0.2-1.x86_64
--> Running transaction check
---> Package msodbcsql17.x86_64 0:17.2.0.1-1 will be installed
--> Processing Dependency: unixODBC >= 2.3.1 for package: msodbcsql17-17.2.0.1-1.x86_64
--> Processing Dependency: libodbcinst.so.2()(64bit) for package: msodbcsql17-17.2.0.1-1.x86_64
--> Running transaction check
---> Package msodbcsql17.x86_64 0:17.2.0.1-1 will be installed
--> Processing Dependency: unixODBC >= 2.3.1 for package: msodbcsql17-17.2.0.1-1.x86_64
---> Package unixODBC-utf16.x86_64 0:2.3.1-1 will be installed
--> Processing Conflict: msodbcsql17-17.2.0.1-1.x86_64 conflicts unixODBC-utf16
--> Finished Dependency Resolution
Error: msodbcsql17 conflicts with unixODBC-utf16-2.3.1-1.x86_64
Error: Package: msodbcsql17-17.2.0.1-1.x86_64 (packages-microsoft-com-prod)
Requires: unixODBC >= 2.3.1
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

The problem above was mainly caused by the configured yum repositories being unavailable. After searching online for a solution, I switched to the 163 (NetEase) CentOS yum mirror and the installation succeeded.

[root@10 run]# yum install -y mssql-tools unixODBC-devel
Loaded plugins: fastestmirror, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package mssql-tools.x86_64 0:17.2.0.2-1 will be installed
--> Processing Dependency: msodbcsql17 < 17.3.0.0 for package: mssql-tools-17.2.0.2-1.x86_64 --> Processing Dependency: msodbcsql17 >= 17.2.0.0 for package: mssql-tools-17.2.0.2-1.x86_64
---> Package unixODBC-devel.x86_64 0:2.3.1-11.el7 will be installed
--> Processing Dependency: unixODBC(x86-64) = 2.3.1-11.el7 for package: unixODBC-devel-2.3.1-11.el7.x86_64
--> Processing Dependency: libtemplate.so.2()(64bit) for package: unixODBC-devel-2.3.1-11.el7.x86_64
--> Processing Dependency: libtdsS.so.2()(64bit) for package: unixODBC-devel-2.3.1-11.el7.x86_64
--> Processing Dependency: libsapdbS.so.2()(64bit) for package: unixODBC-devel-2.3.1-11.el7.x86_64
--> Processing Dependency: liboraodbcS.so.2()(64bit) for package: unixODBC-devel-2.3.1-11.el7.x86_64
--> Processing Dependency: liboplodbcS.so.2()(64bit) for package: unixODBC-devel-2.3.1-11.el7.x86_64
--> Processing Dependency: libodbctxtS.so.2()(64bit) for package: unixODBC-devel-2.3.1-11.el7.x86_64
--> Processing Dependency: libodbcnnS.so.2()(64bit) for package: unixODBC-devel-2.3.1-11.el7.x86_64
--> Processing Dependency: libodbcminiS.so.2()(64bit) for package: unixODBC-devel-2.3.1-11.el7.x86_64
--> Processing Dependency: libodbcdrvcfg2S.so.2()(64bit) for package: unixODBC-devel-2.3.1-11.el7.x86_64
--> Processing Dependency: libodbcdrvcfg1S.so.2()(64bit) for package: unixODBC-devel-2.3.1-11.el7.x86_64
--> Processing Dependency: libodbccr.so.2()(64bit) for package: unixODBC-devel-2.3.1-11.el7.x86_64
--> Processing Dependency: libnn.so.2()(64bit) for package: unixODBC-devel-2.3.1-11.el7.x86_64
--> Processing Dependency: libmimerS.so.2()(64bit) for package: unixODBC-devel-2.3.1-11.el7.x86_64
--> Processing Dependency: libesoobS.so.2()(64bit) for package: unixODBC-devel-2.3.1-11.el7.x86_64
--> Running transaction check
---> Package msodbcsql17.x86_64 0:17.2.0.1-1 will be installed
---> Package unixODBC.x86_64 0:2.3.1-11.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================================================================================================================================================================================================
Package Arch Version Repository Size
================================================================================================================================================================================================================================================================
Installing:
mssql-tools x86_64 17.2.0.2-1 packages-microsoft-com-prod 254 k
unixODBC-devel x86_64 2.3.1-11.el7 base 55 k
Installing for dependencies:
msodbcsql17 x86_64 17.2.0.1-1 packages-microsoft-com-prod 4.3 M
unixODBC x86_64 2.3.1-11.el7 base 413 k
Transaction Summary
================================================================================================================================================================================================================================================================
Install 2 Packages (+2 Dependent packages)
Total download size: 5.1 M
Installed size: 6.0 M
Downloading packages:
warning: /var/cache/yum/x86_64/$releasever/base/packages/unixODBC-devel-2.3.1-11.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for unixODBC-devel-2.3.1-11.el7.x86_64.rpm is not installed
(1/4): unixODBC-devel-2.3.1-11.el7.x86_64.rpm | 55 kB 00:00:00
(2/4): unixODBC-2.3.1-11.el7.x86_64.rpm | 413 kB 00:00:00
(3/4): mssql-tools-17.2.0.2-1.x86_64.rpm | 254 kB 00:00:00
(4/4): msodbcsql17-17.2.0.1-1.x86_64.rpm | 4.3 MB 00:00:01
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 3.4 MB/s | 5.1 MB 00:00:01
Retrieving key from http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7
Importing GPG key 0xF4A80EB5:
Userid : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
From : http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
** Found 1 pre-existing rpmdb problem(s), 'yum check' output follows:
rhn-check-2.0.2-17.el7.noarch has missing requires of yum-rhn-plugin >= ('0', '1.6.4', '1')
Installing : unixODBC-2.3.1-11.el7.x86_64 1/4
The license terms for this product can be downloaded from
https://aka.ms/odbc172eula and found in
/usr/share/doc/msodbcsql17/LICENSE.txt . By entering 'YES',
you indicate that you accept the license terms.
Do you accept the license terms? (Enter YES or NO)
YES
Installing : msodbcsql17-17.2.0.1-1.x86_64 2/4
The license terms for this product can be downloaded from
http://go.microsoft.com/fwlink/?LinkId=746949 and found in
/usr/share/doc/mssql-tools/LICENSE.txt . By entering 'YES',
you indicate that you accept the license terms.
Do you accept the license terms? (Enter YES or NO)
YES
Installing : mssql-tools-17.2.0.2-1.x86_64 3/4
Installing : unixODBC-devel-2.3.1-11.el7.x86_64 4/4
Verifying : unixODBC-devel-2.3.1-11.el7.x86_64 1/4
Verifying : unixODBC-2.3.1-11.el7.x86_64 2/4
Verifying : msodbcsql17-17.2.0.1-1.x86_64 3/4
Verifying : mssql-tools-17.2.0.2-1.x86_64 4/4
Installed:
mssql-tools.x86_64 0:17.2.0.2-1 unixODBC-devel.x86_64 0:2.3.1-11.el7
Dependency Installed:
msodbcsql17.x86_64 0:17.2.0.1-1 unixODBC.x86_64 0:2.3.1-11.el7

8. Next, run some connection tests. First, check which ports sqlservr is listening on:

[root@10 run]# netstat -tulpn | grep sqlservr
tcp 0 0 0.0.0.0:1433 0.0.0.0:* LISTEN 4414/sqlservr
tcp 0 0 127.0.0.1:1434 0.0.0.0:* LISTEN 4414/sqlservr
tcp6 0 0 :::1433 :::* LISTEN 4414/sqlservr
tcp6 0 0 ::1:1434 :::* LISTEN 4414/sqlservr
[root@10 run]# sqlcmd -S 127.0.0.1 -U SA -P 'Passwd!123*'
bash: sqlcmd: command not found...

The PATH environment variable has not been set up for the tools yet, so set it first:

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
source ~/.bashrc

Connect again. If the connection succeeds, the sqlcmd command prompt is displayed: 1>

[root@10 run]# sqlcmd -S 127.0.0.1 -U SA
Password:
1> SELECT Name from sys.Databases
2> GO
Name
--------------------------------------------------------------------------------------------------------------------------------
master
tempdb
model
msdb
TestDB
(5 rows affected)
1> USE TestDB
2> CREATE TABLE test (id INT, name VARCHAR(50))
3> INSERT INTO test VALUES (1, 'test install');
4> GO
Changed database context to 'TestDB'.
(1 rows affected)
1> select * from test
2> GO
id name
----------- --------------------------------------------------
1 test install
(1 rows affected)

This completes the installation. Reference: the official Microsoft documentation: https://docs.microsoft.com/zh-cn/sql/linux/quickstart-install-connect-red-hat?view=sql-server-2017

View active session in SQL Server 2005

Searching for Active Directory from SQL Server 2005

How can I query Active Directory from SQL Server 2005?Pretty general question but here are some pointers. You need a linked server creating on the SQL Server that points to ADSI (Active Directory Service Interface) something like this will do it. EXE

How to determine the total number of open / active connections in ms sql server 2005

My php/MS Sql Server 2005/win 2003 Application occasionally becomes very unresponsive, the memory/cpu usage does not spike. If i try to open any new connection from sql management studio, then the it just hangs at the open connection dialog box. how

Text of System Views in SQL Server 2005

I am looking for viewing the text of the system views and procedures in SQL Server 2005 using the object explorer or using sp_helptext. actually i am coming from the SQL Server 2000 background, where we have the feature of retreiving the code of the

Find Dependencies in SQL Server 2005

Is there a dependable way to find dependencies among views and tables in SQL Server 2005? sys.sql_dependencies doesn't list all of my dependencies. (I thought I saw a similar thread here but can't find it now. Sorry if this is a dup).You can try thes

Backup SQL Server 2005: View or restore yourself without interfering with the existing database?

I have SQL Server 2005 set up and I backed up a database a year ago, and now need a few views from it. Backup is rather large, 6GB, and the database is up and running 24/7 and I cannot meddle with it, I just need these views. Creating a new database

SQL Server 2005: Packing tables by views - Advantages and disadvantages

Background I am working on a legacy small-business automation system (inventory, sales, procurement, etc.) that has a single database hosted by SQL Server 2005 and a bunch of client applications. The main client (used by all users) is an MS Access 20

how to view the job in text in sql server 2005

Ex: for store procedure we use sp_helptext .is there any keyword for viewing jobs in text in sql server 2005 regards kumarFor stuff like this where I know how to do something in Management Studio but aren't sure of the way to do it by code I sometime

How can the last modification date of a table be returned in SQL Server 2005?

How can a Table's Last Modified date be returned in SQL Server 2005? I did see one on the Table Properties page. There is a Created Date but no Modified date. If it is not available, what would be some other ways to add this functionality? Here are a

Manually editing / adding records to sql server 2005

Is there any option in sql server 2005 management studio to change columns in a table by hand and by the sql commands alter table or insert into. If yes, then could someone please show how or link to some instructions?Sure you can. If you want to ren

Where to set permissions on all servers for the sql server 2005 logon trigger

I need to keep track of the last login time for each user in our SQL Server 2005 database. I created a trigger like this: CREATE TRIGGER LogonTimeStamp ON ALL SERVER FOR LOGON AS BEGIN IF EXISTS (SELECT * FROM miscdb..user_last_login WHERE user_id =

Intercept and rewrite queries in SQL Server 2005

We have an application built on top of SQL server 2005 that we have no control over. We've recently discovered that this application is sending some very inefficient SELECT queries to SQL that is causing serious capacity issues on the database. I kno

Dynamic selection of SQL Server 2005

I'm very much new to SQL and struggling to learn, so I'll warn everyone up front my mistakes might very well be the obvious ones - don't assume I know what I'm doing! This is in SQL Server 2005. EDIT: The code below is accidentally misleading; my_tab

How can you do a complete external join in sql server 2005?

How can you do a full outer join in sqlserver 2005? Seems like there is full outer join in sqlserver 2008 but I need to do this in sqlserver 2005. In other words, I am merging two views based on the ACCTNUM col in both views (The views show aggregate

Problem SQL Server 2005 SP Deadlock

I have a scheduled job with a SP running on daily basis (SQL Server 2005). Recently I frequently encounter deadlock problem for this SP. Here is the error message: Message Executed as user: dbo. Transaction (Process ID 56) was deadlocked on thread |

Data Flow Transformations in SSIS


SSIS has various data flow transformation components that make it easy to manipulate the source data before it can be sent to the destination for processing.

Below are some of the most frequently used ones.

Data Conversion

For changing the data type of the source column.

Simply select the input column and then specify the data type for it.

For example, if our source has a date column stored as a string, we can apply the Data Conversion transformation to convert its data type to Date.



Derived Column

Can be used if we want to apply an expression to a source column and create a new column or update an existing one.

For example, if we are interested in the year part of the Modified Date column, we can use the YEAR date/time function to extract it and add it as a new column.



Percentage Sampling and Row Sampling

To reduce the number of rows in the pipeline.

Percentage Sampling allows us to define the percentage of rows for sampling. It generates two outputs: one for the rows that are selected, and another for the rows that are not.



Similarly, Row Sampling allows us to specify the number of rows directly instead of a percentage. It also gives the option to select the columns.



Multicast

Multicast can be used to pass the same data to multiple destinations.



Lookup Transformation

Conditional Split Transformation

Sort and Aggregate Transformation

Sort transformation can be used to specify sorting for the data. When the source is SQL Server, we can specify the sort in the source query itself (see the sketch below); when working with flat files, however, this transformation can be handy.
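
For illustration, a sort pushed into the source query might look like this (a sketch assuming an AdventureWorks-style source table):

-- The database engine sorts the rows, so no Sort transformation is needed
SELECT SalesOrderID, CustomerID, OrderDate
FROM Sales.SalesOrderHeader
ORDER BY OrderDate DESC;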



To perform aggregation on the source data we can use the Aggregate transformation.



Union All

Union All allows us to combine data from multiple sources. Here the column structure of the sources should be identical. The data from the first source is passed to the destination, followed by the data from the second.



We've covered some of the common transformation components in this post; in the next post we'll try to cover the remaining frequently used transformations.

Hope it helps.

Query Memory Grants and Resource Semaphores in SQL Server

(Be sure to check out the FREE SQLpassion Performance Tuning Training Plan - you get a weekly email packed with all the essential knowledge you need to know about performance tuning on SQL Server.)

In today’s blog posting I want to talk about Query Memory in SQL Server, and want to show you how fast it can degrade the performance of your queries. Before we dive into the details about how SQL Server is managing Query Memory, I want to talk briefly about what Query Memory actually is.

Query Memory

SQL Server has to allocate Query Memory for various Execution Plans based on their used operators. The following Execution Plan operators need Query Memory:

Sort Operators:
- Sort
- Sort (TOP N Sort)

Hash Operators:
- Hash Match
- Hash Match Aggregate

Exchange Operators:
- Parallelism (Distribute Streams)
- Parallelism (Repartition Streams)
- Parallelism (Gather Streams)

How much Query Memory is allocated to these operators depends on various factors, like the Estimated Number of Rows and the Row Size itself. In the worst case, one of the above-mentioned operators can spill over to TempDb, and your query performance degrades because of the additional physical TempDb overhead that this introduces.

Resource Semaphores

The Query Memory that is requested by an Execution Plan (the so-called Query Memory Grant) is taken from the Buffer Pool Memory (with a maximum of 75%), and the Resource Governor Workload Group in use defines the maximum per query (up to 25% by default).

The Query Memory itself is a limited resource in SQL Server, and is therefore protected by a so-called Resource Semaphore. A semaphore itself is nothing more than a synchronization object that is used to control access to a shared resource, in our case the Query Memory. The nice thing about a semaphore is that it can grant access to the shared resource to multiple threads in parallel. That's unlike a Spinlock, which is a simple mutex: you either hold the Spinlock or you don't!

In the context of SQL Server, multiple queries can request Query Memory through the Resource Semaphores. But when the Query Memory is exhausted (everything is currently in use), queries have to wait until other queries release Query Memory back to SQL Server. Only then can they get their Query Memory.

Let's now take a more detailed look at the Resource Semaphores that SQL Server uses here. In my case I have configured SQL Server with a Maximum Server Memory setting of only 500 MB, to make it easy to reproduce some performance problems. Therefore the whole available Query Memory is about 375 MB: 75% of the Buffer Pool Memory. For each Resource Governor Resource Pool you get 2 Resource Semaphores:

- Small Resource Semaphore: 5% of the whole Query Memory
- Large Resource Semaphore: 95% of the whole Query Memory

You can see and monitor these Resource Semaphores in the DMV sys.dm_exec_query_resource_semaphores. My SQL Server instance has 4 Resource Semaphores: 2 Resource Semaphores for the default Resource Pool, and 2 for the internal Resource Pool.


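If you want to inspect them yourself, a minimal query against this DMV looks like this (a sketch; the column names are as documented):

SELECT pool_id,
       resource_semaphore_id,
       max_target_memory_kb,
       total_memory_kb,
       available_memory_kb,
       granted_memory_kb
FROM sys.dm_exec_query_resource_semaphores;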

This DMV exposes the following important columns for performance troubleshooting:

- resource_semaphore_id: 0 for the small Resource Semaphore, 1 for the large Resource Semaphore
- max_target_memory_kb: how much Query Memory one query can get
- total_memory_kb: how much Query Memory is held and managed by that Resource Semaphore
- available_memory_kb: how much Query Memory is currently available from that Resource Semaphore
- granted_memory_kb: how much Query Memory is currently granted by that Resource Semaphore

Resource Semaphore Queues

To make things a little more complicated, each Resource Semaphore also has multiple queues available. A submitted query that needs some Query Memory is assigned to the corresponding queue based on the cost factor of its query plan:

- Query Cost < 10: queue_id 5
- Query Cost between 10 and 99: queue_id 6
- Query Cost between 100 and 999: queue_id 7
- Query Cost between 1000 and 9999: queue_id 8
- Query Cost >= 10000: queue_id 9

Which Resource Semaphore Queue a query is currently using can be seen through the DMV sys.dm_exec_query_memory_grants. This DMV can also tell you whether a query has already successfully allocated its Query Memory, or is still waiting on the Query Memory Grant. Let's have a more detailed look at this DMV.



This DMV exposes the following important columns for troubleshooting:

- request_time: the time when the request for Query Memory was made
- grant_time: the time when the request for Query Memory was fulfilled by SQL Server
- requested_memory_kb: how much Query Memory the query requested
- granted_memory_kb: how much Query Memory the query got from SQL Server
- query_cost: the cost of the Execution Plan
- queue_id: the queue used, based on the cost factor
- resource_semaphore_id: the Resource Semaphore used (small or large)

The most important column for me here is grant_time. If you see a NULL value there for a given query, it means the query is waiting for Query Memory. In that case the query reports the wait type RESOURCE_SEMAPHORE, which is more or less terrible, because a query can only start once the requested Query Memory has been granted. A wait type of RESOURCE_SEMAPHORE therefore means that the query has not even started yet! Just think about that…
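
Based on that observation, a simple troubleshooting query (a sketch using the documented columns) lists every query that is still waiting for its Query Memory Grant:

-- Queries whose grant_time is NULL have not even started executing yet
SELECT session_id,
       request_time,
       requested_memory_kb,
       query_cost,
       queue_id,
       resource_semaphore_id
FROM sys.dm_exec_query_memory_grants
WHERE grant_time IS NULL;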

With the column queue_id you can also see into which Resource Semaphore Queue the query was put by SQL Server. As described above, the query is placed into a queue based on the cost factor of its query plan. A cheap query from an OLTP workload uses a different queue than a very expensive query from a reporting or DWH workload.

And now we are coming to the most important point of this blog posting:

A query which is waiting in a queue can only be executed when ALL lower-cost queues do not contain *any* other waiting queries!!!

And this can lead to serious performance problems. Imagine you have a large DWH query that is currently waiting in queue_id 9, and in addition some recurring small queries from an OLTP workload that are always put into queue_id 5. Your DWH query will wait a very long time until it can finally be executed.



Let's now work with a few different queries to demonstrate this problem. In the first step I created a simple Stored Procedure in the ContosoRetailDW database that generates a query plan with a cost factor of 295.

CREATE PROCEDURE ReportingWorkload
AS
BEGIN
    SELECT TOP 10000 *
    FROM
    (
        SELECT TOP 1000000 * FROM FactOnlineSales
    ) AS s
    ORDER BY ProductKey
    OPTION (MAXDOP 1)
END
GO

Afterwards I executed this stored procedure with 10 parallel users through the stress testing tool ostress.exe, which is part of the RML Utilities. As mentioned before, my Maximum Server Memory setting is configured with only 500 MB.

ostress.exe -S"sqlag-node1" -Q"EXEC ContosoRetailDW.dbo.ReportingWorkload" -n10

When you run that Stored Procedure with 10 parallel users, you can already see in sys.dm_exec_query_memory_grants that a lot of queries are waiting on outstanding Query Memory Grants: for almost all queries the column grant_time is NULL, and only one query got a Query Memory Grant.



When you concurrently look into sys.dm_exec_requests, you can see that these waiting queries report the wait type RESOURCE_SEMAPHORE back to SQL Server.



As you can see from the output of sys.dm_exec_query_memory_grants, the query with a cost factor of around 295 was put into Resource Semaphore Queue 7. Let's now run some other queries concurrently that also depend on Query Memory. The following listing shows 2 queries: one with a cost factor of 2.97, and another with a cost factor of 3064.

-- Query Cost Factor: 2.97
-- queue_id: 5
SELECT TOP 10000 *
FROM
(
    SELECT TOP 40000 * FROM FactOnlineSales
) AS s
ORDER BY ProductKey
OPTION (MAXDOP 1)
GO

-- Query Cost Factor: 3064
-- queue_id: 8
SELECT TOP 10000 *
FROM
(
    SELECT TOP 10000000 * FROM FactOnlineSales
) AS s
ORDER BY ProductKey
OPTION (MAXDOP 1)
GO

The query with the cost factor of 2.97 is put into Resource Semaphore Queue 5 and is executed almost immediately. But the query with the cost factor of 3064 is put into queue 8, and must wait until all lower queues no longer contain any waiting queries. So it waits much longer, and it reports the wait type RESOURCE_SEMAPHORE for much longer.

Query Memory Waits

How long a query waits for Query Memory depends on its cost factor. By default it waits (in seconds) 25 times the cost factor, with a maximum of 86400 seconds (24 hours). For the cost-295 procedure above, that is roughly 295 x 25 = 7375 seconds. After that wait time the query is finally executed by SQL Server.

You can override that default query wait behaviour with the instance setting query wait (s). The following listing shows how you can change that setting to 20 seconds.

sp_configure 'query wait (s)', 20
RECONFIGURE
GO

Once the required RECONFIGURE has run, queries wait only 20 seconds for a Query Memory Grant. Afterwards SQL Server schedules them for execution, but with a much smaller Query Memory Grant. Therefore the query will spill over to TempDb, and this of course introduces physical I/O overhead, which leads to a longer query execution time.

Besides the instance setting query wait (s), you can also configure the query wait time on the Resource Governor Workload Group through the option request_memory_grant_timeout_sec. The following listing shows how to accomplish the same thing by changing the Memory Grant Timeout of the default Workload Group to 20 seconds.

ALTER WORKLOAD GROUP [default] WITH
(
    request_memory_grant_timeout_sec = 20
)
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
GO

The outcome is the same: the query waits for a maximum of 20 seconds, and is executed afterwards with a smaller Query Memory Grant…

Summary

As you have seen from this blog posting, Query Memory Grants and Resource Semaphores can be really dangerous in SQL Server. It can get really problematic when you have a mixed workload (OLTP and DWH) on the same SQL Server instance, because your large DWH queries can be slowed down by your small OLTP queries when they depend on Query Memory Grants.

Therefore I always suggest making your OLTP queries as simple as possible, and you should make sure that their query plans don't use operators that depend on Query Memory Grants. This can be accomplished by working on your Indexing Strategy: with a good Indexing Strategy in place, there is no need for Sort/Hash operations, nor for parallel Execution Plans. Please keep that in mind.

Thanks for your time,

-Klaus

T-SQL Tuesday #107 Round Up: Death March Project


Last week I invited you to share a story of a project gone mad: a death march project that you participated in or were impacted by.

If you haven’t already read the invitation, please read it first: T-SQL Tuesday #107 Invitation: Death March

Now let’s see a summary of this month’s submissions!

Death March Round-Up Roll Call

This month we had 15 post submissions about this daunting topic. Two of the posts were from people who had never posted before. To them I say welcome and I hope you enjoyed the experience.

I think, in general, you were all brave to write about this sensitive topic. I know a lot of you are consultants (which is a great way to expose yourself to a death march project) and must be careful about telling stories that could be misconstrued by clients. Nonetheless, with enough obfuscation and redaction you have brought forth some truly horrifying posts! I am going to group the submissions by the most terrible themes that many posts shared.

The Masque of the Migration Red Death

Planning a migration is like organizing a huge masquerade ball. Hidden surprises lurk in the chamber rooms of the castle. Read them before the clock strikes midnight, when darkness, decay, and the red death hold illimitable dominion over all!

Kevin Chant( blog / post ) joined us in the fray this month for the first time. He tells us about a hardware upgrade / migration project that went south. It was lies and insidious omissions that led them down a dark path. He was told the software was fine…but it was not. They assumed it had been tested…clearly it had not. Slips and oversights like this can add exponentially more time to an already long project. Tread carefully in the Western Woods where brave souls fear to venture.

Eugene Meidinger( blog / post ) reminds us that big projects can sometimes result in big failures. What could be more catastrophic than migrating to a new ERP system? The software was created by an outside company whose specialty included heaping piles of poisonous code! Our main character, Eugene, is thrown head first into the fire. As the flames licked his body he pondered, weak and weary, about the assumptions they had made. He concludes his tale: “Even to this day, I’ve got a certain amount of skittishness around the idea of large projects.”…be warned!

Ron Wheeler( blog / post ) is another brave first-time participant who gathers us around the campfire. You see, out there in those woods there lurk SQL Server upgrade and migration projects. When the wind blows they call us, saying “your project is doooooomed!”. If you haven’t ever purchased server-grade equipment, you don’t know the long wait times for hardware. You will be in for a shocking surprise. This one involves obstacles and devious plots like funding surprises, a team switch-a-roo by a VP, and other hazards. It does not bode well when the PM refuses to use MS Project and instead uses MS Excel.

Craig Porteous( blog / post ) spins a tale from long ago when he was a neophyte wizard with SQL Server. A deadly concoction is made of one-part MS Access, using Excel as the hammer to all nails, sprinkle in some unexpected downtime, and let’s start this at 5pm on a Friday. Once imbibed this potion will drain your life force. His team thought they would summon the migration within a few hours but miscast and realized it would run until Tuesday…and there was supposed to be no downtime ! Craig worked restlessly late in the evening, fingers stained by the whiteboard marker, and gained maniacal pleasure when they finally vanquished the evil.

Creeping Doom: Scope Creep and Time Despair

Innocent souls have fallen victim to scope creep and its impending doom. The Grim Reaper waits for no man, and time is of the essence.

Bert Wagner( blog / post ) takes us on a stroll through the graveyard of projects damned by scope creep! The road to hell is paved with good intentions. First you take your eyes off the focus, then you write features nobody asked for or needs. The madness ultimately manifests as many blog entries dying as drafts, never seeing the light of publication. Watch his video and be still when the lights go out…

Andy Levy( blog / post ) worked in the tower monolith making sacrifices to the mainframe. His attempts to modernize a workflow were met with rigor mortis refusals. He knew there were more features that had to be implemented than other stakeholders thought, and that they would not be pleased. His team then summoned advanced wizards from the magic academy, er, big consulting company. Proposal after design proposal was rejected and cast out of the realm. The consultants could not counter this hex. His key colleague wizard had been shunned from the monolith tower, leaving Andy all alone. Delays and slippage happened. Then happened again. In a final valiant effort Andy managed to save the princess and attend his wedding day. However, as a final hex, the system spent more time under development than it ever did in use.

Kenneth Fisher( blog / post ) writes about the business bullying you. It all started on a Friday at 4:30pm with a dark request to submit a big change into an ELT system without testing it. The developers protested this was wrong. Management gave the resounding chorus “but it has to be done now”. As is tradition, the DBA was brought in late in the game to help. It is wise to look before you leap lest you fall into a pit for which there is no escape.

Poisonous Politics

Politics: they are everywhere and seem to bring out the worst in people! They turn regular folks into monsters. Putrid politics have petrified many innocent projects…

Allen White( blog / post ) delves into the occult and spins a tale from the crypt about a telemetry data project taken down by the slings and arrows of politics. It is not for the faint of heart: deep inside there lie such horrors as…dare I speak it?…a CUBE in a GROUP BY clause! The horror! The house of assumptions had to be burned down to the ground and rebuilt with dark sorcery.

Bob Pusateri( blog / post ) stirs the witches' kettle and broods about a project that wasn't fully thought out. Business and management can run anything into the ground, er, grave. In this case they drove a custom Linux file system to the pits of hell. In the darkness you could hear whispers of a dark incantation by management. I shall put the words here, but you must promise me your head if you ever speak these words: “Google created their own file system, so we can do it too”. Bob tried to cast a spell on management that would counter their evil intent: “Google has scale requirements beyond our imagination, and has hundreds of employees with Ph.D.’s in computer science and related fields who can address these problems. We don’t!”. Years later, word around the campfire was that nobody from the company had ever heard of a custom Linux filesystem being used there…

Andy Leonard( blog / post ) looks into the crystal ball and casts tarot cards to inform us that 85% of all BI projects fail. This dystopian future is now! There is an ancient ritual known simply as “deltas”. These changes require self-awareness and meta thinking, as Andy describes. Be careful not to spiral down the hole of circular logic, because if you do nobody will be able to save you. Add in a pinch of toad foot and bureaucracy and you conjure up TPS reports, too many meetings, and miscommunications.

Silent Death

What happens when stakeholders remain silent about project requirements? How about internal sabotage? Carry on, weary traveler, and know these stories.

Rob Farley( blog / post ) is a frequent writer for T-SQL Tuesday and joined us again to extend the streak. Rob warns us of the dangers and pitfalls of misaligned expectations and feedback loops. The story weaves an interesting post about demos into the idea of an iceberg. “Don’t make the demo look too done”, because clients will judge the progression of the project by how polished the demo appears. This is the iceberg analogy: they only see the tip and do not see the huge mass supporting it. Customers often won’t ask questions until they see what it looks like. Once you see how understanding expectations and requirements can cloud your eyes, you will know the ancient and eclectic dark art of software estimation! Captains, beware not to steer your vessel into the great ice glacier!

Steve Jones( blog / post ) hangs on to the scaffolding of the dark tower during an OOP project. A hired mercenary, er, consultant was brought in to help. Little did Steve know that the consultant was writing his backups to a local workstation and not the network file share. His company would not entertain a VCS for code. He clashed with the consultant over the progress of the work. They were ready to implement their efforts, but first needed an artifact to harness the power: they bought a server with 2 GB of memory and, with much trepidation, loaded the software onto it. During user acceptance testing the stakeholders were horrified that performance was actually worse than before the project. At that time the SQL Server optimizer wasn't as powerful as it is today. The project took double the estimated time. Luckily no heads rolled, but the possibility remained…

Unspeakable Horrors Do Not Utter Their Name!

Overview of the SQL Insert statement in SQL Server


This article on the SQL Insert statement is part of a series on string manipulation functions, operators and techniques. The previous articles focused on SQL query techniques, all centered around the tasks of data preparation and data transformation.

So far we've been focused on the SELECT statement to read information out of a table. But that begs the question: how did the data get there in the first place? In this article, we'll focus on a DML statement, the SQL Insert statement. If we want to create data, we're going to use the SQL keyword INSERT.

The general format is the INSERT INTO SQL statement, followed by a table name, then the list of columns, and then the values that you want the SQL insert statement to add into those columns. Inserting is usually a straightforward task. It begins with the simple statement of inserting a single row. Many times, however, it is more efficient to use a set-based approach to create new rows. In the latter part of the article, we'll discuss various techniques for inserting many rows at a time.
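
As a sketch, the general form looks like this (placeholder names, not real objects):

INSERT INTO table_name (column1, column2, ...)
VALUES (value1, value2, ...);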

Pre-requisite

The assumption is that you have the following permissions to perform an insert operation on a table:

- Insert permission defaults to the members of the sysadmin fixed server role, the db_owner and db_datawriter fixed database roles, and the table owner.
- Insert with the OPENROWSET BULK option requires the user to be a member of the sysadmin fixed server role or of the bulkadmin fixed server role.
- Download AdventureWorks2014 here

Rules:

- Typically we don't always provide data for every single column. In some cases columns can be left blank, and in others they provide their own default values.
- Some columns automatically generate key values. In such cases, you certainly don't want to try to insert your own values.
- The columns and values must match in order, data type and number.
- String, date/time and character values need to be enclosed in single quotes; numeric values don't need the quotes.
- If you do not list your target columns in the insert statement, then you must insert values into all of the columns in the table; also, be sure to maintain the order of the values.

How to perform a simple Insert

Let's start by inserting data into this simple department table. First, use the name of the table, then, inside parentheses, the names of the columns, and then type in the values. So, name the columns that we are going to supply values for.

CREATE TABLE department
(
    dno INT PRIMARY KEY,
    dname VARCHAR(20) NOT NULL,
    loc VARCHAR(50) NOT NULL
);

The following SQL Insert into statement inserts a row into the department table. The columns dno, dname and loc are listed, and the values for those columns are supplied in the same order as the columns in the table.

INSERT INTO department (dno, dname, loc)
VALUES (10, 'ENGINEERING', 'New York');

How to perform a simple Insert using SSMS

Inserting data into a table can be accomplished either using SQL Server Management Studio (SSMS), a GUI, or through Data Manipulation Language in the SQL editor. Using the GUI in SSMS is a quick and easy way to enter records directly into the table.

Let's go ahead and browse to the department table, right-click it, and choose Edit Top 200 Rows.



This will bring up an editor window where we can interact directly with the data. To type in the new values, go down to the bottom row and start typing the values.



In some cases it is useful to familiarize yourself with the data that you're about to enter into the table.

SELECT * FROM department;
How to use an Insert into statement to add multiple rows of data

In the following SQL insert into statement, three rows are inserted into the department table. The values for all columns are supplied and listed in the same order as the columns in the table. The multiple value lists are separated by a comma delimiter.

INSERT INTO department (dno, dname, loc)
VALUES (40, 'Sales', 'NJ'),
       (50, 'Marketting', 'MO'),
       (60, 'Testing', 'MN');

How to use an Insert into statement to add data with default values

Let us create a simple table for the demonstration. The table has an integer column defined with a default value of 0, and a DateTime column defined with the current date/time as its default value.

CREATE TABLE demo
(
    id INT DEFAULT 0,
    hirdate DATETIME DEFAULT GETDATE()
);

Now, let us insert the default values into the demo table using a SQL insert into statement.

INSERT INTO demo DEFAULT VALUES;
SELECT * FROM demo;

Note: If all the columns of the table are defined with default values, specify the DEFAULT VALUES clause to create a new row with all default values.

Next, override the default values of the table with a SQL Insert into statement.

INSERT INTO demo VALUES (1, '2018-09-28 08:49:00');
SELECT * FROM demo;

Let us consider another example where the table has a combination of both default and non-default columns.

DROP TABLE IF EXISTS demo;
CREATE TABLE demo
(
    id INT PRIMARY KEY IDENTITY(1, 1),
    Name VARCHAR(20),
    hirdate DATETIME DEFAULT GETDATE()
);

In order to insert default values into those columns, you just need to exclude the default columns from the column list of the SQL insert into statement.

INSERT INTO demo (name)
VALUES ('Prashanth'), ('Brian'), ('Ahmad');
SELECT * FROM demo;

In the following example, you can see that the keyword DEFAULT is used to supply a value in the VALUES clause of a SQL Insert into statement.

INSERT INTO demo (name, hirdate)
VALUES ('Kiki', DEFAULT),
       ('Yanna', DEFAULT),
       ('Maya', DEFAULT);
How to use an Insert to add data to an identity column table

The following example shows how to insert data into an identity column. In the sample, we override the default behavior of the INSERT (the IDENTITY property of the column) with the SET IDENTITY_INSERT statement and insert explicit values into the identity column. In this case, three rows are inserted with the id values 100, 101 and 102.

SET IDENTITY_INSERT demo ON;
INSERT INTO demo (id, name, hirdate)
VALUES (100, 'Bojan', DEFAULT),
       (101, 'Milan', DEFAULT),
       (102, 'Esat', DEFAULT);
SET IDENTITY_INSERT demo OFF;
SELECT * FROM demo;
How to use a SQL insert statement to add data from another dataset

In this section, we'll see how to capture the results of a query (a simple select, or a complex multi-table select) into another table.

The following example shows how to insert the rows returned by a SELECT query into another table.
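
As a sketch (the department table comes from the earlier examples; department_archive is a hypothetical target table with the same structure), an INSERT ... SELECT captures the rows returned by a query into another table:

-- Hypothetical target table with the same structure as department
CREATE TABLE department_archive
(
    dno INT PRIMARY KEY,
    dname VARCHAR(20) NOT NULL,
    loc VARCHAR(50) NOT NULL
);

-- Insert the rows returned by the SELECT into the target table
INSERT INTO department_archive (dno, dname, loc)
SELECT dno, dname, loc
FROM department
WHERE loc = 'New York';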

Using Containers To Build A Home Lab


Obviously, in real life, we do not work with a vanilla SQL Server installation. We need to customize it by changing SQL Server settings and logins, creating and/or restoring databases, and performing other actions. There are a couple of ways you can do that.

The first approach is customizing an existing container manually and creating an image from it using the docker container commit command. After that, you can start new containers from the created image the same way as we already discussed. We will cover a couple of ways to move data to and from containers later.

There is a better way, however. You can automate this process by utilizing the docker build command. The process is very simple: you just need to define a Dockerfile, which contains the reference to the base image and specifies the build actions. You can copy scripts and database backups into the image, and run SQLCMD, BCP and PowerShell scripts there; you have, pretty much, full control. Internally, Docker runs every command inside deployment containers (creating and destroying them during the process), saving the final one as the target image.

Enter and Edit Data in SQL Server Reporting Services


By: Esat Erkec || Related Tips: Reporting Services, Report Builder

Problem

Power BI offers a data entry option which allows users to enter data directly and this data can then be used on reports and visuals. How can we create and use this type of data in SQL Server Reporting Services?

Solution

As you may know, Power BI and SQL Server Reporting Services (SSRS) are different platforms and offer different types of solutions to developers and end users. If you ask “which one should we use?”, the answer is often “it depends on your requirements”. Also, some Power BI or SSRS features are very useful, and you may be wondering when a feature will be added to the other report development tool.

In this tip, we will cover the Enter Data feature in Power BI and also show how you can now do this in SQL Server Reporting Services.

Enter Data Option in Power BI

Launch Power BI and click the Enter Data option in the Home tab as shown below.



A Create Table pop-up screen will appear. On this screen we can enter rows and add new columns.



If we double-click the column header we can change the column name. When we click the Load button, Power BI automatically creates a table for us.



This table can now be used in your reports just like any other table.

Now we will look at how we can do something like this in SSRS.

Enter and Edit Data Option in Report Builder

The SQL Server Reporting Services Team Blog announced that the Enter Data feature is now available in SQL Server 2016 Report Builder. This feature allows us to enter data directly into Reporting Services and is very similar to Power BI's.

First we need to install the newer version of Report Builder; if you are using a previous version of Report Builder, you have to uninstall it and then install the new version. Now we will complete the steps for the SSRS Enter Data demonstration.

Launch Report Builder and click Blank Report.



Right-click Data Sources in the Report Data menu and click Add Data Source.



Add a name to the data source.

Choose "Use a connection embedded in my report" and select ENTER DATA. Then click the OK button.



Right-click Datasets in the Report Data menu and click Add Dataset.



Add a name to the dataset.

Chose "Use a dataset embedded in my report" and select the Data source created in the previous step.



Click Query Designer and a pop-up screen appears which allows us to enter data manually. This screen is very similar to the Power BI screen. On this screen we will enter some constant data. We can change a column caption by double-clicking the column header, or we can right-click a column to change its caption and data type.



By default, all columns are created with the string data type. We can easily change the data type in the context menu.



At this point, I want to add a note about data type conversion. If you make an incorrect data conversion, you will see the below error in the Query Designer screen.



When we save the report in Report Builder, it creates a report file with the RDL file extension. In addition, Report Builder creates an XML-type data source and adds this XML data source to the RDL (Report Definition Language) file. This XML data can be seen in the Dataset Properties below.



When we open the RDL file in Notepad, we can see the XML data source code.



If we want to edit the data, we can click Query Designer and edit it there.



After all these steps, we will create a very basic report which uses this embedded data and then publish it to SQL Server Reporting Services.

Click the Insert tab and select Insert Table in the design pane.



Drag and drop the dataset columns to the table and run the report.


Deploy Report to SQL Server Reporting Services

After we have finished our report design, we have to edit the RsReportServer.config file, because this new data extension is not enabled in our current SSRS installation. You can find this file in the following paths.

Native Installation of SSRS:

SQL Server 2016: "C:\Program Files\Microsoft SQL Server\MSRS13.MSSQLSERVER\Reporting Services\ReportServer"
SSRS 2017: "C:\Program Files\Microsoft SQL Server Reporting Services\RSServer\ReportServer"
SharePoint mode SSRS: "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\WebServices\Reporting"

This configuration file includes SSRS parameters and settings, so before we edit this file we should make a copy of it.

Open the RsReportServer.config file with Notepad and then add the below extension settings to the configuration file.

<Extension Name="ENTERDATA" Type="Microsoft.ReportingServices.DataExtensions.XmlDPConnection,Microsoft.ReportingServices.DataExtensions">
<Configuration>
<ConfigName>ENTERDATA</ConfigName>
</Configuration>
</Extension>

The configuration file looks like the below image.



Save the RsReportServer.config file. After all these steps are finished, we can deploy the report to SSRS.

Click the Home menu and select the Save As option.



Select Recent Sites and Servers, set the report server web URL in the Name field, and then click Save.



When we connect to the SSRS web URL we can find our report.

Compare PUSH vs PULL Data Copy Performance in SQL Server


By: Ben Snaidero || Related Tips: More > Import and Export

Problem

As SQL Server database professionals, we are always tasked with moving data around. One of the methods I use quite often when copying data between instances (especially when it is just a one-off table copy between test environments) is to set up a linked server between the instances and copy the data using a single INSERT/SELECT command. A question that arises with this method is which is faster: to push the data (INSERT INTO remotetable SELECT FROM localtable) or to pull it (INSERT INTO localtable SELECT FROM remotetable). This tip will try to answer that question.

Solution

To test the performance difference between push and pull, we will also look at 3 different methods of creating the connection between the servers, so we can see whether the difference is in any way based on the type of connection used.

The 3 different methods we will use are:

Linked server with 4-part name reference
OPENROWSET with OLE DB data source
OPENQUERY with linked server

Test Setup

One thing to note for this test is that, to rule out any network anomalies, I set up two SQL Server instances on the same server so that the data does not have to go over a network when it is copied. Also, in order for OPENROWSET to work properly, the 'Ad Hoc Distributed Queries' option must be enabled on each SQL Server instance. This can be done by running the following T-SQL commands.

sp_configure 'show advanced options',1
reconfigure
go
sp_configure 'Ad Hoc Distributed Queries',1
reconfigure
go

In order to run this test, we will first need to create a linked server. I won't go into details on how to do this in this tip, since it's pretty straightforward and depends on your environment, but if you need to, you can read more on linked servers here.

For the actual data move, we will simulate moving data from an online system to an archive table on another server. For my test case I used a source table that was loaded using a dump from a SQL Profiler trace and had approximately 28,000 records in it. A similar table structure (minus the primary key) was created on the target instance as the archive table.

The T-SQL to create these tables is below.

-- run on source SQL instance
CREATE TABLE [dbo].[onlinedata](
[RowNumber] [int] NOT NULL,
[EventClass] [int] NULL,
[TextData] [ntext] NULL,
[ApplicationName] [nvarchar](128) NULL,
[NTUserName] [nvarchar](128) NULL,
[LoginName] [nvarchar](128) NULL,
[CPU] [int] NULL,
[Reads] [bigint] NULL,
[Writes] [bigint] NULL,
[Duration] [bigint] NULL,
[ClientProcessID] [int] NULL,
[SPID] [int] NULL,
[StartTime] [datetime] NULL,
[EndTime] [datetime] NULL,
[BinaryData] [image] NULL,
PRIMARY KEY CLUSTERED ([RowNumber] ASC));
-- run on target sql instance
CREATE TABLE [dbo].[archivedata](
[RowNumber] [int] NOT NULL,
[EventClass] [int] NULL,
[TextData] [ntext] NULL,
[ApplicationName] [nvarchar](128) NULL,
[NTUserName] [nvarchar](128) NULL,
[LoginName] [nvarchar](128) NULL,
[CPU] [int] NULL,
[Reads] [bigint] NULL,
[Writes] [bigint] NULL,
[Duration] [bigint] NULL,
[ClientProcessID] [int] NULL,
[SPID] [int] NULL,
[StartTime] [datetime] NULL,
[EndTime] [datetime] NULL,
[BinaryData] [image] NULL);

The actual test script used to test moving the data is outlined below. It is divided into two sections. The first section is run from the source (online) SQL Server instance and pushes the data to the target (archive) SQL Server instance. The second section is run from the target (archive) SQL Server instance and pulls the data from the source (online) SQL Server instance. Since this is purely a performance test I turned on the timing statistics option in SSMS using "SET STATISTICS TIME ON" so that the duration of each statement was output after it completed.

-- PUSH data from online system (source)
INSERT INTO [localhost\archive].master.dbo.archivedata SELECT * FROM onlinedata;
INSERT OPENROWSET('SQLNCLI', 'Server=localhost\archive;uid=sa;pwd=####;','SELECT * FROM master.dbo.archivedata')
SELECT * FROM onlinedata;
INSERT INTO OPENQUERY([localhost\archive],'SELECT * FROM master.dbo.archivedata')
SELECT * FROM master.dbo.onlinedata;
-- PULL data from archive system (target)
INSERT INTO archivedata SELECT * FROM [localhost].master.dbo.onlinedata;
INSERT INTO archivedata
SELECT a.* FROM OPENROWSET('SQLNCLI', 'Server=localhost;uid=sa;pwd=####;',
'SELECT * FROM master.dbo.onlinedata') AS a;
INSERT INTO archivedata
SELECT * FROM OPENQUERY([localhost],'SELECT * from master.dbo.onlinedata') b;

Test Results

Below is a table that summarizes the script results. It's quite obvious that the pull method performs much faster than the push method in all 3 cases, but why is that? Let's take a look at a SQL Profiler trace to see if we can answer that question.

Method        | PUSH (ms) | PULL (ms)
Linked Server | 5964      | 1218
OPENROWSET    | 6067      | 1309
OPENQUERY     | 5907      | 1231

Test Results Explanation

Let's first take a look at the SQL Profiler trace from the PULL method. Checking first the trace from the source (online) server, we see that it is performing a simple SELECT of all columns, just as we would suspect.



On the target (archive) server the PULL method is also just doing a simple insert.



Now let's take a look at the slower PUSH option and see if we can find where and why we are getting this extra execution time. First, looking at the SQL Profiler trace from the source (online) server, we see a query similar to what we saw on the target (archive) with the PULL method, the only differences being the longer duration and no write activity (this would now happen on the target (archive) server).

Looking at the SQL Profiler trace from the target (archive) server, we can now see where all the extra execution time is coming from for the PUSH method. The linked server implicitly opens a cursor and runs a separate cursor call for each record inserted. Note: I've only included a subset of the sp_cursor calls in the interest of saving space.



Here is additional output from the trace.



Based on this simple test, it's pretty easy to see why the PULL method is definitely the best option for performance. Even with this small dataset we saw some really large performance gains by pulling data to our archive server rather than pushing it from the online server.



SQL Server Update Statistics Fails with a Severe Error; Any Results Should Be Discarded


A database on SQL Server 2008 R2 (the exact build is shown below) has had its update statistics job failing for the past few days with the following error:

Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0 (X64)

Jun 28 2012 08:36:30

Copyright (c) Microsoft Corporation

Standard Edition (64-bit) on windows NT 6.1 <X64> (Build 7601: Service Pack 1) (Hypervisor)



This was the first time I had run into such a bizarre problem. Looking at the error log, you can see that an exception occurred while updating statistics and a dump file was generated.



Running DBCC CHECKTABLE against the table came back normal, with no consistency errors.

DBCC CHECKTABLE ( 'TBusRetail' )



Researching further, I found an official Microsoft document describing a bug that can cause this problem: "FIX: An access violation may occur when you update the statistics of a table after you enable and then disable conflict detection on a table in SQL Server 2008 or in SQL Server 2008 R2". The build in our environment happened to be among those affected. The details follow:



Cause

This issue occurs because the database engine is trying to load dangling statistics. When P2P conflict detection is enabled, an MDColumnIdP2pCdId system column is added to the base index rowset of the table. Replication-related queries may create statistics on the system column automatically. When P2P conflict detection is disabled, the system column is removed from the table. However, the corresponding statistics remain. Therefore, updating statistics causes the access violation exception to occur because the statistics cannot be added to the table.

SQL Server 2008 Service Pack 2

The fix for this issue was first released in Cumulative Update 3 for SQL Server 2008 Service Pack 2. For more information about this cumulative update package, click the following article number to view the article in the Microsoft Knowledge Base:

2498535 Cumulative update package 3 for SQL Server 2008 Service Pack 2

Note Because the builds are cumulative, each new fix release contains all the hotfixes and all the security fixes that were included with the previous SQL Server 2008 fix release. Microsoft recommends that you consider applying the most recent fix release that contains this hotfix. For more information, click the following article number to view the article in the Microsoft Knowledge Base:

2402659 The SQL Server 2008 builds that were released after SQL Server 2008 Service Pack 2 was released

Microsoft SQL Server 2008 hotfixes are created for specific SQL Server service packs. You must apply a SQL Server 2008 Service Pack 2 hotfix to an installation of SQL Server 2008 Service Pack 2. By default, any hotfix that is provided in a SQL Server service pack is included in the next SQL Server service pack.

The fix for this issue was first released in Cumulative Update package 6 for SQL Server 2008 R2. For more information about how to obtain this cumulative update package, click the following article number to view the article in the Microsoft Knowledge Base:

2489376 Cumulative Update package 6 for SQL Server 2008 R2

Note Because the builds are cumulative, each new fix release contains all the hotfixes and all the security fixes that were included with the previous SQL Server 2008 R2 fix release. We recommend that you consider applying the most recent fix release that contains this hotfix. For more information, click the following article number to view the article in the Microsoft Knowledge Base:

981356 The SQL Server 2008 R2 builds that were released after SQL Server 2008 R2 was released

However, after I applied the patch, testing showed the problem still persisted, and only this one table had the issue. On closer inspection I found that the table had quite a few computed columns, and I had previously seen statistics updates fail because of computed columns. A quick check showed the table had a large number of statistics, so I generated a script to drop them (excluding the statistics belonging to the indexes) and executed it. Updating statistics then succeeded; problem solved. It seems the magical computed columns strike again, breaking statistics updates!

SELECT 'DROP STATISTICS dbo.TBusRetail.' + QUOTENAME(name) + ';'
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.TBusRetail');

DROP STATISTICS dbo.TBusRetail.[PK_TBUSRETAIL];    -- excluded: index statistic, do not drop
DROP STATISTICS dbo.TBusRetail.[IdxRefNoOpDate];   -- excluded: index statistic, do not drop
DROP STATISTICS dbo.TBusRetail.[_WA_Sys_00000016_58BCECDB];
DROP STATISTICS dbo.TBusRetail.[_WA_Sys_00000015_58BCECDB];
DROP STATISTICS dbo.TBusRetail.[_WA_Sys_00000003_58BCECDB];
DROP STATISTICS dbo.TBusRetail.[_WA_Sys_PayWay_58BCECDB];
DROP STATISTICS dbo.TBusRetail.[_WA_Sys_Opr_58BCECDB];
DROP STATISTICS dbo.TBusRetail.[_WA_Sys_Charger_58BCECDB];
DROP STATISTICS dbo.TBusRetail.[_WA_Sys_RefNo_58BCECDB];
DROP STATISTICS dbo.TBusRetail.[_WA_Sys_TicketAmount_58BCECDB];
DROP STATISTICS dbo.TBusRetail.[_WA_Sys_ChargeDate_58BCECDB];
DROP STATISTICS dbo.TBusRetail.[_WA_Sys_TicketFAmount_58BCECDB];
DROP STATISTICS dbo.TBusRetail.[_WA_Sys_PayAmount_58BCECDB];
DROP STATISTICS dbo.

Microsoft Ignite interview with Kevin Farlee on Azure SQL Database Hyperscale


Azure SQL Database is introducing two new features to cost-effectively migrate workloads to the cloud. SQL Database Hyperscale for single databases, available in preview, is a highly scalable service tier that adapts on demand to workload needs. It auto-scales up to 100 TB per database to significantly expand the potential for app growth.

What does this mean? It's one of the most fundamental changes to SQL Server storage since SQL Server 7.0. So this is big: big news, and very big data stores. I am very lucky because I got to interview Kevin Farlee of the SQL Server team about the latest news, and you can find the video below.

I am sorry about the sound quality, so I have blogged the content to make sure the message is clear. When I find the Ignite sessions published, I will add a link as well.

What problem is the SQL Server team solving with Hyperscale? The fundamental problem is how you deal with very large databases in the cloud. The trouble with VLDBs is simply doing normal operations on them: because of the sheer size of the data, backups, restores, maintenance operations and scaling can sometimes take days, and the business will not wait for that downtime. When you are talking tens of terabytes, Microsoft ultimately needed a new way to protect data in VLDBs. The SQL team did something really smart and rethought very creatively how they do storage, in order to take care of the issues with VLDBs in the cloud.

So, the Azure SQL Database team did something that is completely in line with one of the main benefits and key features of cloud architecture: they split the storage engine out from the relational engine. The storage implementation was completely rethought and remastered from the ground up. They took the viewpoint: how would you go about architecting, designing and building these solutions in the cloud if you were to start from scratch?

The Azure SQL Database team did a smart thing: Azure SQL Database uses microservices to handle VLDBs.

The compute engine is one microservice taking care of its role, another microservice takes care of the logging, and then a series of microservices handle the data. These are called page servers, and they interface at the page level. The page servers host and maintain the data files; each page server handles about a terabyte of data pages, and you can add on as many as you need.

Ultimately, compute and storage are decoupled, so you can scale compute without moving the data. This means it's possible to keep adding more and more data, and it also means that you don't have to deal with the movement of data; moving data around when there are terabytes and terabytes of it isn't a trivial task. Each page server holds about a terabyte of data, backed by about a terabyte's worth of SSD cache.

The ultimate storage is Azure Blob Storage, because blob storage is multiply redundant and has features like snapshots. This means they can do simultaneous backups by just taking a snapshot across all of the blobs, with no impact on the workload.

Restores

Restores are just a matter of instantiating a new set of writeable disks from a set of snapshots; the page servers and the compute engine work in symphony to take care of it. Since you're not moving the data, it is faster.

I'm personally very impressed with the work that the team has done, and I'd like to thank Kevin Farlee for his time. Kevin explains things exceptionally well.

It’s worth watching the video to understand it. As well as the video here, Kevin goes into detail in his Microsoft Ignite sessions, and I will publish more links when I have them.

Community

One advantage of doing the Microsoft Community Reporter role is that I get to learn from the experts, and I enjoyed learning from Kevin throughout the video.

It seems to me that the Azure SQL Database team have really heard the voice of their technical audience, and they've worked passionately and hard to tackle these real-life issues. I don't know if it is always very clear that Microsoft is listening, but I wanted to blog about it, since I can see how much the teams take on board the technical 'voice' from the people who care about their solutions, and who care enough to share their opinions and thoughts so that Microsoft can improve them.

From the Azure architecture perspective, it works perfectly with the cloud computing concept of decoupling the compute and the storage. I love watching the data story unfold for Azure and I’m excited by this news.

Master Data Services in SQL Server 2019


I have a very special relationship with MDS (Master Data Services), and even though for some reason I have never blogged about it, I feel like SQL Server 2019 brings a good reason to do so.

To put my relationship with MDS in perspective: for a couple of months in 2017 I was spending much more time with it than I would spend with Columnstore Indexes. :) And let's add to that, that I have had to learn the internals of how MDS works far more than I ever wished to. :)

Before we continue, let me ask you one question: have you heard about Silverlight?

Or in other words, and with a kind of evil voice: "DID YOU EVER INSTALL SILVERLIGHT ON A PRODUCTION SERVER?"

If you have worked with MDS, oh yes, you did! At least in order to check that everything is configured/upgraded correctly and nothing is broken. I will take a wild guess and claim that you did! So did I … :s


Because in order to make things work correctly in MDS, one needs this old, long-deprecated framework, which is supported only in a deprecated browser called Internet Explorer 11. That pain-in-the-neck framework is called Silverlight, and if you dare to work with any SQL Server version before SQL Server 2019, the picture on the left will appear the moment you try to explore the master data in the MDS Explorer, insisting that installing a totally abandoned (and obviously unnecessary) product, which represents yet another risk on your server, is a necessary thing. That alone is the reason some people would use a development VM to work with MDS, but it is not a good excuse to include that product in SQL Server 2016 or in SQL Server 2017.

MDS in SQL Server 2019
As you can see in the picture above, this is pretty much a functional HTML interface for working with Entities, Entity Dependencies, Hierarchies, Collections and Changesets! This is a major step forward, and I am so glad to be alive during this moment, because quite honestly I tended to believe that this would take a good 10 years and the death of a product to change. It gives us the choice of using any browser besides Internet Explorer 11 and allows us to work from different platforms or form factors (even though not now, but hopefully in the future). For what it is worth, I have successfully tested a good number of functionalities in Firefox.

No more Silverlight. Yahoo!


Do not think that the initial public CTP 2.0 interface is bug-free (and do not get me started on the bugs with Silverlight; they were reported to the team enough times), but it is a start, and if you are into MDS you should definitely try it out and deliver your feedback to the SQL Server development team, so we can all get a better product by the RTM. In the picture on the left you can see a full screen (!) of my notebook, which is very far from being friendly or editable for that part; just think what would happen if there were more than just 1 additional attribute assigned to the entity … Or if there were a hundred … Yeah, I know, that is why we all love Excel (unless you are on Linux or Mac or using a tablet).
I do not like the look and feel of the implemented buttons: instead of going for a good modern look, a decision was probably taken to emulate Silverlight, making sure that the interface keeps presenting itself as some old software from the last century.
There are no clear interactions showing that some elements do not exist yet (such as the hierarchies, where a kind of submenu item is shown even when there are none); the interface should be clear on that. And yes, I totally understand that it was like this before, but 2 wrongs do not make 1 right. :)

The lack of horizontal scrollbars for the Master Data Management is still as irritating as it was many years ago.

In basic tests of the Integration Management I was happy with what I saw, especially given that this is a pretty early release.

The version management bugs are still here (lock a version and try to copy it); the screen changes its design completely …

Do not take this feedback as a negative one; I am critical because I really would love this product to grow and succeed, because right now anyone who is truly serious about Master Data Management is looking towards other vendors, such as Profisee.

Is this a promise of a brighter future for the Business Intelligence people working with Data in the Enterprise?

I do not think so, but I think it is a very first step in ensuring that not everything is lost.

Final Thoughts

I wish this meant that MDS is not really dead (and not just in maintenance mode, cleaning up the bugs from 2008).

I wish there were someone in a leadership position at Microsoft who loves Business Intelligence and, more specifically, Data Quality. (Those people would have to be in leadership, because I know enough great specialists who truly care, but they are not given decision rights; hopefully just yet.)

I wish that the elimination of Silverlight does not come with some kind of Windows Server requirement, and that we soon see a Linux version of MDS (on Apache, for example) and eventually (or even before that) a PaaS (Platform as a Service) offering.

I will be looking at and testing the new MDS, with no public blog posts anticipated, but I will be closely watching where the MDS team takes the product.


Fix SQL Server with one click

Randolph West | Posted on 17 October 2018

Tempting headline, isn’t it? It might even seem like clickbait, but that’s not the intention. The SQL Server default configuration is not recommended for production environments, and yet I have worked on many production environments that have been set up by people who don’t know that the default configurations are not recommended. These same people [...]

The post Fix SQL Server with one click appeared first on Born SQL .


Born SQL with Randolph West

Randolph West solves technology problems with a focus on SQL Server and C#. He is a Microsoft Data Platform MVP who has worked with SQL Server since the late 1990s. When not consulting, he can be seen acting on the stage and screen or doing voices for independent video games.

Microsoft Hyper-Converged Private Cloud Testing 31 - SCCM 2016 Deployment: Prerequisites and SQL Server Installation


System Center has four core components: SCCM, SCVMM, SCOM, and SCDPM. Next we will deploy the last major component, SCCM. SCCM provides asset management, endpoint management, and patch, software, and operating system distribution, among many other functions. Since the 2012 release it has also integrated the SCEP (System Center Endpoint Protection) security suite, which can be used for endpoint security scanning, antivirus, and so on.

Following the usual routine, before installing SCCM 2016 we will first review SCCM's prerequisites and deploy the database that SCCM requires.

1. SCCM Prerequisites

The prerequisites for SCCM are as follows:

The computer is joined to the domain

The installation account and service account have local administrator rights

SQL Server 2008 R2 SP3, 2012 SP3, 2014 SP1/SP2, 2016, or 2016 SP1 (this deployment uses 2016)

The collation must be SQL_Latin1_General_CP1_CI_AS

Required SQL Server features: Database Engine Services is mandatory

Windows Authentication is required

Each SCCM site requires a dedicated SQL Server instance

SQL Server memory must be set to at least 8 GB

SQL nested triggers must be enabled (see the sketch after this list)

The SQL Server Common Language Runtime (CLR) must be enabled (it is enabled automatically when SCCM is installed)

SQL Server Reporting Services (if you need to generate reports)

The BITS service

.NET 3.5 and 4.0/4.5 or newer (Windows Server 2016 has 4.6 and 3.5 built in)

Windows Installer 4.5 or later (built into Windows Server 2016)

Microsoft XML Core Services 6.0 (MSXML60)

The IIS server role

The WSUS service

Windows ADK (Windows deployment tool components), including:

Deployment Tools

Windows Preinstallation Environment (Windows PE)

Imaging and Configuration Designer (ICD)

User State Migration Tool (USMT)

Windows Performance Toolkit

Windows Assessment Services
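For reference, a minimal T-SQL sketch of the nested triggers and CLR requirements above; both are standard sp_configure options, and SCCM setup normally enables the CLR by itself:

EXEC sp_configure 'nested triggers', 1;
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;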

Before the formal installation, complete the following steps:

Install the operating system

Set the IP address and computer name (in this POC, the IP address is 172.16.11.15 and the computer name is SCDPM)

Join the computer to the domain

Add the DPMadmin administrator account and the sqlservice service startup account to the local Administrators group

Install .NET 3.5 and .NET 4.6

2. Installing SQL Server and the Prerequisites

1) Log on to the SCDPM server as CMadmin and insert the SQL Server 2016 installation media

2) Double-click to launch the SQL Server setup program

3) In the SQL Server Installation Center, click Installation on the left, then click "New SQL Server stand-alone installation or add features to an existing installation" on the right

4) On the Product Key page, the key is already pre-populated in this image; click "Next"

5) On the License Terms page, check "I accept the license terms" and click "Next"

6) On the Microsoft Update page, click "Next"

7) On the Product Updates page, click "Next"

8) On the Install Rules page, once all rule checks pass, click "Next"

9) On the Feature Selection page, check Database Engine Services, Full-Text and Semantic Extractions for Search, and Reporting Services - Native, then click "Next"

10) On the Instance Configuration page, select the default instance and click "Next"

11) On the Server Configuration page, change the service startup accounts for SQL Server Agent, SQL Server Database Engine, and SQL Server Reporting Services to mscloud\sqlservice and enter the password; check "Grant Perform Volume Maintenance Task privilege to SQL Server Database Engine Service", then click the Collation tab at the top

12) Change the Database Engine collation to SQL_Latin1_General_CP1_CI_AS, then click "Next"

13) On the Database Engine Configuration page, select Mixed Mode authentication, add the cmadmin, sqlservice, and administrator domain users as SQL Server administrators, then click the Data Directories tab at the top

14) On the Data Directories tab, set the data root directory to D:\SQLDB, then click "Next"

15) On the Reporting Services page, select Install and Configure, then click "Next"

16) On the Ready to Install page, check the SQL Server setup settings for mistakes; once confirmed, click "Install"

17) When the SQL Server installation completes, click "Close"

18) The SQL Server 2016 installation package no longer ships the management tools; SQL Server Management Studio must be downloaded separately from https://go.microsoft.com/fwlink/?LinkId=531355

19) Double-click the downloaded SSMS installer and, on the initial page, click "Install"

20) After the installation succeeds, click Restart to reboot the server

21) After the reboot, start installing the prerequisites

First, launch the downloaded ADK 10 installer, adksetup.exe

22) Choose the installation path, then click Next

23) On the Kit Privacy page, click Next

24) On the License Agreement page, click "Yes"

25) Check the features to add as shown in the figure, then click Install

26) When the installation completes, click Close

27) Create the System Management container in AD

Log on to any domain controller as a domain administrator and open Control Panel -> Administrative Tools -> ADSI Edit

28) Right-click and select Connect to

29) Select the default naming context, then click OK

30) Right-click the domain name and select New -> Object

31) Select the Container class, then click Next

32) Enter System Management as the value, then click Next

33) Click "Finish"

34) Right-click the newly created container, then click Properties

35) Click the Security tab, then click Add

36) Add the SCCM computer name, grant it Full Control, then click Advanced

37) Select that object and click Edit

38) Set "Applies to" to "This object and all descendant objects", then click OK

In the screens that follow, click OK all the way back out

39) Following the steps above, add permissions for the mscloud\cmadmin account

Scheduling SQL Server Tasks on Linux Part 2: Advanced Cron Topics


By: Daniel Farina || Related Tips: More > SQL Server on Linux

Problem

In a previous tip we looked at how to use cron to schedule tasks for SQL Server on Linux. In this tip we take a deeper look to help answer these questions: Is there any security configuration I can use? Is there a way to view a log of cron's work? How can I check whether a job succeeded or failed? In this tip I will answer those questions.

Solution

In my previous tip in this series I introduced you to the cron daemon and its configuration file, crontab. I omitted many items in order to keep the introduction simple; my intention with that tip was mostly to introduce you to the format of the crontab file. Now I will cover more advanced topics.

Advanced Cron Topics

It is not enough to just know the crontab file format in order to use cron. Think about it: it is pointless to schedule a job if you won't be able to tell whether it succeeded or failed.

Here is a list of topics I will cover in this tip:

Crontab Variables
Crontab Logging
Crontab Security
Crontab Shortcuts

Let’s start.

Crontab Variables

You can define and use variables in a crontab file. There are predefined variables whose names have a specific use; take a look at the next table. Some of those variables have a default value which can be overridden by redefining it. Also, you can assign an empty string if you want to blank out the contents of a variable (i.e. VARIABLE="").

Variable | Description | Example | Default Value
SHELL | The shell used to execute each command in the crontab file. | SHELL=/bin/bash | /bin/sh
PATH | The path to the directories in which cron will look for the command to execute. This path is different from the global path of the system or the user. | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin | /usr/bin:/bin
MAILTO | The addresses (separated with a comma) that will receive the log of failed tasks by mail. When this variable is declared as empty (i.e. MAILTO="") no mail is delivered. | MAILTO=user@example.com | Mail is sent to the owner of the crontab.
HOME | The root directory for cron. | HOME=/ | Taken from the /etc/passwd entry for the user.
LOGNAME | The user name that the job runs as. Its value cannot be changed. | N/A | N/A
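Putting several of these together, here is a minimal sketch of a crontab that overrides the variables above; the mail address, password and database name are placeholders:

SHELL=/bin/bash
PATH=/opt/mssql-tools/bin:/usr/local/bin:/usr/bin:/bin
MAILTO=dba@example.com
# Back up a placeholder database every night at 23:30
30 23 * * * sqlcmd -S localhost -U sa -P 'StrongP@ssw0rd' -Q "BACKUP DATABASE [MyDB] TO DISK = '/var/opt/mssql/data/MyDB.bak'"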

Something important to note is that cron doesn't perform variable substitution. In layman's terms, you can't assign a variable the sum or composition of other variables. Let's consider the following example:

HOME=/home/daniel
PATH=HOME:/bin/:/usr/bin

In the previous example, the user who wrote the crontab lines wants the PATH variable to contain the directory specified in the HOME variable plus the /bin and /usr/bin directories. But since there is no variable substitution in cron, PATH will take "HOME:/bin/:/usr/bin" as a literal. The proper way to write the previous lines is as follows:

HOME=/home/daniel
PATH=/home/daniel:/bin/:/usr/bin

In the next image you will see a crontab file with SHELL and PATH variables declared.



The next code fragment shows how to use a variable in a job definition. It declares a variable SQLDATA with the path to the SQL Server backups and, each day at midnight, moves the backup files to an external device.

SQLDATA=/var/opt/mssql/data
0 0 * * * mv $SQLDATA/*.bak /mnt/

Cron Logging

Cron jobs are logged by default to the file /var/log/syslog, which is the file where all services log statuses and messages. There are different ways you can view its content.

The next script uses the grep command to search for the pattern "cron" in the file /var/log/syslog (the -i makes the search case insensitive).

grep -i cron /var/log/syslog

Also, we can use a pipe to send the output to the tail command if we are interested only in the last 10 lines.

grep -i cron /var/log/syslog | tail

The systemctl command shows the status of the cron service as well as its last entries in syslog.

systemctl status cron

Also, we can use journalctl (a command that queries the system journal) and pass the cron system unit as a parameter. It has the peculiarity that it also shows system reboots.

journalctl -u cron
Creating a cron.log File

We can create a single file that holds the cron messages by editing the /etc/rsyslog.d/50-default.conf file and uncommenting the line that starts with #cron.*. In the next image you can see that line.


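For reference, on a stock Ubuntu installation the uncommented line looks like the following; the exact file and syntax may vary by distribution:

# /etc/rsyslog.d/50-default.conf
cron.* /var/log/cron.log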

After you uncomment that line and save the changes you should run the following command to restart the rsyslog service.

systemctl restart rsyslog

Crontab Security

Cron has two files, /etc/cron.allow and /etc/cron.deny, that we can use to configure and manage the users allowed to have a personal crontab. The system-wide crontab will work regardless of the contents of those files. Both files have the same format, which is one user per line. Here is how they work.

cron.allow - defines the users that are able to have their own crontab file and therefore use cron.
cron.deny - contains all the users that are forbidden to have their own crontab file and therefore use cron.

Depending on your distribution you may not have one of those files, but you can create them.
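As a quick sketch, the following creates /etc/cron.allow if needed and allows a single account to use cron; the user name is a placeholder:

# Append the mssql account to the whitelist of users allowed a personal crontab
echo 'mssql' | sudo tee -a /etc/cron.allow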

In order to protect the system-wide crontab, you have to set the proper permissions on the /etc/crontab file. The following command gives read and write permissions to the root user only, so no other user will be able to read the file.

chmod 600 /etc/crontab

Crontab Shortcuts

Cron also has special folders that serve as shortcuts to schedule jobs at specific intervals. The next table enumerates those folders.

Folder | Description
/etc/cron.daily | Scripts in this folder run daily.
/etc/cron.hourly | Scripts in this folder run hourly.
/etc/cron.monthly | Scripts in this folder run monthly.
/etc/cron.weekly | Scripts in this folder run weekly.

The system-wide crontab is what executes the content of each of those folders, so it may be useful to give those folders root-only permissions to avoid the chance of a user putting malicious scripts in them.

If you want to know at which specific time the scripts

How to Get Started with SQL Server and .NET


By: Artemakis Artemiou || Related Tips: More > Application Development

Problem

SQL Server is one of the most powerful data platforms in the world, and the .NET Framework is one of the most popular software frameworks for developing software that runs primarily on Microsoft Windows. Imagine what you can do if you combine these two technologies; the possibilities are endless.

This tip helps you get started with SQL Server and .NET (C#). You will learn how to connect from a C# application to SQL Server and how to retrieve the version of the SQL Server instance by running a simple query.

Solution

In order to demonstrate a connection to SQL Server via a .NET application, in this case a C# application, we need to start a new project in Visual Studio. For this demo, I'm using Visual Studio 2017.

Create New Visual Studio Project

So, in Visual Studio, I start a new project, select Visual C# - Console App (.NET Framework), and call it TestApp1.



And this is the development environment I get in order to work on my project:



The purpose is to connect to a named SQL Server instance "SQL2K17" on the local machine and retrieve the SQL Server version information.

Here’s a screenshot of the SQL Server instance, as it can be seen in SQL Server Management Studio (SSMS):


Connecting Application to SQL Server

In order to connect to a database from a client (in this case, our C# application), we need to make use of a Data Provider. For example, there are ODBC drivers/providers, OLE DB providers, specific .NET Framework Data Providers, etc. Without a Data Provider, you cannot connect to the database. Data Providers act as intermediaries between the database server and the application/client.

In this demo, we are going to use the ".NET Framework Data Provider for SQL Server". We make use of this data provider by including the System.Data.SqlClient namespace in our project, with the below line of code at the top of our code class:

using System.Data.SqlClient;

Then we need to write the proper C# code that establishes a connection to the SQL Server instance using the above data provider, and executes the query that returns the SQL Server version.

At this point it must be mentioned that when writing data access code, you always need to include exception handling code/logic in order to handle any issues that may arise in the communication between the client (i.e. your C# application) and the database server.

In order to connect to SQL Server using the .NET Framework Data Provider for SQL Server and retrieve information, you will need to create the below objects:

SqlConnection - connecting to SQL Server
SqlCommand - running a command against the SQL Server instance
SqlDataReader - retrieving your query's results

Creating a SQL Server Database Connection String

Also, in order to establish the connection to SQL Server, you will need to specify the connection string , in the format expected by the data provider you are using.

In the connection string, you can specify that you want either a trusted connection to SQL Server, that is, using Windows Authentication, or a SQL authentication-based connection using a username/password.

Below you can find examples of connection strings. The first one uses a trusted connection, and the second one uses a SQL authentication-based connection.

Trusted connection:

string connString = @"Server=INSTANCE_NAME;Database=DATABASE_NAME;Trusted_Connection = True;";

SQL Authentication-based connection:

string connString = @"Server=INSTANCE_NAME;Database=DATABASE_NAME;User ID=USERNAME;Password=PASSWORD";

In this demo, we are going to use a Trusted connection.

C# Code Blocks to Access SQL Server

The below code block shows how you can set the connection string in the SqlConnection object, along with applying exception handling logic while trying to connect to the SQL Server instance:

try
{
using (SqlConnection conn = new SqlConnection(connString))
{
//access SQL Server and run your command
}
}
catch (Exception ex)
{
//display error message
Console.WriteLine("Exception: " + ex.Message);
}

The below code block shows how you can make use of the SqlCommand object:

SqlCommand cmd = new SqlCommand(QUERY, conn);

Last, the below code block shows how you can make use of the SqlDataReader object for the purposes of this demo:

//execute the SQLCommand
SqlDataReader dr = cmd.ExecuteReader();
//check if there are records
if (dr.HasRows)
{
while (dr.Read())
{
//display retrieved record (first column only/string value)
Console.WriteLine(dr.GetString(0));
}
}
else
{
Console.WriteLine("No data found.");
}
dr.Close();

OK, now we can put everything together and write the proper C# code to access the named SQL Server instance ".\SQL2K17", retrieve the version information, and display it on the screen via the command line.

Complete Code Listing

Here’s the full code for the Program.cs class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Data.SqlClient;
namespace TestApp1
{
class Program
{
static void Main(string[] args)
{
string connString = @"Server =.\SQL2K17; Database = master; Trusted_Connection = True;";
try
{
using (SqlConnection conn = new SqlConnection(connString))
{
//retrieve the SQL Server instance version
//(the listing was cut off in this feed; the remainder is reassembled from the
//code fragments shown earlier in this tip, assuming a SELECT @@VERSION query)
SqlCommand cmd = new SqlCommand("SELECT @@VERSION", conn);
conn.Open();
//execute the SQLCommand and display the retrieved version string
SqlDataReader dr = cmd.ExecuteReader();
while (dr.Read())
{
Console.WriteLine(dr.GetString(0));
}
dr.Close();
}
}
catch (Exception ex)
{
//display error message
Console.WriteLine("Exception: " + ex.Message);
}
}
}
}
A unique review of SQL Server index types

Introduction

One thing I have noticed in various places is that people still tend to use the traditional clustered and non-clustered indexes, yet I rarely see filtered indexes being used even though they have been available since SQL Server 2008. Here are the results of a quick poll I did this week about index preferences:

#sqlserver question of the day. Do you have a preference of which index type to use first to fix performance issues, if so what? #sqlpass #sqlfamily

― Kevin Chant (@kevchant) October 16, 2018

As shown, the rowstore non-clustered filtered index was the least popular choice. One of the votes you see might have been a sympathy vote.

So, to help spread the love for them and the newer index types, I have decided to do a unique review of the current SQL Server index types.

Rowstore Clustered Index
Rowstore Non-Clustered Indexes
Filtered Rowstore Indexes
Clustered Columnstore Indexes
Memory Optimized Non-Clustered Indexes

But before I do, I want to briefly discuss heaps.

Heaps

A heap means that the data for a table does not have any form of clustered index. Traditionally heaps are known to have performance issues, especially if a large number of deletes and updates take place in them.

See for yourself by downloading Brent Ozar's First Responder Kit here and running sp_BlitzIndex against a database that has heaps. Of course, heaps have their uses; they work well with large imports if configured correctly.

Now that heaps are clarified, we can move on to the index types.

1: Rowstore Clustered Index

You might be wondering what I mean by rowstore. It’s a term somebody came up with to describe the traditional index structure that most people who work with SQL Server know.

It is based on a balanced tree (B-tree) index design. If you look in SQL Server Books Online you will see a diagram of this index: it has a tree structure and looks like an organisation chart. Clustered indexes define how the data in a table is ordered by stating which columns the index is ordered by when it is created. Index keys are another name for the columns used to order an index.

Traditionally these are created to convert the data inside heaps into a structured index. Because they contain the data itself, only one can be created per table.

2: Rowstore Non-Clustered Indexes

These indexes are what some people also call covering indexes. They tend to include a subset of the columns in your table to speed up the performance of queries. To reduce the size of these indexes and improve performance even more, you can add columns as "included" columns.

Those columns end up in the leaf level of the index only. This optimises the index in question and avoids SQL Server having to look at another index for that table. You should add columns that appear only in the "SELECT" part of a SQL query as "included" columns.

You can have a number of these indexes per table, depending on which version of SQL Server you are using.
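A minimal sketch of such a covering index; the table and column names are hypothetical:

-- Key columns support filtering/ordering; TotalDue is needed only by the SELECT list
CREATE NONCLUSTERED INDEX IX_Orders_Customer
ON dbo.Orders (CustomerID, OrderDate)
INCLUDE (TotalDue);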

3: Filtered Rowstore Indexes

Some people will point out this is a type of non-clustered index, and they are completely right. I'm giving this type its own section to spread the love for it a bit. It has been available since SQL Server 2008, and I think it should be more popular because it can dramatically improve query performance.

It's where you create a non-clustered index with a "WHERE" clause at the end, after stating all the columns, exactly like you do at the end of a normal query. Doing this makes the index a lot smaller and more optimal. If you have a certain query causing you pain, it can make it go a lot faster. I've seen filtered indexes change queries from taking minutes to seconds.
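A minimal sketch of a filtered index; the table, columns and predicate are hypothetical:

-- Only the small slice of rows the painful query touches gets indexed
CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (OrderDate)
WHERE Status = 'Open';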

4: XML Indexes

Way back in SQL Server 2005 the decision was made to introduce XML to SQL Server. Back then it was a big deal. I'll be honest here: I've not used them since Microsoft reduced their importance in the SQL exams.

Still, I did say I'd cover them. You can have a Primary XML index and additional Secondary XML indexes; to create a Secondary XML index you must first create a Primary one. Either can be used in different ways to access XML data from SQL Server faster. If you feel sorry for the people who support this, maybe spare a thought for those who support XML in varchar(max) columns as well.

5: Columnstore Indexes

Columnstore indexes were introduced in SQL Server 2012 and are highly compressed non-clustered indexes. Based on a compression engine also used in Excel, they are ideal for reading large database tables that contain millions of rows.

I have to add that only certain data types are supported, though, and the larger varchar data types are not.

Because of their popularity, they have improved with newer versions of SQL Server, and filtered columnstore indexes were introduced with SQL Server 2016. Another advantage of using these indexes is that they perform well with memory optimized tables.

6: Clustered Columnstore Indexes

First introduced in SQL Server 2014, a clustered columnstore index can be created on a table with compatible data types.

Be aware that when a clustered columnstore index is created, it does not reorder the data into a specified order. Therefore, it is considered good practice to create a clustered rowstore index on a large table first and after that convert it to columnstore (see the sketch below).

Like their rowstore counterparts, you can only have one of these per table.
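A sketch of that practice with hypothetical names; DROP_EXISTING replaces the rowstore clustered index with a columnstore one, keeping the rows roughly in the order the first index established:

-- Order the data first with a rowstore clustered index
CREATE CLUSTERED INDEX CCI_Sales ON dbo.Sales (SaleDate);
-- Then convert it in place to a clustered columnstore index
CREATE CLUSTERED COLUMNSTORE INDEX CCI_Sales ON dbo.Sales WITH (DROP_EXISTING = ON);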

7: Memory Optimized Hash Indexes

These index types were introduced to handle the data structure used by In-Memory OLTP (aka Hekaton), which was first introduced with SQL Server 2014.

These are used by memory optimized tables and are designed to reduce contention during a large number of transactions. I highly recommend you do your homework before using this. For more information I recommend downloading Kalen Delaney's second edition on In-Memory OLTP from here.

8: Memory Optimized Non-Clustered Indexes

What a mouthful this index name is! This index type is a version of a non-clustered index for memory-optimized tables. It's based on the traditional B-Tree, or Balanced Tree, index design and uses the same top-down design that rowstore indexes have. Due to it being optimized for In-Memory OLTP, it's known as a Bw-tree structure.

Final word

Well, there's my quick overview of the current index types in SQL Server. A bit longer than expected, but worth it. Look in SQL Server Books Online for more detailed explanations of all of them.

If you have questions or anything to add feel free to comment on this post.

Kevin Chant
