Queries to inventory your SQL Server Agent Jobs


By: Rick Dobson | Related Tips: More > SQL Server Agent

Problem

I am a SQL developer who recently migrated to a team that relies heavily on SQL Server Agent Jobs. I need some help with inventorying the jobs on a SQL Server Agent installation. Please provide some code samples that illustrate how to programmatically inventory the jobs on a SQL Server Agent installation.

Solution

SQL Server offers a collection of tables in the msdb database that let you inventory the jobs on a SQL Server Agent installation. This tip will introduce you to some of these tables and demonstrate simple programming examples to inventory SQL Server Agent Jobs. The specific tables covered by this tip include:

msdb.dbo.sysjobs
msdb.dbo.sysjobsteps
msdb.dbo.sysjobschedules

The scripts in this tip present code samples for using the tables separately and jointly to discover such information as: when jobs were created, last used, and will be used next. These tables additionally support the discovery of other information about the jobs on a SQL Server Agent installation.

All code samples use a SQL Server Agent with jobs created within either of two prior MSSQLTips.com articles: one on getting started with SQL Server Agent and another on creating multistep and dynamic jobs.

The SQL Server Agent used to demonstrate the code samples within this article holds only about a handful of jobs. Personal experience confirms that the tips apply equally to SQL Server Agent installations with scores of jobs, and very likely to installations far larger than that.

What jobs are in SQL Server Agent?

The sysjobs table in the msdb database stores selected top-line information about the jobs in SQL Server Agent. There is a single row in the sysjobs table for each job within a SQL Server Agent. The field values for each row identify or describe the jobs on a SQL Server Agent.

Here's a summary of the sysjobs fields used in the code sample for this table. None of these values can be NULL.

Job_id has a uniqueidentifier data type that is a unique id field value for a job on a SQL Server Agent; this field is especially useful for joining msdb tables with different information about the jobs for SQL Server Agent.

Name is a string of Unicode characters designating a job. The field has a sysname data type, which is equivalent to a nvarchar(128) data type that does not allow NULL values. Consequently, every SQL Server Agent Job must have a name field value, and the value cannot exceed 128 Unicode characters.

Enabled is a tinyint field denoting whether a job can be invoked by a schedule. A value of 1 allows the job to be started on a schedule. A value of 0 means the job cannot be invoked by a schedule, but a SQL Server Agent user can manually start the job. Even if a job is enabled, it is not necessary for the job to have a schedule, and the job can be started manually.

Date_created and date_modified are two fields with datetime data types that indicate, respectively, when a job was first created and when it was last modified.

The following script lists five field values from the sysjobs table for each job on a SQL Server Agent. The result set for the script displays the jobs in ascending order by creation date.

-- list of jobs; selected info about jobs
SELECT
job_id
,name
,enabled
,date_created
,date_modified
FROM msdb.dbo.sysjobs
ORDER BY date_created

Here's the result set from the preceding script. Notice that the syspolicy_purge_history job has the earliest creation date on the SQL Server Agent. This is a Microsoft-supplied job that is created when you first install SQL Server Agent on a SQL Server instance. The other jobs listed are custom ones created for the two previously referenced MSSQLTips.com articles introducing how to create SQL Server Agent jobs.

Create a summary table is the first custom job created; it was created on March 25, 2017, and was enabled (an enabled value of 1) when the screen shot was taken. The next job created is named Populate the Inserts Per Day table. This job is not enabled, so it has to be launched manually or on demand. Even if a schedule is assigned to the job, the schedule cannot launch the job until the enabled field value is switched from 0 to 1. The last two jobs (Create a two-step reporting job and Four step job with error branching) are described in the preceding SQL Server Agent tip article within this series.

The date_modified column value is never earlier than the date_created column value. For the Four step job with error branching job, nearly one full month elapsed between initial creation and the last modification date and time; this job went through an especially long development period with several rounds of tweaking and testing.


When was the last run date and time for a SQL Server Agent Job?

The date_created and date_modified field values from the sysjobs table provide some information about how recently a job was created and when it was last modified to keep it current with new requirements. However, a job that was neither created nor modified recently may still be used on a regular basis. One way of getting a handle on whether a job is still in use, no matter how old its creation or last modified date, is to check its last run date and time. Jobs with a last run date and time close to now are likely still being used on a regular basis.

The msdb.dbo.sysjobsteps table includes last_run_date and last_run_time fields. The sysjobsteps table has a separate row for each step within each job. If a job has just one step, then there is just one row for the job. When a job has more than one step, there is one row in the sysjobsteps table for each step within the job. Rows within the sysjobsteps table can be uniquely identified by their job_id and step_id column values.

By joining the sysjobsteps table to the sysjobs table, you can expand the amount of information displayed for jobs on SQL Server Agent. For example, the last run date and time are not available from the sysjobs table. By joining the sysjobsteps table to the sysjobs table, you can display the last run date and time values along with those job properties from the sysjobs table.
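Here is a minimal sketch of that join, using only the columns described in this tip; last_run_date and last_run_time are integers formatted as YYYYMMDD and HHMMSS, with 0 meaning a step has never run:

-- job names with each step's last run date and time
SELECT
 j.name
,s.step_id
,s.step_name
,s.last_run_date
,s.last_run_time
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobsteps s
 ON s.job_id = j.job_id
ORDER BY j.name, s.step_id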

The sysjobsteps code samples reference five columns from the sysjobsteps table. None of these values can be NULL.

The Job_id column value for a job has the same uniqueidentifier value as in the sysjobs table. This feature facilitates jointly displaying last_run_date and last_run_time values along with other job properties, such as job name.

Step_id is an int data type value denoting the order of a step within a job.

Step_name is a sysname data type value designating the name of a step within a job.

SQL Server Memory Accounting: Aligning Perfmon & DMVs


Short one for today, trying to get back into the habit of posting stuff here ;-)

I remember staring at all of the SQL Server memory-related counters available in perfmon, and wondering how I was going to make sense of all of them.

I remember the relief when I settled on "total = database + stolen + free", which aligned with the formula I usually used when evaluating UNIX server memory: total = file + computational + free.
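As a quick orientation, the perfmon side of that formula can itself be read from T-SQL; a minimal sketch against sys.dm_os_performance_counters, using the same Memory Manager counter names the bigger reconciliation query below matches on:

-- the Memory Manager counters behind total = database + stolen + free
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Memory Manager%'
  AND counter_name IN ('Total Server Memory (KB)',
                       'Database Cache Memory (KB)',
                       'Stolen Server Memory (KB)',
                       'Free Memory (KB)');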

Here's a link to a post from a few years ago about #SQLServer memory accounting...

Perfmon: SQL Server Database pages + Stolen pages + Free pages = Total pages

http://sql-sasquatch.blogspot.com/2013/09/perfmon-database-pages-stolen-pages.html

And here's a post going through a tricky detail when accounting for memory in AIX...

IBMPower AIX Memory Accounting: Fun with vmstat... Numperm Surprise!!

http://sql-sasquatch.blogspot.com/2014/10/ibmpower-aix-memory-accounting-fun-with.html

It's time to return to squaring the numbers between the DMVs and Perfmon. I want to make sense of resource_semaphore_query_compile waits; I'll have to evaluate threshold values from sys.dm_exec_query_optimizer_memory_gateways in light of optimizer memory, granted memory, used granted memory, and total stolen memory. The gateway threshold values aren't available in perfmon, so I'd like to grab all of the numbers from the DMVs. We'll see.
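For reference, a minimal sketch of reading those gateway thresholds straight from the DMV; the column list here is my assumption from the documented view, so adjust to whatever your build exposes:

-- compile gateway thresholds (small/medium/big gateways)
SELECT name, max_count, active_count, waiter_count,
       threshold_factor, threshold
FROM sys.dm_exec_query_optimizer_memory_gateways;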

Here's a query I'm using to square up the basics between perfmon and the DMVs.

;WITH calc(counter_name, dmv_calc) AS (
    SELECT 'Target Server Memory (KB)' AS counter_name,
           CONVERT(INT, sc.value_in_use) * 1024 AS dmv_calc
    FROM sys.configurations sc
    WHERE [name] = 'max server memory (MB)'
    UNION ALL
    SELECT 'Database Cache Memory (KB)', pages_kb
    FROM sys.dm_os_memory_clerks mc
    WHERE mc.[type] = 'MEMORYCLERK_SQLBUFFERPOOL' AND memory_node_id < 64
    UNION ALL
    SELECT 'Total Server Memory (KB)',
           SUM(virtual_address_space_committed_kb) + SUM(locked_page_allocations_kb)
    FROM sys.dm_os_memory_nodes
    UNION ALL
    SELECT 'Stolen Server Memory (KB)', SUM(allocations_kb)
    FROM sys.dm_os_memory_brokers
    UNION ALL
    SELECT 'Free Memory (KB)', a - b - c
    FROM (SELECT SUM(virtual_address_space_committed_kb) + SUM(locked_page_allocations_kb) AS a
          FROM sys.dm_os_memory_nodes) a
    CROSS JOIN (SELECT SUM(allocations_kb) AS b FROM sys.dm_os_memory_brokers) b
    CROSS JOIN (SELECT pages_kb AS c
                FROM sys.dm_os_memory_clerks mc
                WHERE mc.[type] = 'MEMORYCLERK_SQLBUFFERPOOL' AND memory_node_id < 64) c
)
SELECT opc.[object_name], opc.counter_name, opc.cntr_value,
       calc.dmv_calc, opc.cntr_value - calc.dmv_calc AS diff
FROM sys.dm_os_performance_counters opc
JOIN calc ON calc.counter_name = opc.counter_name
WHERE opc.counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)',
                           'Database Cache Memory (KB)', 'Stolen Server Memory (KB)',
                           'Free Memory (KB)')
ORDER BY opc.cntr_value DESC;

The results are pretty close. If anyone knows where the variance in database cache/stolen memory comes from (or how to get rid of it), please let me know. I'd really like to get it dead on, even though production system values will typically be changing over time, even when measured in small increments.

Here's what I got on my test system as it was ramping up...



Skype for Business SQL Server Frustrations

Scenario

The environment is new and secure.

Assumptions: You are a DBA who knows how to build WSFCs and AGs. If you need help with that, then you’re in the wrong place for this post.

The Windows Server 2016 / SQL Server 2016 environment has been built: SQL Server 2016 Enterprise Edition installed, the Windows Failover Cluster configured, HA enabled on the SQL Servers, and the Skype for Business server software installed (but not configured). All is well.

Now to configure / create the Skype For Business topology. How hard can it be?

NB

This is being done from memory, so the actual implementation detail may be a little bit hazy, and I haven’t got any screenshots or actual error messages as this was done on someone else’s computer.

What went wrong, though, is real. And the reasoning is as near as I can remember.

Creating the Topology: Configuring for Availability Groups

Launch the Topology Configuration wizard/app. By default it wants to go with mirroring, so you tell it to use Availability Groups. This causes the app to ask up front for the availability group name, which doesn’t exist, and then ask for a server name.

Don’t assume that this is a mistake and give it the server name. Alternatively, don’t get confused and give it a server name. You’ve got to get this right up front. If you don’t, you’ll have to strip out the whole thing and start over.

Use PowerShell rather than the GUI

The GUI feels nicer (barring the obvious confusion above), but it doesn’t adequately report errors, warnings or problems. We were experiencing problems with the installation, and all the messages the Skype engineers were seeing led them to view it as a “problem with the database server”.

Using the Install-CsDatabase command gives a whole heap of more useful information, including showing that a connection was made to the server and configuration settings (for database file paths) were read correctly. It’s not a problem with the server.

Oh, and use the -UseDefaultSQLPaths parameter, assuming you’ve configured your server up correctly, as it’s easier than trying to figure out the various other path options.

What $ share?

The output from Install-CsDatabase showed that the installer was attempting to copy database files to the appropriate locations, but had translated the value retrieved from SQL Server, say E:\mssql\SQLData, into \\SkypeServer1\E$\MSSQL\SQLData; the transaction log path had suffered similar indignities. Yes, that’s using the $ / “admin” shares. The problem is that, for various security reasons, these shares have been removed.

If you try creating the blank databases, the installer complains that the database is there, but it’s version 0, and that the installer can’t upgrade from version 0 to version 12. So drop those databases as they’re not doing any good.

The only way forward on this appeared to be to get an exemption logged and create those shares. Installation can (finally) proceed. Hurrah.

When the databases have all been created, create the AG, fail over, rerun the topology push to create users & jobs on the secondary, and fail back.

That was a painful afternoon.

Still, all is not entirely well.

The Next Day

The following day, you’ll notice that various SQL Agent jobs have failed. Checking the server, you will notice the following jobs:

LcsCDR_Purge
LcsCDR_UsageSummary
LcsLog_Purge
QoEMetrics_Purge
QoEMetrics_UsageSummary

These jobs have been set up in a sensible way: step 1 is a check to see if the database / server is the primary or the secondary, and step 2 actually does the work (if this is the primary). Here’s that step 1 in full:

declare @_MirroringRole int
set @_MirroringRole = (
    select mirroring_role
    from sys.database_mirroring
    where database_id = DB_ID('LcsCDR')
)
if (@_MirroringRole is not NULL and @_MirroringRole = 2) begin
    raiserror('Database is mirrored', 16, 1)
end
else begin
    print ('Database is not mirrored')
end

That’s great, if you’re using mirroring. But we’re not. And Skype-for-Business *knows* that we’re not, as we told it that right up front. What this check should be looking at is the sys.dm_hadr_database_replica_states DMV and working against that. This code seems to work:

declare @_AGPrimary bit
if exists (select * from sys.dm_hadr_database_replica_states where database_id = DB_ID('LcsCDR'))
BEGIN
    set @_AGPrimary = (SELECT is_primary_replica FROM sys.dm_hadr_database_replica_states WHERE database_id = DB_ID('LcsCDR') and is_local = 1)
    IF (@_AGPrimary = 0)
    BEGIN
        RAISERROR('Database is AG secondary',16,1);
    END
    ELSE
    BEGIN
        print ('Database is AG primary');
    END
END

Your choice: either replace the existing step 1, or add this code to it, and the jobs now work. Depending on how nit-picky you are about these things, you might want to change the two references to ‘LcsCDR’ to match the database mentioned in the job. Given that the databases are / should be in the same AG, though, this might be overkill/paranoia. I’m paranoid. It goes with the territory.

Creating the Listener: Active Directory Permissions

Stepping back a bit, one other thing that bit us was that we had problems creating the availability group listener: the cluster didn’t have the rights to do this. This is fixed by either:

(Preferred) Grant the failover cluster virtual server object permission to create objects in the OU in which it lives.
(Alternatively; should work, but untested by me) Create the listener computer object first, and grant the failover cluster virtual server CNO rights to control / edit that object.

All done

The engineers can now carry on with configuring / building out the skype-for-business environment. Rather them than me. I need a beer after all that!

RegEx in SQL Server for replacing text


In this post I’m presenting the usage of two functions that use regex. My previous post was about searching text in SQL Server using regex, and now I’m showing how to use it for replacing text in SQL Server.

The two new functions added to the SqlRegex.dll are RgxTrim() and RgxReplace().

Before starting with them I’ll mention the built-in functions for similar purposes: the LTRIM and RTRIM functions, and TRIM in SQL Server on Linux and, starting with SQL Server 2017, on Windows. They only trim white spaces from the left, the right, or both sides of a string.

What RgxTrim does additionally is replace multiple white spaces inside the text with a single white space. RgxTrim trims the leading and trailing white spaces of an input string/text too.

After importing SqlRegex.dll in your database, run the following code to create the RgxTrim function.

CREATE FUNCTION dbo.RgxTrim(@Text NVARCHAR(MAX))
RETURNS NVARCHAR(MAX)
WITH EXECUTE AS CALLER
AS EXTERNAL NAME SqlRegex.UserDefinedFunctions.RgxTrim;
GO

RgxTrim will also successfully replace multiple tabs with a single white space in an input string/text. You can use RgxTrim in an update statement like any normal T-SQL function, as sketched below.
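A minimal sketch of such an update, against the same dbo.UserNotes table the masking example later in this post queries:

-- normalize whitespace in place with RgxTrim
UPDATE dbo.UserNotes
SET [Description] = dbo.RgxTrim([Description])
WHERE [Description] IS NOT NULL;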

The second function is RgxReplace which is very similar to Replace:

REPLACE ( string_expression , string_pattern , string_replacement )

However, RgxReplace is designed to additionally use regular expression patterns for searching in a text and doing the replacement, while REPLACE uses literal string patterns only. RgxReplace has very similar syntax to REPLACE:

RgxReplace ( string_expression, regex_pattern, string_replacement )

The only difference is the second parameter, regex_pattern, which can additionally accept regular expressions. Create RgxReplace with the following code.

CREATE FUNCTION [dbo].[RgxReplace](@Text [NVARCHAR](MAX), @pattern [NVARCHAR](MAX), @replacement [NVARCHAR](MAX))
RETURNS [NVARCHAR](MAX)
WITH EXECUTE AS CALLER
AS EXTERNAL NAME [SqlRegex].[UserDefinedFunctions].[RgxReplace]
GO

The usage of RgxReplace is specific. For example, imagine a situation where the Description column of a table holds sensitive information such as IBAN accounts, credit card numbers, or client/user addresses. When you display this information in an application, you should care about what you’re displaying and to whom. At the very least, you should mask this sensitive information in a given case.

The following query finds all rows of a table that have a pattern for an account and masks them. The query demonstrates a usage of RgxReplace.

SELECT UserID, [Description],
       dbo.RgxReplace([Description], '\d{4}-?\d{4}-?\d{4}-?\d{4}', 'XXXXXXXXXXXXXXXX') AS [Masked description]
FROM dbo.UserNotes
WHERE dbo.ContainsString([Description], '\d{4}-?\d{4}-?\d{4}-?\d{4}') = 1;

The result from the query is given in the next figure.



From the figure above, you can see that all IBAN accounts are masked. (Full names of the users and their addresses were masked manually.) By using regex you can detect any kind of address, credit card number, and so on, and by combining it with ContainsString() the query from the figure could be extended to do additional work. You can use the RgxReplace() function like any T-SQL function in an update statement.

You can now recreate everything from SqlRegex.dll (including the functions from the previous post) with the following script.

[Video] Last Season’s Performance Tuning Techniques


You’re kinda-sorta comfortable doing performance tuning on SQL Server. You’ve read a few blogs, you monitor Page Life Expectancy, you rebuild your indexes, and you add an index here or there. However, you haven’t been to a day-long performance tuning class yet, and you’re wondering what you’re missing.

Thing is, in SQL Server, performance tuning changes fast. One day Perfmon counters are in, and the next day they’re out. It’s totally okay if you’re still using SQL Server 2008; that’s not the problem. You shouldn’t be wearing hammer pants, and you shouldn’t be looking at PLE, monitoring Disk Queue Length, or putting your data and log files on separate drives.

In this rapid-fire session, Erik and I will show you specific examples of why performance tuning tips and tricks have fallen out of fashion, and why they’re not too legit to quit:

You can get the slides and demos here.

(While there are some awesome historical photos of Erik and me in here, PASS didn’t include the video camera recordings. That’s kind of a bummer because I had some awesome costumes.)

If you learned stuff in that session, that’s a good thing: it means that while your skills might be drifting out of date, you can still catch back up. Erik and I are doing a full-day pre-con at the Summit called Expert Performance Tuning for SQL Server 2016 & 2017. Go read more about that, check out the cool free stuff you’ll get for attending, and then register now. See you in Seattle!

Querying SQL Server Data from R Using RODBC


In the previous blog, we looked at some of the annoyances encountered when installing RODBC on Linux. Of course, RODBC can also be used in R running on Windows. In either case, running queries using RODBC is straightforward and without surprises.

As is always the case, the first thing we need to do is connect. We saw how to create a DSN, or Data Source Name, when we looked at LibreOffice Base. RODBC can use a DSN to connect to a database; the RODBC odbcConnect() function accepts a DSN name as an argument. We can, however, specify everything, including the ODBC driver, right in the function call and do away with the DSN entirely. Such a connection is sometimes called a “DSN-less connection”. We only need to call odbcDriverConnect() and supply the driver name.

We will be connecting to the preview of SQL Server vNext on Linux, but once you connect, the rest of the commands won’t care what platform you are running on.

We’ve already seen that the installation process on Linux defines a driver name in /etc/odbcinst.ini.


The “DSN-Less” Connection

ch <- odbcDriverConnect("DRIVER={ODBC Driver 13 for SQL Server};SERVER=localhost;Database=Northwind;uid=SA;pwd=sa!314159")
The “ch” variable holds a reference to a connection, and will be used in virtually all our interactions with SQL Server. In this particular example we specified a database name, but this is not necessary. The default database may be fine, but you also have the option of using the same “USE database” statement that you have used in Management Studio.

odbcQuery(ch, "USE Northwind")

Note the first argument is “ch”, the channel handle we obtained from odbcDriverConnect(). The sqlQuery() function would have worked just as well in this example, but it is simply my habit to call odbcQuery when I do not intend to fetch any row results.
If appropriate, we can directly fetch a table using sqlFetch. The results are, of course, an R dataframe.
resultset = sqlFetch(ch,"Products")

Of course, you are much more prone to want to run an SQL query; this can be done using sqlQuery. The sqlQuery() function is just a simple wrapper that calls odbcQuery to execute a query and then calls sqlGetResults to retrieve the rowset.

resultset <- sqlQuery(ch, "SELECT * FROM Products WHERE CategoryID = 3")

resultset <- sqlQuery(ch, "EXECUTE TestProcedure")

A very nice feature of sqlQuery is that we can specify a maximum number of rows to retrieve with the optional “max” parameter. This is useful when the resultset is quite large and we wish to bring the data over a chunk at a time. We call sqlQuery to execute the query and retrieve the first, well, we’ll say five rows in this example.

resultset= sqlQuery(ch, "SELECT * FROM Products", max=5)

After processing the first five rows, we could use sqlGetResults to get the next 10.

resultset = sqlGetResults(ch, max=10)

The close method allows us to do the right thing and clean up after ourselves.

close(ch)

Conclusion

RODBC is a very simple library to use, and the core set of functions needed to get started querying SQL Server data from R is even simpler. However, writing complex queries in R is cumbersome and error-prone. It is wise to define the desired SQL queries as views, which can then be called from R very simply.
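As a minimal sketch, here is such a view over the Northwind Products table queried above (the view name is mine):

-- keep the SQL complexity on the server; R just selects from the view
CREATE VIEW dbo.vw_CategoryThreeProducts
AS
SELECT ProductID, ProductName, UnitPrice
FROM dbo.Products
WHERE CategoryID = 3;

R then only needs resultset <- sqlFetch(ch, "vw_CategoryThreeProducts"), or a trivial SELECT against the view.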

How to simplify SQL Server Database Object usage with Synonyms


The concept of SQL Server Synonyms was first introduced in SQL Server 2005: an alias name that references an existing database object, replacing its fully qualified name. In this way, it makes the database object more portable and provides more flexibility for the clients to reach and maintain it. You can imagine Synonyms as a layer of abstraction that provides us with an easy way to connect to and manage database objects without the need to identify the real name and location of those objects.

Synonyms are useful in simplifying complicated and lengthy object names by providing short and friendly alternative names for these database objects. You can also benefit from Synonyms by providing backward compatibility for database objects used by legacy systems when you drop or rename those objects. It may seem very simple from the definition, but for database administrators and developers it can be very useful and can simplify their jobs if used correctly.

Synonym changes are also transparent from the client application perspective, as no change is required on the client side if the Synonym is changed to reference a different database object, as long as the column names are not changed.

Assume that you plan to change the name of a database object that is used heavily in your queries. That would seem like a very difficult task, as you would need to visit every place the object is used. An easy way to handle this is to create a Synonym that references the database object and use that Synonym in your queries. If you later need to change the database object name, you need only change the referenced object in the Synonym definition, by dropping and recreating the Synonym, without visiting all the places the object is mentioned. You can also easily move the base object to another database on the same server, or to another SQL Server, without any change on the application side: just drop the Synonym and recreate a new one that points to the new location of that object, as sketched below.
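A minimal sketch of that repointing, reusing the MySynonym / SynTestNew names from the examples later in this article (RemoteServer is a hypothetical linked server name):

-- repoint the synonym at the table's new location
DROP SYNONYM dbo.MySynonym;
GO
CREATE SYNONYM dbo.MySynonym
FOR RemoteServer.SQLShackDemo.dbo.SynTestNew; -- hypothetical new home of the base table
GO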

Synonyms also help in obscuring the names of database objects for security purposes: create a Synonym that references the database object and allow the users to query the Synonym rather than querying the base table directly.

The Synonym, like any other database object, should belong to a database schema and should be given a unique name that follows the T-SQL identifier rules. Naming convention rules can also be applied to the Synonym, such as using a prefix or suffix with the Synonym name to make it easy to recognize that the database object is a Synonym. A Synonym can be used to reference the following database object types:

Assembly (CLR) stored procedure
Assembly (CLR) table-valued function
Assembly (CLR) scalar function
Assembly (CLR) aggregate function
Replication-filter-procedure
Extended stored procedure
SQL scalar function
SQL table-valued function
SQL inline table-valued function
SQL stored procedure
View
User-defined table

On the other hand, you cannot use a Synonym to reference other Synonyms or to reference a user-defined aggregate function. The object referenced by a Synonym is checked at runtime, which means the Synonym can be created with spelling or referencing errors, and you will only get those errors when using the Synonym, as sketched below.
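A minimal sketch of that late binding (both object names here are hypothetical):

-- creating a synonym for a non-existent object succeeds...
CREATE SYNONYM dbo.BadSynonym FOR SQLShackDemo.dbo.NoSuchTable;
GO
-- ...but using it fails at runtime with an "Invalid object name" error
SELECT * FROM dbo.BadSynonym;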

You can easily drop a Synonym without getting any error or warning that it is being referenced by another database object, and you can modify the base object without affecting the Synonym, because Synonyms and their base objects are loosely bound. Synonyms cannot be referenced in a DDL T-SQL statement, such as an ALTER statement. Because Synonyms are not schema-bound database objects, they cannot be referenced by schema-bound expressions such as:

CHECK constraints
Computed columns
Default expressions
Rule expressions
Schema-bound views
Schema-bound functions

To create a Synonym that references objects across schemas, databases and servers, you need to specify the schema and the name of the Synonym and the schema and the name of the database object that the synonym references. The syntax that is used to create a new Synonym is as shown below:

CREATE SYNONYM schema_name_1.synonym_name
FOR server_name.database_name.schema_name_2.object_name

For a new Synonym, you need to provide schema_name_1, which specifies the schema in which the synonym will be created (the default schema of the current user is used if no schema is specified in the CREATE SYNONYM statement), and synonym_name, which specifies the name of the new synonym.

For the referenced object, you can provide server_name, which specifies the server on which the base object is located; database_name, the name of the database in which the base object is located; schema_name_2, the name of the schema of the base object (again, the current user's default schema is used if none is provided); and object_name, which specifies the name of the base object that the Synonym references.

To be able to create a Synonym in a given schema, you should have CREATE SYNONYM permission with db_owner permissions in that schema or, at a minimum, ALTER SCHEMA permission. Synonyms can also be created from the New Synonym window in SQL Server Management Studio, by right-clicking the Synonyms node under the current database as shown below:



There you can provide the previously described parameters, such as the Synonym schema and name, plus the server name, database name, schema name and name of the referenced object, in order to create the Synonym as follows:



Once created, you can perform SELECT, INSERT, UPDATE, DELETE or EXECUTE operations on that Synonym. To be able to perform these operations, a number of permissions should be granted on the Synonym, such as:

CONTROL
DELETE
EXECUTE
INSERT
SELECT
TAKE OWNERSHIP
UPDATE
VIEW DEFINITION

Synonym owners, or users with db_owner or db_ddladmin permissions, can GRANT, DENY or REVOKE permissions at the Synonym level, with no effect at the base table level. The below script creates a new database user and grants it SELECT permission at the Synonym level:

USE [SQLShackDemo]
GO
CREATE USER [suheir] FOR LOGIN [suheir]
GO
GRANT SELECT ON [dbo].[MySynonym] TO [suheir]

Creating a synonym to reference a local object

The below T-SQL statement is used to create a Synonym to reference a local table in the same SQL Server instance, where the base database name, the schema and the table name are provided in the CREATE SYNONYM statement:

USE SQLShackDemo
GO
CREATE SYNONYM dbo.MySynonym
FOR SQLShackDemo.dbo.SynTestNew;
GO

After creating the Synonym, you can easily retrieve data from it directly using the SELECT statement shown below:

SELECT TOP 10 * FROM MySynonym

The SELECT statement result will retrieve the data from the base table directly as follows:


Creating a synonym to reference a remote object

As mentioned previously within this article, Synonyms can be used to simplify the name of the base object by using a short alias instead of the full object name. The below T-SQL statement creates a Synonym that references a table located on a remote server:

Working with SQL Server in Hybrid Cloud Scenarios. Part 2


As a rule, impersonal information is stored in a public cloud, and the personalized part in a private cloud. The question thus arises: how do we combine both parts to return a single result for a user’s request? Suppose there is a table of customers divided vertically. The depersonalized columns went into a table located in Windows Azure SQL Database, and the columns with sensitive information (e.g., full name) remained in the local SQL Server. Both tables must be linked by the CustomerID key. Because they are located in different databases on different servers, the JOIN statement will not work. As a possible solution, we have considered the scenario where the linkage was implemented on the local SQL Server: it served as a kind of entry point for the applications, and the cloud-based SQL Server was set up on it as a linked server. In this article, we will consider the case when the local and cloud servers are equal in terms of the application, and the data merging occurs directly in the application, i.e. at the business logic level.

Pulling data from SQL Azure is, from the point of view of application code, no different from working with a local SQL Server; let’s just say it is identical up to the connection string. In the code below, u1qgtaf85k is the name of the SQL Azure server (it is generated automatically when the server is created). I’ll remind you that the connection with the server is always established over the TCP/IP network library, port 1433. The Trusted_Connection=False parameter means Integrated Security is not used (authentication is always standard in SQL Azure), and Trust_Server_Certificate=false is meant to avoid a possible man-in-the-middle attack.

using System;
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;
using System.Resources;

namespace DevCon2013
{
    class Program
    {
        static void Main(string[] args)
        {
            ResourceManager resMan = new ResourceManager("DevCon2013.Properties.Resources", System.Reflection.Assembly.GetExecutingAssembly());
            string sqlAzureConnString = String.Format(@"Server=tcp:u1qgtaf85k.database.windows.net,1433;Database=AdventureWorks2012;User ID=alexejs;Password={0};Trusted_Connection=False;Encrypt=True", resMan.GetString("Password"));
            SqlConnection cnn = new SqlConnection(sqlAzureConnString);
            cnn.Open();
            SqlCommand cmd = cnn.CreateCommand();
            cmd.CommandText = "select top 100 CustomerID, AccountNumber from Sales.Customer order by CustomerID";
            DataTable tbl = new DataTable();
            tbl.Load(cmd.ExecuteReader());
            cnn.Close();
            foreach (DataRow r in tbl.Rows)
            {
                for (int i = 0; i < tbl.Columns.Count; i++)
                    Debug.Write(String.Format("{0}\t", r[i]));
                Debug.WriteLine("");
            }
        }
    }
}

Script 1

I will also add the connection to the on-premise resource, i.e. the local SQL Server. I assume that this process requires no explanation, so let’s just modify the previous code by adding two methods: ExecuteSQL, to connect to a source and execute a query against it, and DumpTable, to visualize the results. Thus, working with SQL Azure and on-premise SQL Server will be absolutely symmetrical from the application's point of view.

string sqlOnPremiseConnString = @"Server=(local);Integrated Security=true;Database=AdventureWorks2012";
DataTable resultsOnPremise = ExecuteSQL(sqlOnPremiseConnString, "select BusinessEntityID, FirstName, LastName from Person.Person where BusinessEntityID between 1 and 100");

string sqlAzureConnString = String.Format(@"Server=tcp:u1qgtaf85k.database.windows.net,1433;Database=AdventureWorks2012;User ID=alexejs;Password={0};Trusted_Connection=False;Encrypt=True", resMan.GetString("Password"));
DataTable resultsFromAzure = ExecuteSQL(sqlAzureConnString, "select CustomerID, AccountNumber from Sales.Customer where CustomerID between 1 and 100");
...
static DataTable ExecuteSQL(string cnnStr, string query)
{
    SqlConnection cnn = new SqlConnection(cnnStr);
    cnn.Open();
    SqlCommand cmd = cnn.CreateCommand();
    cmd.CommandText = query;
    DataTable tbl = new DataTable();
    tbl.Load(cmd.ExecuteReader());
    cnn.Close();
    return tbl;
}

static void DumpTable(DataTable tbl)
{
    foreach (DataRow r in tbl.Rows)
    {
        for (int i = 0; i < tbl.Columns.Count; i++)
            Debug.Write(String.Format("{0}\t", r[i]));
        Debug.WriteLine("");
    }
}

Script 2

Now we have both vertical pieces inside the application in two DataTables instead of the single Customers table: one from the local server, the other from SQL Azure. We need to unite them by the CustomerID field, which exists in both. For simplicity, we will not consider the case of a composite key; i.e., we will assume that the connection is made by a simple match of one column in one table with one column in the other table. This is a standard ADO.NET task, and there are two common ways to solve it, which are approximately equivalent in performance. The first method uses a DataRelation; it is implemented in the JoinTablesADO method. Create a new DataSet, add both tables to it, and create a relation (DataRelation) between them, specifying the field in the parent table and the field in the child table upon which the JOIN will be built. Which of the two DataTables is the parent and which is the child does not matter here, because in our case the relationship is not one-to-many but one-to-one. Create an empty shell for the resulting DataTable. Looping through all records of the “child” table, we get the corresponding record of the “parent” table and combine the fields of both DataRow records into a new row, which we then put in the resulting DataTable.

DumpTable(JoinTablesADO(resultsFromAzure, resultsOnPremise, "CustomerID", "BusinessEntityID"));
...
static DataTable JoinTablesADO(DataTable parentTbl, DataTable childTbl, string parentColName, string childColName)
{
    DataSet ds = new DataSet();
    ds.Tables.Add(parentTbl);
    ds.Tables.Add(childTbl);
    DataRelation dr = new DataRelation("ля-ля", parentTbl.Columns[parentColName], childTbl.Columns[childColName]);
    ds.Relations.Add(dr);
    DataTable joinedTbl = new DataTable();
    foreach (DataColumn c in parentTbl.Columns)
        joinedTbl.Columns.Add(c.Caption, c.DataType);
    foreach (DataColumn c in childTbl.Columns)
        joinedTbl.Columns.Add(c.Caption, c.DataType);
    // Unfortunately, Clone() is not supported on DataColumn :(
    foreach (DataRow childRow in childTbl.Rows)
    {
        DataRow parentRow = childRow.GetParentRow("ля-ля");
        DataRow currentRowForResult = joinedTbl.NewRow();
        for (int i = 0; i < parentTbl.Columns.Count; i++)
            currentRowForResult[i] = parentRow[i];
        for (int i = 0; i < childTbl.Columns.Count; i++)
            currentRowForResult[parentTbl.Columns.Count + i] = childRow[i];
        joinedTbl.Rows.Add(currentRowForResult);
    }
    return joinedTbl;
}

Script 3

The second method uses Linq. The idea is the same as in the first case; the difference is in the implementation details. First, we create the resulting table as a copy of the parent table's structure. Then we add the fields from the child table. We get the collection of records as the result of a Linq query over the records of the parent table, joined by the link condition with the records of the child table, and then add them to the resulting table.

DumpTable(JoinTablesLinq(resultsFromAzure, resultsOnPremise, "CustomerID", "BusinessEntityID"));
...
static DataTable JoinTablesLinq(DataTable parentTbl, DataTable childTbl, string parentColName, string childColName)
{
    DataTable joinedTbl = parentTbl.Clone();
    var childColumns = childTbl.Colu

SCCM, SQL, DBATools, and Coffee


Warning: This article is predicated on (A) basic reader familiarity with System Center Configuration Manager and the SQL Server aspects, and (B) nothing better to do with your time.

Caveat/Disclaimer: As with most of my blog meanderings, I post from the hip. I fully understand that it exposes my ignorance at times, and that can be painful, but it adds another avenue for me to learn and grow.



I don’t recall exactly when I was turned onto Ola Hallengren, or Steve Thompson, but it’s been a few years, at least. The same could be said for Kent Agerlund, Johan Arwidmark, Mike Niehaus, and others, none of whom I’ve met in person yet, but maybe some day. However, that point in time is when my Stevie Wonder approach to SQL “optimization” went from poking at crocodiles with a pair of chopsticks to saying “A-Ha! THAT’s how it’s supposed to work!”

As a small testament to this, while at Ignite 2016, I waited in line for the SQL Server guy at his booth, like an 8 year old girl at a Justin Bieber autograph signing, just to get a chance to ask a question about how to “automate SQL tasks like maintenance plans, and jobs, etc.”. The guy looked downward in deep thought, then looked back at me and said “ Have you heard of Ola Hallengren? ” I said “Yes!” and he replied, “ he’s your best bet right now. ”

Quite a lot has changed.

For some background, I was working on a small project for a customer at that time focusing on automated build-out of an SCCM site using PowerShell and BoxStarter. I had a cute little gist script that I could invoke from the PowerShell console on the intended target machine (virtual machine), and it would go to work:

Install Windows Server roles and features
Install ADK 10
Install MDT 2013
Install SQL Server 2014
Adjust SQL memory allocations (min/max)
Install WSUS server role and features
Install Configuration Manager
Install ConfigMgr Toolkit 2012 R2
and so on.

Since it was first posted, it went through about a dozen iterative “improvements” (translation: breaking it and fixing and improving and breaking and fixing, and repeat).

The very first iteration included the base build settings as well, such as naming the computer, assigning a static IPv4 address, DNS servers and gateway, join to an AD domain, etc. But I decided to pull that part out into a separate gist script.

The main thing about this experiment that consumed the most time for me was:

On-the-fly .INI construction for the SQL automated install
On-the-fly .INI construction for the SCCM install
On-the-fly SQL memory allocation configuration

Aside from the hard-coding of content sources (not included on this list), item 2 drove me nuts because I didn’t realize the “SA expiration” date property was required in the .INI file. The amount of coffee I consumed in that 12 hour window would change my enamel coloring forever. Chicks dig scars though, right? Whatever.

Then came item 3. I settled on the following chunk of code, which works…

$SQLMemMin = 8192
$SQLMemMax = 8192
...
write-output "info: configuring SQL server memory limits..."
write-output "info: minimum = $SQLMemMin"
write-output "info: maximum = $SQLMemMax"
try {
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.VisualBasic') | Out-Null
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | Out-Null
    $SQLMemory = New-Object ('Microsoft.SqlServer.Management.Smo.Server') ("(local)")
    $SQLMemory.Configuration.MinServerMemory.ConfigValue = $SQLMemMin
    $SQLMemory.Configuration.MaxServerMemory.ConfigValue = $SQLMemMax
    $SQLMemory.Configuration.Alter()
    write-output "info: SQL memory limits have been configured."
}
catch {
    write-output "error: failed to modify SQL memory limits. Continuing..."
}

But there are a few problems, or potential problems, with this approach…

It’s ugly (to me anyway)
The min and max values are static
If you change this to use a calculated/derived value (reading WMI values) and apply the 80% allocation rule, and the VM has dynamic memory, it goes sideways.

Example:

$mem = $(Get-WmiObject -Class Win32_ComputerSystem).TotalPhysicalMemory $tmem = [math]::Round($mem/1024/1024,0) ...

I know that option 2 assumes a “bad practice” (dynamic memory), but it happens in the real world and I wanted to “cover all bases” with this lab experiment. The problem that it causes is that the values returned from a WMI query can fluctuate along with the host memory allocation status, so the 80% value can be way off at times.

Regardless, forget all that blabber about static values and dynamic tragedy. There’s a better way. A MUCH better way. Enter DBATools. DBATools is the brainchild of Chrissy LeMaire, another name to add to any list that has Ola’s name on it. (Side note: read Chrissy’s creds; pretty f-ing impressive.) There are other routes to this as well, but I’ve found this one to be the most user friendly for my needs. (Feel free to post better suggestions below, I welcome feedback!)

Install-Module dbatools
$sqlHost = "cm01.contoso.com"
$sqlmem = Test-DbaMaxMemory -SqlServer $sqlHost
if ($sqlmem.SqlMaxMB -gt $sqlmem.RecommendedMB) {
    Set-DbaMaxMemory -SqlServer $sqlHost -MaxMB $sqlmem.RecommendedMB
}

This is ONLY AN EXAMPLE, and contains an obvious flaw: I’m not injecting an explicit 80% derived value for the -MaxMB parameter. However, this can be accomplished (assuming dynamic memory is not enabled) as follows…

Install-Module dbatools
$sqlHost = "cm01.contoso.com"
$sqlmem = Test-DbaMaxMemory -SqlServer $sqlHost
$totalMem = $sqlmem.TotalMB
$newMax = $totalMem * 0.8
if ($sqlmem.SqlMaxMB -ne $newMax) {
    Set-DbaMaxMemory -SqlServer $sqlHost -MaxMB $newMax
}

Here are the code execution results from my lab…



You might have surmised that this was executed on a machine which has dynamic memory enabled, which is correct. The Hyper-V guest VM configuration is questionable…



This is one of the reasons I opted for static values in the original script.

Thoughts / Conclusions

Some possible workarounds for this mess would be trying to detect dynamic memory (from within the guest machine) which might be difficult, or insist on a declarative static memory assignment.

Another twist to all of this, and one reason I kind of shelved the whole experiment, was a conversation with other engineers regarding the use of other automation/sequencing tools like PowerShell DSC, Ansible, and

SQL SERVER Invalid Object Name ‘master.dbo.spt_values’ in Management Studio


One of my clients contacted me for On Demand (55 minutes) as they believed this was a simple issue. John was honest enough to confess the mistake he had made, which led to an error about an invalid object name.

John was trying to troubleshoot a deadlock issue, and he found that this specific server doesn’t have a system_health session in extended events.



So, he found that the definition of the session is in the U_tables.sql file from the “Install” folder.

C:\Program Files\Microsoft SQL Server\mssql13.SQLSERVER2014\MSSQL\Install

He executed the script, but it failed with the messages below.

This file creates all the system tables in master.
drop view spt_values ….
Creating view ‘spt_values’.
Msg 208, Level 16, State 1, Procedure spt_values, Line 56
Invalid object name ‘sys.spt_values’.
sp_MS_marksystemobject: Invalid object name ‘spt_values’
Msg 15151, Level 16, State 1, Line 61
Cannot find the object ‘spt_values’, because it does not exist or you do not have permission.
drop table spt_monitor ….
Creating ‘spt_monitor’.
Grant Select on spt_monitor
Insert into spt_monitor ….

Now there was a bigger problem: in a lot of places in SSMS, he started seeing the errors below.


An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
Invalid object name ‘master.dbo.spt_values’. (Microsoft SQL Server, Error: 208)

John knew that he should create the view ‘master.dbo.spt_values’ using the script below, but was unable to.

WORKAROUND/SOLUTION

To create master.dbo.spt_values, a reference to sys.spt_values is needed, and that object can’t be accessed over a normal connection. There are two ways:

Start SQL Server in single user mode, which would need downtime.
Connect using the Dedicated Administrator Connection (DAC). You can read more about DAC here: Diagnostic Connection for Database Administrators.

After making the connection, run the script below.

create view spt_values
as
select name collate database_default as name,
       number,
       type collate database_default as type,
       low, high, status
from sys.spt_values
go
EXEC sp_MS_marksystemobject 'spt_values'
go
grant select on spt_values to public
go
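A quick hedged check that the fix took; from a normal (non-DAC) connection, the first query should return rows and the second should return 1 once the view is marked as a system object:

-- verify the recreated view resolves and is flagged as Microsoft-shipped
SELECT TOP 5 name, number, type FROM master.dbo.spt_values;
SELECT OBJECTPROPERTY(OBJECT_ID('master.dbo.spt_values'), 'IsMSShipped') AS IsMSShipped;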

Reference: Pinal Dave ( https://blog.sqlauthority.com )

Sql Server Performance Testing



Sachin Diwakar, August 1, 2017




Microsoft SQL Server Performance Tuning: Live

You want your SQL Server to go as fast as possible, but you don't have a lot of time or budget money, and you're not allowed to change the application.


SSAS: Discipline, Accuracy, Attention to Details. Part 2 OLAP Cube

Introduction

In this article, I will continue describing my experience with Microsoft Analysis Services. In addition to the previous article, I would like to write about unconventional solutions that were implemented in a recent project. These solutions brought me close to Microsoft Analysis Services and allowed me to do things that had previously seemed impossible to me.

Period Average Sum

A customer wanted a sum of average values for each element per period, as shown below:


Variants of Solution

1. First variant:

For this variant, we need to create a hidden dimension [ELEM COPY]: make a copy of [ELEM] and set its Visible property to False. Next, we need to select “New Calculated Member” in the “Calculations” section of the cube, as shown below:

And write the following in the Expression text box:

iif (
    not isleaf([ELEM].[ELEM SK].currentmember),
    sum(EXISTING [ELEM COPY].[ELEM SK].currentmember.Children, [Measures].[FCT VAL]),
    [Measures].[FCT VAL]
)

Here [ELEM COPY].[ELEM SK] is the key attribute of the hidden dimension, and [Measures].[FCT VAL] is the AverageOfChildren aggregation; i.e., when building the cube, a measure of the AverageOfChildren type was created and the cube calculated it automatically. Here is an example of how to create the calculation with the AverageOfChildren aggregation:



2. Solution with Scope

This solution was shown to me by a colleague of mine. In my opinion, it is simpler to understand, though it is harder in terms of implementation.

What must be done:

We need to rewrite the SELECT for the fact table. Instead of:

SELECT [DATE]
    ,ID_CO
    ,ID_CUST
    ,ID_SYS
    ,ID_VOL
    ,ID_QUAL
    ,GB_used_DATA
FROM [000_REP].NAS_FACTS

we need to write the following:

SELECT [DATE]
    ,ID_CO
    ,ID_CUST
    ,ID_SYS
    ,ID_VOL
    ,ID_QUAL
    ,GB_used_DATA
    ,CONVERT(VARCHAR(10), [DATE], 120) + '|' + CAST(ID_VOL AS VARCHAR(MAX)) AS VolDate
    ,NULL AS Avg_GB_used_DATA
FROM [000_REP].NAS_FACTS

Here GB_used_DATA is the fact we want to add to the cube. We need non-standard behavior only for the dimension with the ID_VOL key. For the rest of the dimensions, with keys ID_CO, ID_CUST, ID_SYS and ID_QUAL, the behavior must remain standard: in our task everything is simply summed up. Only for ID_VOL must the period average be calculated per element, with the total across the ID_VOL elements then summed, so the sum of average values for ID_VOL is the expected result.

In the second query, we added 2 columns:

The first column uniquely determines the binding between the date and the Id of the elements for which the sum of average values must be calculated. A measure with the DistinctCount aggregation function is added to the cube on this column. An example is shown below:



The second column always stores the NULL value in all rows of the table. Its name plays an important role: it is required for creating a measure in the cube that we can bind to with the SCOPE function. It is also important that this measure uses the Sum aggregation function. An example is shown below:



Next, in the “Calculations” section of the cube, we need to select “Script View” and put the following code into the text box:

SCOPE([Measures].[Avg GB Used DATA]);
    SCOPE([ID_VOL Items].[ID VOL].[ID VOL].MEMBERS);
        THIS = [Measures].[Sum GB Used DATA] / [Measures].[Vol DateDistinct Count];
    END SCOPE;
END SCOPE;

where [ID_VOL Items] is the dimension built on the ID_VOL key.

The image below shows the sequence of actions for this step:



In this solution we sum only the expression written in the SCOPE, since without the formula in the SCOPE, the NULL value that comes from the query to the database is stored there.

Both solutions gave identical results and calculated the average sum, as required.

Statistical Average

After a while, the customer returned to the topic of calculating average values. This time he wanted a classical average instead of the average sum, i.e. the same functionality as Excel's AVERAGE function. Since the customer constantly used the term “statistical average”, it became the title of this section.



We needed to calculate the average value for the whole range: the average across all elements for each day, divided by the number of days in the period. As a result, we get the average value per element for the period. The following solution was proposed:

CREATE MEMBER CURRENTCUBE.[Measures].[Avg GB Used DATA (AvgAll Only valid days)]
as [Measures].[Sum GB Used DATA] / [Measures].[Count VCMDB Only valid days],
VISIBLE = 1;

CREATE MEMBER CURRENTCUBE.[Measures].[Count VCMDB Only valid days]
as Count(NonEmpty({crossjoin([DIM Business Time HD].[DAY].currentmember.Children, [DIM NASProvider Configuration Item HD].[NAS Volume CMDBID].currentmember.Children)},
    [Measures].[Sum GB Free Data])),
VISIBLE = 1;

In this solution, only the days where element values were present are used. We also used the trick with hidden dimensions (these are the dimensions [DIM Business Time HD].[DAY] and [DIM NAS Provider Configuration Item HD].[NAS Volume CMDBID]). The number of days with values is obtained with the help of crossjoin.

If we need the average over all values for all days, where the absence of a value for a day counts as zero, I used the following expression:

CREATE MEMBER CURRENTCUBE.[Measures].[Count VCMDB All days]
as [Measures].[NAS Volume CMDBID Distinct Count] * [Measures].[NAS BTIME Count],
VISIBLE = 1;

where [Measures].[NAS Volume CMDBID Distinct Count] and [Measures].[NAS BTIME Count] are measures of the cube built from the dimension tables (the time dimension and the element dimension):
One more useful function

While working with the cube, it was also required that the calculation of values change depending on the hierarchy level. That is, if days were selected, we would see the average per period; if months were selected, we would see the sum. It was implemented with the help of the level function:

CREATE MEMBER CURRENTCUBE.[Measures].[ML]
as case
    when [DIM Business Time].[HIERARCHY CAL].currentmember.level is [DIM Business Time].[HIERARCHY CAL].[YEAR] then 3
    when [DIM Business Time].[HIERARCHY CAL].currentmember.level is [DIM Business Time].[HIERARCHY CAL].[PERIOD KAL] then 2
    when [DIM Business Time].[HIERARCHY CAL].currentmember.level is [DIM Business Time].[HIERARCHY CAL].[DAY] then 1
    else 4
end,
VISIBLE = 1;

Conclusion

Frankly speaking, when I saw the requirements for the calculation of average values, I was dazed and confused

An In-Depth Look at GROUP BY and HAVING in SQL


1. How the GROUP BY clause works in SQL

Taken literally, the GROUP BY statement means "group according to (by) a certain rule".

Purpose: divide a data set into several small regions according to some rule, and then process the data within each small region.

Note: GROUP BY sorts first and then groups!

An example: requirements that call for GROUP BY usually contain the word "each" (or "per"). Say the requirement is: query how many people each department has. That needs grouping:

select DepartmentID as '部门名称',COUNT(*) as '个数'
from BasicDepartment
group by DepartmentID

This uses GROUP BY plus a column to group the data: we group the data set by the department ID column, DepartmentID (the Chinese aliases mean "department name" and "count"), and then compute each group's statistics.

2. GROUP BY and HAVING explained

  Prerequisite: you must understand a special kind of SQL function: aggregate functions.

  For example SUM, COUNT, MAX, AVG. What fundamentally distinguishes them from ordinary functions is that they generally operate on many rows at once.

  The WHERE keyword cannot test aggregate results, so HAVING was added to check whether the grouped results meet a condition.

  HAVING is called the group filter condition, i.e. a condition applied to groups, so it must be used together with GROUP BY.

  Note: when a statement contains a WHERE clause, a GROUP BY clause, a HAVING clause and aggregate functions at the same time, the execution order is as follows:

  1. the WHERE clause selects the rows that meet its conditions;

  2. GROUP BY groups those rows;

  3. the aggregate functions are evaluated for each group formed by GROUP BY;

  4. finally, HAVING discards the groups that do not meet its condition.

  Every element in the HAVING clause must also appear in the SELECT list. Some databases, such as Oracle, are exceptions.

  Both HAVING and WHERE can be used to restrict the result so that it meets certain conditions.

  HAVING restricts groups, not rows. An aggregate result can be used as a condition in HAVING, but aggregate functions cannot appear in WHERE. The sketch below ties the four steps together.
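A minimal sketch of all four steps in one statement, assuming BasicDepartment also has a (hypothetical) Salary column:

select DepartmentID, COUNT(*) as 'HeadCount'  -- 3. aggregate per group
  from BasicDepartment
  where Salary > 3000                         -- 1. filter rows first
  group by DepartmentID                       -- 2. then group the rows
  having COUNT(*) >= 5                        -- 4. finally filter the groups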


A Summary of Paging Query Methods in SQL Server


SQL Server 2008 does not support the LIMIT keyword, so its paging queries cannot be written the MySQL way. Fortunately, SQL Server 2008 provides TOP, ROW_NUMBER and other keywords, so paging can be built with them.

Below are several versions of such queries that I found online.

Several efficient SQL Server 2008 paging queries (start_position stands for the number of rows to skip before the requested page):

The top scheme:

sql code:

select top 10 * from table1
where id not in (select top start_position id from table1)

The max scheme:

sql code:

select top 10 * from table1
where id > (select max(id)
from (select top start_position id from table1 order by id) tt)

The row scheme:

sql code:

select *
from (
select row_number() over (order by tempcolumn) temprownumber, *
from (select top start_position+10 tempcolumn=0, * from table1) t
) tt
where temprownumber > start_position

These are three paging schemes: the max scheme, the top scheme and the row scheme.

Efficiency:

1st (fastest): row

2nd: max

3rd: top

Drawbacks:

max: the user must write complex SQL, and sorting on non-unique columns is not supported

top: the user must write complex SQL, and composite primary keys are not supported

row: not supported on SQL Server 2000

Test data:

3.2 million rows in total, 10 rows per page; pages 20,000, 150,000 and 320,000 were tested.

Page       top scheme   max scheme   row scheme
20,000     60 ms        46 ms        33 ms
150,000    453 ms       343 ms       310 ms
320,000    953 ms       720 ms       686 ms

What follows is a paging scheme in which the program concatenates the SQL statement,

so the SQL the user supplies needs no complex paging logic.

Suppose the user provides the following SQL:

sql code

select * from table1

To start at row 5 and fetch 5 rows, the processed SQL becomes:

sql code

select *
from (
select row_number() over (order by tempcolumn) temprownumber, *
from (select top 10 tempcolumn=0, * from table1) t
) tt
where temprownumber > 5

What does this mean? Let's break it down (this time starting at row 10 and fetching 10 rows).

First, the user's SQL is modified slightly:

after the select, add top start_position + row_count, and

add an extra column tempcolumn, so it becomes this:

sql code

select top 20 tempcolumn=0, * from clazz

Wrap that in another layer, which makes the row number available.

The column we just added is exactly what the order by here is for

(I'm not sure why SQL Server's row_number function insists on an order by):

sql code

select row_number() over (order by tempcolumn) temprownumber, *
from (the modified query) t

Wrap one more layer to filter out the rows whose row number is at or below the start position:

sql code

select * from (the second layer) tt
where temprownumber > 10
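Putting the row scheme together as one runnable sketch with variables (table1 and its columns are assumed from the examples above):

declare @start int = 10, @pageSize int = 10;

select *
from (
select row_number() over (order by tempcolumn) temprownumber, *
from (select top (@start + @pageSize) tempcolumn=0, * from table1) t
) tt
where temprownumber > @start;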


SQL Server Indexes: Principles and Usage in Detail

The concept of an index

What indexes are for: query and processing speed has become a standard by which application systems are judged, and using indexes to speed up data processing is the most widely used optimization technique.

What an index is: a database index is like the table of contents of a book. With a table of contents you can find the information you want quickly, without reading the whole book; likewise, the database engine can use an index to locate data in a table quickly, without scanning the entire table. A book's table of contents is a list of words and the pages on which they appear; a database index is a list of values in a table and the storage locations of those values.

Pros and cons: most of the cost of executing a query is I/O, so a main goal of using indexes is to avoid full table scans, which must read every data page of the table from disk; when an index points at the data values, the query needs only a few disk reads. Used sensibly, indexes therefore speed up queries considerably. But indexes do not always improve performance: an indexed table occupies more storage space, and commands that insert, update or delete data take longer to run because the indexes must be maintained. So use indexes judiciously, and update or drop suboptimal indexes in time.

The basic structure of a data table

When a new table is created, the system allocates a contiguous area on disk in 8 KB units. As field values are written from memory to disk, they are stored within this allocated space; whenever one 8 KB unit fills up, the database pointer automatically allocates another. Each 8 KB unit is called a data page (Page), numbered 0-7; page 0 of each file holds bootstrap information and is called the file header. Every 8 data pages (64 KB) form an extent, and all the data pages together form the heap.

SQL Server does not allow a row to span data pages, so a single row can hold at most 8 KB of data. That is why the char and varchar string types are limited to 8 KB. Data larger than 8 KB should use the text type; in fact a text value cannot be entered and saved directly in the row at all. The row stores only a pointer into an extent made up of 8 KB text data pages, and the actual data lives in those pages.

Pages come in two kinds: space-management pages and data pages.

When the 8 pages of an extent include both space-management pages and data or index pages, it is called a mixed extent; every table starts out in mixed extents. An extent holding only data and index information is called a uniform extent.

When a table is created, SQL Server allocates at least one data page for it in a mixed extent; as the data grows, it can allocate up to 7 more pages in mixed extents, and once the table exceeds 8 pages, data pages are allocated from uniform extents.

Space-management pages handle the allocation and management of data space. They include: PFS pages (Page Free Space), which record whether a page is allocated, whether it belongs to a mixed or uniform extent, and how much free space it still has; and GAM pages (Global Allocation Map) and SGAM pages (Secondary Global Allocation Map), which record the locations of free extents and of mixed extents that still contain free pages. SQL Server combines these three page types to allocate new space for tables when necessary.

Data and index pages store the actual data and index information. SQL Server uses four page types to manage tables and indexes: IAM pages, data pages, text/image pages and index pages.

In Windows, only the operating system knows the physical location on disk of every file operation we perform. SQL Server inherits this way of working: during inserts, not only is the position of each field value within a data page random, but the position of each data page within the heap is likewise known only to the system.

Why is that? As is well known, the OS can manage the disk because it loads the file allocation table (FAT) at startup; the FAT manages the file system and records every file operation, which is what keeps the system running. Likewise SQL Server, being a system-level manager itself, has a FAT-like table of its own: the Index Allocation Map (IAM) page.

The IAM is what makes it possible for SQL Server to manage tables physically.

IAM pages are allocated from mixed extents; an IAM page records the locations of the 8 initial pages and of the extent. Each IAM page can manage 512,000 data pages; if the data volume is very large, SQL Server can add more IAM pages, which may sit anywhere in the file. The first IAM page is called the FirstIAM, and it records the locations of the subsequent IAM pages.

Data pages and text/image pages are complementary: the former hold non-text/image data, which never exceeds 8 KB per row; the latter hold only text or image data larger than 8 KB. Index pages, as the name suggests, hold the data describing the index structure. Understanding pages helps us understand precisely how SQL Server maintains indexes, e.g. page splits and fill factors.
Page splits

Half of the data stays on the old page and the other half is moved to a new page, which may be allocated anywhere. Frequent page splits therefore have serious consequences: the physical table accumulates heavy fragmentation, I/O efficiency drops sharply, and in the end the only option left may be to stop SQL Server and rebuild the index!

Fill factor

A property of an index that defines how much free space to leave on each index page. The fill factor (FILLFACTOR) accommodates future growth of the table data and reduces the chance of page splits. It is a percentage from 0 to 100; 100 means the data pages are filled completely and is appropriate only when the data will never change (e.g. a read-only table). Smaller values leave more free space on each data page, which reduces the need for page splits as the index grows, at the cost of more disk space. A badly chosen fill factor lowers the database's read performance, in inverse proportion to its value.

Index classification

SQL Server has several types of index.

By storage structure: "clustered indexes" (also called clustering or cluster indexes) and "nonclustered indexes".

By uniqueness: "unique indexes" and "non-unique indexes".

By number of key columns: "single-column indexes" and "multi-column indexes".

Clustered index

A clustered index physically reorganizes the data on disk so that it is sorted by one or more specified columns. A Chinese dictionary is a clustered index: to look up "zhang" you naturally turn to the last hundred or so pages and follow the phonetic order. The lookup works like a balanced-tree search: open the book at roughly the halfway point; if the target page number is smaller, turn to the one-quarter point, otherwise to the three-quarter point, and so on, splitting the pages into ever smaller parts until you reach the right page.

Because a clustered index sorts the data itself and there can be only one physical order, a table can have only one clustered index. Building one is generally reckoned to require additional space of at least 120% of the table size, to hold a copy of the table and the intermediate index pages, but its performance is almost always better than that of other indexes.

Since under a clustered index the data is physically stored in order on the data pages, with duplicate values next to each other, queries that involve range checks (BETWEEN, <, <=, >, >=) or GROUP BY or ORDER BY benefit greatly: once the row with the first key value is found, all the following rows are guaranteed to be contiguous, so no further searching or wide scanning is needed, which improves query speed substantially.

Nonclustered index

By default SQL Server creates nonclustered indexes. A nonclustered index does not reorganize the table's data; instead, for every row it stores the index column values plus a pointer to the page where the row resides. It is like looking up a character in a Chinese dictionary by radical: the data itself is not sorted, but the index provides a real directory, so data retrieval improves markedly without a full table scan.

A table can have several nonclustered indexes, each providing a different ordering through different index columns.

Creating an index

Syntax

CREATE [UNIQUE] [CLUSTERED | NONCLUSTERED]
INDEX index_name ON { table | view } ( column [ ASC | DESC ] [ ,...n ] )
[ WITH [ PAD_INDEX ] [ [,] FILLFACTOR = fillfactor ]
[ [,] IGNORE_DUP_KEY ]
[ [,] DROP_EXISTING ]
[ [,] STATISTICS_NORECOMPUTE ]
[ [,] SORT_IN_TEMPDB ]
]
[ ON filegroup ]

The parameters of the CREATE INDEX command are as follows:

UNIQUE: creates a unique index on the table or view, i.e. no two rows may have the same index value.

CLUSTERED: the index to create is a clustered index.

NONCLUSTERED: the index to create is a nonclustered index.

index_name: the name of the index being created.

table: the name of the table on which to create the index.

view: the name of the view on which to create the index.

ASC | DESC: the ascending or descending sort direction of a particular index column.

column: the column(s) to be indexed.

PAD_INDEX: specifies the space to keep open on each page (node) of the index's intermediate levels.

FILLFACTOR = fillfactor: the percentage of each index page filled with data when the index is created; fillfactor ranges from 1 to 100.

IGNORE_DUP_KEY: controls how SQL Server reacts when a duplicate value is inserted into a column covered by a unique clustered index.

DROP_EXISTING: the named pre-existing clustered or nonclustered index is dropped and rebuilt.

STATISTICS_NORECOMPUTE: out-of-date index statistics are not recomputed automatically.

SORT_IN_TEMPDB: the intermediate sort results used to build the index are stored in the tempdb database.

ON filegroup: the filegroup on which to place the index.

Examples:

-- create a nonclustered index named idx_mobiel on table bigdata, on column mobiel
create index idx_mobiel
on bigdata(mobiel)
-- create a unique clustered index named idx_id on table bigdata, on column id;
-- ignore duplicate values during batch inserts, do not recompute statistics,
-- and use a fill factor of 40
create unique clustered index idx_id
on bigdata(id)
with pad_index,
fillfactor=40,
ignore_dup_key,
statistics_norecompute
Managing indexes

Exec sp_helpindex BigData --view the index definitions
Exec sp_rename 'BigData.idx_mobiel','idx_big_mobiel' --rename index 'idx_mobiel' to 'idx_big_mobiel'
drop index BigData.idx_big_mobiel --drop index idx_big_mobiel from table BigData
dbcc showcontig(bigdata,idx_mobiel) --check fragmentation of index idx_mobiel on table bigdata
dbcc indexdefrag(Test,bigdata,idx_mobiel) --defragment index idx_mobiel on table bigdata in database Test
update statistics bigdata --update the statistics of all indexes on table bigdata
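
On SQL Server 2005 and later, DBCC SHOWCONTIG and DBCC INDEXDEFRAG are deprecated; a hedged sketch of the replacements, reusing the Test database and BigData table from above:

SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID('Test'), OBJECT_ID('Test.dbo.BigData'), NULL, NULL, 'LIMITED'); --fragmentation per index

ALTER INDEX idx_mobiel ON BigData REORGANIZE; --online defragmentation, like indexdefrag
ALTER INDEX idx_mobiel ON BigData REBUILD; --full rebuild of the index
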
Index design principles

Whether a table needs indexes at all, and which indexes to build, depends on the WHERE clauses and JOIN expressions used against it.

General principles for building indexes include the following:

The system normally creates a clustered index automatically on the primary key column.
For columns with many duplicate values that frequently appear in range queries, sorting or grouping, or columns that are accessed very frequently, consider a clustered index.
On a table with heavy insert activity, use a fill factor to reduce page splits, improve concurrency and reduce deadlocks. If the table is read-only, the fill factor can be set to 100.
When choosing index keys, prefer columns with small data types so that each index page can hold as many keys and pointers as possible, minimizing the number of index pages a query has to traverse; in addition, prefer integers as key values, because integers are the fastest to access.



Paging Data Queries with OFFSET/FETCH NEXT in SQL Server 2012


Before SQL Server 2012, paging was mostly implemented with ROW_NUMBER(). From SQL Server 2012 on, paging queries can be written with OFFSET ... ROWS FETCH NEXT ... ROWS ONLY.

select [column1]
,[column2]
...
,[columnN]
from [tableName]
order by [columnM]
offset (pageIndex-1)*pageSize rows
fetch next pageSize rows only

In the code above, column1, column2 ... columnN are the columns to return, tableName is the table name, columnM is the column to sort by, pageIndex is the page number and pageSize is the number of rows per page. In practice, (pageIndex-1)*pageSize is usually computed first and the resulting number is placed directly in the SQL; alternatively, the whole thing can be parameterized, as in the sketch below.
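A parameterized sketch (OFFSET and FETCH accept expressions, so the variables can be used directly):

declare @pageIndex int = 3, @pageSize int = 2;

select [Id]
,[Name]
,[StudentId]
,[MajorId]
from T_Student
order by [Id]
offset (@pageIndex - 1) * @pageSize rows
fetch next @pageSize rows only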

For example, the database has a T_Student table with the following data:

[Screenshot: sample rows of the T_Student table]

Suppose we want page 3 (since there is little data, assume 2 rows per page, i.e. pageSize = 2). The SQL is:

select [Id]
,[Name]
,[StudentId]
,[MajorId]
from T_Student
order by [Id]
offset 4 rows
fetch next 2 rows only

The result is:

[Screenshot: the two rows returned for page 3]

Note: OFFSET/FETCH NEXT requires an explicit sort order, i.e. there must be an order by *** clause.


A Fast Way to Bulk-Import Large CSV Files into SQL Server (Recommended)


Preface

This came from answering a forum question: import CSV data and add a new column date datetime to the imported rows, completing the import of 2 million rows within 10 seconds. Here is the reasoning and the solution.

Analysis

Generally speaking, BULK INSERT is a little faster than BCP, so we chose BULK INSERT. The proposed solution: first load the data into a staging table in SQL Server, then insert it into the target table. The statements are as follows:

bulk insert test07232 from 'D:\2017-7-22.csv' WITH ( FIELDTERMINATOR = ',', ROWTERMINATOR = '\n' )
SELECT *, GETDATE() AS Date INTO ttt FROM test07232

But importing the CSV he supplied raised the following errors:

Msg 4866, Level 16, State 1, Line 1: The bulk load failed. The column is too long in the data file for row 1, column 2. Verify that the field terminator and row terminator are specified correctly. Msg 7399, Level 16, State 1, Line 1: The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.

Msg 7330, Level 16, State 2, Line 1: Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".

The cause is a row terminator that cannot be recognized. Open the CSV in Notepad++ and enable View → Show Symbol → Show End of Line.

You can see that the file's line ending is LF,

whereas a normal CSV uses CRLF as the line ending by default,

so the BULK INSERT statement above cannot run as written.

Solution

1. The first idea was to fix the data at its source so it produces normal line endings, but the source was hard to change. 2. Process the file with a C# program: too time-consuming. 3. Finally the right approach turned up:

bulk insert test07232 from 'D:\2017-7-22.csv' WITH ( FIELDTERMINATOR = ',', ROWTERMINATOR = '0x0a' )
SELECT * ,GETDATE() AS Date INTO ttt FROM test07232

With everything on SSD, the import took 2 s and populating the final production table took 1 s; the whole process finished in 3 s.
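A hedged alternative sketch that avoids the second copy entirely (the table name and the columns c1 and c2 are hypothetical): give the target table the Date column with a default, and bulk insert through a view that exposes only the columns present in the file, so the default fills the new column during the load.

CREATE TABLE test_csv (
c1 VARCHAR(100),
c2 VARCHAR(100),
[Date] DATETIME NOT NULL DEFAULT (GETDATE()) --filled automatically during the load
);
GO
CREATE VIEW v_test_csv_load AS
SELECT c1, c2 FROM test_csv; --only the columns that exist in the CSV
GO
BULK INSERT v_test_csv_load FROM 'D:\2017-7-22.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '0x0a');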

Summary

Tackle a problem from every angle; only after finding the cause can you really solve it.


SQL: Querying Rows Whose Column Value Is Contained in a String


Preface

When it comes to fuzzy matching in SQL, the first thing that comes to mind is the LIKE keyword.

When we need rows that contain a particular keyword, we usually query with a '%keyword%' pattern. For example:

SELECT ... FROM [table] WHERE [column] LIKE '%keyword%'

That is the classic "contains XXX" pattern. But what about the opposite case, where we need the rows whose column value is contained in a given string?

Say I have a contacts table ConnectName with a name column recording each person's name, and I want the contact details of the people named 小兰 and 灰原. Normally, the first thing we would think of is:

SELECT * FROM ConnectName
WHERE
name = '小兰'
OR name = '灰原'

That does the job. But if I then suddenly want to query one more person, say 柯南, I have to change the structure of the SQL and add another WHERE condition:

SELECT * FROM ConnectName
WHERE
name = '小兰'
OR name = '灰原'
OR name = '柯南'
As we know, OR conditions are comparatively inefficient, and a statement whose structure keeps changing is slightly awkward to implement in MyBatis (it can be done, of course, by iterating over the keywords).

Can it be simpler? Can I put all the keywords together and get by with a single WHERE condition?

Enter CHARINDEX

This is where the CHARINDEX function comes in. CHARINDEX returns the position at which one string appears inside another, much like String's indexOf. Without further ado, an example:

CHARINDEX('李白', '曹操很帅') = 0

In the example above, the string '曹操很帅' does not contain the keyword '李白', so nothing is found and 0 is returned.

CHARINDEX('李白', '李白很帅') = 1

In the same way, because this string does contain '李白', the index of the first character of the match is returned, i.e. 1.

Knowing how it works, we can use CHARINDEX to streamline our SQL:

SELECT * FROM ConnectName
WHERE
CHARINDEX(name, '小兰灰原柯南') > 0

If the name in the name column appears in '小兰灰原柯南', CHARINDEX returns a value greater than 0, and we get exactly the rows we want (and the three of them can happily play together ^-^). A delimiter-safe variant is sketched below.
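One caveat: plain concatenation can match across keyword boundaries (e.g. a name equal to '兰灰' would also be found). A hedged sketch that avoids this by wrapping both sides with a delimiter:

SELECT * FROM ConnectName
WHERE
CHARINDEX(',' + name + ',', ',小兰,灰原,柯南,') > 0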

The corresponding MyBatis mapping is also quite concise (#{keywords} stands for the passed-in parameter):

SELECT * FROM ConnectName
WHERE
<![CDATA[ CHARINDEX(name, #{keywords}) > 0 ]]>

If later we want to add a new person, say 毛利小五郎, we only need to append it to the parameter, passing '小兰灰原柯南毛利小五郎'. Much simpler, isn't it?


SQL Server 2012: Configuring Login to the Server by IP Address (Illustrated Tutorial)


Recently, while building a system with the NFineBase framework + C#, I ran into a few problems connecting to the database with SQL Server 2012.

1.

When connecting to a local or remote database using an IP address as the server name and SQL Server authentication (login name and password), the login always failed,

with errors such as:

Login failed for user 'sa'. The user is not associated with a trusted SQL Server connection.

Yet logging in as computer-name\instance-name worked fine.


It turned out to be a database configuration problem. The solution is as follows.

I am on Windows 10 and the database is SQL Server 2012.

Move the mouse to the Win key, left-click, and find the installed SQL Server 2012, as shown below:

[Screenshot: SQL Server 2012 entry in the Start menu]

One catch here: you cannot find the SQL Server Configuration Tools folder directly in this list. On Windows 7 there is a Configuration Tools folder containing SQL Server Configuration Manager; on Windows 10 you can reach it through the Start menu folder instead: in the opened SQL Server 2012 folder, select SQL Server Data Tools and right-click, as shown below:

[Screenshot: right-clicking SQL Server Data Tools]

Choose Open file location, as shown below:

[Screenshot: the Start menu folder for Microsoft SQL Server 2012]

Or open C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Microsoft SQL Server 2012 directly; adapt to your own machine's layout.

Then open Configuration Tools → SQL Server Configuration Manager, as shown below:

[Screenshot: SQL Server Configuration Manager]

Under Protocols for SQLEXPRESS, open TCP/IP and click Enable (mine is already enabled), as shown below:

[Screenshot: enabling the TCP/IP protocol]

Then open the IP Addresses tab:

[Screenshot: the IP Addresses tab of the TCP/IP properties]

Enter your machine's IP address under IP3 (my machine's IP address is 192.168.17.199), leave TCP Dynamic Ports empty, set TCP Port to 1433, and set the Active and Enabled options to Yes.

Then click Apply and OK, and restart the SQLEXPRESS service. After that you can log in to the database with the IP address.

2.

When logging in with the IP address as the server name, errors such as "Login failed for user 'sa'. The user is not associated with a trusted SQL Server connection" may still appear; that is, logging in by IP with a login name and password fails.

Solution:

First log in with Windows authentication, then configure the following.

Select your connection instance, right-click, and click Properties, as shown below:

[Screenshots: the server Properties dialog]

Then, under the connection instance, open Security → sa → Status, as shown below:

[Screenshot: the sa login's Status page]

It turns out that for each connection, for example:

[Screenshot: a connection instance in Object Explorer]

this is one connection, and each connection needs all the permissions above configured before you can log in, locally or remotely, with the IP address as the server name and a login name and password.

If you switch to another connection or user, it must be configured again; when logging in as some custom user, remember to set that user's connection configuration. The same changes can also be scripted, as sketched below.
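A hedged T-SQL sketch of the equivalent changes (run while connected with Windows authentication; the password is a placeholder):

ALTER LOGIN sa ENABLE; --enable the sa login
ALTER LOGIN sa WITH PASSWORD = 'YourStrongPassword!'; --hypothetical password

--switch the server to mixed authentication (2 = SQL Server and Windows authentication):
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
N'Software\Microsoft\MSSQLServer\MSSQLServer',
N'LoginMode', REG_DWORD, 2;
--restart the SQL Server service for the mode change to take effect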


SQL Server: Querying All Users Who Bought Products in a Given Time Period


The goods table is as follows:

name   time                  product
A      2016-1-2 13:23:00     WFEY
B      2016-2-17 11:43:34    ASG
A      2017-1-10 15:23:00    SGH
C      2015-4-5 13:47:20     HRT
C      2016-7-12 19:56:03    XCC
A      2017-3-4 14:00:00     ESFW
SELECT DISTINCT OO.name
FROM (SELECT name, CONVERT(varchar(5), [time], 108) AS ti FROM goods) AS OO
WHERE OO.ti BETWEEN '12:00' AND '14:00';

CONVERT with style 108 renders the time as hh:mm:ss; taking varchar(5) keeps just the hh:mm part for the comparison (note that DATE_FORMAT is a MySQL function and is not available in SQL Server).
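An equivalent sketch that skips the string formatting by casting to the time type (available since SQL Server 2008):

SELECT DISTINCT name
FROM goods
WHERE CAST([time] AS time) BETWEEN '12:00' AND '14:00';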