Index Types

The following types of indexes are available in SQL Server 2012:

  • Clustered
  • Non-clustered (including covering and filtered indexes)
  • XML (primary and secondary)
  • Spatial
  • Full-text
  • Columnstore

It may be obvious once you know about the different types of indexes, but the 70-462 guide is the first place I’ve seen a list of which indexes are best suited to which situations, so I thought I’d list them here.

Clustered indexes are best when querying…

  • a large percentage of columns in the table
  • a single row based on the clustered index key
  • range-based data

Non-clustered indexes are best when querying…

  • only a few rows in a large table
  • data that is covered by the index (see the sketch below)
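For example, here’s a rough sketch of a covering index (the table and column names are made up) that uses INCLUDE so the query can be answered from the index alone:

-- Covers queries that filter on CustomerId and select only
-- OrderDate and TotalDue, avoiding lookups into the base table
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, TotalDue);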

Filtered indexes are best…

  • when combined with sparse columns (columns with lots of NULL values) to find specific non-NULL rows (see the sketch after this list)
  • with queries where a small subset of rows are queried often
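As a rough illustration, a minimal filtered index sketch (hypothetical table and column names) that targets only the non-NULL rows of a sparse column:

-- Only non-NULL rows are included, keeping the index small
CREATE NONCLUSTERED INDEX IX_Customer_MiddleName
ON dbo.Customer (MiddleName)
WHERE MiddleName IS NOT NULL;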

More information on indexes can be found in Chapter 10, Lesson 1 of the exam guide.


Columnstore Indexes – The Very Basics

Columnstore indexes are only briefly mentioned in Chapter 10, Lesson 1 of the 70-462 exam guide so I don’t think there will be many questions on them in the exam.

They are a new feature in SQL Server 2012 and can provide a performance improvement for data-warehouse-type queries that have to perform aggregations over large data sets, e.g. a query that groups on a few columns along with an AVG() and SUM() and an ORDER BY.
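For example, this is the sort of aggregation I mean (dbo.FactSales and its columns are hypothetical):

-- Groups on one column, aggregates two others and orders the result:
-- the pattern that columnstore indexes are designed to speed up
SELECT ProductKey,
       AVG(UnitPrice)   AS AvgUnitPrice,
       SUM(SalesAmount) AS TotalSales
FROM dbo.FactSales
GROUP BY ProductKey
ORDER BY TotalSales DESC;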

The columnstore index organises data by columns rather than rows, which allows SQL Server to compress repeating data values. According to the exam guide, this achieves approximately double the compression rate of PAGE compression.

Obviously, like everything else in SQL Server you need to test what works best for your particular scenario.

You can check the query plan to ensure the columnstore index is being used. There are two execution modes it can run under: batch and row. Batch mode should be quick, but row mode means the query is not getting the full benefit of the columnstore index.
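One way to compare is to run your query as normal and then again with the IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX query hint, which tells the optimiser not to use the columnstore index (this reuses the hypothetical dbo.FactSales table from the sketch above):

-- Same aggregation with the columnstore index ignored, so the two
-- query plans (and durations) can be compared side by side
SELECT ProductKey, SUM(SalesAmount) AS TotalSales
FROM dbo.FactSales
GROUP BY ProductKey
OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX);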

The columnstore index is split into segments of around 1 million rows and the optimiser can use segment elimination in a similar way to how it uses partition elimination on a partitioned table. Each segment stores the min and max values of its column data in its metadata, so the optimiser can quickly work out which segments it needs to read.

If your query uses only a small number of columns from a table then SQL Server reads just that column data from disk rather than entire rows. This can significantly reduce disk I/O and memory cache usage.

However, if your query uses several columns it might be better to just use row-based indexes, as SQL Server has to go and get the row data anyway. This is why columnstore indexes are best suited to data warehouse queries.

The syntax to create the index is as follows:

CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_CS_Test]
ON dbo.table (col1, col2, ...);

However, there are limitations on columnstore indexes: several data types cannot be used, and once the index is created the table becomes read-only.

This second limitation is probably going to be a problem on most data warehouse systems as I expect you’d want to load new data in on a regular basis. There are a few ways around this:

  • disable the index (making the table read/write), load in the new data and rebuild the index to re-enable it – this is probably a good option for data warehouses where data is loaded from OLTP systems at regular intervals, but you wouldn’t do this if the table is updated frequently from, say, a client app (a sketch of the disable/rebuild steps follows this list)
  • use partition switching to add the new data into the read-only table
  • use UNION ALL in your query to include an active data set in with the columnstore data and then add the active data into the columnstore later on
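A minimal sketch of the disable/reload/rebuild approach, reusing the index and table names from the CREATE statement above:

-- Disable the columnstore index; the table becomes writable again
ALTER INDEX [IX_CS_Test] ON dbo.table DISABLE;

-- ... load the new data here ...

-- Rebuild the index to re-enable it; the table is read-only again afterwards
ALTER INDEX [IX_CS_Test] ON dbo.table REBUILD;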

For more information…

http://msdn.microsoft.com/en-us/library/gg492088.aspx

http://blog.sqlauthority.com/2011/10/29/sql-server-fundamentals-of-columnstore-index/

SQL Server Audit

Chapter 6, Lesson 3 of “Exam 70-462: Administering Microsoft SQL Server 2012 Databases” is all about SQL Server Audit which is something I had never heard of before.  This falls under the “Optimise and Troubleshoot” objective. I’ve gone through the lesson in the book and had a play around and this is what I’ve found…

SQL Server Audit allows you to set up auditing at the instance or database level (only 2012 Enterprise Edition allows database level auditing) to track activity, including who is connecting to the instance, who is changing objects, who is updating permissions etc.  The audit logs are written to either a flat file, or the Windows Security or Application logs.  Access to the Windows Security log is restricted, whereas the Windows Application log can be accessed by anyone who can authenticate against the server, so Microsoft recommends that you send the audit data to a different machine from the one you are auditing and only give certain people access to the logs.  That way anyone making unwanted changes to the instance cannot tamper with the audit logs to hide what they are doing.  The destination server should be configured so that only Audit Administrators and Audit Readers have access.

Setting up a SQL Server Audit

To set up SQL Server Audit we firstly need to create a new Server Audit.  In SSMS expand the Security node, right-click on Audits and select New Audit…  The Create Audit window will be displayed.

[Screenshot: Create Audit dialog]

Enter a name and the queue delay, which is the delay in milliseconds before audit actions are processed.

Choose the On Audit Log Failure option, which tells the engine what to do if auditing fails: either continue, shut down the engine, or just fail with no event written.

Finally choose the destination which can either be a file or the Application or Security log.  We’re going to write to the Security log and this could also have been set up using the following T-SQL statement against the master DB:

CREATE SERVER AUDIT [SimonTest]
TO SECURITY_LOG
WITH (QUEUE_DELAY = 1000, ON_FAILURE = CONTINUE);

The audit is disabled by default so you’ll need to right-click on it and enable it.
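Alternatively you can enable it with T-SQL:

-- Enable the audit created above
ALTER SERVER AUDIT [SimonTest] WITH (STATE = ON);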

In order to write to the Windows Security log the SQL Server service account must have the Generate Security Audits policy.  I’m running everything locally on Windows 7 Ultimate and this can be found under Control Panel -> System and Security -> Administrative Tools -> Local Security Policy -> Security Settings -> Local Policies -> User Rights Assignment -> Generate security audits.

[Screenshot: Generate security audits policy]

You’ll also need to enable auditing of both Success and Failure attempts for object access, which can be found in the Audit Policy node of the Local Policies.

[Screenshot: Audit object access setting]

We’ll also create a second audit that writes to a file, so we can pick and choose where we send the audit log data.

[Screenshot: Create File Audit dialog]
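The T-SQL equivalent is something like the following (the file path is an example; use whatever folder suits):

-- Create a second audit that writes to .sqlaudit files on disk
CREATE SERVER AUDIT [SimonFileTest]
TO FILE (FILEPATH = 'C:\AuditLogs\', MAXSIZE = 100 MB)
WITH (QUEUE_DELAY = 1000, ON_FAILURE = CONTINUE);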

Now we can set up both Server Audit Specifications and Database Audit Specifications depending on what level and what actions you want to log.  For a list of all the available actions see http://msdn.microsoft.com/en-us/library/cc280663.aspx

Setting up a Server Audit Specification

In this example we’re going to add the DATABASE_CHANGE_GROUP action group to both of the audits we set up, so that the audit log info is written to both the Security log and the file.

In SSMS, under the Security node right click on Server Audit Specifications and select New Server Audit Specification…

[Screenshot: Audit Spec Security Log]

Enter a name, select the appropriate audit (firstly we’ll set up the one using the Security log) and then select DATABASE_CHANGE_GROUP as the Audit Action Type.  This will audit any changes to any of the databases on that instance.

Create a second Server Audit Specification that uses the audit that writes to the file and again select DATABASE_CHANGE_GROUP as the Audit Action Type.

The Server Audit Specifications will be disabled by default so you will need to enable these in SSMS.

We could also have used the following T-SQL to create the first of the Server Audit Specifications:

CREATE SERVER AUDIT SPECIFICATION [DB Change Security Log]
FOR SERVER AUDIT [SimonTest]
ADD (DATABASE_CHANGE_GROUP);
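And, as with the audit itself, the specification can be enabled with T-SQL instead of SSMS:

-- Enable the server audit specification created above
ALTER SERVER AUDIT SPECIFICATION [DB Change Security Log] WITH (STATE = ON);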

Now if we make a change to one of the databases on the instance, e.g. change the recovery model of one of them, we should see an entry written to both the Windows Security log and the file.

The entry in the Windows Security Log will look as follows.  In this example I’ve changed the recovery model of the database [Simon] to be bulk-logged.

[Screenshot: Security Log entry]
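For reference, the change I made through SSMS is equivalent to running something like:

-- Switch the recovery model of the [Simon] database to bulk-logged
ALTER DATABASE [Simon] SET RECOVERY BULK_LOGGED;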

We can also view the log by right-clicking on the Audit in SSMS and selecting View Audit Logs.  This opens the SSMS log viewer and pulls the SQL Audit events out of the Security log.

NB: in the Select logs bar on the left you’ll see that you can view the SQL Audit log for both the Windows Application and Security logs.  If you expand the Audit Collection node you will also see logs for any file audits that have been set up.

[Screenshot: Log Viewer]

If we browse to the folder where we set up our file audit we’ll see a .sqlaudit file with the name of the audit followed by a GUID as the file name, e.g.

SimonFileTest_C7051B5C-BED2-4B05-9975-2F8EA0D002DB_0_130160222675740000.sqlaudit

It’s not easy to read this in a text editor such as Notepad.exe so it’s best to view it through the SSMS Log File Viewer.  With the file log each piece of information is written to a column in the Log Viewer so it is easy to filter the logs to check a specific database or user.  It’s also easy to view all the individual statements that have been run.
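You can also query the file target directly with T-SQL using fn_get_audit_file (a sketch, assuming the audit files were written to C:\AuditLogs as in the earlier example):

-- Read the audit events straight out of the .sqlaudit files
SELECT event_time, server_principal_name, database_name, statement
FROM sys.fn_get_audit_file('C:\AuditLogs\SimonFileTest_*.sqlaudit', DEFAULT, DEFAULT);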

Setting up a Database Audit Specification

Now we’re going to set up a Database Audit Specification to log all select statements made against a certain table.  In SSMS expand the appropriate database, then right-click on Database Audit Specifications under the Security node and select New Database Audit Specification…

[Screenshot: Database Audit Specification dialog]

In this example we’re using our existing file audit and we’re going to log all SELECT statements run against the object dbo.TestSQLAudit by any user running under dbo.

Now if we enable the new Database Audit Specification and run a select statement against dbo.TestSQLAudit we get a new entry in the SSMS Log Viewer that records the full statement that was executed.

We can also create Database Audit Specifications using the following T-SQL statement:

CREATE DATABASE AUDIT SPECIFICATION [SELECT Logger]
FOR SERVER AUDIT [SimonFileTest]
ADD (SELECT ON OBJECT::[dbo].[TestSQLAudit] BY [dbo]);

To enable the specification using T-SQL run:

ALTER DATABASE AUDIT SPECIFICATION [SELECT Logger] WITH (STATE = ON);

Alternatively we could have added the WITH (STATE = ON) clause to the end of the CREATE DATABASE AUDIT SPECIFICATION statement.

There are a bunch of DMVs and DMFs that can be used to view the audit configuration on a server.  Here’s the list, pinched straight from the exam guide:

  • sys.dm_audit_actions – stores information on each audit action and each audit action group that you can use with SQL Server Audit
  • sys.server_audits – enables you to see information about each SQL Server audit configured on an instance
  • sys.dm_server_audit_status – enables you to see information about currently configured audits
  • sys.server_audit_specifications – enables you to see information about server audits
  • sys.server_audit_specifications_details – enables you to see information about actions that are audited at the server level
  • sys.database_audit_specifications – enables you to see information about currently configured database audit specifications
  • sys.database_audit_specifications_details – enables you to see information about actions that are audited at the database level
  • fn_get_audit_file – enables you to query a file-based target for audit information
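For example, here’s a quick sketch that joins a couple of these views to show each configured audit and its current status:

-- Each configured audit alongside its runtime status and file path (if any)
SELECT a.name, a.type_desc, s.status_desc, s.audit_file_path
FROM sys.server_audits AS a
JOIN sys.dm_server_audit_status AS s
    ON s.audit_id = a.audit_id;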

Setting up Login Auditing

There is a third type of auditing that the exam guide briefly mentions and this is Login Auditing, which is configured on the Security page of the Server Properties dialog box.  The options are pretty self-explanatory.  With Login Auditing you don’t use SQL Audit to specify where it logs to – it always writes to the SQL error log.

And finally…

The lesson in the exam guide finally talks about C2 Audit Mode (which can audit successful and failed calls as defined by the C2 security standard), Common Criteria Compliance (which supersedes C2 Audit Mode) and Policy-Based Management (which allows you to manage more than one instance across multiple servers).  For more on these subjects see the following links:

C2 Audit Mode: http://msdn.microsoft.com/en-us/library/ms187634.aspx

Common Criteria Compliance: http://msdn.microsoft.com/en-us/library/bb326650.aspx

Policy-Based Management: http://msdn.microsoft.com/en-us/library/bb510667.aspx

If you’re studying for the 70-462 exam, the guide mentions an exam tip in the Policy-Based Management section that says there is an existing policy called Database Auto Shrink that can be used instead of creating your own policy.  I guess you just need to know that this policy already exists.

Discarding changesets in TFS

We’ve recently updated our branching strategy at work and now have a Main branch and up to six branches which will each usually only have one work item on them.  Once a work item is released to live the branches are forward merged from Main.  More recently I upgraded the database solution in Main and all the branches to SSDT for VS2012.  Each branch was manually upgraded so the changeset that contained the upgrade on Main didn’t need to be forward merged into the branches.  You can use the following command to discard any unwanted changesets so they don’t appear in the list of changesets in the merge wizard in TFS:

tf merge "$/sourcePath/Main/Data" "$/targetPath/Branches/BranchA/Data" /discard /recursive /version:C20144~C20666

tf.exe can be found in C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE.  If you have a batch of changesets that you want to discard, view them in the merge wizard in TFS and get the start and end changeset numbers.  These are the two values you use in the /version flag, each prefixed with a “C”.  If there is only one changeset you still need to specify it as a range, e.g. /version:C20144~C20144.  This command seems to check out all the files that you would be merging if you weren’t discarding the changesets, and you’ll need to check them in.

Running TFS Build Definitions for SSDT projects

After successfully upgrading our database solutions to SSDT for Visual Studio 2012 we’ve now managed to update our build definitions to deploy the new SSDT projects.  These build definitions deploy both the DB and .Net code and run a batch of unit tests.

Before the upgrade to SSDT the build definitions were using VSDBCMD.exe to deploy the database projects.  In the “Items to Build” section of the “Process” tab of the build definition we specified all of the database projects that we wanted to deploy.

After the upgrade to SSDT we wanted to use MSBuild to handle the build and publishing of the database projects.  Rather than specifying the individual projects to build we just told it to build the solution file.  We did try just building one project but we got an error saying something along the lines of “OutputPath value is not set”, and it told us to include the solution file in the build.

First of all we set up the build definition using the default XAML file and this has a bit in it that loops through the solution file and processes all the project files within it.

I was able to deploy the entire solution by just passing in the following to the “MSBuild Arguments” parameter in the “3. Advanced” part of the build definition:

/t:Build /t:Publish /p:SqlPublishProfilePath="MM443 (Simon).publish.xml"

The important bit here is the SqlPublishProfilePath parameter.  “MM443 (Simon)” was the name of the configuration that I use to deploy the solution to my local SQL Server instance.  I simply created a “MM443 (Simon).publish.xml” file for every database in my solution all pointing at my local SQL Server. The build definition then just picks out this publish.xml file in each of the projects it processes and all the projects are deployed.

Once we’d got the database solution deploying to my local instance we updated the build definition to use our own XAML file that also deploys the .Net code and runs the unit tests.  We just had to update the bit that deployed the databases to use MSBuild with the same parameters and it all worked fine.

One last thing we wanted was the ability to just generate the SQL scripts rather than publishing the databases.  We do this in the build definition that runs against our “as live” environment.  The generated scripts are then run manually by our DBA against the “as live” environment for UAT and then against live for the release.  In order to just generate the publish scripts without updating the database, add the /p:UpdateDatabase=False parameter to the MSBuild arguments.  So now the full value is:

/t:Build /t:Publish /p:SqlPublishProfilePath="MM443 (Simon).publish.xml" /p:UpdateDatabase=False

To automatically publish to the databases (which is the default) either remove the UpdateDatabase parameter or set the value to True.