High CPU? Implicit Conversion?


At PASS Summit 2018 and SQL Saturday Oregon 2018, Angela Henry (@SqlSwimmer on Twitter) gave a presentation called Data Types Do Matter.  Most of us have written stored procedures, adding parameters without looking at the original column's data type.  We take the shortcut, the wrong data type gets chosen, and we assume it will not matter. But will it?

A while ago, we contracted with a third party to start using their software and database with our product.  We put the database in Azure, but within a year it grew to over 250 GB and we had to keep raising the Azure SQL Database service tier to handle performance issues.  Due to the cost of Azure, a decision was made to bring the database back on-premises.  Before this database arrived, the on-premises production server, running SQL Server 2016 Enterprise Edition, was getting by with eight CPUs.  When we put the vendor database into production, we had to dramatically increase our CPU count, ending up with twenty-eight CPUs. Even with twenty-eight CPUs, during most of the production day, the CPUs ran persistently at seventy-five percent. But why?

Query Store was enabled on the database.  We began analyzing Query Store to see why one additional database on the production server would drive our CPU usage so high. To find the cause, we ran the most resource-intensive queries report in Query Store against the database and saw this:

[Image: Query Store report of the most resource-intensive queries, charted by CPU]

We found the queries causing the CPU issues.  By highlighting one of the bars in the graph, you can view that query's plans in the pane to the right in Query Store.  You will see one or more dots representing the query plans, but the real magic is hovering your mouse over one of the dots to see how the query was actually performing.  Below is what I saw when I hovered over the query plan dot:

[Image: Query Store plan summary showing the query's execution count and CPU time]

In one hour, the most resource-intensive query ran 83,194 times.  Wow, this query runs a lot, and it is using a ton of CPU time per hour.  In the bottom pane of Query Store, we see the execution plan.  Looking at the execution plan, I noticed one of the operators has a yellow yield warning symbol:

[Image: Execution plan with a yellow warning symbol on one of the operators]

Lastly, if we right-click that operator and look at its properties, toward the bottom of the properties screen we see that the query has an implicit conversion problem:

[Image: Operator properties showing the implicit conversion warning]
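The built-in report did the work here, but if you prefer T-SQL, a rough sketch like this against the Query Store catalog views returns the same kind of top-CPU list (Query Store stores times in microseconds, so the sum is converted to milliseconds):

-- Rough sketch: top CPU-consuming queries captured by Query Store.
SELECT TOP (10)
       q.query_id,
       SUM(rs.count_executions)                            AS total_executions,
       SUM(rs.avg_cpu_time * rs.count_executions) / 1000.0 AS total_cpu_ms,
       MAX(qt.query_sql_text)                              AS query_sql_text
FROM sys.query_store_query AS q
INNER JOIN sys.query_store_query_text AS qt
    ON qt.query_text_id = q.query_text_id
INNER JOIN sys.query_store_plan AS p
    ON p.query_id = q.query_id
INNER JOIN sys.query_store_runtime_stats AS rs
    ON rs.plan_id = p.plan_id
GROUP BY q.query_id
ORDER BY total_cpu_ms DESC;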

The information was passed back to the vendor to fix their code.  The vendor sent us a new release with a fix for the implicit conversion.  Upon deploying the change to production, we saw a massive reduction in CPU usage: from twenty-eight CPUs running persistently at seventy-five percent down to eight CPUs running persistently at less than thirty percent.  Shocking; I never thought implicit conversion could cause such a wide variance in CPU usage.
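Here is a minimal sketch of the kind of mismatch that bit us. The table and column names are made up for illustration; the point is that comparing a VARCHAR column to an NVARCHAR parameter forces SQL Server to convert the column on every row, which shows up as the implicit conversion warning above.

-- Hypothetical repro: a VARCHAR column compared against an NVARCHAR parameter.
CREATE TABLE dbo.Customer
(
    CustomerId  INT IDENTITY(1,1) PRIMARY KEY,
    AccountCode VARCHAR(20) NOT NULL
);

CREATE INDEX IX_Customer_AccountCode ON dbo.Customer (AccountCode);
GO

-- The NVARCHAR parameter forces CONVERT_IMPLICIT on the VARCHAR column,
-- which raises the plan warning and can turn an index seek into a scan.
DECLARE @AccountCode NVARCHAR(20) = N'ABC123';

SELECT CustomerId
FROM dbo.Customer
WHERE AccountCode = @AccountCode;

-- Declaring the parameter with the column's data type removes the conversion.
DECLARE @AccountCodeFixed VARCHAR(20) = 'ABC123';

SELECT CustomerId
FROM dbo.Customer
WHERE AccountCode = @AccountCodeFixed;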

As Angela says, Data Types Do Matter!

Changing TDE Database and Cell Level Encryption by Certificate

It has been a while since I last blogged.  As I have said before, I only blog when I have trouble finding a clear solution to an issue. Today I was trying to figure out how to change TDE encryption at the database level and cell-level encryption when both use a certificate. But first, do you know when your TDE encryption certificate is going to expire?  What are you doing to watch that certificate?

When you create a TDE certificate, by default it is only valid for one year. Depending upon your auditor, you may be asked to change this TDE certificate yearly.  This tip shows how to watch for an expiring TDE certificate using policy-based management: https://www.mssqltips.com/sqlservertip/2388/internal-sql-server-certificates-have-expired-is-this-an-issue/.
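If you just want a quick manual check, a simple query against sys.certificates in master shows when each certificate expires. The WHERE filter on the name is only an example; adjust it to your own naming convention.

-- Check certificate expiration dates on the instance.
USE master;

SELECT name,
       subject,
       start_date,
       expiry_date
FROM sys.certificates
WHERE name LIKE '%TDE%'   -- example filter; match your own certificate names
ORDER BY expiry_date;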

I also change my TDE and cell-level encryption certificates when I move and scrub the database for QA and Dev.  On the scrub server, I restore the production database, which is still protected by the production certificate.  After creating a QA/Dev TDE certificate, I change the restored production database over to the QA/Dev TDE certificate.  Here is the script to make that change.

-- Change the TDE certificate (ALTER DATABASE ENCRYPTION KEY runs against the current database)
USE TdeDatabase;

DECLARE @EncryptionState INT;

ALTER DATABASE ENCRYPTION KEY
    ENCRYPTION BY SERVER CERTIFICATE DEVQA_TDE_20170501;

SELECT @EncryptionState = encryption_state
FROM sys.dm_database_encryption_keys
WHERE DB_NAME(database_id) = 'TdeDatabase';

--SELECT @EncryptionState;

-- Wait until re-encryption completes (encryption_state = 3 means encrypted)
WHILE @EncryptionState <> 3
BEGIN
    WAITFOR DELAY '00:01';   -- check again in one minute

    SELECT @EncryptionState = encryption_state
    FROM sys.dm_database_encryption_keys
    WHERE DB_NAME(database_id) = 'TdeDatabase';
END;

/*
-- Test to see it worked
USE TdeDatabase;
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'YourMasterPassword';

SELECT TOP 10 *
FROM people;

CLOSE MASTER KEY;
*/

-- Create the QA/Dev field-level certificate
USE TdeDatabase;
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'YourMasterPassword';

IF NOT EXISTS (SELECT name FROM sys.certificates WHERE name = 'TDE_DEV_QA_Field_Level_20170501')
BEGIN
    CREATE CERTIFICATE TDE_DEV_QA_Field_Level_20170501
        WITH SUBJECT = 'TDE DB Dev QA Field Level Certificate',
             EXPIRY_DATE = '20170501';
END;

CLOSE MASTER KEY;

-- Re-protect the symmetric key for the encrypted field with the QA/Dev certificate
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'YourMasterPassword';
OPEN SYMMETRIC KEY ProductEncryptField DECRYPTION BY CERTIFICATE Prod_Field_Level_20170501;

-- First, encrypt the key with a password.
--ALTER SYMMETRIC KEY ProductEncryptField
--    ADD ENCRYPTION BY PASSWORD = 'YourPassword';

-- Now remove encryption by the production certificate.
ALTER SYMMETRIC KEY ProductEncryptField
    DROP ENCRYPTION BY CERTIFICATE Prod_Field_Level_20170501;

ALTER SYMMETRIC KEY ProductEncryptField
    ADD ENCRYPTION BY CERTIFICATE TDE_DEV_QA_Field_Level_20170501;

CLOSE SYMMETRIC KEY ProductEncryptField;

DROP CERTIFICATE Prod_Field_Level_20170501;

This is how you change both the TDE and field-level encryption certificates.  Note that SQL Server 2016 Always Encrypted is not addressed here.

Rules of Scrubbing Data

After setting up a scrub server, we need to review each database for data that needs to be scrubbed.  The data you need to scrub depends upon the government regulations you have to follow.  Discovering the data to be scrubbed requires querying the database and also talking to our developers.  Here is a list of possible field names to search for:

  • Email
  • Address
  • City
  • State
  • Zip
  • Postal Code
  • Name
  • Credit Card
  • Social Security Number (SSN)


Here is a query to search for those fields:

SELECT t.name AS TableName,
       c.name AS ColumnName,
       x.name AS ColumnType,
       c.max_length
FROM sys.columns c
INNER JOIN sys.tables t
    ON c.object_id = t.object_id
INNER JOIN sys.types x
    ON c.user_type_id = x.user_type_id   -- join on user_type_id to avoid duplicate rows for alias types
WHERE c.name LIKE '%email%'
  AND (x.name = 'varchar' OR x.name = 'nvarchar');


Remember, this will only find the fields that match a certain pattern.  Your developers may help you find other fields.

The database might also have fields that have been encrypted with cell-level encryption or a third-party product.  If data is encrypted, you will need to change the encryption certificate to a non-Production certificate.  I recommend sharing the QA/Dev certificate across the non-Production environments.

After you have discovered all the fields that need to be scrubbed, we determine how we want to scrub the data.  One thing to note: some fields may need special handling.  For example, when I worked for a credit card company, we had to preserve the first digit of the credit card number.  Yes, there was some code that had that value hard-coded.

Now, let’s discuss the different methods for scrubbing data:

  1. Change the data. For example, replace the social security number with the primary key padded with zeros.  This assumes your primary key is an integer value.
  2. Scramble the data. Using the primary key, pull the data to be scrubbed out of the table and randomize it.  After the data is randomized, put it back into the original table using the primary key.
  3. Remove the data. You can null, zero out, or space out the data.  Remember, you are changing the selectivity of the data.
  4. Change all email fields. Remember, you don't want to spam your customers during development, QA testing, or UAT testing.  A sketch of options 1 and 4 follows this list.
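Here is a minimal sketch of what options 1 and 4 can look like in T-SQL. The table and column names (dbo.Employee, EmployeeId, SocialSecurityNumber, Email) are hypothetical; adjust them to your own schema and pick a safe, non-routable email domain.

-- Option 1: replace the SSN with the integer primary key padded with zeros.
UPDATE dbo.Employee
SET SocialSecurityNumber = RIGHT(REPLICATE('0', 9) + CAST(EmployeeId AS VARCHAR(9)), 9);

-- Option 4: point every email address at a non-routable test domain
-- so no real customer can be spammed from Dev, QA, or UAT.
UPDATE dbo.Employee
SET Email = 'employee' + CAST(EmployeeId AS VARCHAR(10)) + '@example.com';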

When I am scrubbing data, I want to keep as much of the data the same as I can, so scrambling the data is my first choice whenever it is possible.  For other options for scrubbing or generating data, please check out this article from Redgate: https://www.simple-talk.com/sql/database-administration/obfuscating-your-sql-server-data/.

In our next article, we will wrap up by automating the process of scrubbing data.


Setting Up a Scrub Server

A scrub server is a server where we restore protected Production data. Production data must be changed so it does not expose data that is protected by government regulations, including privacy and personally identifiable information (PII) laws as well as regulations such as HIPAA and PCI. This is not an exhaustive list of the laws that cover data in your production environment; check with your government and your management for the rules you need to adhere to.

With that said, when I set up a scrub server, it goes in a very secure area where data access is very, very limited. For example, at my current company, the server is in a separate domain from Production and QA/Dev. Only DBAs are allowed to access this server, and if you have multiple DBAs at your location, you may want to limit which DBAs have access. Our goal is to automate the entire scrubbing process so that no one has to touch the data, including copying backup files from Production and to a shared scrub location for QA/Dev to retrieve.

First, we have to determine which version of SQL Server will be on the scrub server; you may decide to have multiple versions, but I chose to have only one version of SQL Server on my scrub server. The SQL Server version depends upon the version of SQL Server in Production. Hopefully, your QA/Dev environments are at least the same version of SQL Server as Production or newer. Note that I did not say the same service pack or cumulative update, but the same major version: SQL Server 2008 R2, 2012, 2014, or 2016. You may be in the process of deploying a service pack to Production, but if you are like me, you change your Development servers first and, if the deployment goes well, move the service pack to QA, then UAT, and finally to Production. So the scrub server must be on the same major version as Production, not necessarily the same service pack.
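If you are not sure exactly what is running where, SERVERPROPERTY answers the question quickly; a simple check like this on each instance shows the version, patch level, and edition:

-- Check the version, service pack / CU level, and edition of an instance.
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,
       SERVERPROPERTY('Edition')        AS Edition;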

Second, you must have a lot of disk space. By far, this is my largest SQL Server in terms of hard drive space. People may say you are wasting disk space, but remember what you are really doing is protecting the company from a data breach. Once scrubbing the data was explained to management as protection, the disk space request was no longer an issue. On my scrub server, I have enough disk space to restore all of my Production databases from all of my Production servers. Yes, that is a lot of disk space: my scrub server has 1.6 TB. To put this in perspective, my largest database is 160 GB. At my previous company, where I had PCI data, the largest production database was 700 GB and my scrub server had 4 TB of disk space. Remember, you will need space for the backup from Production, the restored database, and the backup you take after you scrub it, so a 160 GB database would need roughly 480 GB of space if you cannot compress the backup files.

Finally, I recommend making the scrub server a virtual server to keep costs down. My scrub VM has 2 vCPUs and 8 GB of RAM; the scrub server does not need much horsepower. With automation, I restore, scrub, and back up 17 databases in 25 minutes, run by a SQL Server Agent job using PowerShell and T-SQL.
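The automation itself is a topic for a later post, but the heart of each per-database step is just standard T-SQL along these lines. The file paths and logical file names below are placeholders; the real job builds them dynamically from the backup share.

-- Hypothetical per-database step the scrub job performs (paths and names are placeholders).
RESTORE DATABASE TdeDatabase
FROM DISK = N'D:\ProdBackups\TdeDatabase.bak'
WITH MOVE N'TdeDatabase'     TO N'E:\Data\TdeDatabase.mdf',
     MOVE N'TdeDatabase_log' TO N'F:\Log\TdeDatabase_log.ldf',
     REPLACE, STATS = 10;

-- ... scrub scripts run here ...

BACKUP DATABASE TdeDatabase
TO DISK = N'G:\ScrubbedBackups\TdeDatabase_Scrubbed.bak'
WITH COMPRESSION, INIT, STATS = 10;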

As we wrap up this post, the most important thing to remember when creating a scrub server is security. You will be restoring Production data, so keep it secure. Let's build, troubleshoot, and test with the best possible dataset we can provide to our team by scrubbing Production data. The next blog post will discuss how we discover what data needs to be scrubbed and how I scatter the data across the database.

Developing Environmentally Aware Database Servers

All of us have Production database servers and hopefully, you also have additional database servers for Development, QA, and UAT. Some IT shops will also have a Continuous Integration server and maybe other servers. If you only have Production servers, that needs to be addressed, but it is outside the scope of this post. In the places I have worked, we also have a Scrub server. The question is: when a script executes, do you know what environment it is executing in? Most scripts will not care, but other scripts could cause damage in a Production environment. For example, if a script removes email addresses so you don't spam your clients with automated email messages, you would not want that script to execute in Production.

So how do you make your database server environmentally aware? Here is how I do it:

  • Create a DBA database to use for DBA purposes, not master!! I call mine Admin. Only restore this database to the same server or to a new server that is replacing the old one; it contains the server's environment information.
  • Create a table to store specific information about the server.
  • Add fields to store an information key and an information value. Example: Environment = Production or Environment = QA.
  • Populate the fields with Environment and the value for the server you are on. You can do this by maintaining a population script; a minimal sketch of this setup follows this list.
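Here is a minimal sketch of what that could look like. The database name Admin matches the post; the table and column names are my own illustration, so adapt them to your standards.

-- Hypothetical DBA database and server-information table.
CREATE DATABASE Admin;
GO

USE Admin;
GO

CREATE TABLE dbo.ServerInfo
(
    InfoKey   VARCHAR(50)  NOT NULL PRIMARY KEY,
    InfoValue VARCHAR(100) NOT NULL
);
GO

-- Population script: set the value appropriate for this server (Prod, UAT, QA1, Dev, ...).
INSERT INTO dbo.ServerInfo (InfoKey, InfoValue)
VALUES ('Environment', 'Dev');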

The DBA database will be on every database server, including developers' computers (Environment = Dev). No database server is excluded: if SQL Server is installed on the computer, the DBA database will be on it.

Once this has been accomplished, you are able to make your scripts environmentally aware. How? When a script needs to be environmentally aware, it reads the environment value from your DBA database. Here are two examples of how I use the environment information:

  • Data change scripts – If I don't want the change script to execute in Production, at the beginning of the script I check the environment. When I am in the Production environment, I simply return:

IF @Environment = 'Prod'
    RETURN;

  • Security scripts – I change the value of the security account based upon the environment:

CASE WHEN @Environment = 'Prod'   THEN 'ProdAccount'
     WHEN @Environment = 'UAT'    THEN 'UATAccount'
     WHEN @Environment LIKE 'QA%' THEN 'QAAccount'
     WHEN @Environment = 'Dev'    THEN 'DevAccount'
END


Notice that QA may have multiple servers; if they share the same security account, I can use a LIKE to match multiple QA environments.
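Both snippets above assume @Environment has already been read from the DBA database. A minimal sketch of that lookup, using the hypothetical Admin.dbo.ServerInfo table from earlier:

-- Read the environment value for this server from the DBA database.
DECLARE @Environment VARCHAR(100);

SELECT @Environment = InfoValue
FROM Admin.dbo.ServerInfo
WHERE InfoKey = 'Environment';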

Protect yourself and make your database servers environmentally aware!


Why Do I Need Production Data for Testing?

Every day as we test and develop, we need data to see how a screen will work. If the screen captures an employee's address and the employee enters their home state, how does the screen behave? Is the UI allowing just US states, or is this a worldwide application? If this is a legacy database system, what values have been allowed in the past? What happens if I run a state report?

Have you ever heard, "but it works on my machine"? Is that because the data in Development and QA is too perfect, or because it lacks the specific failure conditions? Can you think of all the data scenarios that accompany Production data? What about performance? Why did the application fail? What happens if I add this index?

Here are the reasons I believe you should get a scrubbed version of your production database into your Development, QA and UAT environments.

  1. Data Selectivity – If you have data in your address table, are 90 percent of the addresses in one state, such as California, or are the states spread out? If your database is US-only, do you only have US states, or has other data crept in?
  2. Application Errors – To figure out why the application is failing in Production, you need to reproduce the error condition. Is there more than one row of data causing the error? Did a data value overflow the data type?
  3. Data Usage Stats – When you look at the data statistics, what happens if you add an index? Does this explain why your query is sometimes fast and other times very slow?
  4. Data Anomalies – What data are you expecting in the field? Is the data what you expect? Do we need to fix the screen to only allow certain values? What if you have a legacy database with unexpected data values: how does the application handle the situation if the data cannot be changed?
  5. Index Maintenance – What is the effect of the index you want to add or remove?
  6. Data Security – Never move real Production data outside of Production without scrubbing the data first.

None of this should be done or debugged in Production. Let's get a copy from Production, scrub it, and work with that data. Next, let's learn how to build a data scrubbing environment.

It Has Been A While!

Hello and welcome back!  It has been a while since my last blog post.  Since the last post, I moved from Denver to Raleigh.  Last week at the PASS Summit, I was visiting with Jason Horner and he suggested I blog about the subject I was sharing with him: restoring a production database to a non-production server.  Over the next few weeks, we will be reviewing several sub-topics about restoring a production database. I call this process Scrubbing the Data. Here is an overview of the topics we will be discussing:

  • Why do I need Production Data outside Production
  • Developing Environmentally Aware Database Servers
  • Setting up a Scrub Server
  • Rules of Scrubbing Data
  • Changing TDE Database and Cell Level Encryption by Certificate
  • Complete Scrubbing Process

I will be leaving on vacation on Friday, so keep your eyes open for the first post on scrubbing your data around the end of November.

SSPI Context Error – Easy Fix.

Have you ever had an error message that you wish would give you a lot more information?  This is one of those error messages, and one I don't like.

When does this error occur?  If you rename a server running SQL Server, and SQL Server runs under a domain account, you will get this error when you try to connect to SQL Server from a remote computer.  You can Remote Desktop to the server and connect to SQL Server with no issues, but as soon as you try to connect remotely you get this error.  The same thing happens if you clone a virtual machine with SQL Server on it and then rename it.

What causes the error?  When SQL Server runs under a Windows domain account, the renamed SQL Server does not automatically create an SPN (Service Principal Name) for itself in Active Directory.

One way to fix this issue is to find your Active Directory admin and have them add SPN records for SQL Server; you have to be a domain admin to add new SPN records.  The other day when my Windows admin was doing this, he kept getting a duplicate error message, even though the account for the server did not exist because this was a new server.

Here is an article from MSSQLTips on registering an SPN for SQL Server:

http://www.mssqltips.com/sqlservertip/2955/register-a-spn-for-sql-server-authentication-with-kerberos/

But we found an easier way after reviewing several articles.  We opened the SQL Server Configuration Manager tool and changed SQL Server to start with Local System instead of the Windows domain account, then restarted the SQL Server engine.  At that point, the SPN records were created for SQL Server.  We then switched the SQL Server startup account back to the Windows domain account and restarted SQL Server again.

After doing this, we were able to log in remotely to our SQL Server.  I sure wish I had found this years ago.  Note, we were on Windows Server 2012 and SQL Server 2012.
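If you want to confirm that remote connections are actually using Kerberos once the SPNs are in place, a quick check from a remote session is to look at the authentication scheme of your own connection:

-- Returns KERBEROS when the SPN is registered and in use; NTLM otherwise.
SELECT auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;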

SSRS Reports Crawling – Too Many Images?

Over the past several weeks, I have been trying to find out why a Reporting Services report was taking so long to render.  I wrote my first Reporting Services report just a few weeks ago; I have administered Reporting Services for a few years but had never built a report.  In the new job I took early this year, my first project was to create a cube and build reports that execute against it.  We hired a contractor to help me, since this was the first cube I would help build.

We finished the project in time, but since I was new and the contractor was helping, we did not always have all the facts.  Right after he left, we deployed it to Production.  The processing of the cube is very fast: under 4 minutes on average, running over 80 stored procedures to gather the information.  The cube processing time was excellent.

The data got into the cube very quickly, but running the reports was like watching paint dry on the wall.  We only have 5 different reports, and no matter how simple, each report was very slow.  I could request a report, walk away from my desk, get coffee or tea, talk to some other developers, return to my desk 8 minutes later, and the report would still not be finished.  The report has expandable columns, and if we clicked to open an expandable section, we had to wait another 5 minutes before the expanded part of the report appeared.  As we all know, this is totally unacceptable.

I read all sorts of articles and blogs; nothing helped.  We had two fact tables, and I combined them into one fact table.  On the single fact table, I added a columnstore index.  That should fix the slowness of the report, right? No, the report was still slow.

While testing, our product manager noticed the images the report uses to show either a check mark or an "x" mark.  In the report we were using the gauge indicator to display the check mark or "x" mark depending upon the value in the database.  Much to our dismay, the image coming from the indicator gauge was being rendered several hundred times on several of the reports, causing the slowness.  Not good!

The solution to our slow-running reports was to remove the gauge indicator and render the check mark and "x" mark as text using the Wingdings font instead.  After changing all the reports to use text instead of an image, the reports rendered 300 times faster.  No more slow reports; issue solved.


Page Life Expectancy Issues

When I took this new job in January, I inherited a Windows 2003 Standard server running SQL Server 2005 with only 4 GB of RAM.  How in the world does this server even run SQL Server with only 4 GB of RAM?  As I always do, I set the min and max memory settings for SQL Server, with min at 1 GB and max at 2 GB.  If you are not aware, the maximum memory Windows 2003 Standard supports is 4 GB.

As I began to monitor this server, the first thing I did was ask for more RAM, since I was seeing Page Life Expectancy issues every day.  If a page lasted 3 seconds in the buffer pool, I was lucky.  I knew that to get better performance from this SQL Server, I had to solve this issue.  Rebuilding the server was not an option, since development was building a newer version of the product.  To make it even worse, the web and app layers are also on this server.  I know you are not supposed to do this, but remember, I inherited it.  Lucky me!
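If you want to check Page Life Expectancy yourself without a monitoring tool, the counter is exposed through a DMV; the value is in seconds:

-- Current Page Life Expectancy, in seconds, from the Buffer Manager counters.
SELECT object_name,
       counter_name,
       cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';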

Thanks to a great resource, BrentOzar.com, I ran sp_BlitzIndex on the server to see what type of indexing issues might be occurring.  I saw several indexes not being used, missing indexes, and tables with no clustered index.  Last night, with a change control ticket, I fixed the missing index issues, added the missing clustered index for one query that was sometimes timing out, and dropped the indexes not in use.  This morning when I came in to review the monitoring of the server, I noticed my Page Life Expectancy had changed dramatically: the server went from 3 seconds to 2 hours, 17 minutes and 51 seconds.  Wow! Don't tell my system administrators, but fixing the index issues also relieved the memory pressure, and you know a DBA always wants more memory.  Yes, I knew it was going to help, but not to the degree of change I am seeing now.

Check out Brent Ozar's site and look for the sp_BlitzIndex and sp_Blitz scripts.  Brent and team, you make me look very smart.  Thanks for all you do for the community.