September 2011 SQL Server 2008 Diagnostic Queries

It's been a couple of months since I released an updated version of my SQL Server 2008 Diagnostic Information Queries, so I thought it was about time for a new version.

The first half of this set of queries is focused on the hardware and the SQL Server instance, so it does not matter which database you are connected to when you run them. The second half of the set is database-specific, so you want to be connected to the database you are interested in, rather than the master database. Many of these queries are DMV queries, which require the VIEW SERVER STATE permission.
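If the login you use does not already have that permission, a member of the sysadmin role can grant it at the server level with a one-line GRANT (the login name here is just a placeholder):

```sql
-- Grant the server-level permission needed by most DMV queries
-- [YourLogin] is a placeholder; substitute an actual login name
GRANT VIEW SERVER STATE TO [YourLogin];
```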

I recommend that you run each query individually, after reading the instructions and interpretation comments for that query. Personally, I like to paste the results for each query into a separate tab in a spreadsheet so I can keep them for baseline purposes.

I also strongly recommend that you look at all of the results, and consider everything you see before you make any rash decisions. For example, if you see indexes show up in the “missing indexes” query, don’t go wild and add every index that shows up. A big part of being a wise DBA is to look at the entire picture, and use your judgment, experience, and common sense as you interpret the results of these queries and decide what (if anything) to do about them.

Posted in SQL Server 2008, SQL Server 2008 R2 | Tagged , | 6 Comments

Microsoft Releases Two Cumulative Updates for SQL Server 2008

Sticking to their eight-week release cycle, Microsoft has released two different Cumulative Updates (CU) for SQL Server 2008. The first update is SQL Server 2008 SP1 CU16, which is Build 10.00.2850. This CU has seven fixes by my count. The second Cumulative Update is SQL Server 2008 SP2 CU6, which is Build 10.00.4321. It has 13 fixes included.

There is no Cumulative Update for SQL Server 2008 RTM, which is considered to be an “unsupported service pack” by Microsoft, meaning no more Cumulative Updates for the RTM branch will be released, and you will get limited support from Microsoft if you ever need to open a support case with Microsoft CSS. If you are still on the RTM branch, you really should be making plans to get on a newer branch of SQL Server 2008, preferably the SP2 branch. The build chart below should hopefully make the relationship between the builds and branches more clear.

Remember, neither of these Cumulative Updates has anything to do with SQL Server 2008 R2, which is a completely separate major release (at least according to Microsoft). SQL Server 2008 R2 is on a separate release schedule for Cumulative Updates (every eight weeks, but not in sync with SQL Server 2008), so you need to get Cumulative Updates for SQL Server 2008 R2 separately.
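If you are not sure which build and branch you are currently on, you can check with SERVERPROPERTY before comparing against the chart below:

```sql
-- Check your current SQL Server build, service pack level, and edition
SELECT SERVERPROPERTY('ProductVersion') AS [ProductVersion],  -- e.g. 10.0.4321.0
       SERVERPROPERTY('ProductLevel') AS [ProductLevel],      -- e.g. SP2
       SERVERPROPERTY('Edition') AS [Edition];
```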

 

-- SQL Server 2008 RTM is considered an "unsupported service pack" as of April 13, 2010
-- SQL Server 2008 RTM Builds   SQL Server 2008 SP1 Builds     SQL Server 2008 SP2 Builds
-- Build       Description      Build       Description         Build     Description
-- 1600        Gold RTM
-- 1763        RTM CU1
-- 1779        RTM CU2
-- 1787        RTM CU3    -->    2531        SP1 RTM
-- 1798        RTM CU4    -->    2710        SP1 CU1
-- 1806        RTM CU5    -->    2714        SP1 CU2 
-- 1812        RTM CU6    -->    2723        SP1 CU3
-- 1818        RTM CU7    -->    2734        SP1 CU4
-- 1823        RTM CU8    -->    2746        SP1 CU5
-- 1828        RTM CU9    -->    2757        SP1 CU6
-- 1835        RTM CU10   -->    2766        SP1 CU7
-- RTM Branch Retired     -->    2775        SP1 CU8        -->  4000       SP2 RTM
--                               2789        SP1 CU9
--                               2799        SP1 CU10    
--                               2804        SP1 CU11       -->  4266       SP2 CU1        
--                               2808        SP1 CU12       -->  4272       SP2 CU2    
--                               2816        SP1 CU13       -->  4279       SP2 CU3    
--                               2821        SP1 CU14       -->  4285       SP2 CU4    
--                               2847        SP1 CU15       -->  4316       SP2 CU5    
--                               2850        SP1 CU16       -->  4321       SP2 CU6   
 
Posted in Microsoft, SQL Server 2008 | Tagged | Leave a comment

Slides For My 24 Hours of PASS Session

The slide deck for my session Hardware 301: Diving Deeper into Database Hardware is available here. There are still twelve more sessions in 24HOP coming up on September 8.

Posted in Computer Hardware, PASS, Processors, Teaching | Tagged | 1 Comment

24 Hours of PASS on September 7

Just in case you forgot, I wanted to remind everyone that the free, 24 Hours of PASS online event starts tomorrow at 12:00 GMT. 

My session, Hardware 301: Diving Deeper into Database Hardware, starts at 22:00 GMT, and the abstract is below:

Making the right hardware decisions is extremely important for database scalability. Having properly sized and configured hardware can both increase application performance and reduce capital expenses dramatically. Unfortunately, there are so many different choices and options available when it comes to selecting hardware and storage subsystems that it is very easy to make bad choices based on outmoded conventional wisdom.

This session will give you a framework for how to pick the right hardware and storage subsystem for your workload type. You will learn how to evaluate and compare key hardware components, such as processors, chipsets, and memory. You will also learn how to evaluate and compare different types of storage subsystems for different database workload types. This session will give you the knowledge you need to make sure you get the best performance and scalability possible from your hardware budget!

The nice thing about 24 Hours of PASS is that it is completely free, and it requires no travel time or expenses. Assuming you don’t have a crisis at work, and that you are not stuck in too many meetings, you should be able to watch and listen to as many sessions as you want via Windows Live Meeting. I always enjoy #24hop, and hopefully my two miniature dachshunds will refrain from barking during my session… I hope to see you there!

You can still register here.

Posted in Computer Hardware, PASS, Processors, Teaching | Tagged | Leave a comment

OT: Rant About Mail-in Rebates For Computer Hardware

As you may know from reading my blog, I buy quite a bit of computer hardware components from places like Micro Center. It is fairly common for items like memory or motherboards to have a mail-in rebate (MIR) offer from the manufacturer. The idea with a MIR is to stimulate sales with the appearance of a lower final price while trying to minimize the rebate redemption rate. If it is too much trouble to jump through all the hoops to submit your rebate, or you simply forget to do it, the manufacturer wins.

This is not a new development. MIRs have always been a painful experience, and many people simply ignore them. Some retailers have stopped offering MIRs, which I think is a great idea. What has changed over the last year or so is how many manufacturers have made the MIR process even more complicated than it used to be. In the past, in order to claim your MIR, you usually just had to fill out a rebate form, cut out the UPC code from the box, include a copy of the receipt, and send all of that in by mail. Then, 6-8 weeks later, you would get your rebate check in the mail. Getting a check let you do anything you wanted with the money, so I typically would deposit it in my savings account.

Over the past year, many manufacturers have switched to a much more complicated system. Now, you have to go to a rebate web site and register your rebate request online. Next, you have to print out the rebate registration form from the web site, along with an address label. You also still have to fill out the rebate information on the register receipt, cut out the UPC code from the box, and tape it to the rebate submission form. You also have to cut the address label from the rebate submission form and tape it to your envelope. Then you send all of this off to the manufacturer, and 6-8 weeks later, you get a prepaid debit card. The latest wrinkle on all of this is that you have to call a 1-800 number to activate your prepaid debit card! All of this is a lot of time and hassle to get a $10-20 rebate card, which basically forces you to buy something else in order to use it, rather than just letting you deposit it in your account.

Obviously, this is not a huge issue in the overall scheme of things. Many people have much bigger problems to worry about. I do feel a little better going on this mini-rant though…

Posted in Computer Hardware, Processors | Tagged , | 5 Comments

Geekbench 2.2 Released

Primate Labs has released Geekbench 2.2.0, which is a free upgrade for Geekbench 2.1.x users. The release notes are here.

Here are the results for my little Toshiba Portege R835 (which has more CPU horsepower than the $25,000 Dell PowerEdge 6800 that used to power the RSS platform for NewsGator back in 2007-2008).

[Geekbench 2.2 result screenshots for the Toshiba Portege R835]

I think Geekbench is a very valuable tool for checking your CPU and memory performance, whether it is on a laptop or a database server.

Posted in Computer Hardware, Processors | Tagged | 2 Comments

SQL Server Hardware Choices Made Easy

Redgate has released a free eBook that I wrote called SQL Server Hardware Choices Made Easy. This is basically a compilation of the month-long series I wrote in April 2011, called A SQL Server Hardware Nugget A Day, along with additional content from my SQL Server Hardware book.

I think it is a pretty easy read, and I hope that you enjoy it!

Posted in Computer Hardware, Processors, Storage Subsystems | Tagged | 6 Comments

New Dell Precision M4600 Mobile Workstation

After a very long wait, I finally received my new company laptop this past week, which is a very fast Dell Precision M4600. This particular unit has the 2.3GHz Core i7-2820QM quad-core processor, with 8MB of L3 cache. It uses Turbo Boost 2.0 to increase the clock speed to 3.4GHz. There is one slightly faster processor available, which is the 2.5GHz Core i7-2920XM Extreme quad-core with 8MB of L3 cache. The Extreme i7-2920XM processor is roughly 5-10% faster than the i7-2820QM, yet it costs slightly over twice as much, and it requires more power, so I don’t think it makes sense for a laptop.

I also have 16GB of DDR3 RAM, although we did not order it from Dell, since they want $880.00 for 16GB of RAM, which is about nine times the fair market price! If you are buying a new laptop, you should take a look at the pricing from the manufacturer vs. pricing from somewhere like NewEgg.

It also has the standard ATI FirePro 5950 1GB graphics card with a very nice 1920 x 1080 screen. This card is plenty good enough for non-3D gaming usage, with a 6.9 WEI rating for both graphics and gaming graphics (see Figure 1).


Figure 1: Dell Precision M4600 Windows Experience Index Scores.

The unit came with a 256GB Plextor LAT-256M2S SATA III SSD, which uses a Marvell controller. It is not as fast as SandForce 2281 controller-based SSDs such as the OCZ Vertex 3, but hopefully it will be more reliable than the SandForce drives. Its 1000MB CrystalDiskMark scores are shown in Figure 2.


Figure 2: CrystalDiskMark Scores for 256GB Plextor LAT-256M2S SSD

The M4600 also has a second drive bay, so I will eventually put a second SSD in it.


Figure 3: Intel Core i7 2820QM CPU-Z Screen

This laptop has a pretty impressive Geekbench score, shown in Figure 4.


Figure 4: Dell Precision M4600 Geekbench Scores

On the slight downside, this is a pretty heavy laptop, especially compared to my little 3.2 pound Toshiba Portege R835-P55X. It also has a very big and heavy 180 watt power adapter. Overall, I think I am going to like this laptop quite a bit.

Posted in Computer Hardware, Processors | Tagged | 6 Comments

A Couple of Data Compression Tricks in SQL Server 2008

As I have said several times before, I am a big fan of data compression in SQL Server 2008 and above. Data compression is an Enterprise Edition only feature that was added to SQL Server 2008, and enhanced with Unicode compression in SQL Server 2008 R2.

It can be very effective in allowing you to trade some CPU utilization for a large reduction in I/O pressure and memory pressure, particularly if you have relatively static indexes, with data that compresses well, on large tables, sitting on a weak or overstressed storage subsystem. Powerful CPUs are usually a lot less expensive than adding extra I/O capacity.
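Before you actually compress anything, you can get a rough idea of how well a particular index will compress by using the built-in sp_estimate_data_compression_savings stored procedure (the table name here is just the BigAddress example table used later in this post):

```sql
-- Estimate PAGE compression savings for the clustered index (index_id = 1)
-- of the dbo.BigAddress example table
EXEC sp_estimate_data_compression_savings
    @schema_name = N'dbo',
    @object_name = N'BigAddress',
    @index_id = 1,
    @partition_number = NULL,
    @data_compression = N'PAGE';
```

This samples a portion of the data into tempdb, so you get a reasonable estimate without rebuilding anything.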

If you have large existing indexes that are good candidates for compression, you need to consider how to get those indexes compressed with the least performance impact on your system. Compressing an index means that it is rebuilt. The entire index must be read off of the storage subsystem, compressed, and then written back out to the storage subsystem. This can cause I/O pressure, memory pressure, and CPU pressure on your system.

If you use the Data Compression Wizard that is built into the SSMS GUI, the T-SQL script that it generates can cause big concurrency issues for an active database that must be available 24 x 7 x 365. This is because it will rebuild the index in OFFLINE mode, and it will not restrict the number of CPU cores used for the index rebuild (via the MAXDOP = xx option). This will rebuild and compress the index as fast as your I/O subsystem and processors will allow, but it will end up locking your table and very likely pushing your CPU utilization to 100% during the operation.

A safer, but more time-consuming, alternative is to add ONLINE = ON and MAXDOP = xx (where xx is 25-50% of your logical CPU cores) to the command, as you see in Figure 1 below.

-- Page compress clustered index on BigAddress table
ALTER INDEX [PK_BigAddress] 
ON [dbo].[BigAddress] REBUILD PARTITION = ALL 
WITH ( PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, ALLOW_ROW_LOCKS  = ON, 
ALLOW_PAGE_LOCKS  = ON, ONLINE = ON, MAXDOP = 2, SORT_IN_TEMPDB = ON, DATA_COMPRESSION = PAGE);

Figure 1: Example of modified ALTER INDEX command

Adding ONLINE = ON will add about 50% more time to the ALTER INDEX operation, but will keep the table available during the command. Adding the MAXDOP option will restrict the number of logical CPU cores used for the operation (if you have the instance level MAXDOP set to zero, which is the default), thereby putting a ceiling on CPU utilization. This will also slow down the operation. You might be thinking “Why would I want to slow down the index rebuild?”, but believe me, running it ONLINE, with CPU usage throttled is much safer for concurrency purposes!
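To pick a sensible MAXDOP value for the rebuild, you need to know your logical CPU core count and your current instance-level MAXDOP setting, both of which you can check like this:

```sql
-- Check logical CPU core count and the instance-level MAXDOP setting
SELECT cpu_count AS [LogicalCPUCount]
FROM sys.dm_os_sys_info;

SELECT value_in_use AS [InstanceMAXDOP]
FROM sys.configurations
WHERE name = N'max degree of parallelism';  -- 0 (the default) means use all cores
```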

Once you have the command running in ONLINE mode, you might be curious about how long it will take to complete. Well, I just happen to have a query that will help you estimate that, shown in Figure 2.

-- Get Table names, row counts, and compression status for clustered index or heap
SELECT OBJECT_NAME(object_id) AS [ObjectName], 
SUM(Rows) AS [RowCount], data_compression_desc AS [CompressionType]
FROM sys.partitions 
WHERE index_id < 2 -- ignore partitions from non-clustered indexes, if any
AND OBJECT_NAME(object_id) NOT LIKE 'sys%'
AND OBJECT_NAME(object_id) NOT LIKE 'queue_%' 
AND OBJECT_NAME(object_id) NOT LIKE 'filestream_tombstone%' 
AND OBJECT_NAME(object_id) NOT LIKE 'fulltext%'
AND OBJECT_NAME(object_id) NOT LIKE 'ifts_comp_fragment%'
GROUP BY object_id, data_compression_desc
ORDER BY SUM(Rows) DESC OPTION (RECOMPILE);

Figure 2: Getting row counts and compression status for clustered indexes

It turns out that if you use the ONLINE = ON flag, after SQL Server starts writing out the PAGE compressed version of the clustered index, it will show up when you query sys.partitions, like you see in Figure 3, where you can see that it was about 90% done when I ran this query.

ObjectName      RowCount        CompressionType
BigAddress      330391354       NONE
BigAddress      306682536       PAGE

Figure 3: Duplicate entries in sys.partitions while index is being compressed

If the ALTER INDEX command has been running for 60 minutes, and you see that it is 90% done writing out the new index, that means that it will probably finish in about 3-4 minutes, since it took some time to read the index in off of the disk subsystem.
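If you want a single number rather than eyeballing the two row counts, a quick sketch (again using the dbo.BigAddress example table) is to divide the PAGE row count by the NONE row count while both rowsets are visible in sys.partitions:

```sql
-- Rough percent-complete estimate while an ONLINE rebuild is writing out
-- the PAGE compressed copy of the dbo.BigAddress clustered index
SELECT CAST(100.0 * SUM(CASE WHEN data_compression_desc = N'PAGE' THEN Rows ELSE 0 END)
          / SUM(CASE WHEN data_compression_desc = N'NONE' THEN Rows ELSE 0 END)
       AS decimal(5,1)) AS [PercentWritten]
FROM sys.partitions
WHERE object_id = OBJECT_ID(N'dbo.BigAddress')
AND index_id < 2;
```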

Posted in SQL Server 2008, SQL Server 2008 R2 | Tagged | 8 Comments

SQL Server 2008 R2 RTM Cumulative Update 9

Microsoft has released Cumulative Update 9 for SQL Server 2008 R2 RTM, which is Build 10.50.1804.0. I count 26 fixes in this Cumulative Update, which is for the RTM branch of SQL Server 2008 R2. Remember, this is completely separate from SQL Server 2008 SP2, which is the previous “major” version of SQL Server.

If you are still on the RTM branch of SQL Server 2008 R2, I would start thinking about and planning for how and when you are going to move to the SP1 branch of SQL Server 2008 R2. You have some time, but at some point (probably in mid-2012), Microsoft is going to retire the RTM branch of SQL Server 2008 R2, which means that it will be considered an “unsupported service pack”. If you ever have to open a support incident with Microsoft CSS, and they find out that you are on an “unsupported service pack” of SQL Server, they are not going to be able to give you the same level of support that you might expect…

Here is the current build chart for the two branches of SQL Server 2008 R2:

 

-- SQL Server 2008 R2 Builds                   SQL Server 2008 R2 SP1 Builds
-- Build            Description                Build         Description
-- 10.50.1092        August 2009 CTP2        
-- 10.50.1352        November 2009 CTP3
-- 10.50.1450        Release Candidate
-- 10.50.1600        RTM
-- 10.50.1702        RTM CU1
-- 10.50.1720        RTM CU2
-- 10.50.1734        RTM CU3
-- 10.50.1746        RTM CU4
-- 10.50.1753        RTM CU5
-- 10.50.1765        RTM CU6     --->           10.50.2500    SP1 RTM
-- 10.50.1777        RTM CU7
-- 10.50.1797        RTM CU8     --->           10.50.2769    SP1 CU1
-- 10.50.1804        RTM CU9     --->           10.50.2772    SP1 CU2

Posted in Microsoft, SQL Server 2008 R2 | 3 Comments