Tim Radney, Author at SQLPerformance.com
https://sqlperformance.com/author/timradney
SQL Server performance articles curated by SentryOne

Updated Azure SQL Database Tier Options
https://sqlperformance.com/2020/04/azure/updated-sql-database-tier-options (April 27, 2020)

Tim Radney explains many new options for Azure SQL Database, including serverless, hyperscale, and hardware configurations optimized for CPU or memory.

Azure SQL Database is Microsoft’s database-as-a-service offering that provides a tremendous amount of flexibility. It is built as part of the platform-as-a-service environment which provides customers with additional monitoring and security for the product.

Microsoft is continually working on improving their products and Azure SQL Database is no different. Many of the newer features we have in SQL Server were initially launched in Azure SQL Database, including (but not limited to) Always Encrypted, Dynamic Data Masking, Row Level Security, and Query Store.

DTU Pricing Tier

When Azure SQL Database first launched, there was a single pricing option known as "DTUs," or Database Transaction Units. (Andy Mallon, @AMtwo, explains DTUs in "What the heck is a DTU?") The DTU model provides three service tiers: Basic, Standard, and Premium. The Basic tier provides up to 5 DTUs with standard storage. The Standard tier supports from 10 up to 3,000 DTUs with standard storage, and the Premium tier supports 125 up to 4,000 DTUs with premium storage, which is orders of magnitude faster than standard storage.

vCore Pricing Tier

Fast forward a few years after Azure SQL Database was released, to when Azure SQL Managed Instance was in public preview, and "vCores" (virtual cores) were announced for Azure SQL Database. The vCore model introduced the general-purpose and business-critical tiers with Gen 4 and Gen 5 processors. Gen 5 is now the primary hardware option in most regions, since Gen 4 is aging out.

Gen 5 supports as few as 2 vCores and up to 80 vCores, with RAM allocated at 5.1 GB per vCore. The general-purpose tier provides remote storage with max data IOPS ranging from 640 for a 2 vCore database up to 25,600 for an 80 vCore database. The business-critical tier uses local SSD storage, which provides much better I/O performance, with max data IOPS ranging from 8,000 for a 2 vCore database up to 204,800 for an 80 vCore database. Both the general-purpose and business-critical tiers max out at 4,096 GB of storage, and this became a limitation for many customers.

HyperScale Database

To address the 4 TB limit of Azure SQL Database, Microsoft created the hyperscale tier. Hyperscale allows customers to scale a database up to 100 TB and also provides rapid scale-out to read-only nodes. You can also easily scale up and down within the vCore model. Hyperscale databases are provisioned using vCores; with Gen 5, a Hyperscale database can use between 2 and 80 vCores and 500 to 204,800 IOPS. Hyperscale achieves high performance because each compute node has an SSD-based cache, which minimizes the network round trips needed to fetch data. There is a lot of awesome technology involved in how Hyperscale is architected to use SSD-based caches and page servers. I highly recommend that you take a look at the diagram that breaks down the architecture and how it all works in this article.

Serverless Database

Another very common request from customers was the ability to automatically scale their Azure SQL Database up and down as workloads increase and decrease. Customers have traditionally been able to scale up and down programmatically using PowerShell, Azure Automation, and other methods. Microsoft took that idea and built a new compute tier called Azure SQL Database serverless, which became generally available in November 2019. Serverless allows the customer to set minimum and maximum numbers of vCores, so they know there is always a minimum compute level available and that the database can automatically scale up to a designated compute level.

There is also the ability to configure an autopause delay, which automatically pauses the database after it has been inactive for a specified amount of time. When a database is paused, the compute cost drops to zero and only storage costs are incurred. The overall cost of serverless is the sum of the compute cost and the storage cost. When compute usage is between the minimum and maximum limits, compute cost is based on the vCores and memory used. If actual usage is below the minimum, compute cost is based on the minimum vCores and minimum memory configured.

The serverless tier has the potential to save customers a great deal of money while still providing a consistent database user experience, since the database can scale up as demand requires.

Elastic Pools

Azure SQL Database has a shared resource model that enables customers to achieve higher resource utilization. A customer can create an elastic pool and move databases into that pool; each database then shares the predefined resources within the pool. Elastic pools can be configured using the DTU pricing model or the vCore model. Customers determine the amount of resources the elastic pool needs to handle the workload of all its databases. Resource limits can be configured per database so that one database can't consume the entire pool. Elastic pools are great for customers who have to manage a large number of databases, or for multitenant scenarios.

New Hardware Configuration for Provisioned Compute Tier

The Gen4/Gen5 hardware configurations are considered "balanced memory and compute." This works well for many SQL Server workloads; however, some CPU-heavy workloads need lower CPU latency and higher clock speeds, and others need more memory per vCore. Microsoft has once again delivered, creating compute optimized and memory optimized hardware configurations. These are currently in preview and only available in certain regions.

In the general-purpose provisioned tier you can select the Fsv2 series, which can deliver more CPU performance per vCore than the Gen 5 hardware. Overall, the 72 vCore size can provide more CPU performance than the 80 vCore Gen 5, thanks to lower CPU latency and higher clock speeds. The Fsv2 series does have less memory and tempdb per vCore than Gen 5, so that will have to be taken into consideration.

In the business-critical provisioned tier, you can access the M-series, which is memory optimized. The M-series offers 29 GB per vCore, compared to 5.1 GB per vCore in the "balanced memory and compute" configuration. With the M-series you can scale up to 128 vCores, which provides up to 3.7 TB of memory. To enable the M-series, you currently must be on a Pay-As-You-Go or Enterprise Agreement subscription and open a support request. Even then, the M-series is currently only available in East US, North Europe, West Europe, and West US 2, and may have limited availability in additional regions.

Conclusion

Azure SQL Database is a feature-rich database platform that offers a wide range of options for compute and scale. Customers can configure compute for a single database or an elastic pool using DTUs or vCores. For databases with a large storage requirement or read scale-out, Hyperscale can be used. For customers with varying workload requirements, serverless can automatically scale up and down as workload demands change. New to Azure SQL Database are the preview compute optimized and memory optimized hardware configurations for customers that need lower-latency CPU or a high memory-to-CPU ratio.

To learn more about Azure resources, check out my previous articles.

Azure SQL Managed Instance Performance Considerations
https://sqlperformance.com/2020/02/azure/sql-managed-instance-performance-considerations (February 26, 2020)

Tim Radney talks about how to decide whether a move to Azure SQL Database Managed Instance makes sense and, if so, which tier.

Azure SQL Database Managed Instance became generally available in late 2018. Since then, many organizations have started migrating to Managed Instance for the benefits of a managed environment. Organizations are taking advantage of having managed backups, lots of built-in security features, an uptime SLA of 99.99%, and an always up-to-date environment where they are no longer responsible for patching SQL Server or the operating system.

One size does not always fit all.

Managed Instance provides two tiers for performance. The General Purpose tier is designed for applications with typical performance and I/O latency requirements and provides built-in HA. The Business Critical tier is designed for applications that require low I/O latency and have higher HA requirements. Business Critical also provides two non-readable secondaries and one readable secondary. The readable secondary is a great way to distribute the workload off of the primary, which can lower the service tier required for the primary and decrease the overall spend for the instance.

When Managed Instance was first released, you could choose between Gen4 and Gen5 processors. Gen4 is still described in the documentation, but this option is mostly unavailable now. For this article, I'll only be covering configurations using the Gen5 processors.

Each service tier supports anywhere from 4 to 80 logical CPUs — also known as virtual cores, or vCores. Memory is allocated at approximately 5.1 GB per vCore. General Purpose provides up to 8 TB of high-performance Azure Blob storage, while Business Critical provides up to 4 TB of super-fast local SSD storage.

Memory

With only 5.1 GB of memory per vCore, an instance with fewer vCores could struggle. The options for vCore configurations are 4, 8, 16, 24, 32, 40, 64, and 80 vCores. If you do the math on each of the vCore options ((number of vCores) × (5.1 GB)), you'll get the following core / memory combinations:

  4 vCores  =   20.4 GB
  8 vCores  =   40.8 GB
 16 vCores  =   81.6 GB
 24 vCores  =  122.4 GB
 32 vCores  =  163.2 GB
 40 vCores  =  204.0 GB
 64 vCores  =  326.4 GB
 80 vCores  =  408.0 GB
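
If you'd rather not do that multiplication by hand, a quick T-SQL sketch reproduces the table above; the 5.1 GB per vCore figure is simply the published allocation described earlier:

    -- Memory available at each vCore option (5.1 GB per vCore)
    SELECT v.vCores,
           CAST(v.vCores * 5.1 AS decimal(6,1)) AS MemoryGB
    FROM (VALUES (4), (8), (16), (24), (32), (40), (64), (80)) AS v(vCores)
    ORDER BY v.vCores;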

For many organizations I've helped transition from on-premises to Managed Instance, I've seen a significant reduction in memory. Typical configurations on-premises would be 4 vCores and 32 GB of memory, or 8 vCores and 64 GB. Both account for more than a 30% reduction in memory. If the instance was already under memory pressure, this can cause problems. In most cases, we've been able to optimize the on-premises instance to help alleviate the memory pressure prior to moving to Managed Instance, but in a few cases, the customer had to go with a higher vCore instance to alleviate the memory pressure.

Storage

Storage is a bit more difficult to plan for, because you have to consider multiple factors. You need to account for both overall storage size and I/O needs. How many GBs or TBs are needed for the SQL Server instance, and how fast does the storage need to be? How many IOPS and how much throughput is the on-premises instance using? To answer that, you must baseline your current workload, using perfmon to capture average and max MB/s and/or taking snapshots of sys.dm_io_virtual_file_stats to capture throughput utilization. This will give you an idea of what type of I/O and throughput you need in the new environment. Several customers I've worked with have missed this vital part of migration planning and have encountered performance issues due to selecting an instance level that didn't support their workload.
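
As a minimal sketch of the snapshot approach, the query below reads the cumulative I/O figures from sys.dm_io_virtual_file_stats. Because the numbers accumulate from instance startup, capture the output twice, some interval apart, and diff the two results to estimate IOPS and MB/s for that interval:

    -- Cumulative I/O per database file since instance startup
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.num_of_reads + vfs.num_of_writes AS total_io,
           (vfs.num_of_bytes_read + vfs.num_of_bytes_written) / 1048576 AS total_mb
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id = vfs.file_id
    ORDER BY total_mb DESC;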

Baselining is critical because, with on-premises servers, it is common for even smaller virtual machines to be given storage from a super-fast SAN with high throughput capabilities. In Azure, your IOPS and throughput limits are determined by the size of the compute node, and in the case of Managed Instance, by the number of vCores allocated. For General Purpose there is a limit of 30-40K IOPS per instance, or 500 up to 12,500 IOPS per file depending on the file size. Throughput per file is also based on file size, starting at 100 MiB/s for files up to 128 GB and going up to 480 MiB/s for files of 4 TB and larger. In Business Critical, IOPS range from 5.5K to 110K per instance, or 1,375 IOPS per vCore.

Consumers must also account for log write throughput for the instance. General Purpose allows 3 MB/s per vCore with a max of 22 MB/s for the instance, and Business Critical allows 4 MB/s per vCore with a max of 48 MB/s for the entire instance. In my experience working with customers, many have far exceeded these limits for write throughput. For some it has been a showstopper; others have been able to optimize and modify their systems to decrease the load.

In addition to needing to know overall throughput and I/O requirements, storage size is also tied to the number of vCores chosen. For General Purpose:

        4 vCores  =  2 TB max
   8 - 80 vCores  =  8 TB max

For Business Critical:

    4 – 16 vCores  =  1 TB
        24 vCores  =  2 TB
   32 - 80 vCores  =  4 TB

For General Purpose, once you get to 8 vCores, you can max out the available storage, which works well for those who only need General Purpose. But when you need Business Critical, things can be more challenging. I've worked with many customers who have multiple TBs allocated to VMs with 4, 8, 16, and 24 logical processors. Any of these customers would have to move up to at least 32 vCores just to meet their storage requirement, a costly option. Azure SQL Database has a similar issue with max database size, and Microsoft came up with a Hyperscale option; we expect this to become an option for Managed Instance at some point to address the storage limits in a similar way.

The size of tempdb is also correlated to number of vCores. In the General Purpose tier, you get 24 GB per vCore (up to 1,920 GB) for the data files, with a tempdb log file size limit of 120 GB. For the Business Critical tier, tempdb can grow all the way up to the currently available instance storage size.

In-memory OLTP

For those who are currently taking advantage of In-Memory OLTP (or plan to), note that it is only supported in the Business Critical service tier. The amount of space available for In-Memory tables is also limited by vCores:

    4 vCores  =    3.14 GB
    8 vCores  =    6.28 GB
   16 vCores  =   15.77 GB
   24 vCores  =   25.25 GB
   32 vCores  =   37.94 GB
   40 vCores  =   52.23 GB
   64 vCores  =   99.90 GB
   80 vCores  =  131.86 GB

Conclusion

When planning a migration to Azure SQL Managed Instance, there are multiple factors to consider before deciding to migrate. First you need to fully understand your memory, CPU, and storage requirements, as these will determine the size of the instance. Just as important is knowing what your storage I/O requirements are. IOPS and throughput for the General Purpose tier are directly tied to vCores and the size of the database files; for Business Critical they are tied to the number of vCores. If you have a very I/O-intensive workload, Business Critical is the more appealing service tier because it provides higher IOPS and throughput numbers. The challenge with Business Critical is the lower storage capacity, with only 1 TB supported for the entire instance up to 16 vCores.

With proper planning, and possibly deconsolidating larger instances into smaller Managed Instances, this offering can be a very viable migration option for many organizations. The appeal lies in the benefits of managed backups, built-in HA with a 99.99% SLA, added security features and options, and not having to worry about patching an OS or SQL Server instance.

The Importance of Selecting the Proper Azure VM Size
https://sqlperformance.com/2019/11/azure/importance-vm-size (November 18, 2019)

Tim Radney talks about the performance trade-offs when deciding between various VM sizes in Azure.

Migrating an on-premises SQL Server instance to an Azure Virtual Machine (VM) is a common way to move to Azure. IT professionals are familiar with scoping the size of VMs with regard to vCPU, memory, and storage capacity. Microsoft offers multiple VM types and sizes for an organization's needs. You'll see the types referenced as Family in the Azure Portal when sizing a VM.

VM Types and Sizes

Microsoft has helped simplify things by creating multiple types of virtual machines. The types are geared toward defining the machines by purpose. The various types are:

  • General purpose – Balanced CPU-to-memory ratio, small to medium databases
  • Compute optimized – High CPU-to-memory ratio, medium traffic web servers and application servers
  • Memory optimized – High memory-to-CPU ratio, relational database servers, medium to large caches, and in-memory analytics
  • Storage optimized – High disk throughput and IO
  • GPU – Heavy graphic rendering and video editing
  • High performance compute – Fastest and most powerful CPU virtual machines

Within each type/family there are numerous sizes to select from. The sizes offer you options for the number of vCPUs, RAM, and data disks. The number of data disks will determine the maximum IOPS (IOPS stands for input/output operations per second), and the overall size will determine the amount of temporary storage available. Certain sizes also support premium storage, which should be a requirement for a production SQL Server.

VM Image Options

When selecting an image for SQL Server, you have several options. You can choose to select just the OS, such as Linux or Windows, or you can choose to select an image with the OS and SQL Server already installed. Currently I prefer to choose the image with just the OS so that I can install SQL Server and configure it the way I like. When you choose the gallery image with SQL Server preinstalled, all components that are included in the ISO for that version are installed, and I don’t always need Integration Services or Analysis Services installed.

VM Sizing Considerations

Selecting an Azure VM may seem straightforward: choose a size based on the number of vCPUs, the amount of memory, and enough storage to hold your databases. However, I am seeing customers run into performance issues related to storage. The common trend is to choose a VM based exclusively on vCPU, memory, and storage capacity without benchmarking the current I/O and throughput requirements. When you create an Azure VM in the Azure Portal, you can filter the options based on the following:

  • Size
  • Generation
  • Family
  • RAM/memory
  • Premium storage support
  • Number of vCPU
  • Ephemeral OS disk

Once you select your filter options, if any, you will see a list of available servers. The list displays the VM size, offering, family, vCPU, RAM, number of data disks supported, max IOPS, temporary storage (D:), whether premium disk is supported, and estimated cost. I've filtered the following on premium disk supported and size starting with the letter E for memory optimized.

What is not displayed, however, is the amount of throughput allowed per VM. Throughput measures the data transfer rate to and from the storage media in megabytes per second.

Throughput can be calculated using the following formula:

MB/s = IOPS * KB per IO / 1,024

Using this formula, KB per IO would be your block size. If you are formatting your data and log drives at 64k, then the formula for the E2s_v3, E4-2s_v3, and E8-2s_v3 VMs with 3,200, 6,400, and 12,800 IOPS would be:

MB/s = 3,200 * 64/1,024 or 200 MB/s
MB/s = 6,400 * 64/1,024 or 400 MB/s
MB/s = 12,800 * 64/1,024 or 800 MB/s

The throughput calculations based on the IOPS of each VM with a 64k block size aren't too bad. These numbers don't take into consideration any write penalties for RAID. I put each of these VMs to the test using CrystalDiskMark, and the throughput numbers I got were nowhere near what the calculations were estimating.

Benchmark Test

I provisioned three virtual machines of the same family, however different sizes, and each with 2 vCPU. The specifications of the virtual machines were:

  • E2s_v3 – 2 vCPU – 16GB RAM – 3,200 IOPS – Support up to 4 data disks
  • E4-2s_v3 – 2 vCPU – 32GB RAM – 6,400 IOPS – Support up to 8 data disks
  • E8-2s_v3 – 2 vCPU – 64GB RAM – 12,800 IOPS – Support up to 16 data disks
  • P60 data disk – Premium SSD – 16,000 IOPS

On each VM, I provisioned a P60 premium disk at 8 TB capacity. This disk advertised 16,000 IOPS, which with a 64k block size could support 1,000 MBps of throughput; however, Azure documentation states the disk provides 500 MBps of throughput.

CrystalDiskMark is an open source disk drive benchmark tool for Windows, and it's based on Microsoft's Diskspd tool. The tool measures sequential and random read and write performance.

Across the top of the tool are some drop-downs. The number 5 is the number of iterations of the test that will be run. Next is 1GiB, which is the size of the test file. For a real production test, this should be large enough to help avoid hitting cache; version 7.0 supports up to a 64 GiB file. Last is the drive that you want to run the test against.

Once you've made your selection, you can click All to begin the series of tests. The test will loop through various sequential and random read/write operations. Use caution if you plan to benchmark real production servers: this test puts a load on your disks and could drastically impact a production workload. Running it after hours or during a maintenance window is preferred.

I opted to run 5 iterations of the test with a 32 GiB file on the P60 disk which was drive E:.

The E2s_v3 VM maxed out under 50 MBps, which is far less than the 200 MBps we calculated.

The E4-2s_v3 VM maxed out under 100 MBps rather than 400 MBps.

The E8-2s_v3 VM maxed out under 200 MBps rather than the estimated 800 MBps.

Why Lower Throughput?

Based upon my earlier calculations, 3,200 IOPS at a 64k block size should produce 200 MBps of throughput, yet I didn't see numbers close to that until I had a 16,000 IOPS disk on a VM that supports up to 12,800 IOPS. The reason is that each VM size has a limit on throughput. If you look at the documentation for the memory optimized family of VMs, you'll find that the E2s max uncached disk throughput is 3,200 IOPS / 48 MBps, the E4s max uncached disk throughput is 6,400 IOPS / 96 MBps, and the E8s max uncached disk throughput is 12,800 IOPS / 192 MBps. These numbers coincide with the results I obtained using CrystalDiskMark.

Even though you can allocate very large disks that have plenty of IOPS and that support high throughput numbers, the VM size could very well be limiting your throughput.

Benchmarking your current throughput needs should be a big priority before migrating any SQL Server workload to Azure.

Conclusion

I understand that IOPS is a unit of measurement that disk manufacturers and storage vendors provide, and that is ok. However, when it comes to testing storage, we tend to focus more on throughput numbers. It is just a math problem, but unless you are in the business of benchmarking storage and doing the calculations from IOPS to throughput based on block size, it can be confusing.

What is troubling to me is that the restriction on throughput is not clear when you select a VM size. The unit of measurement for storage I/O is IOPS. At 3,200 IOPS with a 64k block size, I could expect around 200 MBps; however, my VM was limited to 48 MBps. Many IT professionals have discovered they have disk performance issues and scaled their storage to larger and faster disks (more IOPS) expecting better performance, only to find that it didn't solve their problem. The issue is that the size of the VM was limiting their throughput. Scaling up to a higher size VM would solve the issue, but that comes with a cost. In my example, the E4 was twice the cost of the E2, but it doubled the memory and throughput while retaining the same vCPU count. The E8 was double the cost of the E4 and doubled the throughput and memory, again retaining the same vCPU count. Maintaining the same vCPU count means I wouldn't have an increase in core-based SQL Server license cost.

Ultimately, you need to benchmark your current throughput requirements and make sure you're sizing the Azure VM, or any VM, appropriately for your needs. Don't just focus on a benchmark of your local storage; analyze what your workload needs and size accordingly.

Tuning SQL Server Reporting Services
https://sqlperformance.com/2019/09/reporting-services/tuning-sql-server-reporting-services (September 17, 2019)

Tim Radney provides some sage advice about configuring and tuning SQL Server Reporting Services.

Many database administrators find themselves having to support instances of SQL Server Reporting Services (SSRS), or at least the backend databases that are required for SSRS. For years SSRS was bundled with the installation of SQL Server, which added to some of the confusion around licensing and support for the product. Beginning with SSRS 2017, the installation package for Reporting Services is a separate download.

This article will cover many areas that database administrators need to be aware of in order to properly license, recover, and tune a Reporting Services installation. These topics apply to both SQL Server Reporting Services as well as Power BI Report Server.

Licensing

Installation and support of SSRS can be confusing. The reporting service can be installed as a standalone instance on a dedicated server, on the same instance as SQL Server, or in a scale-out deployment (Enterprise Edition only). Each instance where SSRS is installed in production requires a SQL Server license, as well as licensing the instance of SQL Server where the ReportServer and ReportServerTempDB databases reside.

The way I like to explain how to license Reporting Services is to think about the reporting service as an application that uses SQL Server on the back end. In the early days of SSRS, a requirement was to also install and configure Internet Information Services (IIS). SSRS 2008 brought that component into the reporting service module. It is very common to see SSRS and MSSQL installed on the same instance due to licensing and this can work well for smaller implementations. For larger deployments, it’s common to see a dedicated reporting service server with the ReportServer and ReportServerTempDB on a consolidated SQL Server. For very large installations, scale-out deployments are used to provide load balancing of the reporting server service. In a scale-out deployment, each instance in the farm must be licensed.

Recovery

In each of the deployment models, the role of the database administrator is to make sure that SSRS is stable, dependable, and recoverable. The recoverable part is something that causes some DBAs issues. The ReportServer database contains encrypted content, and certain operations require restoring the symmetric key. If there is a failure and the database has to be restored to another server, the Report Server Windows service account name or password is changed, the computer name is changed, you migrate to another server, or you add a new server to a scale-out configuration, you'll be required to restore the encryption key. Unless the key is backed up, any protected data, such as connection strings and passwords, can't be decrypted. I've found that many DBAs are unaware of this until it's too late. The key can be backed up and restored manually using the Reporting Services Configuration Manager, the rskeymgmt.exe utility, or PowerShell. You technically only need to back up one copy of the symmetric key.

The ReportServer and ReportServerTempDB databases are SQL Server databases and should be part of a regular backup process, just like other user databases. ReportServer should use the full recovery model, whereas ReportServerTempDB can use the simple recovery model. Technically, ReportServerTempDB can be recreated by a script in the event of a disaster; however, Reporting Services cannot start without ReportServerTempDB, and DBAs are more familiar with restoring databases than with hunting for a script to recreate one. Unlike the system database tempdb, ReportServerTempDB is not recreated at startup. The best practice for DBAs is really to just treat ReportServer and ReportServerTempDB like any other user database.

There are configuration files that store application settings that should also be backed up. These may be covered by your OS-level backups; however, DBAs should make sure these files are backed up after the initial configuration and/or after any custom extensions are applied. These files consist of:

  • Rsreportserver.config
  • Rssvrpolicy.config
  • Rsmgrpolicy.config
  • ReportingServicesService.exe.config
  • Web.config
  • Machine.config

Consideration should also be given to backing up your Report Designer files, such as .rdl, .rds, .dv, .ds, .rptproj, and .sln files.

Tuning Options

Tuning SSRS is much like any other application. Users are executing reports from an application server that is communicating with databases. The big difference is that you have an application server (Reporting Services) with its own databases, but the report has data sources connecting to other user databases. DBAs must tune for overall health of the Reporting Services infrastructure as well as tuning the actual reports.

Reporting Services Infrastructure

Disk latency for ReportServer and ReportServerTempDB is very important. Depending on the workload, those databases may need to be placed on faster disks. Caches of report results are stored in the ReportServerTempDB database, and I/O performance can become an issue there.

The Reporting Services workload may dictate that additional Reporting Services instances are needed to handle that workload. A scale-out deployment requires an Enterprise Edition license. Benefits of a scale-out deployment include giving customers load balancing for higher throughput, high availability, as well as the ability to segment report server processing.

Take advantage of Snapshots where they make sense. If you have reports that use day-old data, consider using a Snapshot so that subsequent views of the report use the Snapshot rather than generating the entire report again.

Data-driven subscriptions can be used to run reports and deliver the content to a subscriber base on a schedule. Many of these reports can be scheduled to run after data processing is complete and well before users arrive at work, during a less busy time for the environment.

DBAs should be familiar with execution logging as that can help identify reports that could be candidates for caching as well as reports that should be looked at for performance improvement based on execution processing. Execution logging provides insights into things like how often reports are run, the most requested format, and the percentage of processing tied to a phase of the report process. This data can be accessed using the built-in views for ExecutionLog, ExecutionLog2, and ExecutionLog3.
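
As a starting point, a query along these lines against the built-in ExecutionLog3 view in the ReportServer database surfaces the most frequently run reports and where their time goes (data retrieval, processing, or rendering); the column names are those of the view itself:

    -- Report executions summarized from the SSRS execution log
    SELECT   ItemPath,
             [Format],
             COUNT(*) AS Executions,
             AVG(TimeDataRetrieval) AS AvgDataRetrievalMs,
             AVG(TimeProcessing) AS AvgProcessingMs,
             AVG(TimeRendering) AS AvgRenderingMs
    FROM     dbo.ExecutionLog3
    GROUP BY ItemPath, [Format]
    ORDER BY Executions DESC;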

General Tuning

For the user databases being used by the reports and the instance holding the ReportServer and ReportServerTempDB, you want to track and baseline performance. You need to be monitoring memory/disk/CPU utilization, network I/O, tempdb usage, waits, and other counters to know what is normal for the overall workload of those systems. If users start reporting problems, you need to be able to know if the current utilization is normal or not. If you aren’t monitoring it, how can you measure it?

In addition to monitoring and baselining, industry-accepted best practices should be in place for the instance. I’ve covered memory settings, index maintenance, statistics, MAXDOP, and cost threshold for parallelism in a previous article, Common SQL Server Mishaps.

The real magic for making reports run faster is to optimize each specific report. A report is essentially just another query. Tuning a slow report usually means creating indexes for that specific report or tuning the code within the report. If these reports hit the OLTP database, then creating indexes for them can impact the OLTP system by using more space, generating additional I/O for updates, inserts, and deletes, and incurring additional overhead for maintaining those indexes. You must balance the risk vs. reward. Some customers can separate the reporting database from the OLTP database by using transactional replication, which allows indexing the reporting database without impacting the OLTP database.

Although you can track report usage using the ExecutionLog, you’ll need to research the user database instance for high cost and long running queries for tuning opportunities too. DMVs and Query Store are a huge help as well, but a monitoring tool like SQL Sentry can be much more powerful and flexible. SQL Sentry doesn't have a "Reporting Services dashboard" per se, but you can create calendar views of SSRS events, easily filter built-in metrics and reports to focus on SSRS activity, and create robust Advisory Conditions to monitor the precise aspects of Reporting Services you care about (and none of the noise you don't). Even if you're not running SSRS on a SQL Server machine, you can still use Win Sentry to get rich performance insights into current and historical process- and service-level activity.

Conclusion

Tuning SQL Server Reporting Services has several unique characteristics; however, standard performance tuning still applies for optimizing reports and monitoring the ReportServer and ReportServerTempDB databases. Backing up the encryption key is necessary for any disaster recovery or migration effort. To better understand how reports are used, DBAs should start using the ExecutionLog.

Common SQL Server Mishaps
https://sqlperformance.com/2019/06/sql-performance/common-sql-server-mishaps (June 21, 2019)

Tim Radney talks about some of the typical SQL Server mishaps he comes across out in the wild, and resources you can use to avoid them in your own environments.

I've been teaching and writing about common SQL Server mistakes for many years. I wrote a blog post about it years ago, too; however, as time has marched on, the guidance has changed a bit. This article will expand on my previous article and point out how these items apply to SQL Server, Azure SQL Database, and Azure SQL Managed Instance.

For many years I've found users making the same mistakes. I call them mistakes; however, in most cases, it's more that things are not being done properly because the people managing the environment don't know any better. Here are some of the more critical items that anyone installing and supporting SQL Server should know about:

  • Backups
  • DBCC CHECKDB
  • Memory settings
  • Statistics
  • Index maintenance
  • MAXDOP and cost threshold for parallelism
  • SQL Server Agent alerts

Backups

I always check backups first when looking at a new system. Having proper backups to meet recovery objectives is critical. Data loss can be detrimental to an organization. When looking at backups, I check for recovery model and the current history of backups for each database. I usually find a combination of the following:

  • No backup at all – no record of any backup for the database
  • Missing backups – no log backups for a database using the full recovery model
  • No recent backups – last backup was weeks/months/years old
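
A quick way to surface those gaps is to check the backup history in msdb. A sketch like the one below lists the most recent full and log backup for each database; keep in mind that msdb history can be purged, so treat it only as a starting point:

    -- Most recent full (D) and log (L) backup per database
    SELECT d.name,
           d.recovery_model_desc,
           MAX(CASE WHEN b.type = 'D' THEN b.backup_finish_date END) AS last_full_backup,
           MAX(CASE WHEN b.type = 'L' THEN b.backup_finish_date END) AS last_log_backup
    FROM sys.databases AS d
    LEFT JOIN msdb.dbo.backupset AS b
      ON b.database_name = d.name
    GROUP BY d.name, d.recovery_model_desc
    ORDER BY d.name;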

Misconfigured backups are detrimental to an organization when a recovery situation comes up. Working with customers and having to tell them that they've lost data is never fun or easy. Having proper backups to meet SLAs should be any organization's top priority, in addition to making sure copies of those backups are stored in a secondary location offsite.

This situation applies to on-premises SQL Server and IaaS. Azure SQL Database and Azure Managed Instance have managed backups.

DBCC CHECKDB

Unfortunately, database corruption happens. Without regularly checking for corruption, customers can find themselves in a bad place, without the backups needed to recover when that corruption affects the physical data. To check for corruption, DBCC CHECKDB should be run against each database on a regular basis. What I find is very similar to backups:

  • No DBCC CHECKDBs performed at all
  • DBCC CHECKDBs being performed only on select databases
  • DBCC CHECKDBs last performed months or years ago

The worst case is a scheduled job reporting failed DBCC CHECKDBs.

It is never pleasant to find corruption, or to have a customer reach out with a corruption issue, when the corruption is in a heap or clustered index and there are no backups from before the corruption occurred. In those cases, the corruption is in the actual data, and restoring from before the corruption is, in most cases, the only option. When the corruption is in a non-clustered index, rebuilding the index is the fix.

In a few situations, I’ve had to work with customers who have nasty corruption without proper backups where I’ve been able to script out the database and manually copy all the usable data into a newly created database. These costly situations can be easily avoided by running DBCC CHECKDB and having proper backup retention.

I advise customers to run DBCC CHECKDB on-premises, IaaS, Azure SQL Database, and Azure SQL Managed Instance. Azure does a great job checking for physical corruption; however, I feel that consumers need to be checking for logical corruption.
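
The check itself is a single statement per database; a minimal example that returns only errors looks like this (the database name is just a placeholder), and it should be scheduled regularly with failures alerted on:

    -- Full logical and physical integrity check; suppress informational messages
    DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;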

Memory Settings

A default installation of Microsoft SQL Server has the minimum memory value set to 0 and the maximum server memory value set to 2,147,483,647 MB, which is 2 petabytes. Prior to SQL Server 2012, the maximum server memory value only applied to the buffer pool, so customers needed to limit the amount of memory the buffer pool could use to save memory for the operating system and other processes. SQL Server 2012 introduced a memory manager rewrite, so the maximum server memory value now applies to all SQL Server memory allocations.

It is highly advisable to set a maximum value for your SQL Server instance. Jonathan Kehayias has written a blog post, How much memory does my SQL Server actually need, with a formula that helps establish a baseline for the maximum memory value. In the case of a shared SQL Server, I recommend my clients set the minimum value to 30% of the memory on the server.
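
Setting the value is a one-liner once you've worked out the right number for your server; the 26,000 MB below is only an illustrative figure for a server with 32 GB of RAM, not a recommendation:

    -- Cap SQL Server memory (value in MB; example assumes a 32 GB server)
    EXEC sys.sp_configure N'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure N'max server memory (MB)', 26000;
    RECONFIGURE;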

In situations with multiple instances or where the server is used for SQL Server, SSIS, SSAS, or SSRS, you need to evaluate how much memory those other systems need and reduce the maximum server memory value to allow adequate memory for the OS and the other services.

This issue is valid for on-premises, IaaS, and partially for Azure SQL Managed Instance. Managed Instance sets a max server memory value based on the deployed tier, however when I tested resizing the environment, the max memory value was not dynamically changed. In that situation, you would need to manually update the value. This issue does not apply to Azure SQL Database.

Statistics

The query optimizer uses statistics to build execution plans. This means SQL Server needs statistics to be up to date so that the query optimizer has a better chance of building a good execution plan. By default, statistics are updated after 20% + 500 rows of data have been modified. That can take a long time on larger tables. Beginning with compatibility level 130, the threshold for statistics updates on large tables has been lowered. For SQL Server 2008 R2 through 2014, you could lower this threshold using trace flag 2371.

I regularly find that customers are not manually updating statistics and even with the lower threshold, I’ve found that manually updating makes an environment more stable.

I recommend that customers use a third-party script to update statistics. Ola Hallengren has published a widely used Maintenance Solution for SQL Server. Part of that process is his Index Optimize procedure, which can take additional parameters to update statistics.

@UpdateStatistics 
    ALL     = update index and column statistics
    INDEX   = update index statistics
    COLUMNS = update column statistics
    NULL    = Do not perform statistics maintenance (this is the default)

@OnlyModifiedStatistics
    Y = Update statistics only if rows have been modified since most recent stats update
    N = Update statistics regardless of whether any rows have been modified
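
For example, a statistics-only run with Ola's procedure (no index rebuilds or reorganizations) might look like the sketch below, assuming the Maintenance Solution is installed in the master database or another utility database:

    -- Update only statistics that have changed; skip index maintenance entirely
    EXECUTE dbo.IndexOptimize
        @Databases              = 'USER_DATABASES',
        @FragmentationLow       = NULL,
        @FragmentationMedium    = NULL,
        @FragmentationHigh      = NULL,
        @UpdateStatistics       = 'ALL',
        @OnlyModifiedStatistics = 'Y';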

I've found that customers who use third-party products or scripts to perform index maintenance based upon the fragmentation level of the index are not considering that reorganizations do not update statistics the way rebuilds do. Many of these third-party applications have options for updating statistics, just like Ola's IndexOptimize procedure; you just need to turn it on.

Updating statistics applies to on-premises, IaaS, Azure SQL Database, and Azure SQL Managed Instance.

Index Maintenance

Performing index maintenance by removing fragmentation from your indexes is still important. Some retired documentation from Microsoft stated that index fragmentation can have a negative impact of 13-460%, depending on the size of the environment and the level of fragmentation. While hardware such as intelligent SANs, solid state disks, and other advancements have helped speed things up, wasted space in an index can translate to wasted space in the buffer pool, as well as wasted I/O.

Fragmentation occurs through regular operations such as inserts, updates, and deletes. To remediate this, proper index maintenance of rebuilding or reorganizing your indexes is needed. I again turn to Ola Hallengren, for his IndexOptimize script. Ola's script provides the ability to specify whether to rebuild or reorganize based on the level of fragmentation and a minimum page count. Many third-party tools offer the same logic. SQL Server database maintenance plans prior to SQL Server 2016 only allowed you to rebuild or reorganize all indexes; beginning with SQL Server 2016, you can specify similar logic based on fragmentation levels. Don't forget those statistics, though, if you are using smart logic based on fragmentation levels.
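
If you want to see where you stand before scheduling anything, a sketch like the following reports fragmentation for indexes in the current database; LIMITED mode keeps the scan cheap, and the page-count and fragmentation filters are just common starting points, not hard rules:

    -- Fragmentation for indexes with a meaningful number of pages
    SELECT OBJECT_NAME(ips.[object_id]) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.[object_id] = ips.[object_id]
     AND i.index_id = ips.index_id
    WHERE ips.page_count > 1000
      AND ips.avg_fragmentation_in_percent > 10
    ORDER BY ips.avg_fragmentation_in_percent DESC;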

I like Ola's script and third-party tools that log to a table. I can then query the table to see if I have any index hot spots where fragmentation is constantly occurring at high levels, and troubleshoot why fragmentation is so prevalent and whether anything can be done about it.

There are exceptions to every rule or best practice. Some patterns of data access lead to constant fragmentation. The cost of constantly rebuilding/reorganizing those tables may not be worth it and can be excluded from maintenance. Those situations should be evaluated on a case by case basis.

This applies to on-premises, IaaS, Azure SQL Database, and Azure SQL Managed Instance.

MAXDOP

I find that max degree of parallelism and cost threshold for parallelism are typically left at the default values on client servers. For MAXDOP, the default value is zero, which means an 'unlimited' number of CPUs can be used to execute a parallel region of a query (technically up to 64 processors, unless you enable a trace flag to use more).

A decade ago, when processors had lower core counts, this value was acceptable. Today, with high core density and multi-socket servers, an unlimited number of CPUs for parallelism isn’t so good. Microsoft has given guidance on what values to use for MAXDOP.

If you are on SQL Server 2008 – SQL Server 2014, for a single NUMA node with less than 8 logical processors, keep MAXDOP at or below the number of logical processors. If you have more than 8 logical processors, keep MAXDOP at 8. If you have multiple NUMA nodes with less than 8 logical processors per NUMA node, keep MAXDOP at or below the number of logical processors per NUMA node. Greater than 8, keep MAXDOP at 8.

SQL Server 2016 introduced soft-NUMA nodes. During service startup, if the Database Engine detects more than 8 physical cores per NUMA node or socket, soft-NUMA nodes are created automatically. The engine takes care of placing logical processors from the same physical core into different soft-NUMA nodes. For that reason, we have slightly different guidance for MAXDOP for SQL Server 2016 onwards.

If you are on SQL Server 2016 and up, for a single NUMA node with less than 16 logical processors, keep MAXDOP at or below the number of logical processors. If you have more than 16 logical processors, keep MAXDOP at 16. If you have multiple NUMA nodes with less than 16 logical processors per NUMA node, keep MAXDOP at or below the number of logical processors per NUMA node. Greater than 16, keep MAXDOP at half the number of logical processors per NUMA node, with a MAX value of 16.

If you are mostly virtualized on machines with 8 or fewer logical processors with a default MAXDOP, you're probably OK. If you have big physical hardware with defaults, then you should look at optimizing MAXDOP.

All the figures above are guidelines, not hard truths. Workloads vary, and you should consider your own workload when determining the most optimal value.

Configuring MAXDOP applies to on-premises, IaaS, and Azure SQL Managed Instance. However, there is a database scoped configuration that can be applied per database starting with SQL Server 2016, and this applies to Azure SQL Database.
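
As an illustration, the instance-wide setting and the database scoped equivalent look like the following; the value of 8 is only an example, so pick the number the guidance above yields for your NUMA layout:

    -- Instance-level setting (on-premises, IaaS, Managed Instance)
    -- (requires 'show advanced options' to be enabled)
    EXEC sys.sp_configure N'max degree of parallelism', 8;
    RECONFIGURE;

    -- Per-database setting (SQL Server 2016+ and Azure SQL Database)
    ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 8;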

Cost Threshold for Parallelism

Cost threshold for parallelism has a default value of 5. The history of this number goes back to the early days of SQL Server and the workstation that workload testing was performed on. With modern hardware, the cost estimation of 5 is outdated. Testing has shown that increasing the number from 5 to a higher value will keep shorter-running queries from having a parallel plan. I tend to recommend increasing this value to a higher number after examining the Plan Cache. In many cases I end up starting with a value of 25 and then monitor further and adjust from there, if needed. For more information about tuning cost threshold for parallelism, Jonathan Kehayias wrote: Tuning ‘cost threshold for parallelism’ from the Plan Cache.
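
Changing the value is straightforward; the 25 below matches the starting point mentioned above and should still be validated against your own plan cache:

    -- Raise cost threshold for parallelism from the default of 5
    -- (requires 'show advanced options' to be enabled)
    EXEC sys.sp_configure N'cost threshold for parallelism', 25;
    RECONFIGURE;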

This applies to on-premises, IaaS, and Azure SQL Managed Instance.

SQL Server Agent Alerts

Everyone should be leveraging SQL Agent alerts unless they have a third-party application monitoring for the same error conditions. Configuring alerts is easy and free, and having them configured will give you critical information when your servers are having problems.

I wrote an article titled SQL Server Agent Alerts, providing step-by-step instructions on how to create alerts for severity 19-25 errors and error 825. Enabling these alerts is easy: enable Database Mail, create a mail operator, and then create the alerts. This can be accomplished using the GUI or with T-SQL. I encourage everyone to script out this process using T-SQL and make it part of your standard server build.
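
The referenced article walks through the full script; the abbreviated sketch below shows the shape of it for one severity level and error 825, and it assumes Database Mail is configured and an operator named 'DBA Team' already exists:

    -- Alert on severity 19 errors and notify an existing operator
    EXEC msdb.dbo.sp_add_alert
        @name = N'Severity 019',
        @severity = 19,
        @delay_between_responses = 60;
    EXEC msdb.dbo.sp_add_notification
        @alert_name = N'Severity 019',
        @operator_name = N'DBA Team',
        @notification_method = 1;   -- 1 = email

    -- Alert on error 825 (read-retry) regardless of severity
    EXEC msdb.dbo.sp_add_alert
        @name = N'Error 825',
        @message_id = 825,
        @delay_between_responses = 60;
    EXEC msdb.dbo.sp_add_notification
        @alert_name = N'Error 825',
        @operator_name = N'DBA Team',
        @notification_method = 1;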

This applies to on-premises, IaaS, and Azure SQL Managed Instance.

Summary

As you can see, there are many settings that should be modified from the defaults after installing SQL Server. This is not a comprehensive list; however, it does cover many of the more critical and performance-impacting issues I find, which I have lumped under my "SQL Server mishaps" category.

Azure SQL Database Performance Tuning Options
https://sqlperformance.com/2019/03/azure/azure-sql-database-performance-tuning-options (March 18, 2019)

Tim Radney explains some of the nuances of performance tuning your workloads in Azure SQL Database and Azure SQL Database Managed Instance.

Azure SQL Database is Microsoft’s database-as-a-service offering that offers a tremendous amount of flexibility and security and, as part of Microsoft’s Platform-as-a-Service, gets to take advantage of additional features. Since Azure SQL Database is database-scoped, there are some big differences when it comes to performance tuning.

Tuning the Instance

Many instance-level items that you have been used to configuring on full installations are off limits. Some of these items include:

  • Setting min and max server memory
  • Enabling optimize for ad hoc workloads
  • Changing cost threshold for parallelism
  • Changing instance-level max degree of parallelism
  • Optimizing tempdb with multiple data files
  • Trace flags

Don’t be too upset about some of these. The ALTER DATABASE SCOPED CONFIGURATION statement allows quite a few configuration settings at the individual database level. This was introduced with Azure SQL Database and in SQL Server beginning with SQL Server 2016. Some of these settings include:

  • Clear procedure cache
  • Setting the MAXDOP to a value other than zero
  • Set the query optimizer cardinality estimation model
  • Enable or disable query optimization hotfixes
  • Enable or disable parameter sniffing
  • Enable or disable the identity cache
  • Enable or disable a compiled plan stub to be stored in cache when a batch is compiled for the first time.
  • Enable or disable collection of execution statistics for natively compiled T-SQL modules.
  • Enable or disable online by default options for DDL statements that support the ONLINE=ON/OFF syntax.
  • Enable or disable resumable by default options for DDL statements that support the RESUMABLE=ON/OFF syntax.
  • Enable or disable the auto-drop functionality of global temporary tables

As you can see from the list of scoped configurations, you have a lot of control and precision for fine-tuning specific behaviors for individual databases. For some customers, the limitations for instance-level control may have negative impact, while others will see it as a benefit.
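
A few of those settings, expressed as T-SQL against the current database, look like this; the specific toggles shown are just examples drawn from the list above:

    -- Clear the procedure cache for this database only
    ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;

    -- Use the legacy (pre-2014) cardinality estimation model for this database
    ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

    -- Disable parameter sniffing for this database
    ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING = OFF;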

For companies that have a database per customer and need complete isolation, that is built into Azure SQL Database. For those who need the instance-level capabilities of SQL Server but would like to take advantage of Microsoft's PaaS offering, there is Azure SQL Managed Instance, which is instance-scoped. The goal there is to have 100% surface-area compatibility with SQL Server, so you can set min and max server memory, enable optimize for ad hoc workloads, and change both MAXDOP and cost threshold for parallelism. Tempdb on a Managed Instance already has multiple files, but you can add more and increase the default size. In many ways, it truly feels like a full install of SQL Server.

Query Tuning

Another difference between Azure SQL Database and SQL Server is that Query Store is enabled by default in Azure SQL Database. You can turn Query Store off, but then you limit the Intelligent Performance tools in the Azure Portal that use it. Query Store is a feature that provides insight into query performance and plan choice. Query Store also captures a history of queries, plans, and runtime statistics so that you can review what is going on. Do you want to know which query has the highest recompile time, execution time, execution count, CPU usage, memory usage, the most physical reads/writes, and more? Query Store has that information. For SQL Server, you must enable this feature per database. If you are new to Query Store, my colleague Erin Stellato has a three hour course on Pluralsight that will help you get started.
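
On SQL Server, enabling Query Store per database is a single statement; the database name below is just a placeholder:

    -- Enable Query Store (already on by default in Azure SQL Database)
    ALTER DATABASE [YourDatabase] SET QUERY_STORE = ON;
    ALTER DATABASE [YourDatabase] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);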

The Intelligent Performance tools category has four features. Firstly, performance overview provides a summary of your overall database performance by listing the top 5 queries by CPU consumption, any recommendations from automatic tuning, tuning activity, and the current automatic tuning settings. This landing page gives you a quick glimpse into your performance.

Secondly, the performance recommendations option will list out any current recommendations for index creations or if any indexes should be dropped. If any recent actions have been completed, you’ll also see the history.

Thirdly, Query Performance Insight is where you can find deeper insight into your resource consumption by viewing the top 5 queries by CPU, Data I/O, or Log I/O. The top 5 queries are color-coded so you can quickly see each one's percentage of overall consumption. You can click on the query id to get more details, including the SQL text. There is also a long-running queries tab. I really like that Microsoft has included a feature like this in the Azure Portal at no cost; it provides value by giving customers a portal to see the top offending queries. What I find challenging is the lack of an overall baseline for comparing day to day, week to week, or against the previous month. However, for a quick analysis and overview, Query Performance Insight is helpful.

The final feature in this category is automatic tuning. This is where you can configure the force plan, create index, and drop index settings. You can force it on, off, or choose to inherit from the server. Force plan allows Azure to pick what it feels would be the better of the execution plans for regressed queries. This feature also exists in SQL Server 2017 Enterprise Edition as automatic plan correction. Some DBAs get nervous when they hear about the automatic tuning features, as they fear it may replace the need for DBAs in the future. I always like to ask the question, “How much time each day do you spend proactively tuning queries?”. The overwhelming response is that people can actually spend very little time proactively tuning, and most respond that the only time they really ‘tune’ is after a code release or when users start complaining.

In addition to the built-in tools and having the value of using Query Store, DMVs are also readily available. Glenn Berry has an entire collection of scripts just for Azure SQL Database that you can utilize. One particular DMV I want to call out is sys.dm_os_wait_stats. This will pull from the server level, so if you really want to look at wait stats for the database level, you’ll need to use sys.dm_db_wait_stats instead.
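
For example, a quick look at the top database-scoped waits might look like the sketch below; the handful of wait types excluded here is abbreviated for illustration, and a real filter list would be much longer:

    -- Top waits for the current Azure SQL Database
    SELECT TOP (10)
           wait_type,
           wait_time_ms,
           signal_wait_time_ms,
           waiting_tasks_count
    FROM sys.dm_db_wait_stats
    WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                            N'XE_TIMER_EVENT', N'BROKER_TASK_STOP')
    ORDER BY wait_time_ms DESC;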

Hardware – Scaling

Another area of consideration when looking at performance with Azure SQL Database is the underlying hardware. Azure SQL Database is priced by Database Transaction Units (DTUs) and vCores. DTUs are a blended measure of CPU, memory, and I/O, and come in three tiers: Basic, Standard, and Premium. Basic is only 5 DTUs, Standard ranges from 10-3,000 DTUs, and Premium ranges from 125-4,000 DTUs. For the vCore-based tiers we have General Purpose and Business Critical, ranging from 1-80 vCores.

In the DTU model, Basic should be considered for development and testing. It only has 7-day backup retention, so I wouldn't consider it viable for any production data. Standard is good for low, medium, and high CPU demand with moderate to low I/O demand. The Basic and Standard tiers offer 2.5 IOPS per DTU with 5 ms (read) and 10 ms (write) latency. The Premium tier is for medium to high CPU demand and high I/O, offering 48 IOPS per DTU with 2 ms (read/write) latency. The Premium tier has storage that is orders of magnitude faster than Standard. In the vCore model, you have Gen 4 processors that offer 7 GB of RAM per physical core and Gen 5 processors that offer 5.1 GB of RAM per logical core. From an I/O perspective, General Purpose offers 500 IOPS per vCore with a max of 7,000; Business Critical offers 5,000 IOPS per core with a max of 200,000.

Summary

Azure SQL Database is great for those systems that need database isolation, while Azure SQL Managed Instance is great for those environments where you need instance-level compatibility (cross-database query support). When you need to tune Azure SQL Database, you must do things at the database level since instance-level options are off limits, so database scoped configuration settings are your fine-tuning options. When troubleshooting poorly performing queries, you have some built-in tools that help, including Query Store, and most of your regular tuning scripts will work. You may find you still need more, such as baselines, more historical data, and the ability to create advisory conditions to help you manage your workloads. This is where powerful monitoring solutions like SentryOne DB Sentry can help.
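
As a minimal sketch of those database scoped fine-tuning options (the values shown are illustrative, not recommendations):

-- Review the current database scoped configuration values
SELECT name, value, value_for_secondary
FROM sys.database_scoped_configurations;

-- Example adjustments; pick values appropriate to your workload
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = OFF;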

When all else fails, or your workload has simply grown past your current hardware resources, scale to a higher tier.

The post Azure SQL Database Performance Tuning Options appeared first on SQLPerformance.com.

]]>
End of support for SQL Server 2008 & 2008 R2 https://sqlperformance.com/2018/10/system-configuration/end-of-support-sql-server-2008-r2 https://sqlperformance.com/2018/10/system-configuration/end-of-support-sql-server-2008-r2#comments Fri, 12 Oct 2018 10:00:29 +0000 https://sqlperformance.com/?p=9541 Tim Radney talks about some of the things you can do to prepare for the end of extended support for SQL Server 2008 and SQL Server 2008 R2.

The post End of support for SQL Server 2008 & 2008 R2 appeared first on SQLPerformance.com.

]]>
Image credit: Benny Lin, https://www.flickr.com/photos/benny_lin/191393608/

If you are running SQL Server 2008 or SQL Server 2008 R2, what does July 9th, 2019 mean for you? With both of these versions of SQL Server reaching the end of their support lifecycle together, you will no longer be able to get critical security updates. This can cause serious security and compliance issues for your organization.

When these versions of SQL Server were released, they came with 10 years of support: 5 years of Mainstream Support and 5 years of Extended Support. If your organization still has SQL Server 2008/2008 R2 in production, how is it planning to address the risk? For organizations that are heavily regulated, this is a big concern.

You need to choose how you’re going to migrate and where you’re going to migrate to, and then make sure you’re not going to hit any roadblocks along the way.

Migration Assessment Tools

If you are planning an upgrade from SQL Server 2008/2008 R2, Microsoft has made things much easier to test and validate your environment. Numerous tools exist that can assist with migration assessments and even handle migration tasks, and they’re all slightly different. These tools include:

  • Data Migration Assistant
  • Microsoft Assessment and Planning Toolkit
  • Azure Database Migration Service
  • Database Experimentation Assistant

The Data Migration Assistant helps you to upgrade to a modern data platform. It does this by detecting compatibility issues that can impact functionality on the newer version of SQL Server and makes recommendations for performance and reliability improvements for the new environment. Your source can be SQL Server 2005 and later, with a target of SQL Server 2012 and later or Azure SQL Database.

The Microsoft Assessment and Planning Toolkit has been around for many years and is often referred to as the MAP Tool. It’s great for doing an inventory of your current environment to find where SQL Server (and other applications) exist.

The Azure Database Migration Service integrates some of the functionality of existing tools and services to provide customers with a comprehensive solution for migrating to Azure. The tool generates assessment reports that provide recommendations to guide you through any changes required prior to performing a migration. This service currently requires a VPN or ExpressRoute connection.

Finally, the Database Experimentation Assistant is a new A/B testing solution for SQL Server Upgrades and it’s a tool you should become familiar with. It leverages Distributed Replay to capture a workload and replay it against a target SQL Server. This can be used to test hardware changes or version differences of SQL Server. You can capture workloads from SQL Server 2005 and up.

Migration Options

On-premises upgrade: One of the easiest migration methods is to upgrade to a newer version of SQL Server. In this case, you have SQL Server 2012, 2014, 2016, or 2017 to pick from. I encourage clients to upgrade to the latest version that they can. SQL Server 2012 is already out of Mainstream Support and SQL Server 2014 goes out of Mainstream Support on July 9th, 2019. Upgrading can be very time consuming and costly to organizations due to all the planning and testing involved, so moving to the latest version can increase the time before the next upgrade. There are also numerous performance and functionality improvements in SQL Server 2016 and 2017 that make migrating to SQL Server 2012 or 2014 a very poor choice at this time.

A common approach for on-premises upgrades is to build new and migrate, regardless of a physical or virtual environment. By building new, you can restore your databases and conduct numerous rounds of testing and validation to make sure everything works as expected before moving production.

Upgrade and migrate to an Azure VM: For organizations that are looking to migrate to the cloud, Azure Infrastructure as a Service (IaaS) is a great option. Running SQL Server on an Azure VM is much like on-premises. You specify the size of the VM (number of vCPUs and memory) and configure your storage for your I/O and size requirements. You are still responsible for supporting the OS and SQL Server for configuration and patching. Azure IaaS gives you the ability to easily scale your workloads by scaling the size of your virtual machine up or down as your workload needs change, as well as take advantage of Azure Active Directory integration, threat detection, and many other Azure benefits.

Migrate to Azure SQL Database: Another option you have is to migrate to Azure SQL Database. Azure SQL Database can be thought of as a Database as a Service and is part of Microsoft’s Platform as a Service (PaaS). Azure SQL Database functionality is database scoped, which means certain things such as cross database queries, SQL Server Agent, Database Mail, and more are not available. However, many customers that have applications that utilize a single database have been able to migrate to Azure SQL Database with minimal effort. You can quickly test for compatibility with Azure SQL Database by using the Data Migration Assistant. With Azure SQL Database, you can size your databases by DTU (Database Transaction Units) or vCores individually, or group databases into an Elastic Pool. Azure SQL Database allows you to scale your resources up and down with minimal effort and downtime.

Migrate to Azure SQL Managed Instance: A new option (as of 2018) is to migrate to Azure SQL Managed Instance. This is a new product that became generally available on October 1st for the General Purpose tier. Managed Instance was built using the instance-level programming model. This means that functionality we are used to with the full version of SQL Server is supported. The goal of Managed Instance is to have 100% surface area compatibility with on-premises. All databases in the instance are on the same server, so cross-database queries are supported, as are Database Mail, SQL Server Agent, Service Broker, and much more. There are two pricing tiers: General Purpose, which includes a non-readable secondary for HA, and Business Critical, which has two non-readable secondaries and a readable secondary. Managed Instance is part of Microsoft’s PaaS offering, so you get all the built-in features and functionality of PaaS.

Move as-is to Azure Virtual Machines: Microsoft is offering three years of Extended Security Updates at no additional charge if you move your SQL 2008/SQL 2008 R2 instances to an Azure VM. The goal is to give you a bit more time to upgrade to a newer version of SQL Server when you are ready.

Pay to Stay: This isn’t a migration option, but you do have an option to purchase up to three years of Extended Security Updates. There are restrictions around this option. You must have active Software Assurance for those instances or Subscription licenses under an Enterprise Agreement. If this applies to you, then this option can buy you more time to plan and migrate off of SQL Server 2008/2008 R2.

Migration Best Practices

When performing any migration or upgrade, there are certain things you need to be aware of. First, you need baselines and I can’t stress this enough. Anytime you make a change to an environment, you need to be able to measure how that change impacts the environment. Knowing key performance metrics for your environment can help you when troubleshooting any perceived impact. You can manually collect these metrics using perfmon and DMVs or invest in a performance monitoring platform. I wrote about both techniques in more detail in a previous post, and right now you can get an extended, 45-day evaluation of SentryOne. Having baseline metrics for things like CPU utilization, memory consumption, disk metrics, and more can quickly let you know if things look better or worse after an upgrade or migration.

You should also note your configuration options within your instance. Many times, I’ve been asked to look at a SQL Server instance after an upgrade or migration and found that most of the default settings are in use. If the old system is still available, I’m able to query it, get the previous non-default values that were in place, and apply those to the new environment to get back to a known configuration. It is always good to review sys.configurations on your production server and consider making similar changes in your new environment (cost threshold for parallelism, max degree of parallelism, optimize for ad hoc workloads, and more). Notice I wrote ‘consider’. If your core count or memory is different on the new server, you need to configure the settings taking the new server’s size into account.
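
A minimal sketch of that review; the list of settings below is just a starting point and not exhaustive:

-- Capture the instance-level settings to evaluate (not copy blindly) on the new server
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN (N'cost threshold for parallelism',
               N'max degree of parallelism',
               N'optimize for ad hoc workloads',
               N'max server memory (MB)')
ORDER BY name;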

What is your backout plan if things go wrong? Do you have proper backups you can go back to? In most cases with an upgrade or migration, you are moving to a new VM or physical server. Your failback may be to move back to the old server. If you have had data change in the new version of SQL Server, your failback is much more complicated. You cannot restore a SQL Server database backup from a newer version of SQL Server to an older version.

Conclusion

If you are still using SQL Server 2008 or SQL Server 2008 R2, you have a few options available to you to stay in compliance after July 9th, 2019. To stay on SQL Server 2008 or SQL Server 2008 R2, you can purchase Extended Security Updates or move to an Azure virtual machine if you qualify. If you can upgrade, you can migrate to a supported version of SQL Server on-premises or on an Azure VM or consider migrating to a managed solution such as Azure SQL Database or Azure SQL Managed Instance.

The post End of support for SQL Server 2008 & 2008 R2 appeared first on SQLPerformance.com.

]]>
https://sqlperformance.com/2018/10/system-configuration/end-of-support-sql-server-2008-r2/feed 1
The Importance of Baselines https://sqlperformance.com/2018/05/baselines/importance-of-baselines https://sqlperformance.com/2018/05/baselines/importance-of-baselines#comments Thu, 24 May 2018 12:29:18 +0000 https://sqlperformance.com/?p=9371 Tim Radney goes into detail about why baselines are important, and how metrics you collect now can help you solve future problems.

The post The Importance of Baselines appeared first on SQLPerformance.com.

]]>
As a consultant working with SQL Server, many times I’m asked to look at a server that seems like it’s having performance issues. While performing triage on the server, I ask certain questions, such as: what is your normal CPU utilization, what are your average disk latencies, what is your normal memory utilization, and so on. The answer is usually, “we don’t know” or “we aren’t capturing that information regularly.” Not having a recent baseline makes it very difficult to know what abnormal behavior looks like. If you don’t know what normal behavior is, how do you know for sure if things are better or worse? I often use the expressions, “if you aren’t monitoring it, you can’t measure it,” and, “if you aren’t measuring it, you can’t manage it.”

From a monitoring perspective, at a minimum, organizations should be monitoring for failed jobs such as backups, index maintenance, DBCC CHECKDB, and any other jobs of importance. It is easy to set up failure notifications for these; however, you also need a process in place to make sure the jobs are running as expected. I’ve seen jobs that get hung and never complete, and a failure notification won’t trigger an alarm since the job never succeeds or fails.
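
One hedged way to catch a hung job is to look for Agent activity rows that have a start time but no stop time. This is a minimal sketch; the 12-hour threshold is arbitrary, and a production version should also restrict the check to the most recent Agent session in msdb.dbo.syssessions:

-- Jobs that appear to have been running for more than 12 hours
SELECT j.name AS job_name,
       ja.start_execution_date
FROM msdb.dbo.sysjobactivity AS ja
INNER JOIN msdb.dbo.sysjobs AS j
        ON j.job_id = ja.job_id
WHERE ja.start_execution_date IS NOT NULL
  AND ja.stop_execution_date IS NULL
  AND ja.start_execution_date < DATEADD(HOUR, -12, GETDATE());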

For a performance baseline, there are several key metrics that should be captured. I’ve created a process that I use with clients to capture key metrics on a regular basis and store those values in a user database. My process is simple: a dedicated database with stored procedures, built from common scripts, that insert the result sets into tables. I have SQL Agent jobs to run the stored procedures at regular intervals and a cleanup script to purge data older than X days. (A minimal sketch of this collection pattern follows the list of metrics below.) The metrics I always capture include:

Page Life Expectancy: PLE is probably one of the best ways to gauge if your system is under internal memory pressure. Most systems have PLE values that fluctuate during normal workloads. I like to trend these values to know what the minimum, average, and maximum values are. I like to try to understand what caused PLE to drop during certain times of the day to see if those processes can be tuned. Many times, someone is doing a table scan and flushing the buffer pool. Being able to properly index those queries can help. Just make sure you’re monitoring the right PLE counter – see here.

CPU Utilization: Having a baseline for CPU utilization lets you know if your system is suddenly under CPU pressure. Often when a user complains of performance issues, they’ll observe that CPU looks high. For example, if CPU is hovering around 80% they might find that concerning; however, if CPU was also 80% during the same time in previous weeks when no issues were being reported, the likelihood that CPU is the issue is very low. Trending CPU isn’t only for capturing when CPU spikes and stays at a consistently high value. I have numerous stories of being brought into a severity-one conference bridge because there was an issue with an application. Being the DBA, I wore the hat of “Default Blame Acceptor.” When the application team said there was an issue with the database, it was on me to prove that it wasn’t; the database server was guilty until proven innocent. I vividly recall an incident where the application team was confident that the database server was having issues because users could not connect. They had read on the internet that SQL Server could be suffering from thread pool starvation if it was refusing connections. I jumped on the server and started looking at resources and what processes were currently running. Within a few minutes I reported back that the server in question was very bored. Based upon our baseline metrics, CPU was typically 60% yet it was idling around 20%, page life expectancy was noticeably higher than normal, there was no locking or blocking happening, I/O looked great, there were no errors in any logs, and the session counts were about 1/3 of their normal count. I then made the comment, “It appears users are not even reaching the database server.” That got the network folks' attention, and they realized that a change they had made to the load balancer wasn’t working properly: over 50% of connections were being routed incorrectly and never making it to the database server. Had I not known what the baseline was, it would have taken us a lot longer to reach the resolution.

Disk I/O: Capturing disk metrics is very important. The DMV sys.dm_io_virtual_file_stats is cumulative since the last server restart. Capturing your I/O latencies over a time interval will give you a baseline of what is normal during that time. Relying on the cumulative value can give you skewed data from after business hour activities or long periods where the system was idle. Paul discussed that here.

Database file sizes: Having an inventory of your databases that includes file size, used size, free space, and more can help you forecast database growth. Often I am asked to forecast how much storage would be needed for a database server over the coming year. Without knowing the weekly or monthly growth trend, I have no way of intelligently coming up with a figure. Once I start tracking these values I can properly trend this. In addition to trending, I could also find when there was unexpected database growth. When I see unexpected growth and investigate, I usually find that someone either duplicated a table to do some testing (yes, in production!) or did some other one-off process. Tracking this type of data, and being able to respond when anomalies occur, helps show that you are proactive and watching over your systems.

Wait statistics: Monitoring wait statistics can help you start figuring out the cause of certain performance issues. Many new DBAs get concerned when they first start researching wait statistics and fail to realize that waits always occur, and that is just the way that SQL Server’s scheduling system works. There are also a lot of waits that can be considered benign, or mostly harmless. Paul Randal excludes these mostly harmless waits in his popular wait statistics script. Paul has also built a vast library of the various wait types and latch classes with descriptions and other information about troubleshooting the waits and latches.
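
As mentioned above, here is a minimal sketch of what one of these collectors can look like. The table and procedure names are made up for illustration, only two of the metrics are shown, and a real version would add the remaining metrics, error handling, and the cleanup logic:

-- Hypothetical repository tables for baseline data
CREATE TABLE dbo.BaselinePLE
(
    CaptureTime  DATETIME2(0)  NOT NULL DEFAULT SYSDATETIME(),
    ObjectName   NVARCHAR(128) NOT NULL,
    InstanceName NVARCHAR(128) NOT NULL,
    CounterValue BIGINT        NOT NULL
);

CREATE TABLE dbo.BaselineFileStats
(
    CaptureTime    DATETIME2(0) NOT NULL DEFAULT SYSDATETIME(),
    DatabaseName   SYSNAME      NOT NULL,
    FileId         INT          NOT NULL,
    NumReads       BIGINT       NOT NULL,
    IoStallReadMs  BIGINT       NOT NULL,
    NumWrites      BIGINT       NOT NULL,
    IoStallWriteMs BIGINT       NOT NULL
);
GO

CREATE PROCEDURE dbo.CaptureBaselineMetrics
AS
BEGIN
    SET NOCOUNT ON;

    -- Page Life Expectancy per NUMA node (Buffer Node rather than just Buffer Manager)
    INSERT dbo.BaselinePLE (ObjectName, InstanceName, CounterValue)
    SELECT RTRIM([object_name]), RTRIM([instance_name]), cntr_value
    FROM sys.dm_os_performance_counters
    WHERE [object_name] LIKE N'%Buffer Node%'
      AND counter_name LIKE N'Page life expectancy%';

    -- Cumulative file-level I/O; latencies are derived later by diffing snapshots
    INSERT dbo.BaselineFileStats (DatabaseName, FileId, NumReads, IoStallReadMs, NumWrites, IoStallWriteMs)
    SELECT DB_NAME(vfs.database_id), vfs.file_id,
           vfs.num_of_reads, vfs.io_stall_read_ms,
           vfs.num_of_writes, vfs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs;
END;

A SQL Agent job would then call dbo.CaptureBaselineMetrics every few minutes, and a second job would purge rows older than your retention window.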

I’ve documented my data collection process, and you can find the code on my blog. Depending on the situation and types of issues a client may be having, I may also want to capture additional metrics. Glenn Berry blogged about a process he put together that captures Average Task Count, Average Runnable Task Count, Average Pending I/O Count, SQL Server process CPU utilization, and Average Page Life Expectancy across all NUMA nodes. A quick internet search will turn up several other data collection processes that people have shared, even the SQL Server Tiger Team has a process that utilizes T-SQL and PowerShell.

Using a custom database and building your own data collection package is a valid solution for capturing a baseline, but most of us are not in the business of building full-on SQL Server monitoring solutions. There is much more that would be helpful to capture, things like long running queries, top queries and stored procedures based on memory, I/O, and CPU, deadlocks, index fragmentation, transactions per second, and much more. For that, I always recommend that clients purchase a third-party monitoring tool. These vendors specialize in staying up to speed on the latest trends and features of SQL Server so that you can focus your time on making sure SQL Server is as stable and fast as possible.

Solutions like SQL Sentry (for SQL Server) and DB Sentry (for Azure SQL Database) capture all these metrics for you, and allow you to easily create different baselines. You can have a normal baseline, month end, quarter end, and more. You can then apply the baseline and see visually how things are different. More importantly, you can configure any number of alerts for various conditions and be notified when metrics exceed your thresholds.

SQL Sentry: Last week's baseline applied to several SQL Server metrics on the SQL Sentry dashboard.

DB Sentry: The previous period's baseline applied to several Azure SQL Database metrics on the DB Sentry dashboard.

For more information on baselines in SentryOne, see these posts over on their team blog, or this 2 Minute Tuesday video. Interested in downloading a trial? They've got you covered there, too.

The post The Importance of Baselines appeared first on SQLPerformance.com.

]]>
https://sqlperformance.com/2018/05/baselines/importance-of-baselines/feed 2
Bridging the Azure gap : Managed Instances https://sqlperformance.com/2018/02/azure/bridging-the-azure-gap-managed-instances https://sqlperformance.com/2018/02/azure/bridging-the-azure-gap-managed-instances#comments Mon, 05 Feb 2018 13:09:40 +0000 https://sqlperformance.com/?p=9175 Tim Radney of SQLskills gives an overview of Managed Instances and explains how they bridge the gap between on-premises SQL Server and Azure SQL Database.

The post Bridging the Azure gap : Managed Instances appeared first on SQLPerformance.com.

]]>
If you’ve been considering moving your SQL Server environment to Azure, you’ve only had a couple of options. First you could utilize the PaaS solution of Azure SQL Database and move a single database or use an elastic pool. Your other option has been an IaaS solution using Azure Virtual machines running Microsoft SQL Server. We will soon have a third option called a SQL Database Managed Instance.

Managed Instances bridge the gap between on-prem SQL Server and Azure SQL Database

Managed Instances were first introduced at the Microsoft Build conference in Spring 2017, and so far the preview has been limited to a small number of customers and consultants. Managed Instances can be considered a hybrid between a full version of SQL Server and Azure SQL Database. Single and elastic databases are built upon a database-scoped programming model and Managed Instances are built on an instance-scoped programming model. This makes Managed Instances more compatible with on-premises SQL Server.

Managed Instances provide much more of a feel of an on-premises SQL Server, yet are built on the same infrastructure as Azure SQL Database. What sets it apart from Azure SQL Database is that it presents an entire SQL Server instance to the customer. In Azure SQL Database, you configure a server, which is really a container, and then can have multiple databases in that server, but they can’t easily talk to one another. With Managed Instances, all databases within the instance are on the same actual SQL Server, so you have full support for cross-database queries. This is a huge feature for many applications that otherwise were not a good fit for Azure SQL Database, and I think that’s going to allow many more SQL Server applications to move into the cloud.
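
For example, a three-part-name query like the one below, which isn’t possible between separate Azure SQL Databases (without the elastic query feature), just works inside a Managed Instance; the database and table names here are hypothetical:

-- Cross-database join between two databases on the same Managed Instance
SELECT o.OrderID, c.CustomerName
FROM SalesDb.dbo.Orders AS o
INNER JOIN CrmDb.dbo.Customers AS c
        ON c.CustomerID = o.CustomerID;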

Functionality that is instance-level is now supported. This includes things like global temp tables, SQL Server Agent, Service Broker, replication, SQL Audit, and Common Language Runtime (CLR). Managed instances can also support databases up to 35TB in size. Currently the largest capacity for an Azure SQL Database is 4TB in the top tier. I suspect this may change soon, and again, will open up the cloud for more applications.

Managed Instances also get to take advantage of all the features of the PaaS platform, to include automatic backups, threat detection, vulnerability assessments, high availability, geo-replication, database advisor, and much more. I’ve seen a presentation that was discussing how the automatic failover process works and learned that server-level objects are replicated to the failover instance. This means that things like logins and jobs – pain points for many of our environments today – are handled for you.

Over the past year, I have helped numerous customers migrate to Azure SQL Database, and one of the primary challenges is migrating the data. You can’t just perform a SQL Server backup and restore it to Azure SQL Database. I was very pleased to hear that with SQL Database Managed Instances, we can restore native SQL Server backups; however, you must use the backup-to-URL mechanism. This will make migrations to Managed Instances much easier. However, since Managed Instances are built on Azure SQL Database, this is a one-way ticket: you can’t back up your Managed Instance databases and restore them back to on-premises. If you ever decide to bring your database back on-premises or out of Managed Instances, you will have to export your data.
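
A minimal sketch of that migration path is below. The storage account, container, and SAS token are placeholders, and it assumes a source version recent enough to back up directly to URL with a SAS credential; on older versions you can instead upload an existing .bak file to blob storage and restore it from there:

-- On the source SQL Server: create a credential for the container, then back up to URL
CREATE CREDENTIAL [https://yourstorageaccount.blob.core.windows.net/backups]
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET   = '<SAS token, without the leading ?>';

BACKUP DATABASE [YourDatabase]
    TO URL = 'https://yourstorageaccount.blob.core.windows.net/backups/YourDatabase.bak'
    WITH COMPRESSION, CHECKSUM;

-- On the Managed Instance: create the same credential, then restore from the URL
RESTORE DATABASE [YourDatabase]
    FROM URL = 'https://yourstorageaccount.blob.core.windows.net/backups/YourDatabase.bak';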

Databases on Managed Instances are much more ready to migrate to Azure SQL Database

On the other hand, since they are built on the Azure SQL Database platform, the individual databases you put into a Managed Instance can be migrated to their own individual Azure SQL Databases. This makes a Managed Instance a perfect stepping stone, where you can work out the isolation complications that prevent you from migrating directly to PaaS.

I am curious about replication being supported. I haven’t been able to find out yet if a Managed Instance database can be a publisher or whether it can only be a subscriber, like an Azure SQL Database. If it can be a publisher, then that could be an effective way of migrating back to on-premises. I am really hoping that in the near future, we will have the ability to also restore native SQL Server backups to singleton Azure SQL Databases. It appears the technology is there, it just needs to be expanded for the existing PaaS environment.

Another interesting observation about Managed Instances is that since the technology is built on the Azure SQL Database model, the SQL Server version will follow that of Azure SQL Database. This might complicate things with vendor support. Many vendors will state that they certify their product on SQL Server version X. Even though Managed Instances will support almost all the features of SQL Server 2017, it will not use the same build version, so programmatic version checks will be complicated. Your best course of action here is to push back on the vendor, since Microsoft is unlikely to waver on this stance, and I have no doubt some of those conversations will be challenging.

Will vendors go through the effort of certifying their products on Managed Instances, or will this become an issue like we experienced with virtualization? In the early days of virtualization, many vendors stated they didn’t support their products running virtualized, however Microsoft fully supported Windows X and SQL Server X being virtualized. Hopefully we will see vendors getting on board and certifying their products on Managed Instances. I certainly see some SQL Server pioneers out there that will move to Managed Instances after their own testing.

Any time a customer is looking to migrate to the cloud, security is a big concern. Managed Instances offer VNET support with private IP addresses and VPN to on-premises networks. This can allow a client to protect their environment from the public internet and have full isolation.

I am excited about Managed Instances and really can’t wait until they are more widely available. For clients that would like a managed environment, but need a more feature-rich solution than a singleton or elastic Azure SQL Database, I feel that Managed Instances would be the perfect fit. There has been a gap between Azure SQL Database and SQL Server on an Azure VM, as many customers need more than Azure SQL Database offers, but SQL Server on an Azure VM is still more maintenance and responsibility than they want. Managed Instances really bridge that gap. They support much larger databases, allow for easier data migrations, allow for cross-database queries, and shouldn’t require any code changes since the platform is so highly compatible with on-premises SQL Server.

In summary, if your organization is considering a move to a hosted environment within the Azure SQL Database platform, you will be able to choose between individual Azure SQL Databases, elastic pools, or Managed Instances. Depending on your application needs, one of these solutions should be a great fit. Otherwise, you also have the option of running a traditional SQL Server instance on an Azure virtual machine, which offers nice features like managed backups, geo-replication, Azure Site Recovery, and so much more. Microsoft continues to invest in the Azure platform by delivering new products and features that their customers need, and the upcoming release of Managed Instances is continued proof of that focus. Stay tuned, as we’ve been promised a public preview in the very near future.

The post Bridging the Azure gap : Managed Instances appeared first on SQLPerformance.com.

]]>
https://sqlperformance.com/2018/02/azure/bridging-the-azure-gap-managed-instances/feed 6
New Azure SQL Database Standard Tier Sizes https://sqlperformance.com/2017/12/azure/new-standard-tier-sizes https://sqlperformance.com/2017/12/azure/new-standard-tier-sizes#comments Mon, 04 Dec 2017 14:08:56 +0000 https://sqlperformance.com/?p=9132 Tim Radney of SQLskills talks about the various Standard tiers available in Azure SQL Database - what they get you, and how much they cost.

The post New Azure SQL Database Standard Tier Sizes appeared first on SQLPerformance.com.

]]>
Azure SQL Database currently has three service tiers to choose from for your workload. These tiers consist of Basic, Standard, and Premium. Basic supports only one size of 5 DTUs. Premium starts at 125 DTUs and goes up to 4,000 DTUs. The Premium tier is the top tier; it is built for higher I/O workloads and provides lower latency per I/O, and an order of magnitude more IOPS per DTU, than the Standard tier.

Prior to August 2017, the Standard tier only supported DTU sizes between 15 and 100 DTUs. Currently available in preview are new performance levels and storage add-ons that offer price optimization benefits for CPU-intensive workloads. With those, the Standard tier now supports up to 3,000 DTUs.

At this point, you may be asking yourself what, exactly, is a DTU? A DTU is a Database Transaction Unit and is a mixture of CPU, memory, and data and transaction log I/O. (Andy Mallon, @AMtwo, recently addressed this in "What the heck is a DTU?") You can hit your DTU limit by maxing out CPU, memory, or I/O.

Previously, the Standard tier only offered 4 levels: 15, 30, 50, and 100 DTUs, with a database size limit of 250GB on standard disk. If you had a database that was larger than 250GB but did not need more than 100 DTUs for CPU, memory, or I/O, you were stuck paying a Premium price just for database size. With the new changes, you can now have up to a 1TB database in the Standard tier; you just have to pay for the extra storage. Currently storage is being billed at $0.085/GB during the preview. Increasing from the included size of 250GB to 1TB adds 774GB at a cost of $65.79 per month.

The new Standard preview DTU sizes support 200, 400, 800, 1,600, and 3,000 DTU options. If you have a SQL Server database workload that is more CPU-bound than I/O, these Standard tier options have the potential to save you a lot of money; however, if your workload is I/O bound, the Premium tier is going to outperform the Standard tier.

I decided to try two different workloads to see how the Standard and Premium tiers compared to each other. I wanted to create a simple and reproducible test so that others can try to validate it for themselves. For my first test, I wanted to generate a healthy mix of CPU and I/O. I was hoping that I would be pushing more CPU than I/O, and be able to show that the expanded Standard tier would outperform a Premium tier with the same DTU size. I didn’t exactly get the results I was hoping for.

To setup this demo, I created a table with three GUID columns, inserted 1 million rows, and then updated two of the three columns with new IDs. The sample code is below:

CREATE TABLE dbo.TestTable
(
  Table_id UNIQUEIDENTIFIER DEFAULT NEWID(),
  Customer_id UNIQUEIDENTIFIER DEFAULT NEWID(),
  Cust_Name VARCHAR(40) DEFAULT CAST(NEWID() AS VARCHAR(40))
);

SET NOCOUNT ON;
GO

-- GO <n> repeats the preceding batch n times (an SSMS/sqlcmd feature),
-- so this inserts 1 million rows one at a time
INSERT INTO dbo.TestTable DEFAULT VALUES;
GO 1000000

CREATE CLUSTERED INDEX [ClustTestTable] ON [dbo].[TestTable]
(
  [Table_id] ASC,
  [Customer_id] ASC
);

SET STATISTICS TIME ON;

UPDATE TestTable
  SET Table_id = NEWID(), Customer_id = NEWID();

As I ran through the series of tests, performance steadily improved in the Standard tier until I got to the S12 option where, oddly, CPU and elapsed time increased. I ran the test multiple times and S12 was consistently 54 seconds. It is pretty clear from my first test that the Premium tier outperformed the Standard tier. For example, the S9 and P2 are closest in time; however, the DTU size for Standard is 1,600 compared to 250 for the P2. This test is more about the I/O capabilities. The chart below shows the size, DTU level, cost, CPU time, elapsed time, and time in seconds for each test:

As the tests were being executed, I observed in the monitor dashboard that data I/O and log I/O percentage were the driving force behind the DTU percentage. The following chart was from a run against an S4 database:

I then decided to try another series of tests that would be more CPU-heavy. For that test I used the following script:

SET STATISTICS TIME ON;

SELECT SUM(CONVERT(BIGINT, t1.object_id) 
         + CONVERT(BIGINT, t2.object_id) 
         + CONVERT(BIGINT, t3.object_id) 
         + CONVERT(BIGINT, t4.object_id))
  FROM sys.objects t1
  CROSS JOIN sys.objects t2
  CROSS JOIN sys.objects t3
  CROSS JOIN sys.objects t4;

What I observed in the monitor dashboard on this series of tests is that CPU percentage was the sole driver of DTU percentage. As I went through the series of tests in the Standard tier, the test seemed to plateau at roughly 27 seconds, and started at the S4 size. What struck me as odd is that an S4 at 200 DTU took 27 seconds to complete and the corresponding P2 at 250 DTU took 38 seconds; a P4 at 500 DTU was more comparable. If we look at the cost differential for this demo, an S4 during preview only cost $150.01, while a P4 cost $1,860; the S4 provides a cost savings of just over $1,700. Let’s imagine that a P2 at 250 DTUs performed like we had expected; a P2 costs $930 and would still cost $780 more than an S4.

The full results of all tests in the second demo are included in the following chart:

Unlike the first demo, this was 100% CPU-driven. I had tried to include one additional cross join, but the demo then took hours per session instead of minutes. For a future test I’ll try to come up with a few additional scenarios that push a more realistic OLTP workload; one that is higher CPU, and one that is more I/O bound, and then a decent blend of the two.

You can see from the graph below that, on this run against an S4 database, CPU spiked at nearly 50%, while DTU percentage matched exactly:

From the two different workloads that I tested, it is very evident that if you have any significant I/O workload, you will need the Premium tier, but if your workload is mostly CPU-bound without any significant I/O needs, the higher Standard tiers can provide you substantial savings over the Premium tier.

If you are considering a migration to an Azure SQL Database, the DTU calculator is a great place to start to get an idea of a starting point for sizing; however, at the time of writing, the DTU calculator doesn’t take into consideration the expanded Standard tier. What is great about the DTU calculator is that it will break out CPU, IOPs, and log utilization to let you know what the driving factor for the DTU level recommendation is. For example, I ran the last demo on a 4 vCPU, 4GB virtual machine, and the DTU calculator recommended a P2. When I chose to ‘view more details,’ I got the following messages.

Service Tier/Performance Level for CPU – Based solely on CPU utilization, we recommend you migrate your SQL Server workload to Premium – P2. This Service Tier/Performance Level should cover approximately 100.00 % of your CPU utilization.

Service Tier/Performance Level for Iops – Based solely on Iops utilization, we recommend you migrate your SQL Server workload to Basic. This Service Tier/Performance Level should cover approximately 89.92 % of your Iops utilization.

NOTE: There is approximately 10.08 % of your workload that falls into a higher Service Tier/Performance Level. After migrating your database to Azure, you should evaluate your database's performance using the guidance mentioned in the information section above.

Service Tier/Performance Level for Log – Based solely on Log utilization, we recommend you migrate your SQL Server workload to Basic. This Service Tier/Performance Level should cover approximately 100.00 % of your Log utilization.

Since I know this workload is heavily CPU-bound, if I can’t tune the workload to decrease the CPU requirement, I have up to 3,000 DTUs available in Standard tier. Rather than spend $930 per month for a P2 with 250 DTUs, an S4 with 200 DTUs at $150 per month (or an S6 with 400 DTUs at $300.02 per month) would be a much more economical option.

In conclusion, there are tools available to help you determine a good starting point for the size of your Azure SQL Database migrations, however the absolute best method is to test your workload. Migrating a copy of your production database, capturing a production workload, and replaying that workload against the Azure SQL Database will give you a much better understanding of what DTU size you truly need.

The post New Azure SQL Database Standard Tier Sizes appeared first on SQLPerformance.com.

]]>
https://sqlperformance.com/2017/12/azure/new-standard-tier-sizes/feed 1