What is T-SQL Tuesday?
T-SQL Tuesday is a monthly blog party hosted by a different blogger each month. The blog party was started by Adam Machanic (blog|twitter). You can take part by writing a post that fits the topic of the month and follows the requirements (click the adjacent image to learn more). Additionally, if you are interested in hosting a future T-SQL Tuesday, contact Adam Machanic through his blog.
A few years ago, when I started out on a large project, the sheer number of instances we were supporting boggled my mind. What was even more interesting was that the installation of SQL Server was semi-automated (it involved a configuration file and a Perl script, and... we had some interesting calls with the good folks at MS support because of this). This whole setup really got me thinking about how and what we could automate. Until then, I had just been writing scripts and saving them in a folder to use later on, but nothing at this scale. While I was still in awe of the setup and architecture of the solution, I started to realize a few things:
So, I started looking at Perl and got so engrossed in it that I took a crack at automating the summarizing of perfmon logs using a couple of modules from CPAN. After spending a week of sleepless days and nights on it, I had a working version, which found only two people on our team interested enough to even use it. That was my second set of lessons:
Since I did not see any point in developing the module further, I left it as is. But I was not done with the business of summarizing CSV files; I started looking at what other tools were available for this kind of work, and that is when I found PowerShell. The very first version was a little uncomfortable to work with, but I found what I was looking for: "Import-Csv".
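To give a flavor of what hooked me, here is a minimal sketch of the kind of perfmon-CSV summarizing that Import-Csv makes trivial. The file path and counter name are hypothetical; relog.exe can produce such a CSV from a .blg perfmon log.

```powershell
# Hypothetical perfmon CSV exported via relog.exe; the counter column name
# follows the usual \\MACHINE\Object(Instance)\Counter convention.
$samples = Import-Csv -Path 'C:\temp\perfmon.csv'
$counter = '\\SERVER01\Processor(_Total)\% Processor Time'

# Pull the counter column out of each row and summarize it.
$samples |
    ForEach-Object { [double]$_.$counter } |
    Measure-Object -Average -Maximum -Minimum
```

Compare that one pipeline with a week of Perl and CPAN modules, and you can see why it stuck.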
I went through the first 4 steps, essentially a feasibility analysis, and then the last 2 steps of implementation and introduction. Life was right again.
I was chatting with a colleague about why it worked the second time around, and that too with a scripting language hardly anyone knew at that time. His response was that the second solution was more innovative. That was another important lesson I learnt about automation: innovation.
Innovation is all about doing “stuff” differently from the usual. It could be as simple as taking a new route to work or as complex as rethinking the way you commute altogether. These may be bad examples, but the gist is that innovations need not be discoveries.
I mostly work on SAP-on-SQL systems. Recently, we had a requirement at one of our clients to track database growth on a daily basis. The initial thought was to use one of the canned scripts that I had written a while back; something as elementary as the script below would have worked just fine.
DECLARE @dbDataSize INT, @dbDataUsed INT, @dbLogSize INT, @dbLogUsed INT

SELECT @dbDataSize = SUM(size) FROM sys.database_files WHERE [type] = 0
SELECT @dbDataUsed = SUM(FILEPROPERTY(name, 'SpaceUsed')) FROM sys.database_files WHERE [type] = 0
SELECT @dbLogSize = SUM(size) FROM sys.database_files WHERE [type] = 1
SELECT @dbLogUsed = SUM(FILEPROPERTY(name, 'SpaceUsed')) FROM sys.database_files WHERE [type] = 1

SELECT DBName      = DB_NAME(),
       DataSize_MB = @dbDataSize/128,
       DataFree_MB = (@dbDataSize - @dbDataUsed)/128,
       DataFreePct = CAST((CAST((@dbDataSize - @dbDataUsed) AS DECIMAL(15,2))/CAST(@dbDataSize AS DECIMAL(15,2)))*100 AS DECIMAL(4,2)),
       LogSize_MB  = @dbLogSize/128,
       LogFree_MB  = (@dbLogSize - @dbLogUsed)/128,
       LogFreePct  = CAST((CAST((@dbLogSize - @dbLogUsed) AS DECIMAL(15,2))/CAST(@dbLogSize AS DECIMAL(15,2)))*100 AS DECIMAL(4,2))
But if you have worked with SAP systems long enough, you know the BASIS admins get a lot of cool reports in DB02, one of which, incidentally, is a DB growth report. So I started looking through the stored procedures in the database and found this beauty called 'sap_dbszhist'. The procedure takes four parameters, 'db_name', 'object_scheme', 'from_dt' and 'to_dt', but we do not need to pass any of them. The default behavior is:
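A call might look something like the sketch below. The parameter values (database name, schema, and the date format) are illustrative assumptions, not taken from a real SAP system; check the procedure body on your own system for the exact expected formats.

```sql
-- Hypothetical invocation of the SAP-delivered procedure; the values shown
-- here (database 'PRD', schema 'prd', YYYYMMDD dates) are illustrative only.
EXEC sap_dbszhist
    @db_name       = 'PRD',
    @object_scheme = 'prd',
    @from_dt       = '20130101',
    @to_dt         = '20131231'

-- Or simply rely on the defaults and pass nothing at all:
EXEC sap_dbszhist
```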
So, without a lot of effort I now have a historical record of DB growth by day. All we need now is a process to put this in a central repository for some SSRS fun.
What I love about this is that the approach combined all the lessons learned over a period of time and gave me a chance to think outside the normal. That is the final lesson I learnt:
Automation, it’s all in the mind.
T-SQL Tuesday #046: hosted by Rick Krueger
This month the T-SQL Tuesday invitation is from Rick Krueger (B|T), who wants us to dust out the closet and tell how we escaped from the jungles of Borneo, swam across the vast Indian Ocean and saved the tiger with nothing but a few paper clips and a rubber band, a.k.a. a Rube Goldberg machine.
While working for a big blue box electronics retailer, we had to support a few SQL 2000 instances, and we had to figure out a way to accurately determine whether certain jobs had finished or were still executing. Other jobs needed to be put on hold while the job of interest was still executing.
The constraints placed were:
This is how I went about it:
This whole thing was put in a stored procedure and then called from a job step for a long-running job, for example a database backup job. Additional logic was written around the return value to write proper error numbers to the application log. These event IDs were then picked up by monitoring applications, and alerts were raised.
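The heart of such a check can be sketched roughly as below. This is an illustrative reconstruction, not the original production procedure: on SQL 2000, a running Agent job step connects with a program_name of the form 'SQLAgent - TSQL JobStep (Job 0x<job_id_hex> : Step n)', so sysprocesses can reveal whether a given job is still executing. The job name is hypothetical.

```sql
-- Illustrative sketch only (SQL 2000 era), not the original procedure.
DECLARE @job_id UNIQUEIDENTIFIER, @job_id_hex NVARCHAR(64)

-- Look up the job of interest; the name here is hypothetical.
SELECT @job_id = job_id
FROM   msdb.dbo.sysjobs
WHERE  name = N'Nightly Full Backup'

-- Agent embeds the job id as a hex string in the session's program_name.
SELECT @job_id_hex = master.dbo.fn_varbintohexstr(CONVERT(BINARY(16), @job_id))

IF EXISTS (SELECT *
           FROM   master.dbo.sysprocesses
           WHERE  program_name LIKE N'SQLAgent - TSQL JobStep (Job ' + @job_id_hex + N'%')
    SELECT JobStatus = 'EXECUTING'
ELSE
    SELECT JobStatus = 'IDLE'
</imports>
```

In the real procedure the result fed a return value that the dependent job steps acted on.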
I have mixed feelings about this solution. Although it worked without a problem (I am secretly proud that I was able to write something like this), I would have loved to just upgrade the servers to SQL 2008 and be done with it.
If you are looking for really cool ways to interact with, or use, the 'Invoke-RestMethod' cmdlet, I would suggest reading through Doug Finke's (B|T) post on getting the newest PowerShell questions from Stack Overflow.
Or my own post on getting Google search results using the Custom Search API.
Note: Both Doug and Max are PowerShell MVPs and have great blogs that have a ton of information on them.
If you are already set with a Windows Azure subscription and have your details at hand, let’s start the journey.
While I was reading the first part of Max's blog, I was also reading through MSDN to catch up on some of the fundamentals of Windows Azure: why we need to upload a certificate, what the different types of certificates are, what SQL Database is, and so on... you know, basic stuff, so that I would not make a silly mistake and have to rework from scratch. Anyway, while I was URL-hopping I found the Management REST API reference. Since I was fresh from writing a blog post about REST APIs and Invoke-RestMethod, this piqued my interest a lot, but I had to resist the temptation to branch out. After finishing Max's blog and exploring on my own for a couple of hours, I was able to set up a SQL database server and a SQL database.
Although it is easier to use 'Windows Azure PowerShell' to set up and manage your Azure services, the allure of doing it through the web API finally caught up with me.
The Database Management API is a REST API. All operation requests are sent encrypted over the Secure Sockets Layer (SSL) and authenticated using X.509 v3 certificates. Database Management API requests can be initiated from within Windows Azure, or directly over the internet from any application that can send HTTPS requests and receive HTTPS responses. All requests are sent to the SQL Database Management Service on port 8443 using the following base URL: https://management.database.windows.net:8443
Since I had the REST API URL reference, I started working through the URLs they had given. I was naive to think that I could get away easily :-). One thing to remember, however, is that 'The SQL Database Management Service does not support asynchronous REST API calls'; I do not know what that means, but it seemed important to point out (actually, I just saw it on the reference page and was trying to be funny).
The approach is straightforward, but there were a few gotchas, and there is some risqué behavior with blatant disregard for security; flame shield on, as this is a proof-of-concept implementation. Keeping that in mind, let us start with:
The URL for this is:
https://management.database.windows.net:8443/{SubscriptionID}/servers
It doesn't look like much, but let us look at what the documentation says. We need a pre-formatted request body that looks like this:
<?xml version="1.0" encoding="utf-8"?>
<Server xmlns="http://schemas.microsoft.com/sqlazure/2010/12/">
  <AdministratorLogin>MyAdminAccount</AdministratorLogin>
  <AdministratorLoginPassword>MyAdminPassword</AdministratorLoginPassword>
  <Location>East US | North Central US | South Central US | West US | North Europe | West Europe | East Asia | Southeast Asia</Location>
</Server>
OK, not bad. What we have here are strings, and we know PowerShell loves XML for more than one reason. Let us quickly hack a request together and throw it at the server.
#The setup
$subscriptionid = "xxxxxxxx-xxbx-xaxc-xxxx-xxxxxxxxxxxx"
$thumbprint = Get-ChildItem Cert:\CurrentUser\My | Where-Object {$_.Subject -LIKE "CN=sqlchow80it032"} | Select-Object Thumbprint
$certificate = Get-Item Cert:\CurrentUser\My\$($thumbprint.Thumbprint)
$admindetails = Get-Credential -Message "What credentials do you want to use while accessing Windows Azure"
$location = "Southeast Asia"
#The request body, uri and headers
$xmlBody = @"
<?xml version="1.0" encoding="utf-8"?>
<Server xmlns="http://schemas.microsoft.com/sqlazure/2010/12/">
<AdministratorLogin>$($admindetails.UserName)</AdministratorLogin>
<AdministratorLoginPassword>$($admindetails.GetNetworkCredential().password)</AdministratorLoginPassword>
<Location>$location</Location>
</Server>
"@
$resturi = "https://management.database.windows.net:8443/$subscriptionid/servers"
[byte[]]$bytebody = [System.Text.Encoding]::UTF8.GetBytes($xmlBody)
$contenttype = "application/xml;charset=utf-8"
$headers = @{"x-ms-version" = "1.0"}
#Lift-off
[xml]$createresponse = $(Invoke-RestMethod -Method POST -Uri $resturi -Certificate $certificate -Headers $headers -Body $bytebody -ContentType $contenttype -Verbose).Remove(0,1)
$createresponse.ServerName
xmlns #text
----- -----
http://schemas.microsoft.com/sqlazure/2010/12/ vo0iyrxe4i
It looks like we have a lot going on in this one request. So, let us break it down by region.
As expected (lucky, there were no errors :)), we have a response that conforms with the expected response body:
<?xml version="1.0" encoding="utf-8"?>
<ServerName xmlns="http://schemas.microsoft.com/sqlazure/2010/12">
TheNewServerName
</ServerName>
Our new server name is “vo0iyrxe4i”. As much as I would love to control the naming, at this point it is pretty random. The question that needs to be answered now is: how do we know the database server was actually created? Let us find out.
The request URL is the same as the one we used for creating the server.
https://management.database.windows.net:8443/{SubscriptionID}/servers
The major difference is that we do not have a request body and the response is going to look different.
The expected response is in the following format:
<Servers xmlns="http://schemas.microsoft.com/sqlazure/2010/12/">
<Server>
<Name>ServerName</Name>
<AdministratorLogin>AdminAccount</AdministratorLogin>
<Location>North Central US | South Central US | North Europe | West Europe | East Asia | Southeast Asia</Location>
</Server>
</Servers>
The code for this would be:
$subscriptionid = "xxxxxxxx-xxbx-xaxc-xxxx-xxxxxxxxxxxx"
$thumbprint = Get-ChildItem Cert:\CurrentUser\My | Where-Object {$_.Subject -LIKE "CN=sqlchow80it032"} | Select-Object Thumbprint
$certificate = Get-Item Cert:\CurrentUser\My\$($thumbprint.Thumbprint)
Invoke-RestMethod -Method Get -Uri "https://management.database.windows.net:8443/$subscriptionid/servers" -Certificate $certificate
VERBOSE: GET https://management.database.windows.net:8443/03xxxxxx-xxbx-xaxc-xxxx-xxxxxxxxxxxx/servers with 0-byte payload
Invoke-RestMethod :
<Error xmlns="http://schemas.microsoft.com/sqlazure/2010/12/">
<Code>40643</Code>
<Message>The specified x-ms-version header value is invalid.</Message>
<Severity>16</Severity>
<State>1</State>
</Error>
Oh no! But this was expected; if you trudge back a 'page-up' you will see that the header is a mandatory parameter and has to have the specific value 1.0 assigned to it. Let us behave and send the request in the proper format.
$subscriptionid = "xxxxxxxx-xxbx-xaxc-xxxx-xxxxxxxxxxxx"
$thumbprint = Get-ChildItem Cert:\CurrentUser\My | Where-Object {$_.Subject -LIKE "CN=sqlchow80it032"} | Select-Object Thumbprint
$certificate = Get-Item Cert:\CurrentUser\My\$($thumbprint.Thumbprint)
$headers = @{"x-ms-version" = "1.0"}
Invoke-RestMethod -Uri "https://management.database.windows.net:8443/$subscriptionid/servers" -Certificate $certificate -Headers $headers -Method "GET"
VERBOSE: GET https://management.database.windows.net:8443/xxxxxxxx-xxbx-xaxc-xxxx-xxxxxxxxxxxx/servers with 0-byte payload
VERBOSE: received 354-byte response of content type application/xml; charset=utf-8
?<Servers xmlns="http://schemas.microsoft.com/sqlazure/2010/12/">
<Server>
<Name>gobbl3go0k</Name>
<AdministratorLogin>sqlchow</AdministratorLogin>
<Location>West US</Location>
<Features>
<Feature>
<Name>Premium Mode</Name>
<Value>false</Value>
</Feature>
</Features>
</Server>
</Servers>
We got our response, but if you look closely the response has a '?' on it. The character in question is 65279, which is U+FEFF, the Unicode byte-order mark (BOM) that precedes the XML declaration; the console just renders it as '?'. Reading through a few Stack Overflow questions I thought adding '@{"Accept"="application/xml"}' would solve it, but that results in an error:
Invoke-RestMethod : The 'Accept' header must be modified using the appropriate property or method. Parameter name: name At line:1 char:1 + Invoke-RestMethod -Uri "https://management.database.windows.net:8443/$subscripti ... + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
So, as a temporary workaround, I put the response into a text file and then read it back. This time there is no '?' and I get an XML object.
Invoke-RestMethod -Uri "https://management.database.windows.net:8443/$subscriptionid/servers" -Certificate $certificate -Headers @{"x-ms-version"="1.0"} -OutFile "C:\temp\response.txt"
$data = [xml](get-Content C:\temp\response.txt)
$data.Servers.GetElementsByTagName('Server')
Name AdministratorLogin Location Features
---- ------------------ -------- --------
gobbl3go0k sqlchow West US Features
$data.Servers.GetElementsByTagName('Server').features.feature
Name Value
---- -----
Premium Mode false
Then again, we could put the response into a variable and use Remove() to get rid of the '?', as we did for the create-server part.
Note: if anyone knows a clean way to avoid the '?' in the response, please let me know. I would really appreciate it.
After running the create scripts a couple of times, I ended up with three servers:
Invoke-RestMethod -Uri "https://management.database.windows.net:8443/$subscriptionid/servers" -Certificate $certificate -Headers @{"x-ms-version"="1.0"} -OutFile "C:\temp\response.txt"
$data = [xml](get-Content C:\temp\response.txt)
$data.Servers.Server
Name AdministratorLogin Location Features
---- ------------------ -------- --------
gkbbwrq8ew sqlchow West US Features
hk5r8andyn sqlchow Southeast Asia Features
vo0iyrxe4i sqlchow Southeast Asia Features
The only other operations we can do with these database servers are update and delete. Neither of them returns a response body :). Let us check them out.
The URL for managing the password of a database server is a little different from the previous two we saw:
https://management.database.windows.net:8443/{SubscriptionID}/servers/{ServerName}?op=ResetPassword
The request body also differs a bit: it has only the AdministratorLoginPassword node in it.
<?xml version="1.0" encoding="utf-8"?>
<AdministratorLoginPassword xmlns="http://schemas.microsoft.com/sqlazure/2010/12/">
  TheNewPassword
</AdministratorLoginPassword>
The drill is pretty much the same, the only difference being that we do not get a response body. If we do not get any errors, we are good to go.
$subscriptionid = "xxxxxxxx-xxbx-xaxc-xxxx-xxxxxxxxxxxx"
$thumbprint = Get-ChildItem Cert:\CurrentUser\My | Where-Object {$_.Subject -LIKE "CN=sqlchow80it032"} | Select-Object Thumbprint
$certificate = Get-Item Cert:\CurrentUser\My\$($thumbprint.Thumbprint)
$headers = @{"x-ms-version" = "1.0"}
$password = 'v3ed!kip)wEr$hellP1cchi' #single quotes, so PowerShell does not expand $hell... as a variable; it just means that this guy is mad about powershell.
$servername = "vo0iyrxe4i"
$resturi = "https://management.database.windows.net:8443/$subscriptionid/servers/$servername" + "?op=ResetPassword"
$contenttype = "application/xml;charset=utf-8"
$xmlBody = @"
<?xml version="1.0" encoding="utf-8"?>
<AdministratorLoginPassword xmlns="http://schemas.microsoft.com/sqlazure/2010/12/">
$password
</AdministratorLoginPassword>
"@
[byte[]]$bytebody = [System.Text.Encoding]::UTF8.GetBytes($xmlBody)
$resetresponse = Invoke-RestMethod -Method POST -Uri $resturi -Certificate $certificate -Headers $headers -Body $bytebody -ContentType $contenttype -Verbose
VERBOSE: POST https://management.database.windows.net:8443/xxxxxxxx-xxbx-xaxc-xxxx-xxxxxxxxxxxx/servers/vo0iyrxe4i?op=ResetPassword with 165-byte payload
VERBOSE: received 0-byte response of content type
That's it, snap! The password is changed. But if we want a response code, we can use Invoke-WebRequest, which gives a status code for each request. We shall see how in a bit, after we look at deleting the server.
Delete SQL Database Server
The URL for deleting a server is very simple.
https://management.database.windows.net:8443/{SubscriptionID}/servers/{ServerName}
The code:
$servername = "hk5r8andyn"
$resturi = "https://management.database.windows.net:8443/$subscriptionid/servers/$servername"
$contenttype = "application/xml;charset=utf-8"
$deleteresponse = Invoke-RestMethod -Method DELETE -Uri $resturi -Certificate $certificate -Headers $headers -ContentType $contenttype -Verbose
VERBOSE: DELETE https://management.database.windows.net:8443/xxxxxxxx-xxbx-xaxc-xxxx-xxxxxxxxxxxx/servers/hk5r8andyn with 0-byte payload
VERBOSE: received 0-byte response of content type
Remove-Item "C:\temp\response.txt"
Invoke-RestMethod -Uri "https://management.database.windows.net:8443/$subscriptionid/servers" -Certificate $certificate -Headers @{"x-ms-version"="1.0"} -OutFile "C:\temp\response.txt"
$data = [xml](get-Content C:\temp\response.txt)
$data.Servers.Server
Name AdministratorLogin Location Features
---- ------------------ -------- --------
gkbbwrq8ew sqlchow West US Features
vo0iyrxe4i sqlchow Southeast Asia Features
Using Invoke-WebRequest is not much different from using Invoke-RestMethod. One important advantage of Invoke-WebRequest is that you get the status code, and the raw content contains the x-ms-request-id, which will come in handy in case there is an issue and you have to contact MS support.
Invoke-WebRequest -Uri $resturi -Certificate $certificate -Headers @{"x-ms-version"="1.0"} -Method GET -Verbose
VERBOSE: GET https://management.database.windows.net:8443/xxxxxxxx-xxbx-xaxc-xxxx-xxxxxxxxxxxx/servers with 0-byte payload
VERBOSE: received 636-byte response of content type application/xml; charset=utf-8
StatusCode : 200
StatusDescription : OK
Content : <Servers xmlns="http://schemas.microsoft.com/sqlazure/2010/12/">
<Server>
<Name>gkbbwrq8ew</Name>
<AdministratorLogin>sqlchow</AdministratorLogin>
<Location>West US</Location>
...
RawContent : HTTP/1.1 200 OK
x-ms-request-id: be318ebc-bf8a-4e6d-bfdd-e204dd01efff
Content-Length: 636
Content-Type: application/xml; charset=utf-8
Date: Sun, 01 Sep 2013 14:51:11 GMT
Server: Microsoft-HTTPAP...
Forms : {}
Headers : {[x-ms-request-id, be318ebc-bf8a-4e6d-bfdd-e204dd01efff], [Content-Length, 636], [Content-Type, application/xml; charset=utf-8], [Date, Sun, 01 Sep 2013 14:51:11 GMT]...}
Images : {}
InputFields : {}
Links : {}
ParsedHtml : mshtml.HTMLDocumentClass
RawContentLength : 636
If you reached here, I appreciate your patience and time. I would like to let you know that I am working on a new post using HttpClient to manage these services. It may take some time before it is ready.
How to filter a line plus the following line using Get-SqlErrorLog? Something like Select-String -Context 1?
From the PowerShell help:
The Context parameter captures the specified number of lines before and after the line with the match. This allows you to view the match in context.
If you enter one number as the value of this parameter, that number determines the number of lines captured before and after the match. If you enter two numbers as the value, the first number determines the number of lines before the match and the second number determines the number of lines after the match.
I was not sure whether Get-SqlErrorLog supported a parameter like Context. So I decided to check if we could do something like this without too much code.
Add-Type -AssemblyName "Microsoft.SqlServer.Smo, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" -EA Stop
$server = New-Object Microsoft.SqlServer.Management.Smo.Server $env:COMPUTERNAME
$server.ErrorLogPath
Output is something like so, on my machine:
C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log
From here it is pretty straightforward:
Get-ChildItem -Path $server.ErrorLogPath -Filter ERRORLOG* |
ForEach-Object -Process { Select-String -Path $_.FullName -Pattern "Starting up database 'sqlchow'" -Context 1 }
The output is something like:
C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG:54:2013-08-17 08:33:58.26 spid52 Using 'dbghelp.dll' version '4.0.5'
> C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG:55:2013-08-17 12:55:59.64 spid53 Starting up database 'sqlchow'.
C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\ERRORLOG:56:2013-08-17 12:56:01.78 spid53 Setting database option COMPATIBILITY_LEVEL to 100 for database sqlchow.
Note: If you want to search only in the first error-log file, just change the code to:
Get-ChildItem -Path $server.ErrorLogPath -Filter ERRORLOG |
ForEach-Object -Process { Select-String -Path $_.FullName -Pattern "Starting up database 'sqlchow'" -Context 1 }
Interesting stuff eh!
After each monthly patching cycle completed, we used to spend a considerable amount of time figuring out whether all the clustered SQL Server instances were running on their preferred owners. During the latest cycle of patch deployment, I was hard pressed for time and wanted to see if I could put something together, quick and dirty, to display the information easily.
Basically, what I did was improve an old script that I had written back in January 2012, which got me the preferred owner names of a given cluster group. I made some changes to it and started building around it. The script presented below is essentially a simple looped rewrite of the original. It works in the following way:
Function Search-ClusterNodesForStacking
{
<#
.SYNOPSIS
This script provides options for an administrator to check if any cluster
services are stacked i.e. not running on their preferred nodes.
.DESCRIPTION
The script works in the following way:
1. Connect to a cluster and get a list of nodes that belong to it.
2. For each node in the cluster check which services are available on a
given node.
3. For each clustered service on the node, check if the current node is
its preferred node.
.PARAMETER ClusterName
Name of the Cluster which needs to be validated.
Required true
Position named
Default value
Accept pipeline input false
Accept wildcard characters false
.EXAMPLE
Search-ClusterNodesForStacking -ClusterName SKYNETCL
------------------------------------------------------------
ClusterName: SKYNETCL
------------------------------------------------------------
NodeName: SKYNETA
NodeState: ONLINE
ActiveGroup: Available Storage
GroupState: OFFLINE
Preferred Node:<-StackingDetected
PhysicalNode: SKYNETA
ActiveGroupName: Available Storage
ActiveGroupDescription:
PreferredNode:
ActiveGroup: SKYSQL014
GroupState: ONLINE
Preferred Node: SKYNETA <- Good
ActiveGroup: SKYSQL015
GroupState: ONLINE
Preferred Node:SKYNETB<-StackingDetected
PhysicalNode: SKYNETA
ActiveGroupName: SKYSQL015
ActiveGroupDescription:
PreferredNode: SKYNETB
------------------------------------------------------------
NodeName: SKYNETB
NodeState: ONLINE
ActiveGroup: Cluster Group
GroupState: ONLINE
Preferred Node:SKYNETA<-StackingDetected
PhysicalNode: SKYNETB
ActiveGroupName: Cluster Group
ActiveGroupDescription:
PreferredNode: SKYNETA
------------------------------------------------------------
NodeName: SKYNETC
NodeState: ONLINE
ActiveGroup: SKYSQL017
GroupState: ONLINE
Preferred Node: SKYNETC <- Good
ActiveGroup: SKYSQL018
GroupState: ONLINE
Preferred Node: SKYNETC <- Good
ActiveGroup: SKYSQL019
GroupState: ONLINE
Preferred Node: SKYNETC <- Good
ActiveGroup: SKYSQL016
GroupState: ONLINE
Preferred Node: SKYNETC <- Good
------------------------------------------------------------
.NOTES
Since, the function shows the information on screen we will not accept
pipeline input.
#>
[CmdletBinding()]
param(
[Parameter(Mandatory = $true, HelpMessage="Enter cluster name which needs to be validated.")]
[ValidateNotNullOrEmpty()]
[string]$ClusterName
)
BEGIN
{
Function Get-ClusterState
{
param(
[int]$state = $( Throw "Please specify cluster state.")
)
$getState = "";
switch($state){
-1 { $getState = "STATE_UNKNOWN"}
0 { $getState = "ONLINE"}
1 { $getState = "OFFLINE"}
2 { $getState = "FAILED"}
3 { $getState = "PARTIAL_ONLINE"}
4 { $getState = "PENDING"}
default { $getState = "ERROR GETTING STATE"}
}
$getState
}
}
PROCESS
{
[string] $virtualClusName = "";
[boolean] $bStacking = $False;
[boolean] $bFound = $False;
[string] $sPrefNode = "";
#Get the cluster nodes and name.
$Cluster = Get-WmiObject -namespace "Root\MSCluster" -Class "MSCluster_Cluster" -computerName $ClusterName -Authentication "PacketPrivacy" -Impersonation "Impersonate";
$virtualClusName = $Cluster.Name.ToString();
$clusterNodes = Get-WmiObject -namespace "Root\MSCluster" -query "Select * from MSCluster_Node" -computerName "$($virtualClusName)" -Authentication "PacketPrivacy" -Impersonation "Impersonate";
$clusNodeNames = $clusterNodes | Select-Object Name, Description, @{Name="ClusState"; Expression={Get-ClusterState $_.State}}
Write-Host ("-" * 60) -ForegroundColor Yellow -BackgroundColor Black
Write-Host "ClusterName: $($virtualClusName)" -ForegroundColor Green
Write-Host ("-" * 60) -ForegroundColor Yellow -BackgroundColor Black
#Loop through the collection of physical cluster nodes
$bStacking = $False
Foreach ($node in $clusNodeNames)
{
Write-Host "`tNodeName: " $node.Name -ForegroundColor Yellow
Write-Host "`tNodeState: " $node.ClusState -ForegroundColor Yellow
#Get the Active Group for each node
$qryStr = "ASSOCIATORS OF {MSCluster_Node='" + $node.Name + "'} WHERE AssocClass=MSCluster_NodeToActiveGroup"
$clusNodeAssoc = Get-WmiObject -namespace "Root\MSCluster" -query $qryStr -computerName $virtualClusName -Authentication "PacketPrivacy" -Impersonation "Impersonate"
$clusGroupNames = $clusNodeAssoc | Select-Object Name, Description, @{Name="ClusState"; Expression={Get-ClusterState $_.State}}
#Loop through each Active group
Foreach ($group in $clusGroupNames)
{
If($group.Name -ne $null)
{
If (-not [string]::IsNullOrEmpty($group.Description)){
Write-Host "`t`tActiveGroup: $($group.Name)" -ForegroundColor Magenta
Write-Host "`t`tDescription: $($group.Description)" -ForegroundColor Magenta
} Else{
Write-Host "`t`tActiveGroup: $($group.Name)" -ForegroundColor Magenta
}
Write-Host "`t`tGroupState: $($group.ClusState)" -ForegroundColor Magenta
#Get the preferred node for this Active Group.
#If there isn't one we just skip this because we can't detect stacking.
$qryStr = "ASSOCIATORS OF {MSCluster_ResourceGroup='" + $group.Name + "'} WHERE AssocClass=MSCluster_ResourceGroupToPreferredNode"
$clusGroupAssoc = Get-WmiObject -namespace "Root\MSCluster" -query $qryStr -computerName $virtualClusName -Authentication "PacketPrivacy" -Impersonation "Impersonate"
$clusPrefNodes = $clusGroupAssoc | Select-Object Name, Description, @{Name="ClusState"; Expression={Get-ClusterState $_.State}} -First 1
#Loop through this collection to determine if this Active Group is on a preferred node
Foreach ($prefNode in $clusPrefNodes){
if($prefNode -ne $null)
{
$sPrefNode = [string]$prefNode.Name
If ([string]$prefNode.Name.ToUpper() -eq [string]$node.Name.ToUpper()){
#$sPrefNode = [string]$prefNode.Name
$result = "`t`tPreferred Node: " + $sPrefNode + " <- Good"
Write-Host $result -ForegroundColor Green
$bFound = $True
}else{
$bFound = $False
}
}
}
If ((-not $bFound) -and ($clusGroupNames.Count -ne 0)){
$stckDet = "`t`tPreferred Node:" + $sPrefNode + "<-StackingDetected"
Write-Host $stckDet -ForegroundColor Red
$bStacking = $True
Write-Host "`t`tPhysicalNode: " $node.Name -ForegroundColor Red
Write-Host "`t`tActiveGroupName: " $group.Name -ForegroundColor Red
Write-Host "`t`tActiveGroupDescription: " $group.Description -ForegroundColor Red
Write-Host "`t`tPreferredNode:" $sPrefNode -ForegroundColor Red
}
$bFound = $False
}else{
Write-Host "`t`tActiveGroup: Unable to get group name" -ForegroundColor Cyan
}
}
Write-Host ("-" * 60) -ForegroundColor Yellow -BackgroundColor Black
}
}
END
{
}
}
As always, feedback and comments are welcome.
T-SQL Tuesday #040: hosted by MidNightDBA
This month the T-SQL Tuesday invitation is from Jen & Sean McCown (B|T), asking us to talk about filegroups or a related area. I thought FILESTREAM was a good choice to talk about because it needs its own filegroup. Let's get rolling then:
FILESTREAM is a SQL Server feature, introduced in SQL Server 2008, that lets you store unstructured BLOB data directly in the file system in a set of folders, access to which is provided via SQL Server and a special file system driver.
Thumbs-Up:
Bleh!:
“(…) objects smaller than 256K are best stored in a database while objects larger than 1M are best stored in the filesystem. Between 256K and 1M, the read:write ratio and rate of object overwrite or replacement are important factors.”[1]
So the general recommendation is to use it when your BLOBs are larger than 1 MB, you have additional logic that needs to be performed on the BLOB (like scanning the barcodes in a document before sending it out), and fast read access is required. If these conditions are not met, you are better off storing the BLOB in the database.
To put it very simply:
The first method was to use the file system, i.e. store the paths in the DB and the files on NTFS shares. Some of the problems with this approach were that the backup of the files was not always synchronized with the backup of the database, there were transactional-consistency issues, supporting full-text search was hard, and creating a consistent DR plan was a nightmare.
The second method was to store the file in a BLOB, i.e. a varbinary(max) column, where you get the benefits of backups and full-text search but are limited to 2 GB per file, and operations on the database (backups and other maintenance tasks) take more time.
First things first: you need to enable FILESTREAM at the Windows level. You do this on the 'FILESTREAM' tab in SQL Server Configuration Manager, in the properties of the SQL instance you want to change.
Then, at the SQL Server instance level, you run sp_configure with the 'filestream access level' configuration option.
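The instance-level step boils down to two statements; level 2 here enables both T-SQL and Win32 streaming access (use 1 for T-SQL-only access):

```sql
-- Enable FILESTREAM for T-SQL and Win32 streaming access, then apply.
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;
GO
```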
Then you create a new filegroup for the FILESTREAM container. There is a 1:1 mapping between FILESTREAM filegroups and FILESTREAM containers. You can validate this using the query below.
SELECT file_id       AS fileid,
       type_desc     AS filetype,
       name          AS name,
       physical_name AS physicalname
FROM sys.database_files
WHERE type_desc = 'FILESTREAM'
GO
Example:
CREATE DATABASE DiscountItems
ON PRIMARY
    ( NAME = DiscountItemsData1, FILENAME = 'D:\Sql\MSSQL\Data\DiscountItemsData1.mdf'),
FILEGROUP DiscountItemsDB2
    ( NAME = DiscountItemsData2, FILENAME = 'D:\Sql\MSSQL\Data\DiscountItemsData2.ndf'),
FILEGROUP DiscountItemsFS1 CONTAINS FILESTREAM DEFAULT
    ( NAME = DiscountItemsFS1, FILENAME = 'D:\Sql\MSSQL\Data\DiscountItemsFS1'),
LOG ON
    ( NAME = DiscountItemsLOG, FILENAME = 'D:\Sql\MSSQL\Data\DiscountItemsLOG.ldf')
GO
Once the database has a FILESTREAM filegroup, you can create tables that contain FILESTREAM columns. As mentioned earlier, a FILESTREAM column is defined as a varbinary(max) column that has the FILESTREAM attribute.[2]
Example:
CREATE TABLE [ForSaleItems](
    [ItemID] [int] IDENTITY(1,1) PRIMARY KEY,
    [ItemGuid] [uniqueidentifier] ROWGUIDCOL NOT NULL UNIQUE ON [PRIMARY],
    [ItemDescription] [varchar](50) NULL,
    [ItemImage] [varbinary](max) FILESTREAM NULL
) FILESTREAM_ON DiscountItemsFS1
GO
As a requirement, you need a non-NULL, unique ROWGUIDCOL column like the ItemGuid column above. If you try to create the table with just a plain old identity column, the table won't be created and you will get an error (Msg 5505, Level 16, State 1, Line 1 – A table with FILESTREAM column(s) must have a non-NULL unique ROWGUID column).
The column that acts as a pointer to the actual file on disk needs to be created as varbinary(max) with the FILESTREAM attribute. In our case, this is the ItemImage column.
The following query inserts a new row into our table and also creates a file in our FILESTREAM container.
INSERT INTO ForSaleItems([ItemGuid], [ItemDescription], [ItemImage])
VALUES(NEWID(), 'Something random somebody said', CAST('Hope is a heuristicSearch' AS VARBINARY(MAX)))
The path to the file is not immediately visible; you need to use the PathName() function to get it. But remember that PathName() returns a logical path, a handle for the streaming APIs, not a physical path you can navigate to.
SELECT [ItemDescription], pathname = [ItemImage].PathName() FROM ForSaleItems
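If you want to stream the file contents through the Win32 API (OpenSqlFilestream), PathName() has to be paired with a transaction context. A sketch of how you would fetch both, using the table and column from the example above:

```sql
-- Sketch: get the logical path and a transaction context,
-- both of which the OpenSqlFilestream Win32 API requires
BEGIN TRANSACTION
SELECT [ItemImage].PathName() AS FilePath,
       GET_FILESTREAM_TRANSACTION_CONTEXT() AS TxContext
FROM ForSaleItems
COMMIT
GO
```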
If you really want to see the files, you need to go to the FILESTREAM container (the folder we specified for FILESTREAM when creating the database). Also, files that are no longer needed after a delete are removed by a garbage-collection process (an automatic background process).
All backup and recovery methods are available with FILESTREAM data. If you already have a backup process in place, remember that FILESTREAM data is backed up along with the structured data in the database.
If you do not want to back up FILESTREAM data with relational data, you can use a partial backup to exclude FILESTREAM filegroups.
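For instance, a partial backup that names only the non-FILESTREAM filegroups from the earlier CREATE DATABASE example would skip DiscountItemsFS1 (the backup path here is a placeholder):

```sql
-- Back up only the named filegroups; the FILESTREAM
-- filegroup DiscountItemsFS1 is left out
BACKUP DATABASE DiscountItems
    FILEGROUP = 'PRIMARY',
    FILEGROUP = 'DiscountItemsDB2'
TO DISK = 'D:\Sql\Backup\DiscountItems_Partial.bak'
GO
```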
So, hopefully this has given you a very quick introduction to FILESTREAM in SQL Server.
Most of the material in this post came from the following whitepaper (or at least, what I understood of it): FILESTREAM Storage in SQL Server 2008 by Paul Randal.
If there are any mistakes, they are my own and should not be attributed to anyone else. Comments, suggestions and telling off are always welcome.
To work around this problem, we added extended properties to all the user databases that we have on the instances. After I did this on about five databases in one instance, I was bored and wanted to see if there was any way of doing this other than T-SQL. Sure there is, and it's PowerShell; I couldn't ask for more. It was good to walk off the beaten path and explore new things but, it did not take me long to figure out why documentation is boring; even PowerShell could not help me out much here.
The script is pretty straightforward though:
“Connect to the SQL Server instance, loop through all user databases, and if a database does not have extended properties: create a new extended-properties object, attach the object to the current DB and type in the required values.”
# Load required assemblies
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null;
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMOExtended") | Out-Null;

# These two were assumed by the original script; set them for your environment
$sqlServer = "localhost"   # instance to connect to
$commandTimeout = 30       # connection timeout, in seconds

try
{
    $smoConn = New-Object ('Microsoft.SqlServer.Management.Smo.Server') $sqlServer
    $smoConn.ConnectionContext.ConnectTimeout = $commandTimeout
    $smoConn.ConnectionContext.Connect()

    $sqlServerDB = $smoConn.Databases
    foreach($db in $sqlServerDB)
    {
        #Write-Host "Processing DB: " $db -Fore Magenta
        [String]$dbName = $db.Name
        # Skip system and well-known sample databases
        if( "master", "model", "msdb", "tempdb", "pubs", "NorthWind", "AdventureWorks" -notcontains $dbName )
        {
            Write-Host "Working on DB ->" $db -Fore Green
            $xPropCount = $db.ExtendedProperties.Count
            if($xPropCount -eq 0)
            {
                $ownrExtProp = New-Object Microsoft.SqlServer.Management.Smo.ExtendedProperty
                $ownrExtProp.Parent = $db
                $ownrExtProp.Name = 'Owned By'
                $ownrExtProp.Value = Read-Host "Enter the name of the team that owns the DB"
                $ownrExtProp.Create()

                $cntctExtProp = New-Object Microsoft.SqlServer.Management.Smo.ExtendedProperty
                $cntctExtProp.Parent = $db
                $cntctExtProp.Name = "Contact"
                $cntctExtProp.Value = Read-Host "Enter the e-mail address of the group that owns the DB"
                $cntctExtProp.Create()
            }
            foreach ($xProperty in $db.ExtendedProperties)
            {
                Write-Host " " $xProperty "->" $xProperty.Value -Fore Yellow
            }
        }
    }
}
catch
{
    $_ | fl * -Force
}

<#
Sample output:
    name      value
    ----      -----
    Owned By  SqlChow
    Contact   [email protected]
#>
Since we put in so much work documenting this stuff, I would have liked to add a nightly job that reads these properties and keeps a central repository updated with the latest changes to the DBs. Maybe that is overkill.
The major advantage is that slipstreaming lets you get to the latest supported version without having to patch the instance after installation. Install, Upgrade and AddNode scenarios supported by the original media are still supported when you slipstream.
Before you use the script, please read the MSDN entry on how to do this manually.
There are a couple of pre-requisites:
The script is pretty straightforward but, there are certain assumptions. C:\SQL2008 or C:\SQL2008R2 is the folder where the install media is stored; you can change this. The service pack is stored in a folder called 'SP' inside the base folder. Also, the slipstreamed media will be stored in C:\SQL2008\SQL2008_slipstreamNNNN, where NNNN is the version number of SQL Server after the upgrade (the variable is $verPCU). So, without further delay, here we go:
Begin
{
    $isPCU = $true
    $isCU  = $false
    $verPCU = "5500" # service pack version
    $verCU  = $null
    $archArray = @('x64', 'x86', 'ia64')
    $prodPartialName = '08' # or it could be 08R2

    # Pick the slipstream version before building any paths that use it
    if($isPCU){
        $slpStrmVer = $verPCU
    }elseif($isCU){
        $slpStrmVer = $verCU
    }else{
        exit -1;
    }

    # Basepath is where you have the executables
    $basePath     = "C:\SQL20" + $prodPartialName + "\"
    $patchExePath = "C:\SQL20" + $prodPartialName + "\SP"
    # Build from $prodPartialName so these match $destPath for both 2008 and 2008 R2
    $basePathPCU  = $basePath + "SQL20" + $prodPartialName + "_slipstream" + $slpStrmVer + "\PCU"
    $basePathCU   = $basePath + "SQL20" + $prodPartialName + "_slipstream" + $slpStrmVer + "\CU"
}
Process
{
    # Copy your original SQL Server 2008 source media to "C:\SQL2008\SQL2008_slipstreamNNNN"
    $srcPath  = $basePath + "SQL20" + $prodPartialName
    $destPath = $srcPath + "_slipstream" + $slpStrmVer
    try{
        #New-Item -Type directory -path $destPath -force ->not needed for now.
        Write-Verbose -message "Copying source media from: `n `t $srcPath `n `t(to) `n `t$destPath." -verbose
        Copy-Item $srcPath -Destination $destPath -Recurse -Force #Copy-Item creates the destination folder if it does not already exist.
    }catch{
        Write-Error "Failed to copy source media from: `n `t $srcPath to $destPath"
        return -1;
    }

    if($isPCU)
    {
        try{
            # Extract the patch files to the PCU folder
            $patchPath  = $patchExePath + $verPCU
            $patchFiles = Get-ChildItem -Path $patchPath -Filter "*.exe"
            foreach($patchFile in $patchFiles){
                $fullPath = $patchFile.FullName
                cmd /c "$fullPath /x:$basePathPCU"
                if($LASTEXITCODE -ne 0){throw "$LASTEXITCODE"}
            }

            # Copy the Setup.exe and Setup.rll
            Copy-Item "$basePathPCU\Setup.exe" -Destination $destPath -Force
            Copy-Item "$basePathPCU\Setup.rll" -Destination $destPath -Force

            # Copy all files (not the folders), except Microsoft.SQL.Chainer.PackageData.dll
            foreach($archType in $archArray){
                $sourcePath      = $basePathPCU + '\' + $archType
                $destinationPath = $destPath + '\' + $archType
                Get-ChildItem -Path $sourcePath -Exclude "Microsoft.SQL.Chainer.PackageData.dll" |
                    Where-Object {$_.PSIsContainer -eq $false} |
                    ForEach-Object { Copy-Item -Force -Path $_.FullName -Destination $destinationPath }

                # Update the DefaultSetup.ini file
                echo "PCUSOURCE=`".\PCU`"" >> "$destinationPath\DefaultSetup.ini"
            }
        }catch{
            Write-Error "Failed to extract patch files`n `t Errorcode returned is: $_"
            return -1;
        }
    }
}