Now on to the bigger elephant in the room: why on earth would you have all of these fields set to the maximum storage size of two gigabytes? (NVARCHAR(MAX) holds up to 2GB of data, which is roughly one billion characters, since each character takes two bytes.) Are you really storing that much data in six fields of EACH record? Keep in mind that SQL Server stores records in 8KB pages. When a row exceeds that, the data goes to an overflow page with a pointer so that SQL Server knows where to find the rest of the data. I understand that it will not take up that much space if you are not actually storing that much data, but there is potential here for these tables to be problematic. Granted, this might be a good design for one field if it stores a large amount of text or notes, but six fields in the same record? I looked at the lengths of the values currently entered in these six fields and found nothing larger than 100 characters. Overkill? What do you think?
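The length check described above can be scripted rather than eyeballed. A minimal sketch, assuming a hypothetical table and column names (the real object names from that system are not shown here):

```sql
-- Longest value actually stored in each suspect NVARCHAR(MAX) column.
-- dbo.SomeTable, Notes1, and Notes2 are placeholders, not the original objects.
SELECT MAX(LEN(Notes1))        AS longest_notes1_chars,
       MAX(LEN(Notes2))        AS longest_notes2_chars,
       MAX(DATALENGTH(Notes1)) AS widest_notes1_bytes  -- bytes = 2 x characters for NVARCHAR
FROM dbo.SomeTable;
```

If the longest value is 100 characters, a column sized NVARCHAR(200) or so would hold the data without the overflow-page risk.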
Enjoy!
]]>
This month's topic for T-SQL Tuesday #38, hosted by Jason Brimhall (b|t), is an aptly themed variation on the word resolution. I personally chose the word resolve.
In 2012, I made a resolution to begin presenting at the local level. That sounds like a normal enough resolution for most; however, as an introvert, standing in front of people is quite terrifying. After attending many user group meetings and SQL Saturdays, I decided that since I have met many of these speakers and they are not much different from me, I should be able to stand up and do that too!
In the past, when called upon to speak in public or in front of a group, my fight-or-flight instinct would kick in and generally I would want to run with every fiber of my being. Sometimes I did run, shamefully. Interviews are no different in many cases, so it is safe to assume that I do not interview well. In the past, I have probably missed out on some good job opportunities simply because interviewers thought I was an idiot based solely upon my presentation skills. It is because of this that I resolved to improve upon the skill.
Sounds good, huh? Make a resolution and then you do it, right? Easy as pie.
Not so fast, my friend. I still wanted to run, Forrest, run! Karla Landrum (b|t), the SQL Community Evangelist for PASS, realized what was going on and gently nudged me along to speak at the Pensacola SQL Saturday last summer. This is not unusual, as historically I have been talked into doing stuff by women (I probably should not have put that out there, but now you know).
That day in Pensacola, my fight or flight instinct kicked in again all morning long, but now I was on the hook and I did not want to disappoint Karla. Plus many of my colleagues knew I was speaking. Quite honestly this held me accountable for I did not want to ruin my reputation in the SQL community before I really even had one. My resolve that day held firm and I spoke quite nervously. But I did not run. I climbed the proverbial mountain and planted my flag upon its peak.
Since that time, I have spoken at SQL Saturday Orlando and at a user group meeting. It is getting easier with each event, but I still need to resolve to get better each time. Maybe one day I can speak at the PASS Summit? I will resolve to accomplish that goal, and now you can hold me accountable. Enjoy!
]]>The problem was that when we tried to verify the backup through restoration and DBCC CHECKDB on a certain database, one considerably larger than all of the other databases on this server, we did not have enough space on the drive using the restore script we created last week. That scripting solution is great as long as there is enough space on the drives to attach the database files from backup and then drop them.
The beauty of the Virtual Restore product, in case you did not already know, is that it can restore these files with a much smaller footprint than the actual database files take up in their existing format. The problem, however, is that it is strictly a wizard at this point: it grabs a specific backup, runs the restore manually, and then drops the database once it checks the integrity and consistency. This is a great product, but this feature should be scriptable or else it does not help much, especially when you have hundreds of databases.
We ran the wizard and then viewed the script:
RESTORE DATABASE [TestDB_Virtual] FROM
DISK = N'I:\TestDB\FULL\SQLPROD40_TestDB_FULL_20130101_213217.sqb'
WITH MOVE N'TestDB_Data' TO N'F:\VirtualRestore\TestDB_Data_TestDB_Virtual.vmdf',
MOVE N'TestDB_Index' TO N'F:\VirtualRestore\TestDB_idx_TestDB_Virtual.vmdf',
MOVE N'TestDB_Log' TO N'F:\VirtualRestore\TestDB_Log_TestDB_Virtual.vldf',
NORECOVERY, STATS = 1, REPLACE
GO
RESTORE DATABASE [TestDB_Virtual] WITH RECOVERY, RESTRICTED_USER
GO
DBCC CHECKDB ([TestDB_Virtual])
GO
DROP DATABASE [TestDB_Virtual]
GO
This script did not work when we ran it via T-SQL because of a lack of disk space, which is the same problem we encountered using the Backup Pro restore script; however, it did work through the Red Gate Virtual Restore wizard. We contacted support to find out why there was a difference on the same box. Basically, SQL Server does not know that the HyperBac service is running in the background. The wizard is smart enough to check your disk space, and if you do not have enough it temporarily adds an entry to the HyperBac configuration that tricks SQL Server into thinking you have the right amount of disk space to complete the task.
The parameter is "VirtualDiskSize=", where the value is an integer representing the disk size in megabytes. You add the parameter to the bottom of the hyper.conf file found in the C:\Program Files (x86)\Red Gate\HyperBac\Bin folder. At that point, restart the HyperBac service and the problem is solved! Next I will try to automate this script… stay tuned!
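For illustration, the entry might look like this at the end of that configuration file. The 512000 value (500GB) is a made-up example; use whatever virtual size your largest restore needs:

```
; appended to the HyperBac configuration file described above
; value is in megabytes; 512000 (500 GB) is an example, not a recommendation
VirtualDiskSize=512000
```

Remember this is the trick the wizard applies temporarily, so if you script it yourself you may want to remove the entry once the verification job completes.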
]]>If you are not familiar, this tool is excellent for many reasons, but one of the reasons we like it is the simple fact that if a new database is created, it will be maintained automagically. I do not have to create any maintenance plans or agent jobs each time I add a database to a server. We have several servers that get new small databases all the time, and this is a time saver as well as a life saver. Now this script will do the same: it will iterate through the database names and restore the latest backup set of each as a new database with '_Restored' tacked onto the end of the existing name, placing the data and log files in a separate directory to avoid overwriting your existing databases.
Do not worry about failure reporting in the job, as Red Gate will send you an email for each database, pass or fail. If you wish to see only the failures, change 'MAILTO' to 'MAILTO_ONERRORONLY' in the script.
-- This script will restore multiple databases where each database has its own full backup to be restored.
USE master
GO
DECLARE @dbname NVARCHAR(260)
-- Add in any databases that you want to exclude
DECLARE cdatabases CURSOR FOR
SELECT name
FROM sysdatabases
WHERE name != 'tempdb'
AND name != 'master'
AND name != 'msdb'
-- Set @restorepath to the path where your backups are located; in my example this is 'D:\Backup\'
DECLARE @restorepath VARCHAR(500)
-- @filename will need to be modified depending on how the backup files are named
DECLARE @filename VARCHAR(500)
DECLARE @restorestring VARCHAR(1000)
-- Not needed if running in a SQL job
DECLARE @exitcode INT
DECLARE @sqlerrorcode INT

OPEN cdatabases
FETCH NEXT FROM cdatabases INTO @dbname
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @restorepath = 'D:\SQLVMDEV10\' + @dbname + '\Full\'
    -- @filename will need to be modified depending on how the backup files are named
    SET @filename = @restorepath + 'SQLVMDEV10_' + @dbname + '_FULL_*.sqb'
    SET @restorestring = '-SQL "RESTORE DATABASE [' + @dbname + '_Restored] FROM DISK = ''' + @filename
        + ''' SOURCE = ''' + @dbname + ''' LATEST_FULL WITH MAILTO = ''[email protected]'', RECOVERY, DISCONNECT_EXISTING, MOVE DATAFILES TO ''G:\VirtualRestore'', MOVE LOGFILES TO ''G:\VirtualRestore'', REPLACE, ORPHAN_CHECK, CHECKDB = ''ALL_ERRORMSGS, DATA_PURITY, EXTENDED_LOGICAL_CHECKS'', DROPDB" -E'
    -- If you wish to apply additional backups, remember to change WITH RECOVERY to WITH NORECOVERY
    EXEC master..sqlbackup
        @restorestring,
        @exitcode OUTPUT,
        @sqlerrorcode OUTPUT
    --IF (@exitcode <> 0) OR (@sqlerrorcode <> 0)
    --BEGIN
    --    RAISERROR ('SQL Backup job failed with exitcode: %d SQL error code: %d', 16, 1, @exitcode, @sqlerrorcode)
    --END
    FETCH NEXT FROM cdatabases INTO @dbname
END
CLOSE cdatabases
DEALLOCATE cdatabases
This will generate multiple entries in Red Gate Monitor, as mentioned in Part 4 of our journey. To prevent these, set up a maintenance window in your monitoring tool and run this script only during that window, ensuring that no other jobs run during that time so that you do not miss any important alerts from those maintenance jobs. Here is a link from Red Gate detailing how to set maintenance windows in Monitor. Enjoy!
]]>If you are not doing all five, then you cannot confidently say that your backups are verified. However, if you are doing all five, keep in mind that there is no foolproof guarantee against corruption; this merely minimizes the destruction by ensuring you have viable backups. I hope this helps… Enjoy!
]]>Normally this would require a script from the vendor because it is their code, but the product allows you to create some custom tables in the database (they default to the table-level collation), and those are our responsibility. So why would you need to change it, you ask? The developers were trying to write some reports against these tables and, as you would expect, they received an error (Msg 468, Level 16, State 9, Line 4: Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CS_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation). It would take forever to go table by table and column by column to change the collation, hence the need for the script!
DISCLAIMER: I have had this little snippet for a while and I do not know who wrote it originally; I probably found it on MSDN. I also know that it is not the most up-to-date way of doing this (you should use the sys schema), but it works, and sometimes that is easier than rewriting the code.
The first thing you should do is right-click in your query window in SSMS and set your results to text, so that the script outputs a new script that you will run in another query window. Keep in mind that with some of your tables you might have to drop and recreate indexes in order to alter the collation. Always take a backup before changing something like this, and know what you are doing and changing before executing any script.
SELECT 'ALTER TABLE ' + syso.name + ' ALTER COLUMN [' + sysc.name + ']',
       syst.name + ' (',
       sysc.length,
       ') COLLATE SQL_Latin1_General_CP1_CS_AS',
       CASE sysc.isnullable
           WHEN '1' THEN 'NULL'
           ELSE 'NOT NULL'
       END + CHAR(13) + CHAR(10) + 'GO'
FROM sysobjects syso,
     syscolumns sysc,
     systypes syst
WHERE syso.id = sysc.id
  AND syst.xtype = sysc.xtype
  AND sysc.collation = 'SQL_Latin1_General_CP1_CI_AS'
  AND syso.type = 'U'
  AND syst.name != 'text'
  AND syso.name NOT IN ('SYSDIAGRAMS')
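Since the disclaimer above mentions that the sys schema is the more current approach, here is a rough modern equivalent. This is a sketch only, not the script I actually ran; review the generated statements before executing them:

```sql
-- Generates ALTER COLUMN statements using sys catalog views instead of the
-- legacy sysobjects/syscolumns. Note: max_length is in BYTES (halve it for
-- NVARCHAR types) and -1 means MAX, so adjust the length handling to your schema.
SELECT 'ALTER TABLE ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
     + ' ALTER COLUMN ' + QUOTENAME(c.name) + ' '
     + ty.name + '(' + CAST(c.max_length AS VARCHAR(10)) + ')'
     + ' COLLATE SQL_Latin1_General_CP1_CS_AS '
     + CASE WHEN c.is_nullable = 1 THEN 'NULL' ELSE 'NOT NULL' END
FROM sys.columns c
JOIN sys.tables  t  ON t.object_id = c.object_id
JOIN sys.schemas s  ON s.schema_id = t.schema_id
JOIN sys.types   ty ON ty.user_type_id = c.user_type_id
WHERE c.collation_name = 'SQL_Latin1_General_CP1_CI_AS'
  AND ty.name NOT IN ('text', 'ntext');
```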
Enjoy and I hope this helps you!
]]>SELECT OBJECT_NAME(object_id) FROM sys.columns WHERE name = '<column name>'
While Bill Fellows (blog|twitter) had a different approach:
SELECT TABLE_NAME FROM INFORMATION_SCHEMA.COLUMNS ISC WHERE ISC.COLUMN_NAME = '<column name>'
Both worked very well, and here they are for you, and for me, when I need to remember how I did it. Enjoy!
]]>Moving forward, I wanted to pass along a great tip from last night, courtesy of Pam Shaw (blog|twitter), our fearless leader. She shared a site with us called Instant SQL Formatter that provides automatic formatting and coloring for many different flavors of code, including SQL Server and Oracle. It is a pretty nifty utility, especially with all of the available formatting options. Check it out, and thanks Pam!
]]>First of all, datetime2 is recommended by Microsoft over datetime, as it provides a larger date range, a larger default fractional precision, and an optional user-defined precision. Datetime2 is also ANSI and ISO 8601 compliant, whereas datetime is not. Which would you use?
Example:
Datetime2: 2012-06-25 12:45:10.1234567
Datetime: 2012-06-25 12:45:10.123
]]>Today's installment involves the Prime Number Challenge! This is a clever and fun T-SQL script, so check it out!
Enjoy this #SQLAwesomeness!
]]>Alright, here we go. I created three new logins that would access development and acceptance copies of a production database that I had just set up on an acceptance server using Red Gate's SQL Compare and SQL Data Compare (which I will blog about soon; excellent products). In my possession, actually in a SharePoint list, I had the password for each of these three new logins. Today I found out that the passwords I had listed were out of date. No problem, easy fix!
Using SSMS, I would simply change the passwords and then update my SharePoint list with the new passwords; this is DBA 101 stuff. Not so fast, my friend! SSMS presented an error stating that the password did not meet our ultra-rigid Fort Knox password policy (see posts on security). No problem: I forgot to uncheck the 'Enforce password policy' check box. That happens all the time when I get in a hurry. Now it is unchecked, and SSMS presents the following error:
The CHECK_POLICY and CHECK_EXPIRATION options cannot be turned OFF when MUST_CHANGE is ON.
There was my real mistake, committed on Friday: I forgot to uncheck the 'User must change password at next login' check box when I created the logins. It was a rookie mistake, but I was in a hurry to get things done so I could go get my root canal in the afternoon. Luckily for me, this was not a resume-updating event. In order to fix this, we need to do the following:
USE master
GO
ALTER LOGIN [userlogin] WITH PASSWORD = 'original password'
GO
ALTER LOGIN [userlogin] WITH CHECK_POLICY = OFF, CHECK_EXPIRATION = OFF;
Then I was able to change the password, using ALTER LOGIN since I already had it in the query window. What a way to start the week. Enjoy!
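In hindsight, the cleanest prevention is to leave MUST_CHANGE off at creation time. A minimal sketch, where the login name and password are placeholders rather than the actual accounts from that day:

```sql
-- Create the login without forcing a password change at next login.
-- [userlogin] and the password are placeholder values.
CREATE LOGIN [userlogin]
WITH PASSWORD = 'Str0ng!Passw0rd',
     CHECK_POLICY = ON,       -- still enforce password complexity
     CHECK_EXPIRATION = OFF;  -- no expiration, so no MUST_CHANGE trap later
```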
]]>Today’s installment is from one of the funniest technical presentations that I have ever seen. If you get a chance to see it at a SQL Saturday or Code Camp near you, by all means attend. It is by Rob Volk (blog | twitter) and it is called Revenge: The SQL!
Enjoy this #SQLAwesomeness!
]]>In order to clean these items up, use the following code:
USE msdb
GO
EXEC sysmail_delete_log_sp @logged_before = '2012-01-13 12:00:00'
You could script this out to run nightly or weekly and remove everything older than 30 days. The messages themselves are kept in the sysmail_mailitems table and can be purged the same way with sysmail_delete_mailitems_sp. Enjoy!
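A sketch of such a scheduled cleanup, keeping a rolling 30 days (the cutoff variable and the 30-day window are my own example values):

```sql
-- Purge Database Mail log entries and sent mail items older than 30 days.
-- Intended to run from a nightly or weekly SQL Agent job.
DECLARE @cutoff DATETIME = DATEADD(DAY, -30, GETDATE());
EXEC msdb.dbo.sysmail_delete_log_sp       @logged_before = @cutoff;
EXEC msdb.dbo.sysmail_delete_mailitems_sp @sent_before   = @cutoff;
```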
]]>Today's installment is Solving Sudoku with SQL! This is a clever and fun T-SQL script, so check it out!
Enjoy this #SQLAwesomeness!
]]>Microsoft recommends scripting your SQL Server configuration data to a file using SELECT * FROM sys.configurations anytime you make changes to your server. I would then include this in your disaster recovery walk-away bag or remote location destinations. Personally, I would recommend documenting these settings if at all possible in a Word document explaining why you have chosen the settings and if they relate to your department’s best practices so that when you get hit by a bus, your replacement DBA will understand the intricacies of your choices and not chalk them up to the ‘this guy was an idiot who knew not the ways of the force.’
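A minimal sketch of capturing those settings for the walk-away bag (the column choice and ordering are my own; the full SELECT * works just as well):

```sql
-- Snapshot the server-level configuration; save the text output
-- alongside the disaster recovery documentation.
SELECT name, value, value_in_use, is_dynamic, is_advanced, description
FROM sys.configurations
ORDER BY name;
```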
In addition, if you are using replication, then you should script out all of its components for your disaster recovery. It is also a good idea to back up the master and msdb databases for the publisher, distributor, and all of your subscribers. I personally have never done that, but Microsoft recommends it… so do IT! Enjoy!
]]>To create a login in T-SQL use the following code:
CREATE LOGIN [AD\Ed] FROM WINDOWS
To remove a login using T-SQL, use the following code:
DROP LOGIN [AD\Ed]
To deny a login using T-SQL, use the following code (NOTE: if the user or group does not exist in SQL Server, this command will add them first):
DENY CONNECT SQL TO [AD\Ed]
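To confirm what state a login ends up in after any of these commands, the server principals catalog can be queried. A small sketch using the example login from above:

```sql
-- Check whether the login exists and whether it is disabled.
SELECT name, type_desc, is_disabled
FROM sys.server_principals
WHERE name = 'AD\Ed';
```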
This is a good foundation for a couple of other posts that I am working on for this week. Enjoy!
]]>Today’s installment is called Minesweeper in T-SQL! This is a clever and fun T-SQL script, so check it out!
Enjoy this #SQLAwesomeness!
]]>