T-SQL Tuesday #35 – Soylent Green

T-SQL Tuesday is a recurring blog party, started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is “Soylent Green”. If you want to read the opening post, please click the image below to go to the party-starter: Nick Haslam (Blog | @nhaslam).



The assignment of this month was to write down our most horrifying discovery from our work with SQL Server. If you work with SQL Server long enough, you will encounter some strange situations, one way or the other…

My first experience with SQL Server was at my first official IT job. Back then I worked for an insurance company, where I was offered the opportunity to become a “Conversion Specialist”. This meant that I visited customers, advised them about their data and our software, and converted and imported their data into our main system. When I started the job, I’d never written a single T-SQL statement. So the learning curve was pretty steep, but after some sleepless nights of studying, I got the hang of it. And during this job, I encountered my first (of many) horrifying experiences…

In this company, the main application took its data from one (!!!) SQL Server 2000 database. The system contained all the data the company had, and it was a “rebuilt” DOS application in Delphi (at that time the company worked with Delphi and .NET). In order to store all of their data in one data model (yeah, I know!), they created a “flexible and dynamic” model… In one table…

The data was stored in one table (it wants to remain anonymous, so from now on we’ll call him “Foo”), and Foo contained more than 200 columns (if I recall correctly). Every time we inserted a record into Foo, SQL Server calmly mentioned that the 8,060-byte row-size limit was exceeded. How they fixed that? I still haven’t got a clue…

Every “object” stored in Foo contained properties, or a collection of properties (which obviously ended up as one record for each item in the collection). But as I mentioned, they were stored “dynamically”. So if you wanted to retrieve an object “Tree”, you needed columns 15, 18, 20 and 52. When retrieving an object “Bird”, you needed columns 15, 18, 25 and 2550 for the same properties.
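To give you an idea, retrieving an object came down to something like this (a sketch; the `ObjectType` column and the aliases are invented for this example):

```sql
-- Retrieve a "Tree": its properties live in columns 15, 18, 20 and 52...
SELECT Col15, Col18, Col20, Col52
FROM Foo
WHERE ObjectType = 'Tree'

-- ...but the same logical properties of a "Bird" live somewhere else:
SELECT Col15, Col18, Col25, Col2550
FROM Foo
WHERE ObjectType = 'Bird'
```

So every query against Foo needed its own mapping of "which column means what for which object type".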

But I must honestly admit: I left the company after six years with tears in my eyes. They offered me a lot of opportunities, and the colleagues were awesome!

Another example I encountered in a production environment (at a different company) was an issue with currency calculations. The product data and the currency rates were loaded from two different data sources. To combine them in one record (and calculate the turnover), they used a CASE statement in the script that ran in the Data Warehouse. But when I took over the Data Warehouse, they forgot to mention one thing…

If a product was sold for 100 Mexican pesos (with current exchange rates about € 6.00 or $ 7.80), and no exchange rate from pesos to dollars was present, the script ended up in the ELSE clause. This clause multiplied the amount by 1, “not to mess up the data”. And without anyone noticing, 100 Mexican pesos turned into $ 100! It actually took a while for people to notice this (including me!).
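A simplified sketch of what that CASE construction looked like (all table and column names are invented for this example):

```sql
-- The ELSE branch silently treats a missing exchange rate as 1:1,
-- so 100 Mexican pesos end up as 100 dollars:
SELECT
	S.ProductID,
	S.Amount *
	CASE
		WHEN R.ExchangeRate IS NOT NULL THEN R.ExchangeRate
		ELSE 1	-- "Not to mess up the data"...
	END AS TurnoverInDollars
FROM Sales S
LEFT JOIN CurrencyRates R ON R.CurrencyCode = S.CurrencyCode
```

Leaving out the ELSE (so a missing rate shows up as NULL), or raising an error on a missing rate, would at least have made the gap visible instead of silently inflating the turnover.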

And I’m probably not the only one with these experiences, so I’m glad Nick asked us to share them with the rest of the #SQLFamily! And if you want to read more of the horrifying experiences, let me know. I might write another blog post about this, because this feels like therapy! And it’s another chance for me to make Rob Volk (a great guy, and a great example to me!) proud, by using the “Evil” tag again! 😉

Sample Databases – Ye Olde Way!

Last week I was working on a SQL Server presentation, to explain the basics of databases and how SQL Server works to a few colleagues. At the end of my presentation, I wanted to show some demo queries. Normally I would create my own tables with sample data, but I wanted to give them the opportunity to repeat the demos on their own.

Nowadays Microsoft offers the AdventureWorks database as an extra download for all new versions of SQL Server. But for some examples I just want a smaller database. In “Ye Olde Days” I worked with Pubs and Northwind. Those were small databases that were still understandable for beginners. My first encounter with SQL Server was on the pubs database, and it still sticks with me as “fun and easy”.

But if you try to find them, you need to download an MSI file that extracts the files to your local system. It contains the .MDF and .LDF files of both the Pubs and Northwind databases, and a ReadMe file. But if you try to attach these databases to a SQL 2012 instance, you’ll get an error: SQL 2000 databases can’t be automatically upgraded to be SQL 2012 compatible.

I’m glad they decided to add the create scripts to the .MSI installer. There’s only one thing that doesn’t work if you run the scripts: both contain a call to sp_dboption, a way to change database options in SQL 2000 through 2008. It was removed in SQL 2012, and MSDN advises you to remove it as soon as possible if you still use it in old systems. After deleting these calls from the scripts, they work perfectly. One thing I added for my own use: after the databases are created, I set them to read-only. You can delete this from the script, or undo it after the database(s) are generated.
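For reference, the replacement for sp_dboption is ALTER DATABASE. The read-only step I added at the end of the scripts boils down to this:

```sql
-- Set both sample databases to read-only after creation:
ALTER DATABASE Pubs SET READ_ONLY
ALTER DATABASE Northwind SET READ_ONLY

-- And to undo it again later:
ALTER DATABASE Pubs SET READ_WRITE
ALTER DATABASE Northwind SET READ_WRITE
```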

I’ve also included the ERD (Entity-Relationship Diagram) for both databases. This makes it a little bit easier to start using these databases. I found the diagrams by searching in Google for the name. In this case DataMasker hosted the files I wanted.

The reason to share these scripts is because I’m probably not the only one that still wants to use these databases occasionally. So you can download them by clicking the links below. If you want a backup or the .MDF and .LDF files of the databases, please contact me and we’ll work something out.

Pubs:
Pubs Creation Script (.sql)
Pubs ERD (.pdf)

Northwind:
Northwind Creation Script (.sql)
Northwind ERD (.pdf)

SET TEXTSIZE – Good or Evil?

One of the first things I try to do every morning when I’m at the office (besides getting a lot of coffee to get the engine started) is reading up on blog posts that were posted overnight. My goal is to learn at least one thing every day, by reading a blog post or article.

Today, one of those articles was written by Pinal Dave (Blog | @pinaldave). He wrote a blog post about SET TEXTSIZE. I wasn’t familiar with that functionality, so I decided to take it out for a spin.

What SET TEXTSIZE does is limit the size of the data returned by a SELECT statement. As Pinal describes in his blog post, it could be used as a replacement for the LEFT function on each column you retrieve from the database. But I agree with him: use it only for test purposes. In a production query that returns (for example) 5 columns, a single SET TEXTSIZE is overlooked much more easily than 5 LEFT functions. Using explicit LEFT functions reduces the chance that you or your colleagues end up wondering why a returned column value isn’t shown completely.

The other remark I need to make is that it’s interpreted differently by the SQL engine in some cases. A few examples of this can be found in the comments on the article Pinal wrote.

But when I used SET TEXTSIZE, I started wondering what this does to your execution plan. According to MSDN, TEXTSIZE is set at execute or run time, and not at parse time. But what does this mean for your execution plan?

To try this out, I created a table, and inserted 10,000 records into that table:

CREATE TABLE RandomData
	(ID INT IDENTITY(1,1),
	 Col1 VARCHAR(MAX),
	 Col2 VARCHAR(MAX),
	 Col3 VARCHAR(MAX),
	 Col4 VARCHAR(MAX),
	 Col5 VARCHAR(MAX))
GO


INSERT INTO RandomData
	(Col1, Col2, Col3, Col4, Col5)
SELECT
	REPLICATE('1234567890', 100),
	REPLICATE('1234567890', 100),
	REPLICATE('1234567890', 100),
	REPLICATE('1234567890', 100),
	REPLICATE('1234567890', 100)

GO 10000

Once you’ve created the table, you can run the “old fashioned” script with the LEFT functions:

SELECT
	LEFT(Col1, 10),
	LEFT(Col2, 10),
	LEFT(Col3, 10),
	LEFT(Col4, 10),
	LEFT(Col5, 10)
FROM RandomData

If you look at the execution plan, it contains a Table Scan, a Compute Scalar (that computes the new values of each row), and the SELECT of the data. Nothing out of the ordinary, I would say.
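For comparison, this is the kind of query I ran with SET TEXTSIZE (the value is in bytes, so 10 limits every column here to its first 10 characters):

```sql
-- Limit all returned character data to 10 bytes for this session:
SET TEXTSIZE 10

SELECT
	Col1,
	Col2,
	Col3,
	Col4,
	Col5
FROM RandomData

-- SSMS uses the maximum value by default, so you can reset it with:
SET TEXTSIZE 2147483647
```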

But if you run the same query with SET TEXTSIZE (and with the actual execution plan included in SSMS), it results in an error:

An error occurred while executing batch. Error message is: Error processing execution plan results. The error message is:
There is an error in XML document (1, 6).
Unexpected end of file while parsing Name has occurred. Line 1, position 6.

The query actually returns the whole set of 10,000 records, and the result is correct: for every column, only the first 10 characters are returned. So what’s happening with the execution plan?

If you use any one of the statements below in your session (only one can be active at a time), you can see that the execution plan is generated without any issues:

SET SHOWPLAN_TEXT ON
SET SHOWPLAN_ALL ON
SET SHOWPLAN_XML ON

There is a Connect item for this issue, but the SQL Server team decided not to fix it in SQL Server 2008. And looking at my screen, they didn’t fix it in SQL Server 2012 either…

So my best guess (without knowing what the actual code does) is that the execution plan XML isn’t completely transferred to the client. It’s part of the resultset, and is therefore also truncated by the SET TEXTSIZE.

So my conclusion would be: don’t use SET TEXTSIZE, unless you’re fully aware that the results you receive are truncated, and that visualizing your execution plan may cause an error (but only in SSMS!). The query results are retrieved and shown correctly, but the execution plan XML causes problems when using a small TEXTSIZE.

But if my conclusions are incorrect, or if I’ve overlooked something, I’d love to hear your comments on it! So don’t hesitate to correct me if necessary! 😉

T-SQL Tuesday #33 – Trick Shot

T-SQL Tuesday is a recurring blog party, started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is trick shot. If you want to read the opening post, please click the image below to go to the party-starter: Mike Fal (Blog | @Mike_Fal).



The topic of this month is trick shots. Thinking about this subject, I decided to look up the definition on Wikipedia:

“A trick shot (also trickshot or trick-shot) is a shot played on a billiards table (most often a pool table, though snooker tables are also used), which seems unlikely or impossible or requires significant skill.”

So a trick shot is a trick that’s unlikely or impossible. Isn’t that something we hear every day? I know I do! A few quotes I overheard last week:

“No, this can’t be done differently”

“No, this cursor is set-based”

“I have this query (500 lines of code), and it doesn’t do what I want/what I built. Can you fix it?”

In at least two cases it ended up with me being right (unfortunately for them, and for our company). But fixing the issues took some time, a lot of talking (or, as a manager would call it, coaching), and a fair deal of patience. But then came the hard part: rewriting the code. How do you rewrite a query, based on a cursor, into a set-based operation? There’s your trick shot! 🙂
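To give an idea of such a rewrite, here is a minimal, invented example (the real query was of course much bigger):

```sql
-- The cursor version processes one row at a time:
DECLARE @OrderID INT
DECLARE OrderCursor CURSOR FOR
	SELECT OrderID FROM Orders WHERE OrderStatus = 'New'

OPEN OrderCursor
FETCH NEXT FROM OrderCursor INTO @OrderID

WHILE @@FETCH_STATUS = 0
BEGIN
	UPDATE Orders SET OrderStatus = 'Processed' WHERE OrderID = @OrderID
	FETCH NEXT FROM OrderCursor INTO @OrderID
END

CLOSE OrderCursor
DEALLOCATE OrderCursor

-- The set-based version does the same work in one statement:
UPDATE Orders
SET OrderStatus = 'Processed'
WHERE OrderStatus = 'New'
```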

Another example of a “trick shot” was creating a solution for a spatial data problem. My colleagues created an application that saves polygons into the database. But unfortunately, we didn’t use the .MakeValid() method. That meant that some polygons were invalid, and some objects actually contained more than one polygon (which should have been stored as a multipolygon). When querying this data, the execution of the query retrieving the geography objects failed because of the invalid objects. Finding these objects was the biggest issue.

Well, surprise: a cursor was the solution! For every string of coordinates retrieved, I entered a TRY-CATCH block. There I tried to convert the string of coordinates into a valid polygon, and if that failed, I added it to a table variable of invalid polygons that I declared in the query. The results from that table were printed at the end of the script, so the developer who ran the script could fix them. Normally I’m against the usage of cursors, but sometimes a cursor is useful to find a problem, as you just saw.
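A sketch of that approach (all table and column names are invented; the real script obviously matched our own tables):

```sql
-- Try to convert every stored coordinate string, and collect
-- the failures in a table variable:
DECLARE @InvalidPolygons TABLE (ID INT, CoordinateString VARCHAR(MAX))
DECLARE @ID INT, @Coordinates VARCHAR(MAX), @Dummy GEOGRAPHY

DECLARE PolygonCursor CURSOR FOR
	SELECT ID, CoordinateString FROM StoredPolygons

OPEN PolygonCursor
FETCH NEXT FROM PolygonCursor INTO @ID, @Coordinates

WHILE @@FETCH_STATUS = 0
BEGIN
	BEGIN TRY
		-- This fails for invalid polygons:
		SET @Dummy = GEOGRAPHY::STGeomFromText(@Coordinates, 4326)
	END TRY
	BEGIN CATCH
		INSERT INTO @InvalidPolygons VALUES (@ID, @Coordinates)
	END CATCH

	FETCH NEXT FROM PolygonCursor INTO @ID, @Coordinates
END

CLOSE PolygonCursor
DEALLOCATE PolygonCursor

-- Print the offenders at the end, so a developer can fix them:
SELECT * FROM @InvalidPolygons
```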

What I’m actually trying to say is that a trick shot isn’t unlikely or impossible, as long as you know what you’re doing. It takes a lot of practice, trial and error, and patience to master a certain skill. And, like in most cases, the only thing that’s holding you back is your own mind and imagination. So keep searching for new things, and keep challenging yourself to learn!

T-SQL Toolbelt – Search for objects in databases – V 2.1.0

A few weeks ago, I received a message from an old colleague and friend Eric (Blog | @saidin). He wanted to know if I had a query in my (and I quote) “magic bag of SQL tricks” to search through objects in SQL Server. The company he works for (he is a software developer and independent consultant) wanted to change all stored procedures that contained functionality to calculate VAT (Value Added Tax).

I remembered that a few years ago, I needed that same functionality, and I wrote a few scripts to search for specific dependencies in views and stored procedures. Next to a query that gets information from sys.tables and sys.columns, I used these queries to get view and stored procedure content:

SELECT *
FROM sys.syscomments
WHERE text LIKE '%<SearchTerm>%'


SELECT *
FROM INFORMATION_SCHEMA.VIEWS
WHERE VIEW_DEFINITION LIKE '%<SearchTerm>%'

The first query uses information from sys.syscomments, which, according to MSDN:

“Contains entries for each view, rule, default, trigger, CHECK constraint, DEFAULT constraint, and stored procedure within the database. The text column contains the original SQL definition statements.”

The second query uses INFORMATION_SCHEMA, which contains SQL Server metadata (see the MSDN article):

An information schema view is one of several methods SQL Server provides for obtaining metadata.

The VIEWS view (a view on all views?) returns a row for each view that can be accessed by the current user in the current database. This means the view only returns rows for objects you have permissions on.
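As a side note: sys.syscomments is deprecated, and it splits long definitions over multiple 4,000-character rows. On SQL 2005 and later, sys.sql_modules avoids both problems:

```sql
-- Search complete object definitions, one row per object:
SELECT
	OBJECT_NAME(M.object_id) AS ObjectName,
	M.definition
FROM sys.sql_modules M
WHERE M.definition LIKE '%<SearchTerm>%'
```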

Then I decided to write a script that does this in one query, and more… When working on this script, I thought about adding more functionality to it. Why wouldn’t you also want to search for primary or foreign key columns, triggers, functions, etc.? But adding more information to the resultset often means that the overview is lost along the way. Because of that, I created a switch system: by setting a few bits you can turn on what you need to see, and turn off what you don’t. This way the result is kept clean, and you’re not bothered with unnecessary information.

One of the issues I ran into was how to search for a specific string. Because I wanted to let the user enter the search term only once, I needed to use a variable. But if you use a variable, and you add a wildcard (%) as the first and last character, the query returned all objects, instead of only the objects matching the specific search term.

So because of this, I used dynamic SQL to search through the list of objects. In dynamic SQL it’s possible to work with wildcards in a LIKE statement. The only thing I needed to do was change one more table from a table variable to a physical temp table, because it’s used in the dynamic SQL. A table variable (DECLARE @Object TABLE) declared outside the dynamic SQL isn’t in scope inside it, so it can’t be used as a data source there.
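A stripped-down sketch of that construction (the real script contains a lot more switches and object types):

```sql
DECLARE @SearchTerm VARCHAR(100),
        @SQL NVARCHAR(MAX)

SET @SearchTerm = 'VAT'

-- A physical temp table, because a table variable declared here
-- wouldn't be visible inside the dynamic SQL:
CREATE TABLE #Objects
	(ObjectName SYSNAME,
	 ObjectDefinition NVARCHAR(MAX))

SET @SQL = N'INSERT INTO #Objects
             SELECT OBJECT_NAME(object_id), definition
             FROM sys.sql_modules
             WHERE definition LIKE ''%' + @SearchTerm + '%'''

EXEC sp_executesql @SQL

SELECT * FROM #Objects

DROP TABLE #Objects
```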

So this is what I could build in the past few weeks. The only problem is that fixing issues that I found resulted in adding more and more new functionality to the script. With that in mind, I want to create a toolbelt with useful SQL scripts for myself. But of course, I want to share it with the community, so they can use it if they like.

In the upcoming weeks, I hope to build as much functionality into this script as I can. There are still a few wishes for the future, and a few features that I want to build in, just because they’re cool! For every new version, I will write a blog post with release notes, so you’re aware of the changes in the script.

For further questions and wishes, please contact me via Twitter or this blog. I’d love to hear your ideas and wishes, so that I can implement them in the script!

You can download the script by clicking on the image below.

Downloads

Version 2.1.0:


Strange behavior of spatial data

As of today, I’m kind of forced to admit I have a problem… I’m in love with spatial data. And once you’re hooked, there’s no turning back. It could be worse of course! And in these circumstances, you come across the most interesting cases…

After trying to draw a geometry polygon in SQL Server 2008, I wondered what the difference is between a polygon and a multipolygon. A polygon also accepts more than one set of coordinates, just like a multipolygon. So what’s the difference then? I think I found it, with the help of Andreas Wolter (Blog | @AndreasWolter).

Just look at the query below:

DECLARE @Obj GEOMETRY
SET @Obj = GEOMETRY::STGeomFromText('POLYGON((10 0, 10 10, 0 10, 0 0, 10 0),
											 (10 15, 10 25, 0 25, 0 15, 10 15, 10 15))'
									,4326)

SELECT @Obj.ToString(), @Obj

This query produces a drawable set of polygons, even though it’s two separate shapes in a single polygon. SQL Server draws both shapes, even though a polygon should consist of one exterior ring (plus optional interior rings), and not two separate shapes as in the example above.

This isn’t the weirdest thing I’ve seen. Just look at the example below:

DECLARE @Obj GEOMETRY
SET @Obj = GEOMETRY::STGeomFromText('POLYGON((10 0, 10 10, 0 10, 0 0, 10 0),
											 (10 15, 10 25, 0 25, 0 15, 10 15, 10 15))'
									,4326)

--==================================================

SELECT
	'Geo Object'					AS Description,
	@Obj							AS GeoObject,
	@Obj.ToString()					AS GeoObject_ToString,
	@Obj.STIsValid()				AS GeoObject_IsValid

UNION ALL

SELECT
	'Geo Object + MakeValid()'		AS Description,
	@Obj.MakeValid()				AS GeoObject,
	@Obj.MakeValid().ToString()		AS GeoObject_ToString,
	@Obj.MakeValid().STIsValid()	AS GeoObject_IsValid

If you run the example, you’ll see a description of the objects, the spatial object itself, the object’s ToString(), and a bit (boolean) for STIsValid(). The first record in the resultset is just the same as in the previous example. The second row uses the .MakeValid() method.

As you can see, the first record (two shapes in one polygon) is not a valid geometry object. The second record shows that .MakeValid() converts your polygon into a multipolygon. And if you check whether the multipolygon is valid, it returns true.

This example was already getting weird, but now run it on a SQL 2012 instance. You will see a difference in the multipolygon coordinates. I’ve run them on both versions, and you can see the differences in the screenshots below:

SQL Server 2008:

SQL Server 2012:

The conversion to multipolygon causes a different rounding of coordinates. It seems the conversion logic in SQL 2008 was changed quite a bit for SQL 2012.

So what does this mean for the usage of geometry and geography data? I don’t know… It might mean we always need to use .MakeValid() before storing a polygon. Part of the issue is that spatial data is pretty new stuff, which means there are no best practices yet. So time will tell what the best practices will become…

Spatial Index – No catalog entry found for partition

Last week, I came across a problem with spatial indexes. Apparently there’s still a bug in SQL Server 2012 (the SQL Server team thought they’d fixed it in 2012) when building a spatial index and using the diagnostic procedures after that.

I tried to build an index on a world map. This map contained all the country, province/state and city information of the whole world. We needed the data to feed a new company portal for our customers. To speed up the queries we ran on the data, I tried to add a spatial index. To see which index settings worked best for us, I used the sp_help_spatial_geography_index stored procedure to look at performance statistics. This worked fine, until I tried a number of different index setups. Then I got this error:


Msg 608, Level 16, State 1, Line 1
No catalog entry found for partition ID 72057594041073664 in database 9. The metadata is inconsistent. Run DBCC CHECKDB to check for a metadata corruption.
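For reference, the kind of call that triggered the error looks roughly like this (the table name, index name and query sample are invented):

```sql
-- Analyze how well a spatial index supports a sample query region:
DECLARE @QuerySample GEOGRAPHY
SET @QuerySample = GEOGRAPHY::STGeomFromText(
	'POLYGON((5.0 52.0, 5.2 52.0, 5.2 52.2, 5.0 52.2, 5.0 52.0))', 4326)

EXEC sp_help_spatial_geography_index
	@tabname = 'dbo.WorldMap',
	@indexname = 'SIX_WorldMap_Shape',
	@verboseoutput = 1,
	@query_sample = @QuerySample
```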


The first thing I did was search for the error number and the specific issue. Apparently there were only a few problems documented. Eventually I ended up at a Microsoft Connect issue, started by my colleague and friend Bob Beauchemin (Blog | @bobbeauch).

After contacting and consulting Bob, I tried installing SQL Server 2012 CU1 (Cumulative Update 1) and CU2, and after both updates the error still occurred.

The conclusion is that the SQL Server team is trying to fix this in the next big update. Looking at the status of the Connect item, I personally don’t see this happening in a hotfix.

In the Connect issue I also posted a workaround. Because building the index works, and only the diagnostic procedure isn’t doing what it’s supposed to do, you can just restart the SQL Server service and the procedure will work again. This isn’t a fix for production servers, but I assume you will only create an index once, and then the procedure will work!


UPDATE

With the release of SQL Server 2012 SP1 (announced at the PASS Summit 2012), this bug has been solved. You can download SP1 here. To see what’s new in SP1, read this MSDN article.

T-SQL Tuesday #31 – Logging

A few weeks ago I heard about T-SQL Tuesday. This is a recurring blog party, started by Adam Machanic (Blog | @AdamMachanic). Each month a blog will host the party, and everyone who wants to can write a blog post about a specific subject.

This month the subject is logging. If you want to read the opening post, please click the image below to go to the initial party-starter: Aaron Nelson (Blog | @SQLvariant).



When I read that the subject of this month is logging, the first thing that came to mind was: what do you mean by logging? Logging application performance? Logging system health? Logging the outcome of a process? Some of these you use every day, and some of these you abuse every day.

Instead of writing about specific SQL Server logging, or technical solutions or scripts, I decided to write about the function of logging. When to use logging, and how to use it, is (in my opinion) just as important as the technical implementation of it.

Monitoring vs Logging
One of the projects I saw in the last couple of years is an example of that abuse. The company wanted to know if the products they sold were actually processed throughout the back-end systems. So they bought some big TVs to show the dashboard software they had purchased. Quickly, the dashboards were filled with numbers regarding product sales, the processing time needed from order to shipment, etc.

As soon as these dashboards were up, they started to alter them. More and more information was added, and soon the information overkill was immense. Then the inevitable happened: management started to use these numbers to monitor the health of the back-end systems! If the process timings were climbing, they ordered the IT department to check the production environment for disturbances or bottlenecks.

This is a typical example of abusing logging to monitor the complete production environment. Instead of creating “checkpoints” for their data movements, they created a top-level view that showed something went wrong, but only at the very last moment. So instead of concentrating on the smallest issue or bottleneck first, they needed to drill through tons of information and errors to get to the actual issue.

What do you want to achieve?
As you can see in the example above, you need to ask yourself a question before building logging: what do you want to achieve with it? Does the business want to see the money flow from front-end application all the way to your database? Do you want to adjust business processes via the information you extract from logs (maybe in a sort of Data Warehouse)? Do you want to monitor the health of your systems (being a developer or a guy working in the IT department)?

Once you’ve established that, the next question is: what is the scope of the logging? Do you want to monitor up-time? Do you want to see the outcome of a process, or do you just want to log errors? It might even be both, telling you that the process finished with a certain status, so that you can store timings of certain processes.

Logging location
The last question you want to answer is where to store your logging information. It’s easy to store it in a SQL Server database, so you can easily query the information you want. On the other hand, the information you store might slow down the rest of the SQL Server instance, because you share I/O cycles on the same physical disks. In that case a file (for example a CSV) on a different server might be interesting.

Another solution is to e-mail the error message to a certain mailbox or user if the process finished unexpectedly. But if you want to log or store all messages, this might not be the best approach for your users and your Exchange server.

Production vs Logging
When logging SQL Server messages, I’m always very careful about what I log. A past experience taught me to ask myself one question in particular: do you want to run production without logging, or do you want to log that you’re not running production? What I mean by that: whenever you log timings or other process messages, you slow down the production environment. Whenever you see someone running SQL Profiler on a production machine, you know he’s doing a bad job. In my opinion, that is only a valid option when you’ve tried all of your other options to find the bottleneck.

I’ve experienced a good example of this before. The company saw some strange transactions in their database. So naturally, they wanted to know who kept adding SQL Server users (with sysadmin permissions) to the instance on the production machine.

One of the system engineers thought it was a good idea to run SQL Profiler against the instance. The adding of the user occurred at random moments, so they decided to let Profiler run for a while. After a few weeks (!!!) of constantly running Profiler, another system administrator (who didn’t know that Profiler was running from another machine) wondered why the production machine was so slow. Eventually he asked me to help him; we looked at the machine, and turned off the trace.

Curious about what Profiler had logged, we checked out the database that was used as the log destination. It was a table with millions and millions of records, and almost none of them were useful. So they had a lot of lost time to make up, while the person responsible for adding the users was still unknown.

Eventually we solved the mystery: it was a senior developer who added the user. After he added the user for his application (it was used for billing once a month), he was pissed off that someone kept deleting his user again. So this vicious circle cost them a lot of time and frustration. Not only for them, but also time they could have spent making the company money, but instead used for a ghost hunt.

Conclusion
Giving one answer to all questions is impossible. For every specific challenge you’ll encounter, you need a new solution (or at least partially). So whenever you’re thinking of creating a solution for all of your applications: forget about it! This is impossible! Tons of developers have tried this, and none of them found a perfect solution. Every case needs to be handled with care, and needs a specific solution or workaround.

When logging, try to log the least possible, with the biggest impact possible. This might sound like stating the obvious, but it isn’t. It’s as basic as that. Remember the impact of your logging, and try to tune it down where possible. Keep it basic!

#SQLHelp – SQL 2012 Management Studio Freezes

As I told you in a few of my previous blog posts, I try to follow the #SQLHelp hashtag / topic. And two weeks ago, I could help another colleague via this communication channel.

When SQL Server 2012 RTM came out, I installed it as quickly as possible, just to try it out, and to see the differences compared to the other version installed on my machine: SQL Server 2008. When using SQL Server Management Studio 2012, I encountered random freezes of SSMS. The freezes didn’t occur every time I opened a menu or started a wizard, but at seemingly random moments. So it looked like a problem with my installation.

After a while, I remembered that installations of SQL Server 2005 and 2008 had the same issue. Those SSMS installations also froze, because they shared some DLLs with Visual Studio. So the issues I had now might have the same cause. Eventually I re-applied Visual Studio 2010 SP1, and this solved the issue for me.

And after a few weeks, I saw a similar question from Samson J. Loo (Blog | @ayesamson) coming by, using the #SQLHelp hashtag:

@ayesamson, 2012-05-23

has anyone experienced random unresponsiveness with SSMS 2012 to a point where you have to kill the process? #sqlhelp #sql

So because I saw this issue before, I replied to his tweet:

@DevJef, 2012-05-23

@ayesamson: Yes. Are you running into this issue constantly, or just once? Problem might come from shared DLL’s with VS2010…

Apparently he was still having these issues:

@ayesamson, 2012-05-23

@DevJef its been happening more frequently now. I do have VS2010 installed as well. ‪#sqlhelp

So from my previous experience, I gave him the tip to re-apply Visual Studio 2010 SP1:

@DevJef, 2012-05-23

@ayesamson: I had the same issue. I actually fixed it by applying VS210 SP1 again. This might help you as well! ‪#SQLHelp

The next day, I got the confirmation that SP1 was re-applied:

@ayesamson, 2012-05-24

@DevJef I re-applied VS2010 SP1 this morning, rebooted and haven’t had an issue. If I don’t have an issue come Mon. then we’re golden!

And a week later, I got the great news it helped him get rid of the freezes:

@ayesamson, 2012-05-31

@DevJef well I haven’t experienced any lockups with SSMS 2012 since reapplying VS2010 SP1. Thanks!! ‪#sqlhelp

So I was glad I could help him out, and happy he actually got back to me about resolving the issues. So thank you for that Samson! And for the rest of the community, I hope I helped you with writing this post!

Welcome #1000!!!

I never thought that I would get a chance to say this, but I’m proud to announce that on the 31st of May, I could welcome my 1000th visitor on this blog. And in just a few months!

It all started on the 21st of September 2011, when I finally decided to create an online brain-dump for myself: posting things I needed to remember, or was (possibly) going to use again in the future. The more articles I wrote, the more visitors started reading them. This came as a surprise to me!

Then I decided to create a WordPress blog, to get a better layout of my blog, and more functionality in the back-end CMS.

So on the 3rd of November 2011 I copied all my previously written posts to my WordPress blog, and thereby finished my blog-move. From that date, I only posted new articles on this blog you’re reading now.

Since that time, a lot has changed. I started to monitor Twitter feeds (#SQLHelp and #SQLServer) for interesting subjects to write on, and I try to write about subjects and problems that are (at least I think so) interesting.

Just last week I published my 25th article. And I hope to double this number in 2012.

From this place I want to thank you for reading and/or following my blog, and I hope I can provide you with more interesting stuff in the future. So if you have any subjects for me, or questions you want answered, please contact me and I’ll try to write an interesting article for you! 😉
