NOLOCK Is Not a Performance Hint. It’s a Data Corruption Hint.

Ranting about NOLOCK right after my CLR post might be fanning the flames, but I’m actively dealing with these topics, so they’re fresh in my mind. And we all like a good argument anyway, right?

So at the risk of starting more fights…

NOLOCK is not a performance optimization

It’s a concurrency shortcut that trades correctness for the illusion of speed. Most people using it don’t fully understand what they’re giving up, and I am convinced those people are doing it out of a bad, old habit.

That doesn’t mean everyone who uses NOLOCK is lazy or reckless. It does mean the hint has been copy-pasted into too many scripts because “it avoids blocking” and “everyone does it.”

Why NOLOCK Is So Tempting

I get why NOLOCK exists in so many environments.

(I also get that I used WAY too many bullets in this blog post, and I am very sorry. Every time I reread it, I tried to remove a few more. I blame the fact that I adapted this post from a white paper I wrote to make the topic understandable to nontechnical folks.)

You have:

  • High-volume inserts or updates
  • Reports timing out
  • Blocking chains that look scary in Activity Monitor

Someone adds WITH (NOLOCK) and suddenly:

  • The report runs
  • The blocking graph quiets down
  • Everyone breathes again

It looks like an easy win.

The problem is that NOLOCK doesn’t fix blocking. It just tells SQL Server to stop caring whether the data makes sense. If that doesn’t scare you, I don’t know what would.

What NOLOCK Actually Means

NOLOCK is functionally equivalent to READ UNCOMMITTED. You are explicitly telling SQL Server:

  • “I’m okay reading data that isn’t committed”
  • “I’m okay reading data that may never actually exist”
  • “I’m okay if rows disappear or show up twice”
  • “I’m okay if my data is bad.”

When I put it that way, I hope you hate the idea of this hint even more.
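To make the equivalence concrete, here is a minimal sketch (the table name dbo.Orders is a placeholder, not from any real schema). The table hint and the session-level isolation setting produce the same dirty-read behavior:

```sql
-- These two reads behave identically: both ignore locks
-- and can return uncommitted, rolled-back, or duplicated rows.

-- Hint form, applied per table:
SELECT SUM(TotalDue) FROM dbo.Orders WITH (NOLOCK);

-- Isolation-level form, applied to the whole session:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT SUM(TotalDue) FROM dbo.Orders;
```

If you would never write the second form on purpose, you shouldn’t be writing the first one either.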

Dirty Reads Are the Least Dangerous Part

Dirty reads are the example everyone uses, and ironically, they’re not even the worst problem.

Example Scenario:

  • A process updates 10,000 rows
  • A report runs with NOLOCK
  • The process errors and rolls back

Your report just included data that never existed.

What’s worse is what happens during allocation scans.

Missing Rows and Double Counting

When SQL Server scans data, it relies on allocation structures to find where rows live. With NOLOCK, SQL Server is allowed to read these structures while they are actively changing.

If a page split or row movement happens mid-scan, SQL Server can:

  • Skip rows entirely
  • Read the same rows twice

This isn’t theoretical physics. Microsoft documents this behavior explicitly.

  • Counts can be wrong
  • Aggregates can be wrong
  • Financial reports can be wrong

Why People Don’t Notice

NOLOCK failures are subtle and obnoxious.

They don’t:

  • Crash queries
  • Throw exceptions
  • Leave obvious forensic evidence

They show up as:

  • “Why does this report not match that one?”
  • “Why are yesterday’s numbers different today when nothing changed?”
  • “It only happens occasionally.”

The Thing People Actually Want: RCSI

In almost every situation with NOLOCK, what people really want is this:

“Readers shouldn’t block writers, and writers shouldn’t block readers.”

Good news! SQL Server already solved that!

Read Committed Snapshot Isolation (RCSI) gives you:

  • Non-blocking reads
  • Guaranteed committed data
  • Zero query-level hints
  • No application code changes

Readers see the last committed version of a row via TempDB row versioning. Writers keep doing their thing. Everyone gets what they want correctly.
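Turning RCSI on is a single database-level change (the database name below is a placeholder). The switch needs a brief moment as the only active session, which is why the ROLLBACK IMMEDIATE option is commonly paired with it:

```sql
-- Enable Read Committed Snapshot Isolation for one database.
-- WITH ROLLBACK IMMEDIATE kicks out in-flight transactions so
-- the setting can take effect; run during a quiet window.
ALTER DATABASE [YourDatabase]
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;

-- Verify the setting took effect
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'YourDatabase';
```

No hints, no application changes; existing READ COMMITTED queries simply start reading row versions instead of waiting on locks.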

There are some things to consider with RCSI though!

Yes, there’s TempDB overhead.
Yes, you should size and monitor it.
Yes, large transactions should still consider batching for better performance.
Yes, you should still probably be using RCSI instead of NOLOCK.

(see, no bullets there…even though I could have easily added them)

My Rule of Thumb

  • If you need NOLOCK to make a report work, you have a deeper issue
  • If you need NOLOCK everywhere, you’re accepting data corruption
  • If accuracy matters at all, NOLOCK is the wrong default

Final Thought

The CLR debate taught me something important: Everything in SQL is objectively subjective…wait no…leaving things as they always have been isn’t a valid strategy, and education on new ideas is imperative.

NOLOCK is wildly overused by people who were never shown a better option.

And now you know there is one.

CLR: Powerful, Dangerous, and Almost Never Worth It

This might be an unpopular opinion, but it’s one I’ve held for a long time: SQL CLR is rarely useful, often misunderstood (or flat out unknown), and a security problem waiting to happen.

That doesn’t mean CLR can’t be used safely or appropriately. It means that in the real world, it usually isn’t. I find that most of the time it is deployed because it’s an application requirement and no one ever questions it. Ever.

What CLR Is Supposed to Be

SQL Server Common Language Runtime (CLR) integration allows developers to write database objects using .NET languages like C# instead of T-SQL. In theory, this opens the door to more complex logic, better string handling, and code reuse.

In practice, I’ve seen CLR used for things like:

  • Regular expressions
  • HTTP calls to external services
  • File system access
  • “Quick fixes” that really belong in the application layer

None of those are things I want running inside the SQL Server engine.

The Permission Sets (and Why UNSAFE Is Exactly What It Sounds Like)

CLR assemblies run under one of three permission sets:

  • SAFE – internal computation only
  • EXTERNAL_ACCESS – access to files, network, registry
  • UNSAFE – effectively unrestricted

The problem is UNSAFE and how often it’s used as a catch-all.

UNSAFE assemblies can execute arbitrary code under the SQL Server service account. That means operating system access, network access, the ability to bypass normal SQL Server security controls entirely, you name it. Microsoft has been very clear over the years that Code Access Security (CAS) is not a real security boundary anymore, which is why CLR Strict Security exists.

Starting with SQL Server 2017, CLR strict security is enabled by default, and even assemblies marked SAFE are treated as UNSAFE unless they’re properly signed. Many environments don’t realize this and assume “SAFE” still means safe.
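You can check both settings in a few seconds. This query just reads the server configuration, so it’s safe to run anywhere:

```sql
-- Is CLR enabled, and is strict security on?
-- 'clr strict security' defaults to 1 on SQL Server 2017+.
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('clr enabled', 'clr strict security');
```

If `clr enabled` is 1 and you didn’t know it, that’s your cue to start the inventory work described below.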

This Isn’t Hypothetical Risk

In 2024, CVE-2024-37341 demonstrated exactly how dangerous UNSAFE assemblies can be. A malicious assembly could create sysadmin users or enable xp_cmdshell.

Furthermore, there is no meaningful audit trail for what a compiled DLL actually does. You’re trusting a binary blob that lives inside your database, probably without source control or peer review, and without visibility into what code is executing.

Why This Gets Swept Under the Rug

One reason CLR sticks around is that most DBAs don’t really understand what it is or what it is doing, just that “the application needs it.” Another is that once it’s in place, it becomes scary to touch. Nobody wants to be the person who breaks production by disabling a mysterious DLL that was deployed years ago. I mean, I don’t either and I’m trying to advocate fixing this.

The STIG Side of Things

From a security and compliance perspective in the Public Sector, CLR is just bad.

Multiple DISA SQL Server STIGs explicitly flag CLR as a finding unless it is required and approved. If CLR is enabled and not documented, it’s a finding. If UNSAFE assemblies exist, it’s a finding. End of story. Enjoy your Risk Acceptance.

The guidance aligns with the principle of least functionality: don’t enable features you don’t absolutely need, especially ones that allow external code execution inside the database engine.

It’s Also a Cloud Migration Killer

If you have CLR dependencies, Azure SQL Database is simply off the table. It doesn’t support user-defined CLR assemblies at all. Managed Instance supports CLR with strict security, but that still comes with operational and security overhead. You are probably stuck with a lift and shift Azure VM if you want to keep using CLR.

The Alternatives Are Usually Better

Most common CLR use cases have better solutions now:

  • Regex and string parsing can often be handled with native T-SQL
  • HTTP calls belong in application code, Azure Functions, Logic Apps, or Agent jobs
  • External integrations should not live inside the database engine

When CLR truly is required, it should be SAFE, signed, documented, monitored, and treated like production code.

My General Advice

If you inherit an environment with CLR enabled:

  1. Inventory every assembly. I know it’s not fun, but it needs to be done.
  2. Identify what depends on it. More work, ugh.
  3. Understand why it exists. That requires thinking too, double ugh.
  4. Decide whether it still needs to exist.
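Steps 1 and 2 can be scripted. This is a sketch to run per database; it lists user assemblies with their permission sets, then the modules that reference them:

```sql
-- Step 1: inventory user-defined assemblies in the current database
SELECT a.name              AS assembly_name,
       a.permission_set_desc,
       a.create_date,
       a.modify_date,
       DB_NAME()           AS database_name
FROM sys.assemblies AS a
WHERE a.is_user_defined = 1;

-- Step 2: find the procedures/functions that depend on those assemblies
SELECT OBJECT_SCHEMA_NAME(am.object_id) AS schema_name,
       OBJECT_NAME(am.object_id)        AS object_name,
       a.name                           AS assembly_name
FROM sys.assembly_modules AS am
INNER JOIN sys.assemblies AS a
    ON a.assembly_id = am.assembly_id;
```

Anything showing UNSAFE or EXTERNAL_ACCESS in the first result set goes to the top of your review list.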

CLR isn’t evil, but I act like it is because of the issues with it. It’s powerful in ways that SQL Server was never designed to safely contain. Most of the time, disabling it entirely is the correct move. And if you can’t disable it yet, you should at least understand the risk you’re accepting (there’s that RA again).

Availability Group Seeding

First off, I had some fun with the AI generated images with this. I think silly images are the way to go.

Automatic seeding for Availability Groups is one of those features that’s fantastic when it works and incredibly frustrating when it doesn’t. When seeding is healthy, databases just show up on the secondary and life is good. When it’s not, you’re left staring at vague status messages, wondering whether anything is actually happening at all. I really hate how the GUI handles this, because whether seeding is working or not, you get no feedback whatsoever until it’s basically done.

Luckily, there are scripts to help here, but if you don’t have them handy, you aren’t getting any information.

The first place I always check is:

SELECT * FROM sys.dm_hadr_automatic_seeding;

This DMV tells you whether seeding started, whether it failed, how many retries have occurred, and whether SQL Server recorded an error code. If seeding failed outright, this is usually where you’ll see the first clue as to why.
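When I’m triaging, I usually narrow that `SELECT *` down to the columns that actually answer the question, newest attempts first. A sketch:

```sql
-- Most recent seeding attempts with state and failure details
SELECT start_time,
       completion_time,
       current_state,
       failure_state_desc,   -- NULL/empty when seeding hasn't failed
       number_of_attempts,
       error_code
FROM sys.dm_hadr_automatic_seeding
ORDER BY start_time DESC;
```

A nonzero error_code or a climbing number_of_attempts is your first real lead; the error number can then be chased through the SQL Server error log.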

Next:

SELECT * FROM sys.dm_hadr_physical_seeding_stats;

When things are healthy, this view can show progress and estimated completion. When things are not healthy, it can be empty, partially populated, or frozen in a state that never changes. So you can use that knowledge; if seeding is supposedly “in progress” but this DMV isn’t showing anything, something is wrong.

Check Whether Data Is Actually Moving (Secondary)

That gnawing question is almost answered. Is anything actually happening right now?

On the secondary replica, I use performance counters to answer that question. This script samples backup/restore throughput over a short window to see if seeding activity is occurring:

----- RUN ON SECONDARY ------
-- Sample backup/restore throughput twice, 5 seconds apart,
-- to test if seeding processes are moving data right now

IF OBJECT_ID('tempdb..#Seeding') IS NOT NULL
    DROP TABLE #Seeding;

SELECT GETDATE() AS CollectionTime,
       instance_name,
       cntr_value
INTO #Seeding
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Backup/Restore Throughput/sec';

WAITFOR DELAY '00:00:05';

SELECT LTRIM(RTRIM(p2.instance_name)) AS [DatabaseName],
       (p2.cntr_value - p1.cntr_value)
           / DATEDIFF(SECOND, p1.CollectionTime, GETDATE()) AS ThroughputBytesSec
FROM sys.dm_os_performance_counters AS p2
INNER JOIN #Seeding AS p1
    ON p2.instance_name = p1.instance_name
WHERE p2.counter_name LIKE 'Backup/Restore Throughput/sec%'
ORDER BY ThroughputBytesSec DESC;

If you see throughput here, seeding is still moving data, even if the DMVs look suspicious. If you see nothing, seeding is probably broken.

Restarting Seeding (Without Restarting SQL)

When seeding is stuck, sometimes the fastest path forward is to effectively “kick” the process. On the primary replica, toggling the seeding mode can force SQL Server to restart the automatic seeding workflow:

-----*** RUN ON PRIMARY ******-----
-- Change to your AG name and server names

ALTER AVAILABILITY GROUP [YourAGName]
MODIFY REPLICA ON 'SecondaryServer1'
WITH (SEEDING_MODE = AUTOMATIC);

ALTER AVAILABILITY GROUP [YourAGName]
MODIFY REPLICA ON 'SecondaryServer2'
WITH (SEEDING_MODE = AUTOMATIC);

This isn’t magic, and it doesn’t fix underlying problems like permissions, disk space, or network throughput, but it often clears up cases where seeding simply stopped progressing for no obvious reason. I use these scripts all the time to verify that there is data movement happening on an AG that stopped syncing overnight or after a patch.

A Cautionary Tale About “Helpful” AI

I like to test AI to see what it suggests on problems I’m troubleshooting. Lots of times it tells me what I already know, but one time an AI tool confidently suggested a SQL command that would “restart AG data movement.”

That sounded amazing. I got excited. This must be a new script I didn’t know about from a new release?

No…It didn’t exist. It was just a hallucination.

AI can be a great accelerator, but you still need to verify everything against reality. Especially when something sounds too good to be true.

Final Thoughts

AG seeding failures are rarely caused by one thing, and no single DMV tells the whole story. You have to look at:

  • Seeding status and error codes
  • Physical progress
  • Data movement
  • And sometimes, force SQL Server to reattempt the process

The good news is that with the right scripts and a little patience, most seeding issues can be diagnosed without guesswork. The bad news is that when things break, SQL Server is still not very good at telling you why unless you know exactly where to look.

Hopefully, scripts like these save you a little time the next time seeding decides to go wrong.

Power BI: Easy Data Wins – but Annoying SQL Connections

I want to talk about two things about Power BI today. First, I will give a generic pitch for why it is so great. Then, I will discuss a specific gripe I have about the connection window which I find lacking.

Power BI: One of the Easiest Wins in Data

I’ve been working with data for a long time. One thing that hasn’t changed is how much easier everything gets once the data is in a pretty picture. Whether you’re troubleshooting a weird performance spike, trying to understand a trend, or making sense of raw logs, a simple visualization can highlight things you’d never catch in a wall of text.

But that’s probably obvious to more data-driven people. Power BI is also awesome for people who wouldn’t normally think it is for them. I’ve taught Power BI basics to dozens of people, and so far I think everyone (even the skeptics) appreciated it once I explained how useful it can be.

Developers, DBAs, analysts, managers, administrators…almost anyone can find a good use for Power BI. You don’t have to be a reporting expert to get value out of it. Ingest a dataset from a CSV. Drag a couple visuals in. Suddenly, you can explain something in 10 seconds that used to take a 20-minute conversation and a whiteboard. You can send a quick report to your manager to showcase a win or a loss.

Even if you don’t plan on publishing dashboards or rolling out a reporting platform, Power BI is great as a personal tool. You can explore a dataset, build a quick visual, and understand the patterns. Then, you can move on. It turns “staring at CSV files” into something productive and dare I say, fun. There are other tools like it, but I think Power BI is perhaps the easiest to get started with. I still love SSRS for some complex reports, but the performance is horrible, and it is deprecated. Databricks has Dashboards that are shockingly similar to Power BI. If you know how to use one tool, you’ll be able to use the other with minimal effort.

Direct SQL Connection Woes

I do have a few annoyances with Power BI, and the one I want to mention today is how it handles direct SQL connections.

If your SQL Server doesn’t have a trusted connection, but encryption is not forced, the native SQL connection will complain (see below), but still let you in.

However, if you are forcing encryption and have an untrusted certificate, things get bad. Ideally you want to have your certificate issued from a trusted certificate authority, but I know this doesn’t always happen quickly. So…unlike SSMS, there is no checkbox for trusting the certificate. You’ll just get a connection error about the untrusted cert.

This really irked me, and I thought I was at an impasse until I found a connection workaround.

The Workaround: Use the OLE DB Connection Instead

The good news is that Power BI has other connection types. One option here is using OLE DB. It’s not as obvious or user-friendly as the standard SQL connector, but it gives you something the default connection doesn’t: the trust certificate checkbox.
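For reference, here is roughly what the equivalent OLE DB connection string looks like. This is a sketch assuming the Microsoft OLE DB Driver for SQL Server (MSOLEDBSQL) is installed; the server and database names are placeholders:

```
Provider=MSOLEDBSQL;Data Source=YourServer;Initial Catalog=YourDatabase;
Use Encryption for Data=true;Trust Server Certificate=true;
```

The `Trust Server Certificate=true` keyword is what the checkbox toggles, and it is exactly what the native SQL connector gives you no way to set.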

Bottom Line: Get a Trusted Certificate

This workaround shouldn’t replace proper security. If your SQL Server has an untrusted certificate, the real solution is to fix the certificate. I’ll admit I’m too lazy on most of my dev boxes to do that, but it’s the right way for production.

I’ll probably post a blog sometime about all the nuances of requesting, building, and the requirements for a SQL certificate, but that’s for another day.

Understanding SQL Audit Filters: A Guide for DISA STIG Compliance

I’m going to start the blogging back up with a topic that is near and dear to my heart, and something that has bugged me for years.

Every time I work with SQL auditing, especially in environments governed by DISA STIG requirements, I’m reminded how misunderstood, misconfigured, and frankly incomplete most auditing processes are left.

Audits matter. A lot. They are one of the few artifacts that let you trace what actually happened on a SQL Server instance: the who, what, when, where, and how. When something breaks, or something suspicious happens, the audit is where you go to reconstruct the truth, or at least I’d like to say that.

But who actually digs into them? They are a huge pain to deal with.

Audits are normally just a firehose of noise. Unfiltered audits capture everything, including internal system operations SQL Server does behind the scenes thousands of times a minute. If you’ve ever tried to load a 20GB audit file in SSMS and waited long enough to rethink your life choices, you understand. Letting a single audit file get that huge is a big mistake in the first place, but there are so many reasons it might happen.

Filters: Important, Necessary… and Rarely Understood

If you run a DISA STIG compliant audit, you are capturing over 30 audit action items, which easily translates to hundreds of gigabytes of audit data in a day. The irony is that while filters are essential to reducing audit volume to something manageable, almost no one has tried to add a filter, and even fewer actually understand how to create one. And honestly, I can’t blame them. Audit filters are confusing and poorly documented.

To make it even worse, a lot of organizations think “filters = less auditing = bad,” so they leave everything on. All that does is ensure no one ever reviews the audit, because no one has time to wade through millions of rows of system chatter. Combine that with the dreaded action item SCHEMA_OBJECT_ACCESS_GROUP and you are looking at more data than you can shake a stick at.

Which leads me to the part that drives me crazy.

The STIG Problem: Required to Log It… but Never Required to Review It

If you work with DISA SQL Server STIGs, you already know how much of the checklist focuses on required audit actions. Historically, dozens of STIGIDs cover the required audit action items.

But not a single STIGID actually requires proof of review.

  • We enforce that the audit must exist.
  • We enforce that certain actions must be captured.
  • We enforce retention (kind of), location, configuration, offloading…

…but nowhere do we enforce that the organization has to actually look at the audit on a scheduled basis. We do at least enforce testing backups.

I’ve seen organizations that never look at their audits. They just lock them away and hope the files don’t eat up too much storage space. Sometimes I was successful in teaching them how important those audit files could be, sometimes I wasn’t.

I’d love to see a STIGID that requires scheduled, documented review of the audit logs. What’s the point of collecting all this data if it only gets looked at after an incident? The problem is, I was the main person advising DISA on STIG changes and improvements for the last few years. There are other colleagues still at Microsoft who could champion this cause, but I don’t think they have the time any more.

SQL Audits Are Slow — Painfully Slow

Even if someone wanted to review the audit regularly, SQL Server doesn’t make it easy. Audits grow large quickly, and the built-in functions for reading audit files are notoriously slow.

You can easily hit scenarios where:

  • A single audit file takes minutes to open
  • A month of audit history takes hours to load
  • “Daily review” becomes impossible simply due to the cost of reading the files

This is another reason filters are so important.
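One practical mitigation when you do have to read the files is to narrow both the time window and the principals up front, instead of pulling everything into SSMS. A sketch, with the file path and the principal pattern as placeholders for your environment:

```sql
-- Read only the last day of events from a set of audit files,
-- skipping service-account noise. The path is a placeholder.
SELECT event_time,
       action_id,
       server_principal_name,
       database_name,
       object_name,
       statement
FROM sys.fn_get_audit_file('D:\SQLAudit\MyAudit*.sqlaudit', DEFAULT, DEFAULT)
WHERE event_time >= DATEADD(DAY, -1, SYSUTCDATETIME())
  AND server_principal_name NOT LIKE 'NT SERVICE%';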

The 2016 Filter Mistake (and Why It Still Haunts Us)

One of the biggest audit-filtering issues came from the SQL Server 2016 STIG. The wording was wrong on the STIGID itself. It told administrators not to filter “administrative permissions.” But that was not what the underlying SRG said.

The SRG actually says you cannot filter out “direct database access”.

I corrected this mistake in the 2022 SQL STIG, so newer environments are finally getting the right guidance in the Check Content. The 2016 STIG still has the incorrect language, and now that I’m no longer at Microsoft, I don’t know if it will ever be fixed. Since the 2016 STIG should be sunset soon, it may not be a problem much longer, but it still haunts me.

The worst part is that even the SQL Server product team didn’t want to touch the filter question. The phrase “administrative activities” was so vague that no one would explicitly approve or deny a filter as STIG-compliant when I asked. I couldn’t get a straight answer internally so that I could provide a STIG compliant audit filter in the Fix Text of the audit creation STIGIDs.

The filter was long and complicated. I’ll spare you the full wall of code in this blog, but imagine dozens of NOT clauses filtering sys tables that you probably have never heard of or want to hear of.

Even with explanations, few felt comfortable deploying the audit filter. It was too long and too confusing to read.

Where Do We Go From Here?

There’s no perfect answer here. A unified STIG for SQL Server, not broken out by version, is my dream. This would keep changes more up to date and easier to maintain. A magically faster audit reading solution would also be ideal. I’ve helped build and maintain an automated Audit Data Warehouse in the past, but it was never widely adopted because even trying to summarize the audit data for regular review took massive amounts of storage space. What can you do right now, though?

  • Use audit filters, but test them thoroughly and document their intent
  • Separate system noise from user-driven events if you can
  • Keep audit files small enough to review regularly, many small files are easier to read than few huge files
  • Define a documented review schedule (even if the STIG doesn’t require it), use those audits for what they are intended
  • Push for clearer guidance in future STIG cycles. You can request changes from DISA too, they do listen to customers, they just vet those changes through the vendor and their own SMEs before a change cycle (which is about 6 months).
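To show what a filtered audit even looks like, here is a deliberately small sketch. Every name and path is a placeholder, and this is nowhere near a full STIG-compliant filter; it just demonstrates where the WHERE predicate lives and the kind of noise it can exclude:

```sql
-- Minimal server audit with a filter predicate (illustrative only).
-- Test any filter thoroughly and document its intent before use.
CREATE SERVER AUDIT [Example_Audit]
TO FILE (
    FILEPATH = 'D:\SQLAudit\',
    MAXSIZE = 256 MB,            -- keep files small enough to review
    MAX_ROLLOVER_FILES = 100
)
WITH (QUEUE_DELAY = 1000, ON_FAILURE = SHUTDOWN)
WHERE [server_principal_name] <> 'NT AUTHORITY\SYSTEM'
  AND [schema_name] <> 'sys';    -- drop internal system-table chatter
GO
ALTER SERVER AUDIT [Example_Audit] WITH (STATE = ON);
```

The predicate evaluates against audit record fields, so you can filter on things like action_id, object_name, or principal names; that is exactly where the “is this filter still compliant?” argument gets fought.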

I plan to continue working with the SQL STIGs even from outside Microsoft. Security is important to me, and having that be consistent and actionable is paramount.

Changing Roles and Blogging Again

It’s been a while since I’ve posted here…ok it’s been a very, very long time.

After 9 incredible years at Microsoft, things have changed and my career is changing directions. Leaving wasn’t easy, but you can’t always stay and be complacent. More importantly, why did I stop blogging? Honestly, once I moved to Microsoft I didn’t know where I should blog. I wondered if I should blog from an internal account there, or if I could even safely keep blogging about work-related topics as before. It just felt easier to stop entirely, and I was busy too, which made the decision easier. Now, I need to get into the habit of writing again.

Working at Microsoft was the highlight of my professional life so far. I considered it my capstone company, and moving on from it hurts. I will miss the colleagues I built relationships with over the years. I’m an introvert and never really thought I’d miss coworkers this much, but I was at Microsoft for a long time, so it makes sense.

I’ve had the chance to collaborate with some of the smartest people in the industry, help customers solve complex SQL Server and Azure problems, and learn more than I ever imagined. It was a wild and great ride and I loved every minute of it, even as things changed and I had to adapt. I’ll always be grateful for the experiences, the mentorship, and the friendships I gained.

So what’s next? That’s still taking shape. It is equally stressful and exciting. I’m exploring opportunities that will build on the skills I’ve learned and the brand I’ve built for myself. The idea of building my brand as a consultant is very enticing, but that is going to be a long and delicate process.

In the meantime, I plan to revive this blog and use it as a place to document ideas and workarounds just as before. Learning to write in the time of AI will be odd though; so many things I blogged about before are just a quick prompt away from an answer…but then again, I’ve long enjoyed asking AI for advice on topics only for it to tell me something along the lines of “consult with a professional or Microsoft support to assist if this does not work.” Which of course I was, so it always got a chuckle out of me.

The regular writing was good for me, and it helped cement knowledge I gained along the way. I want to return to what originally made SQL Sanctum special: sharing experiences, insights, and lessons learned (sometimes the hard way).

Thanks to everyone who supported me over the years, and I look forward to expanding my network. Expect more posts on SQL, PowerShell, Azure, Python, Machine Learning, Power BI, and of course my niche and expert skill in SQL STIGs.

SSMS 2016 Policy Management Quote Parsing Error

I discovered a bug today in 2016 Management Studio when creating and updating policies. It drove me crazy until I realized what was going on, costing me lots of lost time. Hopefully this will get fixed fast; we reported it immediately because I couldn’t find any references to it already out there. Special thanks to Kenneth Fisher for helping confirm that it wasn’t just affecting me.

The Problem

In the latest release of SSMS 2016, 16.5.1 and newer, policy conditions are removing quotes on each save, causing parse errors.

Vote the Connect up. A fix for this should be released in the next few weeks, but it doesn’t hurt to show your support for Policy Based Management.

Example

I’ll walk through a full, simplified policy creation showing how I discovered the problem, but it can be recreated by just editing a condition.

I created a new policy named Test and a new condition, also named Test. I set the condition facet to Server, and input the following code into the field to create an ExecuteSql statement. Every quote inside the string has to be escaped with doubled single quotes.


Executesql('string',' Select ''One'' ')

[Screenshot: condition script]

Once the code was input, you can see below that the code parsed correctly. SSMS was happy with it, so I hit OK to continue.

[Screenshot: condition parsed successfully]

I finished creating the policy, everything was still looking fine.

[Screenshot: policy creation]

I then went to Evaluate the policy. The policy failed, as I expected. That’s not the point. If you look closely, you’ll notice that the Select One statement is no longer surrounded by doubled quotes. That shouldn’t have happened.

[Screenshot: evaluation results]

I opened the Condition itself and received a parse error. Without the required doubled quotes, the Condition was broken.

[Screenshot: parse error]

Summary
I tested this by creating or editing a condition on its own, without a policy or an evaluation, and got the same results using SSMS 2016 on two separate computers, versions 16.5.1 and 17.0 RC1. When using SSMS 2012 or 2014, the code was not altered and everything worked as it should have. Finally, Kenneth happened to have an older version of SSMS 2016 and could not reproduce my error until he updated to the latest version, indicating that it is a recently introduced bug.

And again, if you haven’t already, vote up the Connect item.

Hyper-V VM Network Connectivity Troubleshooting

Last week I detailed my problems in creating a Virtual Machine in Hyper-V after not realizing that I had failed to press any key and thus start the boot process. Well, I had another problem with Hyper-V after that. Getting the internet working on my VM turned out to be another lesson in frustration. Worse, there was no good explanation for the problem this time.

Problem: A new VM has no internet connectivity even though a virtual switch was created and has been specified.

[Screenshot: no network access error]

Solution: Getting the internet working on my VM was a multistep process, and I can’t really say exactly what fixed it. Here are the steps I tried though:

RESET EVERYTHING!

Sadly, that is the best advice I can give you. If you have created a virtual switch, and the internet isn’t working correctly, select everything, and then uncheck whatever settings you don’t actually want. It sounds screwy, but it worked for me. This forces the VM to reconfigure the settings and resets connectivity.

Supposedly the only important setting on the virtual switch properties is to ensure that you have checked Allow management operating system to share this network adapter. That will allow your computer and your VM to both have internet access. When I first set this, however, the PC lost internet while the VM had an incredibly slow connection. Needless to say, that was not good enough. Disabling the option did nothing but revert back to my original problem though.

For good measure, I then checked Enable virtual LAN identification for management operating system. Nothing special happened, but I left it on to continue troubleshooting. Later, I would uncheck that feature, but I wanted results first.

[Screenshot: virtual switch settings]

Next I went into the Network Adapter properties and checked Enable virtual LAN identification. This is another setting I would later turn back off.

[Screenshot: network adapter VLAN setting]

Finally I restarted my PC, restarted the Virtual Machine, and for some reason, I then had consistent internet on both the VM and the PC.

Ultimately, the problem was that features needed to be reset. I’m still not sure specifically which one had to be turned on and off again, but toggling everything and restarting worked well enough for me in this case. I was just tired of fighting with it by the time it was working.

At least now I have a VM running JAVA so it won’t touch my real Operating System.

Hyper-V VM Troubleshooting

I’ve made VMs before in Hyper-V; it’s a nice way to keep things separate from your main OS and test out configurations. When you haven’t used it lately, it can also be a lesson in frustration.

My solution? It was just embarrassing.

I had a VM set up working fine, however, I didn’t need that OS anymore, and wanted a brand new VM to play with. I spun up a new VM with the same configuration settings as last time, just a different OS. Every time that I tried to boot the VM, I got the same error though.

[Image: bootfailure]

The boot loader failed – time out.


Maybe the new ISO file was corrupt? I switched back to the original that worked for Server 2012R2 in my old VM. That didn’t make a difference.

I hunted online, I asked around. There were a few suggestions.

Review Configuration Settings. Maybe I screwed up the configuration? I rebuilt the VM and made sure all the file paths were perfect, with a new Virtual Hard Disk, just in case I had moved files or changed some folders. That didn’t change anything though.

Disable Secure Boot. I heard that caused OS boot failures. Except that didn’t change anything, and it didn’t really apply to my situation.

Unblock the files. I hear that’s always a problem on new downloads, but I’ve never seen it actually happen to me. My problems are never that simple. This was the first time I actually checked the file properties and – they were blocked! I was very excited, but unblocking them did not make a difference. It’s still a good idea to check this anytime you run a new file, as it is a common issue.

[Image: unblock]

The Solution

Finally, at wit’s end, I reopened the VM console, started the machine, and tried again. I smashed the keyboard in frustration as it came up. This time, it went straight to installing Windows.

My nemesis in this case was a simple five-word phrase that disappeared almost instantly.

Press any key to continue...

It only shows up for a couple of seconds at most, and if you start the VM before you connect to it, you’ll never have a chance to hit a key. VMs don’t automatically drop into boot mode; instead they just try to load the (non)existing OS.

So after all that confusion, I just wasn’t hitting a key FAST enough. Sure, all those other things can be important and you should always verify your settings, but it shouldn’t have been this difficult.

Next week I’ll share the fun I had trying to get internet connectivity on my VM…


Get and Set Folder Permissions with PowerShell

Managing permissions for numerous servers is the theme today. Drilling down into a folder, right-clicking Properties, then reviewing Security on the same folder for potentially dozens of computers is time-consuming and, with the capabilities of scripting, unnecessary.

PowerShell lets us do this very easily. The first script allows you to view each account and their corresponding read/write permissions on any number of computers. By default the script will only search the local computer. You can filter to only display a specific right. A full list and explanation of each right is available here.

Function Get-Permission
{
    [CmdletBinding()]
    Param(
        [string[]]$ComputerName = $Env:COMPUTERNAME,
        [Parameter(Mandatory=$true)]
        [string]$Folder,
        [string]$Rights
    )
    Process {
        $ComputerName |
        ForEach-Object {
            $Server = $_
            Write-Verbose "Getting Permissions for \\$Server\$Folder"
            (Get-Acl "\\$Server\$Folder").Access |
                Where-Object { $_.FileSystemRights -like "*$Rights*" } |
                Select-Object IdentityReference, FileSystemRights, AccessControlType
        } #EndForEach
    } #EndProcess
} #EndFunction

Now for a simple example. Remember to supply a $ instead of a : after the drive letter, since the function reaches the folder through the administrative share on each remote computer.

#Example of Get-Permission
Get-Permission -ComputerName "COMP1","COMP2" -Folder "C$\logs\SQL"
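
Since the function accepts a -Rights filter, you can also narrow the output to a single right. This sketch assumes the same placeholder computer name as above:

#Only show accounts holding FullControl, with verbose progress messages
Get-Permission -ComputerName "COMP1" -Folder "C$\logs\SQL" -Rights "FullControl" -Verbose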

Now that you have verified the permissions list, you might need to make some adjustments. This Set command will allow you to change $Access and $Rights for a specific $Account with minimal effort across your domain.

Function Set-Permission
{
    [CmdletBinding()]
    Param(
        [string[]]$ComputerName = $env:COMPUTERNAME,
        [Parameter(Mandatory=$true)]
        [string]$Folder,
        [Parameter(Mandatory=$true)]
        [string]$Account,
        [string]$Access = "Allow",
        [string]$Right = "FullControl"
    )
    Process {
        $ComputerName |
        ForEach-Object {
            $Server = $_
            $Acl = Get-Acl "\\$Server\$Folder"
            #Disable inheritance on the folder; the $False discards any previously inherited rules
            $Acl.SetAccessRuleProtection($True,$False)
            #Build a rule that applies to the folder, its subfolders, and files
            $Rule = New-Object System.Security.AccessControl.FileSystemAccessRule("$Account","$Right","ContainerInherit,ObjectInherit","None","$Access")
            $Acl.AddAccessRule($Rule)
            Set-Acl "\\$Server\$Folder" $Acl
            Write-Verbose "Permission Set for \\$Server\$Folder"
        } #EndForEach
    } #EndProcess
} #EndFunction

And here is a quick example of how to execute the function. This can be used to allow or deny rights to the folder.

#Example Set-Permission
Set-Permission -ComputerName "Comp1","Comp2" -Folder "C$\logs\sql" -Account "Domain\ServiceUser" -Access "Allow" -Right "FullControl"
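
After pushing a change, a quick round trip with the earlier Get-Permission function confirms the rule actually landed. The server and account names here are placeholders:

#Grant Modify, then verify it shows up in the ACL
Set-Permission -ComputerName "Comp1" -Folder "C$\logs\sql" -Account "Domain\ServiceUser" -Access "Allow" -Right "Modify"
Get-Permission -ComputerName "Comp1" -Folder "C$\logs\sql" -Rights "Modify"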