Four years ago I wrote an article called AD Connect: Starting Over. This covered how to accomplish the task quite easily. The problem is that not only has Microsoft rebranded Azure Active Directory to Entra ID, but they have also retired all of the PowerShell modules that I referenced in that article. As I find myself again in this situation, I decided to figure it out again… and why not write a new article for the benefit of my readers?
The first step is to install the new Microsoft Graph PowerShell module. In their eternal wisdom (hic), Microsoft has decided that all cloud management should be done through this module, so:
Install-Module -Name Microsoft.Graph -Scope AllUsers
(I am not going to walk you through the different scope options… but remember that if you are not the only person who uses the computer on which you are installing it, you should look into this)
We now need to connect to our organization, but before we do you might have to import the Microsoft Graph Authentication module:
Import-Module Microsoft.Graph.Authentication
And then…
Connect-MgGraph -Scopes Organization.ReadWrite.All
(NOTE: When I first installed the Microsoft Graph PowerShell module it would not connect… it would continue to error out. I needed to reboot my computer for it to work.)
A window will pop up for you to log in. Make sure that you log in with an account that is a member of the Global Administrator role. You will then be asked for permissions between your Microsoft Graph Command Line Tools and the cloud… ensure you check the box marked Connect on behalf of your organization before clicking Accept.

You need to make sure you are connected to the correct organization, so let’s go ahead and check that out in a formatted list…
Get-MgOrganization | fl
We can see all sorts of information about your organization… but to check if our org is actually configured to sync to on-prem, let’s run this cmdlet that will give us a much more refined result:
Get-MgOrganization | Select OnPremisesSyncEnabled

We are now going to run the following script:
# Copy From Here
$OrgID = (Get-MgOrganization).Id
$uri = "https://graph.microsoft.com/v1.0/organization/$OrgID"
$body = @'
{
"onPremisesSyncEnabled": false
}
'@
Invoke-MgGraphRequest -Uri $uri -Body $body -Method PATCH
# Copy Until Here
My suggestion is that you copy and paste this script in its entirety… it is just going to make your life easier.
At this point we can run the following cmdlet again, but we are going to get a slightly different result:
Get-MgOrganization | Select OnPremisesSyncEnabled

I am paranoid, so I wanted to check it in the Entra portal. From https://entra.microsoft.com I expanded Entra ID, then scrolled to Entra Connect, and in the main window I clicked on Connect Sync. I could see that the status was Not Installed.

While there are other indicators on the same screen, once I see that, there is nothing more that I need… it is completely disconnected, as if it had never happened, and I am ready to start again.
Conclusion
Seldom in the real world is an organization going to need to disconnect their on-premises Active Directory domain from their cloud Entra ID. This is something that we would most likely only need in a demo or test/dev environment. With that said, it is something that might occasionally be needed (I cannot think of a reason a corporation might need it, but I am not so arrogant as to think that I can imagine every possible scenario). Whether you need it for the enterprise or for the lab, these steps should guide you through without too many headaches.
]]>$(Get-Item '.\filename.ext').CreationTime = $(Get-Date "04/28/2025")
Of course, she was not very technical, so I found a free tool that did it for her, but if you are reasonably comfortable around PowerShell and do not want to go downloading tools, this will do just fine.
There are a bunch of other attributes that you could change with this cmdlet by replacing CreationTime with options such as LastWriteTime or LastAccessTime… but that is not what was asked of me.
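For instance, here is a minimal sketch that back-dates all three common timestamps at once. The file name and date are placeholders of my own choosing, not anything my reader asked for:

```powershell
# Illustrative sketch: create a scratch file, then back-date all three timestamps.
$path = Join-Path ([IO.Path]::GetTempPath()) 'scratch.txt'
$file = New-Item -ItemType File -Path $path -Force
$when = Get-Date -Year 2025 -Month 4 -Day 28
$file.CreationTime   = $when
$file.LastWriteTime  = $when
$file.LastAccessTime = $when
```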
Please be aware that while this will work well for most cases, I do not promise that the original metadata could not be extracted using advanced forensic auditing. In other words, if you are trying to fool your professor into thinking you did not start writing your term paper the day before it was due, then you should be fine. If you are trying to fool law enforcement then I make no guarantees…
]]>While I have written articles previously about installing individual RSAT tools, sometimes I just want to grab them all and install them in one fell swoop, rather than doing them one by one. To do this is pretty simple:
Open PowerShell as an Administrator.
Type the following: Get-WindowsCapability -Name RSAT* -Online | Add-WindowsCapability -Online
It will take a while… depending on the speed of your computer it may take a very long while! However that’s all you need to do. Let that run, and all of the RSAT tools will be installed.
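If you ever re-run it later, a small variation (an untested sketch on my part) skips anything that is already installed, which saves some of that waiting:

```powershell
# Sketch: install only the RSAT tools that are not already present.
Get-WindowsCapability -Name RSAT* -Online |
    Where-Object State -ne 'Installed' |
    Add-WindowsCapability -Online
```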
]]>With that said, most of what we do in Active Directory can be done in PowerShell. Take creating a user, for instance.
New-ADUser -Name Mitch -DisplayName "Mitch Garvis" -EmailAddress [email protected] -GivenName Mitch -Surname Garvis
Add-ADGroupMember -Identity "Bloggers" -Members Mitch
Set-ADAccountPassword Mitch -Reset -NewPassword (ConvertTo-SecureString -AsPlainText "[email protected]" -Force -Verbose) -PassThru
Enable-ADAccount -Identity Mitch
These four lines are all you need to create the user, add them to a group, set their password, and enable the account.
Of course, there are a lot more things that we could configure when we create the user, including all of the criteria that can be configured in the Active Directory Users and Computers (ADUC) console… but since these are the ones that most people use, I decided to keep the cmdlets short and sweet. For a full list of options, from the PowerShell prompt type Get-Help New-ADUser.
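If you do start adding more of those criteria, splatting keeps the cmdlet readable. Here is a sketch using the same placeholder values as above:

```powershell
# Sketch: the same user creation expressed with a splatted parameter table.
$params = @{
    Name        = 'Mitch'
    DisplayName = 'Mitch Garvis'
    GivenName   = 'Mitch'
    Surname     = 'Garvis'
}
New-ADUser @params
```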
]]>The first thing you have to do is to modify the registry. I know, that sounds a bit extreme, but if you want to completely disable it, that is what you have to do. In PowerShell, run the following command:
reg add hklm\system\currentcontrolset\services\tcpip6\parameters /v DisabledComponents /t REG_DWORD /d 0xFF /f
Once that is done, you should then disable IPv6 tunneling, as there is malware that has been discovered to make use of this tunneling to escape and spread. Use the following cmdlet:
Get-NetAdapterBinding -ComponentID "ms_tcpip6" | Disable-NetAdapterBinding -ComponentID "ms_tcpip6" -PassThru
After you have successfully executed both of these, you need to reboot the server.
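Should you ever need to back the change out, the reverse is a sketch along these lines: set the registry value back to 0, re-enable the binding, and reboot again. (I have not needed to do this myself, so treat it as an untested outline.)

```powershell
# Sketch: undo the IPv6 changes made above, then reboot.
reg add hklm\system\currentcontrolset\services\tcpip6\parameters /v DisabledComponents /t REG_DWORD /d 0 /f
Get-NetAdapterBinding -ComponentID "ms_tcpip6" | Enable-NetAdapterBinding -PassThru
```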
That’s it! Go ahead and try it… but only if you need to!
]]>I am not entirely sure how true this story is, but I do remember the 40-bit encryption issue. Whatever the true origins of the story might be, that was real.
I have a customer that I have been working with who requires their servers be installed in French. Even in the Province of Quebec that is not as common as you might think, but it is the case with this customer, and I respect it and have no issue with it. Except that one problem kept coming up that was baffling me. I could not install PowerShell modules. To be fair, I could import PowerShell modules that had been downloaded onto other computers without a problem. I simply could not use the Install-Module cmdlet. I would receive the following error:
Now I am not sure if this is directly related to the French laws drafted after The Great War, but I do know that this is not something I have ever come across on an English language server. I did some digging, and sure enough there were some security protocols missing. I ran the following cmdlet and discovered that I only had Ssl3 and Tls…
[Net.ServicePointManager]::SecurityProtocol
Okay, that’s not right. We need Tls 1.2 for this to work. So I ran the following cmdlets to modify the registry:
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '1' -Type DWord
These set strong cryptography on both 32-bit and 64-bit .Net Framework (version 4 and up).
Shut down all open PowerShell consoles; when you open a fresh one and run the same cmdlet as above, you get:
[Net.ServicePointManager]::SecurityProtocol
Okay, now Tls 1.2 is installed, and we are able to proceed with the installation of our PowerShell modules.
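As an aside, if you only need TLS 1.2 for the current session, you can skip the registry entirely; this one-liner adds it to the running process (it does not persist once you close the console, which may actually be what your security team prefers):

```powershell
# Session-only: add TLS 1.2 to the protocols this PowerShell process will use.
[Net.ServicePointManager]::SecurityProtocol =
    [Net.ServicePointManager]::SecurityProtocol -bor [Net.SecurityProtocolType]::Tls12
```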
**NOTE: Before you do this, make sure you speak with your Security and Compliance teams. They might have a good reason for these protocols to stay locked down. If that is the case, you can ask for an exception window… say, an hour with the protocols open. When you are done, close them again with the following:
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '0' -Type DWord
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NetFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value '0' -Type DWord
Conclusion
There is a saying that just because you can do something does not necessarily mean that you should. Decisions you make alone in your lab are fine for you… but when your actions affect the security of a larger organization, it is a good idea to get sign-off from the powers that be before you make any change.
]]>PS C:\> Install-Module -Name ExchangeOnlineManagement
PackageManagement\Install-Package : No match was found for the specified search criteria and module name ‘ExchangeOnlineManagement’. Try Get-PSRepository to see all available registered module repositories.
At C:\program files\powershell\6\Modules\PowerShellGet\PSModule.psm1:9491 char:21
+ … $null = PackageManagement\Install-Package @PSBoundParameters
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (Microsoft.Power….InstallPackage:InstallPackage) [Install-Package], Exception
+ FullyQualifiedErrorId : NoMatchFoundForCriteria,Microsoft.PowerShell.PackageManagement.Cmdlets.InstallPackage
I was at a loss. For the life of me I could not install a PowerShell module on a server. Let me clarify… I could not install any PowerShell module onto this server. It wasn’t only this one server either. I was working at a customer that had implemented a lot of security measures on their servers – not only proxies, although that was there too – and had not done a very good job of documenting these measures. Oh, and then the guy who built that secured image (on which at least ten servers that I need to work on are based) left the company.
I spoke with my contact who told me that they were planning on fixing all of the issues that the former administrator’s security had caused… it was on their list. How far down on the list? That he couldn’t answer me. “In the meantime, you’re a pretty clever guy… you’ll be able to find workarounds for most of what is blocking you… but you are not permitted to compromise the security.”
I spent a little time working on this one, and as I dug deeper into the rabbit hole, I decided that I did not need to fix the problem… and frankly, doing so would have compromised the security. I just needed to work my way around it. Here’s what I did:
1) On a clean system (that is not based on the locked-down image), I opened PowerShell and ran the following:
cd .\Users\mgarvis\Downloads\
Save-Module ExchangeOnlineManagement
This resulted in a directory structure being created in my Downloads directory that was about 27.5MB.
2) I opened a File Explorer window and navigated to the Downloads directory.
3) I opened a second File Explorer window and navigated to the Downloads folder on the server (\\servername\c$\Users\mgarvis\Downloads).
4) I copied the ExchangeOnlineManagement directory from the source into the destination.
5) Back on the server I needed to work on, I opened a PowerShell (Administrator) console and navigated to the C:\ (root).
6) I then ran the following cmdlet:
Import-Module C:\Users\mgarvis-admin\Downloads\ExchangeOnlineManagement
7) To check that it worked, I ran a simple Get- cmdlet: Get-Module -Name ExchangeOnlineManagement. The result:
It worked! That’s all I needed… and it will work with any module. I hope this helps!
]]>There are several different ways to do this, depending on what exactly you want to do.
Preparation
We cannot simply create a machine and start working on it remotely. We have to enable remote management by first enabling PowerShell Remoting. To do that, let's log on to a computer against which we plan to run remote PowerShell cmdlets and do just that. Use the cmdlet Enable-PSRemoting -Force. This will enable remote PowerShell for you.
*CHEAT: On a Server Core system, you can use the sconfig menu to Enable Remote Management of your servers.
**Remember: There is a chance that you will have to enter your credentials to run any of these remote cmdlets. Of course, you have to have credentials on the remote system to do anything, and if your local account does not have them then you will need to authenticate.
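When you do need to supply credentials explicitly, the pattern looks something like this (a sketch; the computer name is a placeholder, and the inner cmdlet is just an example):

```powershell
# Sketch: run a remote command with explicit credentials.
$cred = Get-Credential                      # prompts for the remote account
Invoke-Command -ComputerName <RemoteComputerName> -Credential $cred -ScriptBlock {
    Get-Service -Name WinRM                 # any cmdlet you like goes here
}
```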
A single cmdlet against a single remote system
When you want to simply run a cmdlet against another computer, a lot of cmdlets (although, as I was reminded recently, not all) support the switch -ComputerName. For example:
Get-WindowsFeature -Name *DNS* -ComputerName <RemoteComputerName>
Yes, I have blurred out the name of my client’s server, but this clearly shows that I am able to run the cmdlet against a remote server.
A single cmdlet against a group of remote systems
You may have a cmdlet that you need to run against multiple systems. For that we are going to create a session for all affected machines… using that same -ComputerName parameter that we used before.
$session = New-PSSession -ComputerName <RemoteComputerName1>, <RemoteComputerName2>
Invoke-Command -Session $session {Get-WindowsFeature -Name *DNS*}
Watch:
By creating the New-PSSession (which we are cleverly calling $session), we can then use the Invoke-Command cmdlet to run everything contained in the brackets against all servers listed.
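One housekeeping note: sessions created with New-PSSession stay open until you remove them, so when you are done it is worth cleaning up:

```powershell
# Close the session(s) created above and release resources on the remote hosts.
Remove-PSSession -Session $session
# ...or, more bluntly, tear down everything this console has open:
Get-PSSession | Remove-PSSession
```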
Let’s just work remotely…
Of course, you may be working on a remote system, and you want everything to run against that machine. For that we are not just going to create the PSSession, we are going to actually enter that session.
Just like in the previous example, we used the New-PSSession to create the remote session. However, unlike the previous example, we are only adding a single remote computer to the session. We then use the Enter-PSSession -ComputerName cmdlet to connect to it.
New-PSSession -ComputerName <RemoteComputerName>
Enter-PSSession -ComputerName <RemoteComputerName>
Watch:
In this screen capture, we see at the beginning of the prompt that the computer name we specified in the previous cmdlet is listed. We created and then entered the remote session, and until we actually exit the session, we are running remotely. I always tell people that PowerShell is structured quite logically, so to exit a session, the cmdlet is Exit-PSSession.
Security
I will freely admit that I am not an IT Security Specialist. Rather, I am an IT Professional who is security-minded. As such, I try to eliminate unnecessary security risks in every environment that I manage. One way that I do that is by replacing the servers with a GUI (or Desktop Experience) with Server Core systems. By doing that, my clients eliminate 95% of server logons, which so often is where our security risks originate. By removing the Graphical User Interface (GUI) from servers and forcing remote management and administration, we are eliminating server security holes and reducing the attack surface of our critical servers.
Conclusion
A strong knowledge of PowerShell will give you access to an incredible set of tools and capabilities to manage your environment. I freely admit that I do not have that strong knowledge. With that said, I do know enough to make my life much easier, and with tools like PowerShell Remoting I can run the tools I need both locally and remotely with ease.
In 2014 I wrote an article called Do IT Remotely in which I showed that you do not have to go into the office to get your work done. It was not prescience – I had no idea at the time that nearly 5.5 years later the entire world would be immersed in a global pandemic, and that without the tools to work remotely we would fall into a global crisis unheard of in a century. I simply knew that occasionally working from home was easier than always having to go into the office. Being able to address issues remotely – whether from another room or from another continent – means that we are no longer tied to geography, and that when something goes down or needs fixing at an inconvenient time I do not have to drop what I am doing and rush to the office; I can simply connect to my servers remotely to fix what needs to be fixed, and then go back to what I was doing. You can call it lazy (we don’t have to go into the office) or selfish (we don’t have to drop what we are doing and run) or whatever you like… it makes our lives as systems administrators better.
]]>That does not mean that we all have to become command line and PowerShell experts (although if you are an IT Professional: LEARN POWERSHELL!), it just means we need to install the Remote Server Administrative Tools (RSAT) on a desktop computer, and use the same (mostly) MMC consoles to manage our computers remotely.
While you can install the RSAT Tools on both Windows Server and Windows 10/11, the PowerShell cmdlet to do so is different. Why? What is called a Windows Optional Feature in Windows Server is called a Windows Capability in the desktop OS.
But Mitch, why don’t you just add the RSAT tools via the GUI? That is an excellent question, and I am glad you asked. I service customers in both English and French. Conversationally, I am pretty fluent in French. Unfortunately my reading is not quite as good, and because so many server environments are installed in English (even when the desktops are in French), it is not something I have gotten used to. Fortunately, the PowerShell cmdlets are always in English, and it is easier for me.
Administer It!
I am not an advocate of people defaulting to running anything as Administrator unless absolutely necessary. To install the RSAT Tools, it is. Make sure you run PowerShell as an Admin.
Server Side
Before we install a particular RSAT tool, we need to know the name. We should also see if it is running or not before we go installing it again. Let’s run the following cmdlet:
Get-WindowsOptionalFeature -FeatureName *RSAT* -Online | Select-Object FeatureName, State
Excellent. We see what is there and what is not. Now let’s install one… for no reason at all, let’s pick the Hyper-V Manager. We’ll run this cmdlet:
Enable-WindowsOptionalFeature -FeatureName RSAT-Hyper-V-Tools-Feature -Online
We see that after a few seconds it returns successfully, and that no restart is required.
On the Windows Client we are going to do the same thing, but there are more steps to it.
1) We not only need the list of the Windows Capabilities, we also need to know the name of the installer. We’ll use the following cmdlet:
Get-WindowsCapability -Name *rsat* -Online | Select-Object -Property DisplayName, Name
**The Hyper-V tools are not listed because Hyper-V is a part of the Windows client OS, and as such is still a Windows Optional Feature.
Notice that the name (which is what we need to install the tool) is not the same as the DisplayName (Friendly name). Notice also that some of the lines end with an ellipsis because there is not enough room on the line (thanks to AD DS and LDS Tools). So once we know which tool we want to install, we will run the same cmdlet again, but modified to the specific RSAT tool we want. For this article we will use Server Manager. Run the following cmdlet:
Get-WindowsCapability -Name *Server* -Online | Select-Object -Property DisplayName, Name
Okay… we can now run the following cmdlet to install it:
Add-WindowsCapability -Name Rsat.ServerManager.Tools~~~~0.0.1.0 -Online
We are good to go… except note that the RestartNeeded is True, so we will need to reboot our computer for this console to work.
Conclusion
Server Core is a great way to reduce the attack surface and patch footprint on our servers. You will also discover that by removing the GUI we can save as much as 10 GB of storage space per server. In a virtualized environment where every server is stored on an iSCSI SAN device, that can add up to a lot of space.
Server Core is also a great way to discourage people who do not need to be logging on to your servers, which means you will save even more by not automatically creating user profiles for each user.
Administering our servers using remote consoles is an efficient way to work without compromising the advantages, nor having to learn the PowerShell cmdlets to administer all of our server features.
]]>**NOTE: All of the command line entries in this article are performed in PowerShell. To differentiate between the PowerShell cmdlets and Command Line Interpreter commands, the PowerShell cmdlets are in blue, and the Commands are in black.
I have long been a proponent of Server Core. Why? It takes fewer resources to run, has a smaller attack surface and needs fewer patches. No, I am not suggesting you manage your servers from the command line; the Remote Server Administration Toolkit (RSAT) provides the GUI tools, mostly MMC consoles but other things as well, that allow us to administer our servers (Core or otherwise) from either a server with the Desktop Experience, or a Windows 10 machine.
The first time I wrote about the RSAT, it was included in the Server desktop experience but you had to download it for the Windows client. From the 1809 release of Windows 10, it has been included as an optional feature (or rather, the individual tools are available as optional features). They need to be installed, but unless your Windows Deployment team removed them, they are available to install.
From the GUI
Yes, I will go through the PowerShell installation later in the article, but let’s first see how easy it is to install with your mouse.
Step 1
In the Windows Search bar, type Settings, then click the Settings app that appears.
Step 2
Click the Apps option.
Step 3
In the Apps & features window, click the Optional features option. It should appear in the middle of the top of the window, above the installed applications list.
Step 4
In the Optional features window, click the + sign beside Add a feature.
Step 5
In the search bar of the Add an optional feature window type RSAT.
All of the Remote Server Administration Tools will be listed. You can select the checkbox next to each one, and then click Install (n).
That’s it… as easy as that! The installation will take a couple of minutes, and then the tools will appear in your Start Menu under Windows Administrative Tools.
With PowerShell
I am a big fan of PowerShell, even though I am a lousy scripter. With that said, the installation of RSAT tools from PowerShell is pretty easy… and it is different in Windows 10 than in Windows Server.
Step 1
Run PowerShell as an admin. If you are not sure how to do that, find PowerShell in your Start Menu and right-click, then click Run as Administrator.
Unless you have completely disabled UAC, you will be prompted either to confirm you want to run as an administrator or, if your account does not have permissions, you will be prompted for credentials.
Step 2
Run the following cmdlet to get the entire list of RSAT tools available:
Get-WindowsCapability -online -name *RSAT* |Select-Object -Property DisplayName, State
That will return a simplified list of all of the Optional Features that have the term RSAT in them, and whether each is installed or not. Like so:
With that list, you will see all of the tools with the friendly names… but you cannot use that name to install, so we need to know what the actual name would be. So once you know what tools you want to install, we will grab the list that reconciles DisplayName to the actual Name. To do that, use the following cmdlet:
Get-WindowsCapability -online -name *RSAT* |Select-Object -Property Name, DisplayName
Step 3
Now we are going to install the actual tools. Simple enough:
Add-WindowsCapability -Name "Rsat.ServerManager.Tools~~~~0.0.1.0" -Online
It will take a few seconds, but you should get a return that looks like this:
Take note that they need to be installed individually, and the names do need to be enclosed in quotes.
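If you have a handful to install, a small loop saves retyping. This is a sketch; the second capability name is my own example, so substitute the Name values you gathered in Step 2:

```powershell
# Sketch: install several RSAT capabilities one after another.
$tools = "Rsat.ServerManager.Tools~~~~0.0.1.0",
         "Rsat.Dns.Tools~~~~0.0.1.0"
foreach ($tool in $tools) {
    Add-WindowsCapability -Name $tool -Online
}
```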
After doing this a few times, you might end up with a return like this:
The different tools that I installed have their new state.
By the way, it is important to remember that before you try to manage a server, you need to enable remote management on that server. To do this, from the server you are trying to manage, type the following command:
winrm qc
Of course, if you are running the RSAT tools on a machine from a workstation outside your domain, you might end up with a bunch of errors like this:
What you want to see is this:
Conclusion
While the desktop experience on servers is by no means gone, the days of having a GUI on every server are certainly waning. Knowing how to administer Server Core remotely will go a long way to making your life as an IT administrator much easier.
]]>**NOTE: All of the command line entries in this article are performed in PowerShell. To differentiate between the PowerShell cmdlets and Command Line Interpreter commands, the PowerShell cmdlets are in blue, and the Commands are in black.
Congratulations! You have taken the plunge, your company is enrolled in Microsoft 365. The cloud is the place to be! You start off by creating a bunch of user accounts for everyone. You then realize that you did not change the default domain, and everyone’s account is listed with an @<tenant>.onmicrosoft.com username and e-mail address. You want your people representing your brand, so that will not do.
Of course, you can go through the GUI, fixing each one individually… but at a certain point all of that mouse clicking will lead to carpal tunnel syndrome. I would much rather get it done in PowerShell.
The first thing we are going to have to do is to install the necessary PowerShell modules. If you have not done this yet, you need to do it from an Administrator Console:
Install-Module -Name MSOnline
Install-Module -Name AzureAD
Install-Module -Name ExchangeOnlineManagement
Now that you’re done with that, you likely won’t need to be in an Admin console, so you can close that and open a regular context PowerShell window.
We have to connect PowerShell to the Microsoft 365 tenant we are working on:
connect-msolservice
Enter your administrator username, and when prompted your password.
As strange as it may seem, in order to change e-mail addresses we also have to connect to Microsoft Exchange Online. Use the following script to do that:
$UserCredential = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection
Import-PSSession $Session -DisableNameChecking
Again, you will be prompted for your credentials.
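One caveat: Microsoft has since retired Basic authentication against that endpoint, so the remoting script above may fail on newer tenants. With the ExchangeOnlineManagement module you installed earlier, the modern equivalent is a single cmdlet (the admin UPN below is a placeholder):

```powershell
# Modern, Basic-auth-free connection via the ExchangeOnlineManagement module.
Connect-ExchangeOnline -UserPrincipalName admin@<tenant>.onmicrosoft.com
```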
Okay, you are connected to both Microsoft 365 and Microsoft Exchange Online. Let’s change those accounts:
Set-Mailbox -identity [email protected] -WindowsEmailAddress [email protected]
Set-MsolUserPrincipalName -UserPrincipalName [email protected] -NewUserPrincipalName [email protected]
If you don’t trust me, you can go into the GUI to check:
Great, it worked for a single user… but I want to do this for several users. No problem!
Step 1: Create a CSV file with all of the email addresses. It can be very simple, including just the UserPrincipalName and the NewEmailAddress… like the following:
—
UserPrincipalName,NewEmailaddress
[email protected],[email protected]
[email protected],[email protected]
[email protected],[email protected]
[email protected],[email protected]
[email protected],[email protected]
[email protected],[email protected]
[email protected],[email protected]
[email protected],[email protected]
[email protected],[email protected]
—
Now, run the following script (replacing the file location and name with your own)
$csv = Import-Csv C:\Users\v-mgarvis\Documents\Users.csv | ForEach-Object {
Set-Mailbox $_."UserPrincipalName" -WindowsEmailAddress $_."NewEmailAddress" -MicrosoftOnlineServicesID $_."NewEmailAddress"
}
You should receive an output like the following:
**NOTE: Prior to running this script, I had reset all of the @behike.ca addresses back to their original @behike.onmicrosoft.com addresses. Otherwise there would be more errors here.
Yes, I removed Nestor’s license before running the script. Because there is no e-mail box, there is no e-mail address, so his entry failed… but as for the rest:
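If you want one last sanity check from PowerShell rather than the GUI, a quick listing of the UPNs should show the new domain everywhere (a sketch, using the MSOnline module connected earlier):

```powershell
# Sketch: confirm the renamed accounts at a glance.
Get-MsolUser | Select-Object UserPrincipalName, IsLicensed
```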
I hope this helps. While researching this article, I found a lot of them that assumed you knew this or that, such as what the format of the CSV file should be. I assume you are a newbie, and are looking for a beginner’s guide. If I missed anything, please drop a comment and I will try to help!
]]>**NOTE: All of the command line entries in this article are performed in PowerShell. To differentiate between the PowerShell cmdlets and Command Line Interpreter commands, the PowerShell cmdlets are in blue, and the Commands are in black.
—
Yes, I know… the company line is that Active Directory Domain Services (ADDS) is the past, Azure Active Directory (AAD) is the future. That is a true statement… but ADDS is not going extinct like dinosaurs at a specific point in time; rather, over the course of many years, our organizations will evolve to be using AAD more and more, until one day in the distant future our identity will be managed completely in the cloud. Until that day comes, ADDS is still the premier identity management solution, and you still need to know how to use it… even if it is now chiefly configured for Hybrid Azure Active Directory Joined (HAADJ) environments.
The heart and brain of your Active Directory is and has always been the domain controller. That has not changed significantly in twenty years. What has changed, beginning in Server 2008 but evolving tremendously since, is the ability to run our domain controllers on Server Core, with much smaller attack surface, patch footprint, and storage requirements than the same server with the Desktop Experience (Server with a GUI).
There are some people who have always loved working entirely in the command line, while others do so grudgingly because people like Mitch Garvis tell them it is the better way to manage their environments. What has, in my experience, always been the greatest factor in convincing the skeptics that Server Core is the way to go for their servers is the ability to manage their tools remotely using MMC consoles. That is a subject for another time.
In this article I will show you how to build a new domain controller using PowerShell.
Step 1: Log onto your domain-joined member server with a Domain Admin account. For best results, the account should be a member of the Enterprise Admins and Schema Admins groups as well.
Step 2: Run Windows PowerShell
PowerShell.exe
Step 3: Install the Active Directory Domain Services components.
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Step 4: Promote the system to be a domain controller.
(Sticking with my recently created environment, we are using the domain behike.ca. You should replace that with whatever your domain name is.)
Install-ADDSDomainController -DomainName "behike.ca" -NoGlobalCatalog:$false -CreateDnsDelegation:$false -CriticalReplicationOnly:$false -InstallDns:$true -NoRebootOnCompletion:$false -Force:$true
(If you want to split this up onto multiple lines that may be more manageable (and do not scroll in weird places) try this:
Install-ADDSDomainController `
-NoGlobalCatalog:$false `
-CreateDnsDelegation:$false `
-CriticalReplicationOnly:$false `
-DomainName "behike.ca" `
-InstallDns:$true `
-NoRebootOnCompletion:$false `
-Force:$true
You will be asked to enter the Safe Mode Administrator Password (twice).
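If you would rather not be prompted, the password can be supplied as a parameter instead. A sketch of that approach follows; the password value here is a placeholder, and in practice you would prompt for it or pull it from a secret store rather than hard-coding it:

```powershell
# Supply the DSRM (Safe Mode) password non-interactively.
# 'P@ssw0rd!' is a placeholder -- do not hard-code real passwords in scripts.
$dsrm = ConvertTo-SecureString 'P@ssw0rd!' -AsPlainText -Force
Install-ADDSDomainController -DomainName "behike.ca" -InstallDns:$true `
    -SafeModeAdministratorPassword $dsrm -Force:$true
```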
At this point, you should see some warnings flash by, but after a few minutes you should see the following on your screen:
When the reboot is completed, you should see the following:
Yes, you are going to have to delete all of that and enter your credentials as behike\Administrator (or whatever your domain and user are).
We are all done with the actions, but let’s look at a few things to be sure it all worked.
Am I really a domain controller?
From PowerShell, type the following cmdlet:
Get-ADDomainController
You should get a response like this:
Really and Truly a legitimate name server?
Let’s use the following cmdlet to verify that an NS record was created for our newly promoted domain controller:
Get-DnsServerResourceRecord -ZoneName behike.ca -RRType NS
You should get a response that looks like this:
Incidentally, if you did that on your newly promoted domain controller, that is also proof that when promoting it, the DNS Server role was installed as well. However, if you are still a Doubting Thomas who wants to go back to the training wheels environment that is the GUI server, you can open your DNS Manager on the server with Desktop Experience installed (or on a management workstation with the DNS RSAT tools) and verify your zone:
Yessir, you have created a new domain controller… and you saved about 9 GB of disk space, as seen in the following comparison between my server with a GUI and the Server Core install, using the following cmdlet:
Get-CimInstance -Class CIM_LogicalDisk | Select-Object @{Name="Size(GB)";Expression={$_.size/1gb}}, @{Name="Free Space(GB)";Expression={$_.freespace/1gb}}, @{Name="Free (%)";Expression={"{0,6:P0}" -f(($_.freespace/1gb) / ($_.size/1gb))}}, DeviceID, DriveType | Where-Object DriveType -EQ '3'
Server with Desktop Experience:
Server Core:
Conclusion:
The Windows Desktop Experience is a great environment to use when you are working in the graphical user interface. However, it does take resources (9 GB of storage, not to mention the RAM requirements). It also presents a larger attack surface. With the Remote Server Administration Tools available to us in Windows 10 (and Windows Server), why would you install the GUI on every server? I promise you will get used to working remotely, and once you do, you will save a ton of resources!
]]>Before going any further, let’s define a few terms you will need to understand:
Active Directory Domain Services: This is the good old on-premises AD that we have been using since the advent of Windows Server 2000. It was renamed ADDS at some point, but it is the same AD, only evolved. It leverages Kerberos authentication, and is controlled by our domain controllers that run the AD services.
Azure Active Directory: The cloud authentication service may share part of its name with ADDS, but it is quite different. For one, it is not a Kerberos system, rather it leverages OAuth and other modern protocols.
Rather than investing in new hardware, I opted to build my new domain controller in a Hyper-V environment. The configuration of that infrastructure for that is out of scope for this article.
I opted to start with Microsoft’s latest server operating system, Windows Server, version 20H2. This iteration does not include the graphical user interface (GUI, or Desktop Experience) that its predecessors offer. Because of that, we will be relying entirely on PowerShell to build and configure our DC.
The Preliminaries!
I have installed the server OS, and am logged in to my new (clean) server. This is a lab environment in my home office that is not segregated from my regular devices, so it has gotten a DHCP address from my home router. The first thing I want to do is to change that to a static IP address.
I verify my current IP configuration using the Get-NetIPConfiguration cmdlet in PowerShell… essentially the modern version of ipconfig. I checked my existing environment and decided my IP address would be 10.0.0.2. I know that my subnet mask is 255.255.255.0, so my prefix length is 24. What I needed from this cmdlet was the Interface Index.
So to set my IP address, I will use the following cmdlet:
New-NetIPAddress -InterfaceIndex 4 -IPAddress 10.0.0.2 -PrefixLength 24 -DefaultGateway 10.0.0.1
That sets the IP Address, but clears the DNS Server information. I’ll fix that with the following:
Set-DnsClientServerAddress -InterfaceIndex 4 -ServerAddresses "10.0.0.1"
With that, my virtual machine is connected to the Internet again.
I don’t like the idea of having my domain controller named WIN-GQ35FV9 (or whatever random name Windows selected), so I’ll do a quick computer rename:
Rename-Computer MDG-DC
This won’t take hold until I reboot my system, so let’s do that now.
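Incidentally, the rename and the reboot can be combined into a single line, assuming you are happy for the restart to happen immediately:

```powershell
# -Restart reboots the machine as soon as the rename is staged
Rename-Computer -NewName MDG-DC -Restart
```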
**Note: Lab environments can be tricky when they are on your production network. If I was building a completely segregated lab, or if I was building a lab that did not need the Internet, I would install a DHCP Server in this machine. As I am not, I will have to configure static IP addresses on all lab machines.
Let’s Do It!
Now that our networking is configured, we can move ahead with the domain creation.
The first step is to install the Active Directory Domain Services role, which brings the PowerShell module along with it. That’s simple enough, although the naming has changed a few times over the years, so I want to make sure I install the right one:
Install-WindowsFeature AD-Domain-Services –IncludeManagementTools
It won’t take but a couple of minutes to download and install them.
Now let’s build my AD Forest:
Install-ADDSForest -DomainName <domain.name>
This will run for a few minutes, and when completed, you will be informed the computer needs to reboot.
When I am prompted to log in, I now need to know my username (it will be Administrator), as well as my password.
So let’s go back into PowerShell, and make sure that everything worked.
Get-ADDomain | fl Name, DomainMode
Get-ADForest | fl Name, ForestMode
Get-Service ADWS,KDC,Netlogon,DNS
This will show us that the domain is properly configured, and that the necessary services are running.
That’s It?
Well, not quite… but that’s the scope of the article. To manage it, I am going to create a virtual machine running Windows 10 with the necessary Remote Server Administration Tools to manage my AD. Yes, you can do everything in PowerShell… but there are some things I still prefer to do in MMC consoles!
]]>Hey Mitch! Do you know if we can add a couple hundred users to a distribution list instead of adding them one by one?
One of my help desk techs was asked to create several distribution lists with several hundred users, and they do not want to have to scroll through the user list to click each user one by one. Of course there is a solution… PowerShell! It is pretty easy to do…
Firstly, you need to create a .csv file. Let’s call it DGroups.csv. Create the following headers: Alias,DistributionGroup. It should look like this:
Alias,DistributionGroup
Mitch.Garvis,O365-Admins
Fred.Kippels,O365-Admins
Fred.Kippels,HelpDesk-Managers
John.Frinks,HelpDesk-Managers
John.Frinks,Softball-Players
Mitch.Garvis,Softball-Players
Once you have that, open a PowerShell console, and connect to your Office 365 instance. Make sure you have the credentials to add users to the groups listed in the file. Now, run the following cmdlet:
Import-Csv "C:\DGroups.csv" | ForEach-Object { Add-DistributionGroupMember -Identity $_.DistributionGroup -Member $_.Alias -BypassSecurityGroupManagerCheck }
That should be it… You should have your users added to the group. Have fun!
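One assumption in the steps above is that you are already connected to Exchange Online, where Add-DistributionGroupMember lives. If you are not, the ExchangeOnlineManagement module handles the connection; a minimal sketch (the UPN shown is a placeholder):

```powershell
# Install the module once, then connect with an account
# that has rights to manage the distribution groups
Install-Module ExchangeOnlineManagement -Scope CurrentUser
Connect-ExchangeOnline -UserPrincipalName admin@contoso.com
```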
]]>
Last week, I was working with a client who moved a server. No big deal, right? Well unfortunately, this was a server that collected information from every other server in the environment… several hundred of them, to be precise. If the collection application were programmed differently, there would have been an option to push the changed IP address out to all of the servers. This application did not work that way. Even though we have an agent deployed to every server, there was no automated way to make the change on the agent side… at least, not out of the box.
It turns out that the information we needed to change was in a file I will call 'c:\Program Files\Collector\agent.conf'. The file consisted of three lines:
[Collector Agent Settings]
Collector Hostname: servername.domain.com
Collector IP Address: 10.201.15.72
While the collector hostname was not changing, the IP address had to, because it had been relocated to a different datacenter. The new address was going to be 10.205.119.70. (Obviously none of these addresses are the actual addresses from my client… don’t go looking for them!) I had to change the IP address in this file… but I had to do it across about 600 servers. Fortunately I have my deployment tool that allows me to send the script to every server… and I have PowerShell, which let me build the following script:
# Variables
$s1 = "10.201.15.72"
$s2 = "10.205.119.70"
$file = "c:\Program Files\Collector\agent.conf"

# Stop the service
net stop Collector

# Make my change (escape the old address so the dots are treated literally)
(Get-Content -Path $file) -replace [regex]::Escape($s1), $s2 | Set-Content -Path $file

# Restart the service
net start Collector
So:
First I set my variables, which are the original IP address, the new IP address, and the file name.
Next I stop the service, because while the service is running, the configuration file is protected. In some cases, you may also have a process protecting it, so you would then have to add a kill command.
The Get-Content/Set-Content pipeline reads the file into memory, replaces every occurrence of the old IP address with the new one, and writes the result back to the same file.
And lastly, I restart the service.
Now, I used this script for a configuration file, but there is no reason it cannot be used for any other purpose. Changing text in ASCII files is something you might need to do on a regular basis. Scripting it will save you a lot of time and effort.
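If you do not have a deployment tool handy, PowerShell remoting can fan the same change out to every server. The sketch below assumes WinRM is enabled on the targets and that a file called servers.txt (my invention, one hostname per line) holds the server list:

```powershell
# Run the replacement remotely on every server listed in servers.txt
$servers = Get-Content "C:\servers.txt"
Invoke-Command -ComputerName $servers -ScriptBlock {
    $file = "c:\Program Files\Collector\agent.conf"
    net stop Collector
    # Escaped pattern so the dots in the IP address match literally
    (Get-Content -Path $file) -replace '10\.201\.15\.72', '10.205.119.70' |
        Set-Content -Path $file
    net start Collector
}
```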
]]>“Well, it wasn’t quite last week…”
It turns out that Fred (his real name is withheld to protect the usually intelligent friend) has had this problem for a couple of months, but didn’t say anything to me, because he didn’t want to bother me. He figured he would just ask me about it the next time he came over.
For a decent analogy of why this is a bad idea, I want you to imagine getting a splinter on the side of your foot. If you sit down, remove the splinter, clean the wound, and put a bandage on it then sometime in the next few days your foot will heal. The alternative is to wait… keep up business as usual, walk through the pain, keep sweating and getting it dirty. In the same few days as before you will likely have something with the adjectives festering and infected applied to it.
Okay, here we are. Fred’s computer has a festering infected wound, and it’s my job to clean it up. He goes home and asks me what to do first.
“Please send me a list of the updates that have been installed since you realized there was a problem.” He sent me a screenshot of Windows Update.
Okay, that is one way to go… but a screenshot is a lot less useful than a text file. So here’s what you would do:
Get-WmiObject -Class Win32_QuickFixEngineering -Property Description,HotFixID | Export-Csv Updates.csv
This will create a CSV file of all of your patches, which if you were to open it in a text editor would look like this:
Not very nice, huh? But if we were to open it with a spreadsheet that recognizes comma separated value files, this is what you will get:
This is a much more useful file for an IT Professional to work with, as you have data, and not simply an image file of data.
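As an aside, newer versions of PowerShell ship a friendlier wrapper around that same WMI class, Get-HotFix, so the export can be shortened (the -NoTypeInformation switch keeps the type header out of the CSV):

```powershell
# Same data as Win32_QuickFixEngineering, via the built-in wrapper cmdlet
Get-HotFix | Select-Object Description, HotFixID |
    Export-Csv Updates.csv -NoTypeInformation
```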
I hope this helps!
]]>
One of the topics I inject into every one of my classes (and frankly, most of my customer conversations) is how to do whatever we are doing in PowerShell. Scripting is one of the ways I make my life easier, and I recommend my students and customers use the knowledge I share to make their lives easier.
One of the differences between a Command Shell window and a PowerShell window is the colours. Command Shell is white type on a black background. PowerShell is a blue background, with the type colours varying depending on the context… Yellow for cmdlets, red for errors, and so on.
One of my students recently told me that because of the issues he has with his eyes, he has trouble reading the red writing on the blue background, and asked if there was a way to change it. I honestly had never thought of it… so I decided to do some research.
It turns out, according to what I discovered, it is possible to change a lot of the colours in PowerShell. Let’s start by changing the colour of the error messages:
$host.PrivateData.ErrorForegroundColor = "Green"
So let’s see what that does:
Okay, that is much better. We can also change the background colour of the error text (black by default), by using this:
$host.PrivateData.ErrorBackgroundColor = "DarkCyan"
Granted, I hate the colour, but once you know the command, you can play with the colours that you want.
As well, if you want to change the colour scheme of the entire console, you can use the following:
[console]::ForegroundColor = "Yellow"
[console]::BackgroundColor = "Black"
Now we have the entire console in black, and the default text in yellow.
If you want to use these colours persistently, you can insert them into your profile… or just create a .ps1 file that you run every time you open PowerShell.
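For example, appending the setting to your profile script makes it load in every new session. A sketch; note that this appends blindly, so check the contents of $PROFILE first if you already maintain one:

```powershell
# Create the profile script if it does not exist yet, then append the setting
if (-not (Test-Path $PROFILE)) { New-Item -Path $PROFILE -ItemType File -Force }
Add-Content -Path $PROFILE -Value '$host.PrivateData.ErrorForegroundColor = "Green"'
```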
Jeff Hicks wrote a number of great scripts a few years ago that will let you manage your colour schemes, and they can be found here. Unfortunately it is an older article and the images are gone, but the scripts are intact, and that is the important part.
Have fun!
]]>But alas, if you and your organization are not using IPv6, then there is no reason to have it bound to your workstations, let alone to your servers. Let’s get rid of it… for now, knowing we can come back and re-enable it with a simple cmdlet.
First, we need to see which network cards have IPv6 bound to them, with the following:
Get-NetAdapterBinding | where {$_.ComponentId -eq 'ms_tcpip6'}
That will return a list of NICs that have IPv6 enabled, like so:

We can remove the binding from each adapter individually, like so:
Disable-NetAdapterBinding -Name "Wi-Fi 2" -ComponentID ms_tcpip6
Of course, then we would have to do it for each of our NICs. Rather than doing that, it would be simpler to just use a wildcard, thus disabling it for all of our NICs simultaneously:
Disable-NetAdapterBinding -Name "*" -ComponentID ms_tcpip6
Of course, in order to do this, you must open PowerShell with elevated credentials, so make sure you Run As Administrator.
Once you have done that, you can then go back and get the same list. Notice that the listings under Enabled all read False now.

Now, as you may have heard me say before, PowerShell is very easy to understand… it is almost as if it were post-troglodyte grammar. Get-Thing! Disable-NetAdapterBinding! So it stands to reason that the reverse of the Disable-NetAdapterBinding cmdlet would be… yes, you guessed it! Enable-NetAdapterBinding! But this time, rather than using the wildcard, let’s just do it for the NIC that I am currently using:
Enable-NetAdapterBinding -Name "Wi-Fi 2" -ComponentID ms_tcpip6
From this, we will now get the following results:

…and just like that, we can now enable and disable a protocol on demand.
By the way, if you are not fond of ComponentIDs, you can also use the actual display names:

Of course, that is too much typing for a lot of people, so you could shorten it with wildcards… or you can just cut and paste the ComponentID cmdlets.
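For instance, the same disable operation using the display name rather than the ComponentID would look something like this; the display name shown is what my NICs report, so verify yours with Get-NetAdapterBinding first:

```powershell
# Disable IPv6 on all NICs, matching on the binding's display name
Disable-NetAdapterBinding -Name "*" -DisplayName "Internet Protocol Version 6 (TCP/IPv6)"
```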
Have fun guys, and script on!
]]>
A few minutes later, I went to log on as one of the newly created users, and the computer returned ‘The password is incorrect. Try again.’
I spent a few minutes troubleshooting, until I realized… PowerShell uses the dollar sign ($) for variables. I deleted the users, then changed the script to use a password like ‘P@ssw0rd’. Sure enough, it worked.
The moral of the story… when using PowerShell, remember that the $ means something, and it might break things if you use it in passwords or other literal strings.
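A quick way to see the difference: inside double quotes PowerShell expands anything that starts with $, while single quotes keep the text literal:

```powershell
$name = "World"
"Hello $name"   # double quotes expand the variable -> Hello World
'Hello $name'   # single quotes stay literal        -> Hello $name

# So for passwords containing $, use single quotes (or escape each $ with `)
$password = 'Pa$$w0rd'
```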
Have fun!
]]>“How do I delete old users from a Windows 10 computer? I log in as an administrator, navigate to c:\Users\, and delete their tree.”
NO! In fact, HELL NO!
There are several reasons why you might want to delete a user profile from a computer, ranging from termination of employment to reallocation of systems to… well, you get the picture. There are a few ways you can do it, but there are only a couple of ways of doing it right.
Recently I was working with a client who encountered a situation where a few of his domain users’ local profiles were corrupted on a corporate system. I told him that the simplest way of fixing the issue was to delete the user profile, so that when the user next logged on, it would be re-created for them. He called me back a few minutes later reporting that the affected users were now receiving the following message when they logged in:
Okay, that led me to believe they had simply deleted the c:\Users\%username% directory, and we had to clean up that mess in the registry (under "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList", delete any entries that have the .BAK extension).
Okay… now that we have learned how NOT to do it, here’s how you should do it:
1) Open Control Panel > System and Security > System on the affected machine. The simplest way to do this in the more recent releases of Windows 10 is to press Windows+R and run sysdm.cpl.
2) In the Advanced tab of the System Properties window, in the User Profiles section, click Settings…
3) In the User Profiles window, click on the user you want to delete, and click Delete.
**NOTE: You will not be able to delete the account you are logged in as, nor the default Administrator account.
Of course, you will be asked if you are really really sure that you want to delete the account, and you can click Yes or No as you wish.
There are ways to do it in PowerShell… but they don’t seem to be very clear or very easy. For this one time, I strongly suggest the GUI.
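For completeness, the PowerShell approach most often cited uses the Win32_UserProfile class, which removes both the registry entry and the on-disk folder in one step. A sketch, run from an elevated console; the username in the filter is a placeholder:

```powershell
# Find the profile by its folder path, skip it if it is currently loaded,
# and remove it cleanly (registry entry and folder together)
Get-CimInstance -ClassName Win32_UserProfile |
    Where-Object { $_.LocalPath -like "*\jdoe" -and -not $_.Loaded } |
    Remove-CimInstance
```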
]]>