Depending on the developer of an application, various and differing version numbers can be embedded in the metadata of an application/executable/binary, which can be confusing when creating manual detection rules as part of an Intune Win32 App.
A good example of this is ‘CMTrace.exe’ from Microsoft. The screenshot below shows the ‘Details’ tab of ‘CMTrace.exe’ file properties with ‘File version’ highlighted:
You can see from the screenshot that the ‘File version’ displayed is ‘5.0.9068.1000’, which is what I initially used for my Detection Rule as part of a CMTrace Win32 App in Intune:
Unfortunately this resulted in a continuous loop of ‘CMTrace.exe’ being reinstalled as it wasn’t being detected correctly by Intune – this was visible in the log file below:
Turns out that Intune actually uses a different property of the file metadata to determine the version number of a file. This is documented in the Microsoft article below:
Two quick & easy ways to retrieve the correct ‘Version’ value to use in a Detection Rule are shown in the PowerShell commands below. Both commands are querying the same file in the same location on a reference computer i.e., ‘CMTrace.exe’ in the ‘C:\Windows’ directory:
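The original commands aren’t reproduced here, but two commands of this kind (both ultimately reading the same .NET FileVersionInfo data, and using the CMTrace path from the example above) would look like this:

```powershell
# Both of these read the 'FileVersion' string via .NET's FileVersionInfo,
# which is the value Intune compares against the Detection Rule
(Get-Item -Path 'C:\Windows\CMTrace.exe').VersionInfo.FileVersion
[System.Diagnostics.FileVersionInfo]::GetVersionInfo('C:\Windows\CMTrace.exe').FileVersion
```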
The value returned from these queries was ‘5.00.9068.1000’.
The additional zero in the version number was the difference that broke the configured Detection Rule.
Using the correct value returned by either of these two PowerShell commands (they are both returning the same information from .NET) will match the value retrieved by Intune when checking if the Win32 App is installed or not.
This will result in the Win32 App installing once and then correctly being detected as installed thereafter i.e., the Detection Rule will now work as expected.
It’s a common requirement to configure Registry values on an endpoint when building or configuring them.
I like to use PowerShell to do this as it’s easy to document and keep track of changes.
One issue with the standard PowerShell method (Set-ItemProperty) to do this is that you usually need a key to exist before you can add values into it. This usually means having to figure out if a key exists and if not, running several ‘New-Item’ commands to create the key (and potentially the tree/path to the key) itself first.
Fine for a few settings, but not ideal when you have tens or even hundreds of registry settings to deploy.
Using a .REG file avoids this as it imports the key structure and the key values too. But this adds obfuscation into the mix, as you need to open each .REG file to see what it contains, which can be unintuitive.
The solution to this is to call registry changes directly using the “[Microsoft.Win32.Registry]::SetValue” .NET method as this behaves in the same way as importing .REG files in that it will create the key and path to the key regardless of whether the key/path already exists. Quick and easy (also highly performant!).
The format of these commands wasn’t initially obvious from Microsoft documentation but after a lot of trial and error, frustration and profuse swearing, I can provide the following examples for the most common types of Registry changes:
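As a sketch of the most common value types (the ‘HKEY_LOCAL_MACHINE\SOFTWARE\Contoso\Settings’ key path and the value names here are placeholders for illustration, not settings from the original post):

```powershell
# String (REG_SZ) - the key and any missing parent keys are created automatically
[Microsoft.Win32.Registry]::SetValue('HKEY_LOCAL_MACHINE\SOFTWARE\Contoso\Settings', 'Environment', 'Production')

# DWORD (REG_DWORD)
[Microsoft.Win32.Registry]::SetValue('HKEY_LOCAL_MACHINE\SOFTWARE\Contoso\Settings', 'FeatureEnabled', 1, [Microsoft.Win32.RegistryValueKind]::DWord)

# QWORD (REG_QWORD)
[Microsoft.Win32.Registry]::SetValue('HKEY_LOCAL_MACHINE\SOFTWARE\Contoso\Settings', 'CacheSizeBytes', 4294967296, [Microsoft.Win32.RegistryValueKind]::QWord)

# Expandable string (REG_EXPAND_SZ)
[Microsoft.Win32.Registry]::SetValue('HKEY_LOCAL_MACHINE\SOFTWARE\Contoso\Settings', 'LogPath', '%SystemDrive%\Logs', [Microsoft.Win32.RegistryValueKind]::ExpandString)

# Multi-string (REG_MULTI_SZ)
[Microsoft.Win32.Registry]::SetValue('HKEY_LOCAL_MACHINE\SOFTWARE\Contoso\Settings', 'Servers', [string[]]('Server1', 'Server2'), [Microsoft.Win32.RegistryValueKind]::MultiString)
```

The value kind parameter is optional for simple strings and integers, but specifying it explicitly avoids any ambiguity about the registry type that gets written.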
Quick and simple post today. A customer was deploying Windows 10 IoT Enterprise LTSC 2021 (yes, I am aware that IoT Enterprise and LTSC are not officially supported for Windows Autopilot at the time of writing, but it works fine so… ) and there was a requirement to update the version of Edge included in this version of Windows so that it supported some of the more recent Intune Configuration Profile policy settings. Without this, the policies would not apply until Edge updated itself, which might be some time after a user had first logged into the device.
To accomplish this I wrapped the following PowerShell script into a Win32 app and had it configured as a Blocking application on the Enrolment Status Page (ESP) being used for Autopilot.
The result is Microsoft Edge updating to the latest available version before any users log in for the first time.
<#
.DESCRIPTION
PowerShell script to force update of Microsoft Edge Stable
.EXAMPLE
PowerShell.exe -ExecutionPolicy ByPass -File <ScriptName>.ps1
.NOTES
VERSION AUTHOR CHANGE
1.0 Jonathan Conway Initial script creation
#>
# Locate the Edge updater executable
$Exe = Get-ChildItem -Path "${env:ProgramFiles(x86)}\Microsoft\EdgeUpdate\MicrosoftEdgeUpdate.exe"

# Arguments to silently trigger an on-demand update/install of Edge Stable
$Arguments = "/silent /install appguid={56EB18F8-B008-4CBD-B6D2-8C97FE7E9062}&appname=Microsoft%20Edge&needsadmin=True"

# Run the updater, wait for it to finish and return its exit code
return (Start-Process $($Exe.FullName) -ArgumentList $Arguments -NoNewWindow -PassThru -Wait).ExitCode
Be sure to configure the detection method according to your environment. For me, I set this to a version of “Greater than or equal to: 100.0.0000.00” to detect the installation but you may want to use a higher version number depending on your own circumstances:
Let me know if this works for you (or if you have any issues with the script) in the comments.
Had a requirement to detect and remove any user installations of Zoom (i.e. installed using standard user permissions and located in the user profile) via Intune. The supported route for uninstalling Zoom is to use a Zoom-provided tool called ‘CleanZoom.exe’, so the script checks for that tool being present and, if not, downloads and extracts it directly from Zoom before running the tool to remove any user installations of Zoom. Also needed a log file to show from the client when this has been done (this can obviously be removed if not needed).
Proactive Remediations to the rescue again!
Detection:
<#
.DESCRIPTION
Proactive Remediation | Detection
.EXAMPLE
PowerShell.exe -ExecutionPolicy ByPass -File <ScriptName>.ps1
.NOTES
VERSION AUTHOR CHANGE
1.0 Jonathan Conway Initial script creation
#>
# Discovery
try {
    # Run test and store result as a variable
    $Test = Get-ChildItem -Path "C:\Users\" -Filter "Zoom.exe" -Recurse -Force -ErrorAction SilentlyContinue

    # Check whether the test is compliant or not - if no instances of Zoom are discovered then mark as 'Compliant' and exit with 0
    if ($null -eq $Test) {
        Write-Output "Compliant"
        exit 0
    }
    # If instances of Zoom are discovered then mark as 'Non Compliant' and exit with 1
    else {
        Write-Warning "Non Compliant"
        exit 1
    }
}
catch {
    # If any errors occur then return 'Non Compliant'
    Write-Warning "Non Compliant"
    exit 1
}
Remediation:
<#
.DESCRIPTION
Proactive Remediation | Remediation
.EXAMPLE
PowerShell.exe -ExecutionPolicy ByPass -File <ScriptName>.ps1
.NOTES
VERSION AUTHOR CHANGE
1.0 Jonathan Conway Initial script creation
#>
# Variables
$LogPath = "C:\Support\Zoom"
$CleanZoomTool = "C:\Support\Zoom\CleanZoom.exe"

# Check to see if 'C:\Support\Zoom' exists - if not, create it (the folder must exist before the transcript can be written into it)
if (-not (Test-Path -Path $LogPath -PathType Container)) {
    Write-Output "'C:\Support\Zoom' folder does not exist - creating it"
    New-Item -Path "C:\Support" -Name "Zoom" -ItemType "Directory" -Force
}
else {
    Write-Output "'C:\Support\Zoom' folder exists - continuing"
}

# Logging
Start-Transcript -Path "$LogPath\ZoomCleanup.log" -Append -NoClobber

# If CleanZoom.exe does not exist on the device - download from the Zoom website and extract locally
if (-not (Test-Path -Path $CleanZoomTool -PathType "Leaf")) {
    Write-Output "'C:\Support\Zoom\CleanZoom.exe' does not exist - downloading and extracting it"
    Invoke-WebRequest -Uri "https://assets.zoom.us/docs/msi-templates/CleanZoom.zip" -OutFile "C:\Support\Zoom\CleanZoom.zip"
    Expand-Archive -Path "C:\Support\Zoom\CleanZoom.zip" -DestinationPath "C:\Support\Zoom" -Force
    Remove-Item -Path "C:\Support\Zoom\CleanZoom.zip" -Force
}
else {
    Write-Output "'C:\Support\Zoom\CleanZoom.exe' exists - continuing"
}

try {
    # Run CleanZoom.exe to remove any installed instances of the Zoom client in User Profiles
    Write-Output "Running CleanZoom.exe to remove Zoom instances from User Profile areas"
    Start-Process -FilePath $CleanZoomTool -ArgumentList "/silent" -Wait
    Stop-Transcript
    exit 0
}
catch {
    Write-Output "CleanZoom.exe failed to run"
    Stop-Transcript
    exit 1
}
Recently had a customer requirement to encrypt Windows 10 devices using an MCM Task Sequence and then have the Recovery Keys escrowed into AAD once an Intune Drive Encryption policy was applied via Co-management workload shift (Endpoint Protection).
By default, Windows will escrow to where you tell it in the Task Sequence and not escrow into AAD. In my case the Task Sequence was storing the Recovery Key into on-prem Active Directory.
The Discovery script checks Event Viewer for an Event 845 including the text “was backed up successfully to your Azure AD” having been logged in the last 7 days (this can obviously be amended to suit individual requirements).
If non-compliant then the Remediation script forces the key to be escrowed using the ‘BackupToAAD-BitLockerKeyProtector’ PowerShell cmdlet.
Detection:
<#
.DESCRIPTION
Script to check for BitLocker Key escrow into Azure AD
.EXAMPLE
PowerShell.exe -ExecutionPolicy ByPass -File <ScriptName>.ps1
.NOTES
VERSION AUTHOR CHANGE
1.0 Jonathan Conway Initial script creation
#>
# Check for Event 845 in the BitLocker API Management Event Log over the last 7 days - if it contains the text "was backed up successfully to your Azure AD" then Detection is complete
try {
    $Result = Get-WinEvent -FilterHashTable @{ LogName = "Microsoft-Windows-BitLocker/BitLocker Management"; StartTime = (Get-Date).AddDays(-7) } | Where-Object { ($_.Id -eq "845" -and $_.Message -match "was backed up successfully to your Azure AD") }
    $ID = $Result | Measure-Object

    if ($ID.Count -ge 1) {
        Write-Output "BitLocker Recovery Key escrow to Azure AD succeeded = Compliant"
        exit 0
    }
    # If the Event is not detected then mark as 'Non Compliant' and exit with 1
    else {
        Write-Warning "BitLocker Escrow Event Missing = Non Compliant"
        exit 1
    }
}
catch {
    Write-Warning "An error occurred = Non Compliant"
    exit 1
}
Remediation:
<#
.DESCRIPTION
Script to remediate BitLocker Key escrow into Azure AD
.EXAMPLE
PowerShell.exe -ExecutionPolicy ByPass -File <ScriptName>.ps1
.NOTES
VERSION AUTHOR CHANGE
1.0 Jonathan Conway Initial script creation
#>
# Escrow BitLocker Recovery Key for OSDrive into Azure AD
$BitLockerVolume = Get-BitLockerVolume -MountPoint $env:SystemRoot
$RecoveryPasswordKeyProtector = $BitLockerVolume.KeyProtector | Where-Object { $_.KeyProtectorType -like "RecoveryPassword" }
BackupToAAD-BitLockerKeyProtector -MountPoint $BitLockerVolume.MountPoint -KeyProtectorId $RecoveryPasswordKeyProtector.KeyProtectorId -ErrorAction SilentlyContinue
Because the legacy WMI PowerShell cmdlets (e.g. Get-WmiObject) are eventually going to be deprecated, I always try to use the newer CIM-based PowerShell cmdlets (e.g. Get-CimInstance) wherever possible.
This can be a bit confusing sometimes though and it can appear that the new CIM cmdlets have less functionality than their older WMI counterparts. This isn’t the case as I explain later on in the blog post.
This perceived difference is especially true when working with TPM chips on devices. Below is an example of running a query against the ‘Win32_Tpm‘ class in WMI using both the old and new cmdlets.
The legacy ‘Get-WmiObject‘ cmdlet shows ‘70‘ Properties/Methods while the newer ‘Get-CimInstance‘ cmdlet shows only ‘20‘.
One WMI Method that I use regularly with OSD is the ‘SetPhysicalPresenceRequest‘ Method to configure a TPM to be cleared, activated and enabled. If you use the value of ‘14‘ for the request then you need to configure the firmware/BIOS to not require Physical Presence otherwise you’ll need someone to physically press a key to confirm the TPM clear is allowed.
If you can’t configure the firmware/BIOS to disable requiring physical presence confirmation then you can use the request value of ‘10‘ which won’t ask for physical confirmation but is slightly less effective. Using ‘10‘ should still mean your TPM is ready to be accessed by encryption-related commands later on in the Task Sequence though.
To use this command in a MCM Task Sequence I would historically use a ‘Run Command Line‘ task to run the following PowerShell command:
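The original command isn’t reproduced here, but the legacy form (using the ‘root\cimv2\Security\MicrosoftTpm’ namespace, where the ‘Win32_Tpm’ class lives) would look something like this:

```powershell
# Legacy WMI cmdlet: submit physical presence request 14 (clear + enable + activate the TPM)
# Use a request value of 10 instead if the firmware requires physical presence confirmation
(Get-WmiObject -Namespace 'root\cimv2\Security\MicrosoftTpm' -Class 'Win32_Tpm').SetPhysicalPresenceRequest(14)
```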
Given my previous statement that I want to use the more modern ‘Get-CimInstance‘ cmdlets I looked into how this could be done with the newer cmdlets so that if or when the legacy WmiObject cmdlets are no longer available in Windows, my Task Sequence commands will continue to run successfully without any changes being needed.
By running ‘Get-WmiObject‘ we can see that ‘SetPhysicalPresenceRequest‘ is listed as an available Method for us to use:
Running the same command with the ‘Get-CimInstance‘ cmdlet brings back significantly fewer Methods and most importantly ‘SetPhysicalPresenceRequest‘ is missing from the list of Methods!!!!
“Where’s my bloody Method?” I asked whilst preparing myself to overcome OCD and continue using the legacy command…
However, under the covers the ‘SetPhysicalPresenceRequest‘ method still exists in WMI but we just can’t see it as easily using ‘Get-CimInstance‘. In order to view these hidden Methods we need to run a slightly different PowerShell command as per below:
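The command from the original screenshot isn’t reproduced here, but ‘Get-CimClass’ exposes the full method list of the class, along these lines:

```powershell
# List every method defined on Win32_Tpm, including those not surfaced
# when piping Get-CimInstance output to Get-Member
(Get-CimClass -Namespace 'root\cimv2\Security\MicrosoftTpm' -ClassName 'Win32_Tpm').CimClassMethods
```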
So we can now see the required ‘SetPhysicalPresenceRequest‘ method. But how do we use it in a MCM Task Sequence in the same manner as the legacy cmdlet?
The answer is below – we need to pipe one cmdlet (Get-CimInstance) into another (Invoke-CimMethod) to achieve the same result as the legacy cmdlet:
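A sketch of that piped form (the method’s single parameter is named ‘Request’ in the Win32_Tpm class definition):

```powershell
# CIM equivalent of the legacy command: retrieve the TPM instance
# and invoke the 'hidden' method against it
Get-CimInstance -Namespace 'root\cimv2\Security\MicrosoftTpm' -ClassName 'Win32_Tpm' |
    Invoke-CimMethod -MethodName 'SetPhysicalPresenceRequest' -Arguments @{ Request = 14 }
```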
Running the newer CIM commands in my MCM ‘Run Command Line‘ task now gives me the same result as the legacy command did and balance is once again restored to the galaxy…
Edit 28/02/24: Updated to reflect increase of Recovery partition to 2048Mb.
How should disks be partitioned for Windows 10 so that the Recovery Partition is configured properly? Turns out the answer isn’t straightforward and that the standard tools provided in ConfigMgr by Microsoft don’t quite allow you to do it properly by default…
WinRE is important as it underpins many of the reset features in Windows 10, especially in Modern Management scenarios. This includes manual recovery, Factory Reset and newer options such as Windows Autopilot Reset.
This partition is used to help recover an OS when it is not bootable, which is why it needs to be located on a separate partition. This partition should be placed immediately after the Windows partition so that Windows can modify and recreate the partition if future updates (i.e. newer versions of Windows) require a larger Recovery Image. It also allows the device to be rebuilt (i.e., have the OS reinstalled) without affecting the Recovery Partition.
If this resizing/recreation of the Recovery Partition can’t be done during OS upgrades then the ability to use the WinRE environment can be lost, and this is tricky to remediate afterwards.
To produce this post I have read through a lot of other well-respected blogs and also analysed what a lot of experts have suggested on Twitter etc. This article is the summary of that analysis as well as providing a script written by me which can be used during ConfigMgr Task Sequences to create the required disk partitions correctly. The main resources I used were:
Let’s start with the recommendations from Microsoft.
UEFI | GPT
For UEFI, Microsoft recommend the following partition layout for Windows 10 (Docs link can be found here):
Microsoft Recommended Partition Layout
From this diagram we can see that Microsoft recommend a ‘System‘ partition (also known as an EFI System Partition (ESP)) which provides a device with a partition to boot from. It shouldn’t contain any files other than what is intended for booting the device.
‘System’ Partition
Following that is a ‘Microsoft Reserved‘ partition (MSR) which is used for partition management. No user data can be stored on this partition.
‘Microsoft Reserved’ Partition
The next partition is the ‘Windows‘ partition which is obviously used for the Windows Operating System.
‘Windows’ Partition
And the last partition (and the one which causes the most questions) is the ‘Recovery Tools‘ partition which will host a copy of the ‘Windows Recovery Environment’ (WinRE). WinRE is a recovery environment that can repair common causes of unbootable operating systems.
The Recovery Tools partition presents a problem as there is no built-in way in a Task Sequence of ensuring that the Recovery Tools partition is located at the end of the disk (i.e., after the Windows partition) and is configured correctly – this is the main reason for writing this blog post.
The table below summarises the best recommended sizes for each partition based on the Microsoft recommendations and also various experts around the web:
Name | Size (Mb) | Format
System (EFI) | 360 fixed size | FAT32
MSR | 128 fixed size | –
Windows (Primary) | 100% of remaining space on disk | NTFS
Recovery | 2048 fixed size | NTFS
UEFI Disk Partitions
UEFI | GPT Format and Partition Disk Step
BIOS | MBR
For traditional BIOS, Microsoft recommend the following partition layout for Windows 10 (Docs link can be found here). Note that ‘BIOS | MBR’ is now considered legacy and has limited use case scenarios with Modern Device Management. Many modern security controls are based on the newer UEFI firmware, so ‘BIOS | MBR’ is therefore rarely used:
From this diagram we can see that Microsoft again recommends a ‘System‘ partition which is used to boot the device.
‘System’ Partition
Following that is the ‘Windows‘ partition to be used for the Windows 10 operating system.
‘Windows’ Partition
And the final partition is once again the ‘Recovery Tools‘ partition which was mentioned previously in the UEFI section above.
The table below summarises the best recommended sizes for each partition based on the Microsoft recommendations and also various experts around the web:
Name | Size (Mb) | Format
System | 360 fixed size | NTFS
Windows (Primary) | 100% of remaining space on disk | NTFS
Recovery | 2048 fixed size | NTFS
Traditional BIOS Disk Partitions
BIOS | MBR Format and Partition Disk Step
Recovery Tools Partition Solution
To create the Recovery Tools partition in the location and size required we need to use ‘Diskpart’ to create the Recovery Partition once the built-in ‘Format and Partition Disk‘ step has been completed.
The way this is done is to shrink the ‘Windows’ partition by the size needed (in this case 2048Mb) and create a new partition called ‘Recovery’ in that newly-available space.
For UEFI deployments, the ‘Recovery Tools’ partition needs to have its ‘ID‘ set to ‘de94bba4-06d1-4d40-a16a-bfd50179d6ac‘ and be given the GPT attribute of ‘0x8000000000000001‘ so that it is hidden. For BIOS deployments, the partition type needs to be set to ‘27‘ instead.
To accomplish this I wrote a PowerShell script to run Diskpart with the required settings. The script should be added to the Task Sequence just after the built-in ‘Format and Partition Disk’ step as per the images below:
Location of ‘Create Recovery Partition’ Step
The script works best when entered as a PowerShell script as part of a ‘Run PowerShell Script‘ task in a Task Sequence:
‘Create Recovery Partition’ Script Task
The script for automating the creation of the Recovery Partition is embedded below. It will detect if the device is configured for BIOS or UEFI and create the partition accordingly:
<#
.DESCRIPTION
Script to create Recovery Partition during OSD and hide System Partition (if needed) on MBR
This script requires the following optional Boot Image components: 'WinPE-DismCmdlets', 'WinPE-PowerShell', 'WinPE-StorageWMI'
.EXAMPLE
PowerShell.exe -ExecutionPolicy ByPass -File <ScriptName>.ps1
.NOTES
VERSION AUTHOR CHANGE
1.5 Jonathan Conway Initial script creation
#>
# Load the Task Sequence environment
$tsenv = New-Object -COMObject Microsoft.SMS.TSEnvironment

# Configure the script log path to match the tsenv logs folder
$LogPath = $tsenv.Value("_SMSTSLogPath")

# Determine if the firmware is configured as UEFI
$UEFI = $tsenv.Value("_SMSTSBootUEFI")

# Get OS Disk information
[string]$OSDrivePartitionNumber = (Get-Disk | Where-Object { $PSItem.BusType -ne 'USB' } | Get-Partition | Where-Object { $PSItem.Size -gt 5GB -and $PSItem.Type -eq 'Basic' }).PartitionNumber

# Recovery partition size in Mb
$RecoveryPartitionSize = '2048'

if ($UEFI -eq $true) {
    'select disk 0',
    'list partition',
    "select partition $OSDrivePartitionNumber",
    "shrink desired=$RecoveryPartitionSize minimum=$RecoveryPartitionSize",
    'create partition primary',
    'format quick fs=ntfs label=Recovery',
    'set id="de94bba4-06d1-4d40-a16a-bfd50179d6ac"',
    'gpt attributes=0x8000000000000001',
    'list partition' | diskpart | Tee-Object -FilePath "$LogPath\Pwsh-OsdDiskpart.log"
}
else {
    'select disk 0',
    'list partition',
    "select partition $OSDrivePartitionNumber",
    "shrink desired=$RecoveryPartitionSize minimum=$RecoveryPartitionSize",
    'create partition primary',
    'format quick fs=ntfs label=Recovery',
    'set id=27',
    'list partition',
    'select partition 1',
    'set id=17',
    'list partition' | diskpart | Tee-Object -FilePath "$LogPath\Pwsh-OsdDiskpart.log"
}
The script uses variables for the OS Drive Partition and Recovery Partition Size so both of these can be modified according to specific requirements in the event that this is needed.
So this solution should allow you to correctly configure the disk partitions needed to install Windows 10 and also ensure that a functional Recovery Tools partition is always present.
If you’re Old Skool like me and still use MDT to produce Windows 10 Reference Images then this script may be useful to save some time and hassle.
The script basically automates the creation of the filename for the backup WIM file, so that all that is required to produce a new image bi-annually (or as often as you like) is to run a Build & Capture Task Sequence which (providing the VM has access to WSUS or Microsoft Update) will include all the latest patches.
It produces a filename in the following format which includes the Windows 10 version being captured, architecture, language and the date.
W10X64_20H2_en-GB_19042.572_2020-10-22_1525.wim
Because the date includes the time the image is captured, the filename will always be unique, so there will never be an occasion where the image can’t be captured due to a pre-existing WIM file with the same name.
The net result is that the Reference Image creation can be as simple as booting a VM, choosing a Task Sequence then collecting the WIM file it produces at the end of the process.
The script produces an MDT variable called ‘%WimFileName%‘ which is then used to populate the ‘BackupFile‘ property in the Task Sequence – this is demonstrated in the images below:
Configure: Set WIM FilenameSet: BackupFile
The script content is embedded below. Copy and paste into a blank text file and save as ‘Pwsh-SetWimFilename.ps1‘ and copy it into your MDT Deployment share in the following location:
DeploymentShare\Scripts\Custom
Script Content:
<#
.DESCRIPTION
Script to automate the naming of the WIM File Name during MDT OSD
.EXAMPLE
PowerShell.exe -ExecutionPolicy ByPass -File <ScriptName>.ps1 [-Debug]
.NOTES
Author(s): Jonathan Conway
Modified: 17/11/2021
Version: 1.5
Option [-Debug] switch can be run locally to output results to screen to test WIM File Name is correct
#>
Param (
    [Switch]$Debug
)

Begin {
    # Variables: Information from Registry
    $OsRegistryInfo = Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion'
    [string]$DisplayVersion = $OsRegistryInfo.DisplayVersion
    [string]$ReleaseId = $OsRegistryInfo.ReleaseId
    [string]$Ubr = $OsRegistryInfo.UBR
    [string]$EditionId = $OsRegistryInfo.EditionID
    [string]$OsCurrentBuildNumber = $OsRegistryInfo.CurrentBuildNumber

    # Variables: Change 'ReleaseID' to the new Windows Release naming format if Windows 10 20H2 or later
    if ($ReleaseId -gt '2004' -and $ReleaseId -lt '2009') {
        if ($ReleaseId -match "^(..)(01|02|03|04|05|06)$") {
            [string]$ReleaseId1stHalf = $ReleaseId.Substring(0, 2)
            [string]$ReleaseId2ndHalf = $ReleaseId.Substring(2, 2)
            [string]$ReleaseId2ndHalfReplaced = $ReleaseId2ndHalf -replace "$ReleaseId2ndHalf", "H1"
            [string]$ReleaseId = "$ReleaseId1stHalf" + "$ReleaseId2ndHalfReplaced"
        }
        if ($ReleaseId -match "^(..)(07|08|09|10|11|12)$") {
            [string]$ReleaseId1stHalf = $ReleaseId.Substring(0, 2)
            [string]$ReleaseId2ndHalf = $ReleaseId.Substring(2, 2)
            [string]$ReleaseId2ndHalfReplaced = $ReleaseId2ndHalf -replace "$ReleaseId2ndHalf", "H2"
            [string]$ReleaseId = "$ReleaseId1stHalf" + "$ReleaseId2ndHalfReplaced"
        }
    }
    elseif ($ReleaseId -ge '2009') {
        $ReleaseId = $DisplayVersion
    }

    # Variables: Information from WMI
    $OsWmiInfo = Get-CimInstance -ClassName 'Win32_OperatingSystem'

    # Variables: OS 'Caption' information
    $Caption = $OsWmiInfo.Caption
    [string]$RegExPattern = '(Microsoft\ (Windows|Hyper-V)\ (10|11|Server\ (2016.*?|2019.*?)))'
    [string]$MachineOS = ($Caption | Select-String -AllMatches -Pattern $RegExPattern | Select-Object -ExpandProperty 'Matches').Value

    # Variables: Media Language
    [string]$OsLanguageNumberCode = $OsWmiInfo.OSLanguage
    if ($OsLanguageNumberCode -eq '2057') {
        $OsLanguage = 'en-GB'
    }
    if ($OsLanguageNumberCode -eq '1033') {
        $OsLanguage = 'en-US'
    }

    # Variables: Date Information
    $BuildDate = Get-Date -Format "yyyy-MM-dd_HHmm"

    # Variables: OS Architecture
    if ($OsWmiInfo.OSArchitecture -eq '64-bit') {
        $Architecture = 'X64'
    }
    if ($OsWmiInfo.OSArchitecture -eq '32-bit') {
        $Architecture = 'X86'
    }
}
Process {
    # Microsoft Hyper-V Server
    if ($MachineOS -like 'Microsoft Hyper-V Server*') {
        # Variables: Set OS Prefix
        $HypervPrefix = 'HVS'

        # Variables: Set Windows Server Version
        $WindowsServerVersion = $MachineOS.TrimStart("Microsoft Hyper-V Server")

        # Hyper-V Server: Create Wim File Name string
        $WimFileName = "$HypervPrefix" + "$WindowsServerVersion" + "$Architecture" + '_' + "$OsLanguage" + '_' + "$OsCurrentBuildNumber" + '.' + "$Ubr" + '_' + "$BuildDate" + '.' + 'wim'
    }

    # Microsoft Windows 10
    if ($MachineOS -like 'Microsoft Windows 10*') {
        # Variables: Set OS Prefix
        $Os = 'W10'

        # Variables: OS Edition
        if ($EditionId -eq 'Enterprise') {
            $Edition = 'ENT'
        }
        if ($EditionId -eq 'IoTEnterprise') {
            $Edition = 'IOT'
            $ReleaseId = $OsRegistryInfo.DisplayVersion
        }
        if ($EditionId -eq 'EnterpriseS') {
            $Edition = 'IOT'
            $Channel = 'LTSC'
        }
        if ($EditionId -eq 'Professional') {
            $Edition = 'PRO'
        }

        if ($EditionId -eq 'EnterpriseS') {
            # Windows 10 IoT LTSC: Create Wim File Name string
            $WimFileName = "$Os" + "$Architecture" + '_' + "$Edition" + '_' + "2019" + '_' + "$Channel" + '_' + "$OsLanguage" + '_' + "$OsCurrentBuildNumber" + '.' + "$Ubr" + '_' + "$BuildDate" + '.' + 'wim'
        }
        else {
            # Windows 10: Create Wim File Name string
            $WimFileName = "$Os" + "$Architecture" + '_' + "$Edition" + '_' + "$ReleaseId" + '_' + "$OsLanguage" + '_' + "$OsCurrentBuildNumber" + '.' + "$Ubr" + '_' + "$BuildDate" + '.' + 'wim'
        }
    }

    # Microsoft Windows 11
    if ($MachineOS -like 'Microsoft Windows 11*') {
        # Variables: Set OS Prefix
        $Os = 'W11'

        # Variables: Set Display Version
        $OsDisplayVersion = $OsRegistryInfo.DisplayVersion

        # Variables: OS Edition
        if ($EditionId -eq 'Enterprise') {
            $Edition = 'ENT'
        }
        if ($EditionId -eq 'IoTEnterprise') {
            $Edition = 'IOT'
        }
        if ($EditionId -eq 'Professional') {
            $Edition = 'PRO'
        }

        # Windows 11: Create Wim File Name string
        $WimFileName = "$Os" + "$Architecture" + '_' + "$Edition" + '_' + "$OsDisplayVersion" + '_' + "$OsLanguage" + '_' + "$OsCurrentBuildNumber" + '.' + "$Ubr" + '_' + "$BuildDate" + '.' + 'wim'
    }

    # Microsoft Windows Server
    if ($MachineOS -like 'Microsoft Windows Server*') {
        # Variables: Set OS Prefix
        $ServerPrefix = 'WS'

        # Variables: Set Windows Server Version
        $WindowsServerVersion = $MachineOS.TrimStart("Microsoft Windows Server")

        # Windows Server: Create Wim File Name string
        $WimFileName = "$ServerPrefix" + "$WindowsServerVersion" + "$Architecture" + '_' + "$OsLanguage" + '_' + "$OsCurrentBuildNumber" + '.' + "$Ubr" + '_' + "$BuildDate" + '.' + 'wim'
    }
}
End {
    # If Debug is true then write the WIM file name to the host
    if ($Debug) {
        Write-Host "Caption is: `"$Caption`"" -BackgroundColor 'Green' -ForegroundColor 'Black'
        Write-Host "MachineOS is: `"$MachineOS`"" -BackgroundColor 'Green' -ForegroundColor 'Black'
        Write-Host "EditionId is: `"$EditionId`"" -BackgroundColor 'Green' -ForegroundColor 'Black'
        Write-Host "WIM File Name is: `"$WimFileName`"" -BackgroundColor 'Green' -ForegroundColor 'Black'
    }
    else {
        # Set the MDT Task Sequence Variable used to populate 'BackupFile'
        $tsenv:WimFileName = "$WimFileName"
    }
}
It is possible to test the output of the script by copying the script onto a device and running it with the ‘-debug’ switch. This will display the WIM Filename in the PowerShell console so you can check to see if it is correct:
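The original example isn’t reproduced here, but a debug run from the folder holding the script would simply be:

```powershell
# Run in debug mode - prints the generated WIM filename to the console
# instead of setting the MDT Task Sequence variable
.\Pwsh-SetWimFilename.ps1 -Debug
```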
I’ve been a huge fan of MDT over the years and still use it to create my Windows Reference images to this day as it’s so straightforward for me to make tweaks to a WIM file if I need to.
Historically I have always recommended and implemented MDT-Integrated Task Sequences in Configuration Manager to take advantage of all the additional capabilities that MDT provides.
Recently though I have started to move to using a standard MCM OSD Task Sequence as they are so much more simple and require less maintenance.
The most useful thing from MDT Integration that I use day-to-day for OSD is the ‘MDT Gather’ step to collect information about the device and deployment at various points in the Task Sequence. This allows various aspects of a deployment to be controlled dynamically based on numerous pre-defined variables such as the classic IsDesktop/IsLaptop scenarios etc.
The downside to this is that the steps require an MCM Package to be created and maintained, plus it adds unnecessary time to the deployment when downloading the Toolkit Package.
It is possible to retain this useful capability by replacing the MDT Toolkit/Gather steps with a PowerShell script which can be added directly into the ‘Run PowerShell Script’ Task Sequence step and that’s why I’m here writing this post 🙂
I found a script which was created by Johan Schrewelius (with contributions from various others) which did the majority of what I wanted. His script can be accessed on the Technet PowerShell Gallery (Link).
By reworking this script and adding functionality that I specifically needed, I now have a lightweight and solid ‘Gather’ solution which can be easily added to any MCM Task Sequence.
v1.0 of the script collects the following information. The example is one of my lab devices so you can see what the info looks like. I expect this list to expand over time as new requirements crop up:
I’ve been using it with customers for a while and am happy now that it’s robust/mature enough to be shared on GitHub for others to use as well if they want to:
It can be added into a Task Sequence as per the image below using the ‘Run PowerShell Script’ step with the Execution Policy set to ‘Bypass’. Each time it runs, it will add the collected variables into the running Task Sequence environment and can be used throughout the Task Sequence:
‘Run PowerShell Script’ Step with Pwsh-Gather.ps1 script added
Below is an example of how variables can be utilised – in this example the condition is to control some BitLocker tasks which I only wanted to run on physical devices which are also laptops:
Conditions using Gather Variables
It also creates a log file (Pwsh-Gather.log) in the standard Task Sequence logging directory defined by the built-in variable “_SMSTSLogPath”, which can be reviewed using cmtrace.exe.
For testing, the script can be run locally on a device by using the ‘-Debug’ parameter as per the example below from an ‘Administrator’ PowerShell prompt:
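The original example isn’t reproduced here, but a local debug run would look like this:

```powershell
# Run locally from an elevated prompt - outputs the gathered variables
# to the console instead of the Task Sequence environment
.\Pwsh-Gather.ps1 -Debug
```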
Feel free to start using the script and let me know if there are any improvements or additions that you’d like to see and I’ll try and accommodate them when time permits. Hopefully people find it useful!
A customer recently had a requirement for rebuilds to be done in remote sites via USB flash drives configured as MCM Bootable Media due to a lack of local MCM Distribution Points and PXE Boot capability.
Using devices in UEFI mode with BitLocker enabled makes this tricky when the Boot Image associated with the Task Sequence becomes out of sync with the Boot Image on the USB media. If the boot images don’t match, MCM attempts to pre-stage the current Boot Image onto the local disk and fails: the OSDisk is unavailable because it’s encrypted with BitLocker (the drive appears as “RAW” and cannot be accessed), and none of the other partitions are large enough or available.
I worked around this by creating a PowerShell PreStart script and adding it to the Boot Media ISO image. The script runs before the Task Sequence begins. It creates a Diskpart configuration text file on the fly in the ‘X:\Windows\Temp’ folder of the running WinPE, then runs Diskpart against that file to create suitably sized and lettered partitions that can boot via UEFI and that are also accessible for the Task Sequence to download and pre-stage the latest Boot Image if required (i.e. if it differs from the boot image on the USB).
Problem solved!
The command for the PreStart script that I used was:
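(The original screenshot of the command isn’t reproduced; going by the .EXAMPLE in the script header it would have been along these lines:)

```powershell
# Presumed PreStart command, based on the script's .EXAMPLE header
PowerShell.exe -ExecutionPolicy Bypass -File PreStart.ps1
```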
And the PowerShell code contained in PreStart.ps1 is shown below:
<#
.DESCRIPTION
Configures GPT disk layout using DiskPart.exe to avoid Boot Image mismatching when using MCM Bootable Media
.EXAMPLE
PowerShell.exe -ExecutionPolicy ByPass -File PreStart.ps1
.NOTES
Author: Jonathan Conway
Modified: 06/04/2019
Version: 1.0
#>
# Display warning and request confirmation from engineer
$Shell = New-Object -ComObject "WScript.Shell"
$Button = $Shell.Popup("Proceeding will wipe all local data from all local drives. Hold Power Button until device powers off to cancel. Click OK to proceed.", 0, "WARNING", 0)
# Set variables
$DiskPartFile = "X:\Windows\Temp\DiskpartConfig.txt"
# If the removable USB media has grabbed the C: drive letter, move it to U:
# so that C: is free for the OS partition created below
if (Get-Volume | Where-Object { $_.DriveLetter -eq 'C' -and $_.DriveType -eq 'Removable' }) {
    Get-Partition -DriveLetter 'C' | Set-Partition -NewDriveLetter 'U'
}
# Create contents of DiskPart configuration file
@"
SELECT DISK 0
CLEAN
CONVERT GPT
CREATE PARTITION EFI SIZE=200
ASSIGN LETTER=S
FORMAT QUICK FS=FAT32
CREATE PARTITION MSR SIZE=128
CREATE PARTITION PRIMARY
ASSIGN LETTER=C
FORMAT QUICK FS=NTFS
EXIT
"@ | Out-File -Encoding utf8 -FilePath $DiskPartFile
# Run DiskPart
Start-Process -FilePath "diskpart.exe" -ArgumentList "/s $DiskPartFile" -Wait
In my environment this formats the disks in a way which allows my Task Sequence to progress whatever state the UEFI partitions are in (i.e. BitLocker enabled or not).
A pop up warning is shown on screen stating:
“Proceeding will wipe all local data from all local drives. Hold Power Button until device powers off to cancel. Click OK to proceed”.
Clicking OK continues ahead and starts the Diskpart process before progressing to the Task Sequence selection screen 🙂
Standard Windows/DOS wildcards don’t work in WMI “LIKE” queries as they use WQL language instead:
Multiple Characters = "%" (Percentage)
Single Character = "_" (Underscore)
For reference, the corresponding Windows wildcards are:
Multiple Characters = "*" (Asterisk)
Single Character = "?" (Question Mark)
Note: when using wildcards in ConfigMgr Task Sequences pay attention to what is being done. If you’re querying a value from WMI then you should use “%” and “_” as wildcards:
SELECT * FROM Win32_ComputerSystem WHERE Name LIKE 'PC%'
SELECT * FROM Win32_ComputerSystem WHERE Name LIKE 'PC_____'
If you’re querying a Task Sequence variable then the Windows wildcards (“*” and “?”) should be used:
OSDComputerName LIKE "PC*"
OSDComputerName LIKE "PC?????"
I like to use PowerShell for all my scripting these days (all VB and batch files have now been rewritten in PoSh) and I also like to use RoboCopy for any file copies that I need to do such as in an OSD Task Sequence.
The pain in the arse with RoboCopy is the return/exit codes it uses which cause issues when used in PowerShell scripts.
The return codes used by RoboCopy are:
0 No files were copied. No failure was encountered. No files were mismatched. The files already exist in the destination directory; therefore, the copy operation was skipped.
1 All files were copied successfully.
2 There are some additional files in the destination directory that are not present in the source directory. No files were copied.
3 Some files were copied. Additional files were present. No failure was encountered.
5 Some files were copied. Some files were mismatched. No failure was encountered.
6 Additional files and mismatched files exist. No files were copied and no failures were encountered. This means that the files already exist in the destination directory.
7 Files were copied, a file mismatch was present, and additional files were present.
8 Several files did not copy.
Because PowerShell expects an exit code of ‘0’ for success, if RoboCopy completes with an exit code of ‘1’ (i.e. all files were copied successfully) the script still ends with a non-zero exit code.
In an OSD Task Sequence this is picked up as an error and will therefore cause the Task Sequence to fail. Bollocks.
This can easily be prevented using a wee bit of code at the end of the script used to run the RoboCopy.
In the example below I am copying a single ISO image using a PowerShell script in a Task Sequence (using a ‘Run PowerShell Script’ task). The resulting PowerShell exit code will equal ‘1’ as “all files will be copied successfully”.
<#
.SYNOPSIS
Copies VM Bootable ISO
.DESCRIPTION
Copies the VM Bootable ISO from the package folder to C:\Media
.EXAMPLE
PowerShell.exe -ExecutionPolicy ByPass -File <ScriptName>.ps1
.NOTES
Author: Jonathan Conway
Version: 1.0
Created: 29/11/2017
#>
# Pick the newest ISO in the package folder (if there is more than one, the most recent is chosen)
$ISO = Get-ChildItem -Path '.\*.iso' | Sort-Object -Property 'LastWriteTime' | Select-Object -Last 1 | Select-Object -ExpandProperty 'Name'
# Run ROBOCOPY to copy the Bootable ISO image to "C:\Media"
& ROBOCOPY ".\" "C:\Media" $ISO
# RoboCopy returns an exit code of 1 for a successful single-file copy (i.e. all
# files were copied), which a Task Sequence treats as an error - this "if"
# statement changes the exit code to 0
if ($LASTEXITCODE -eq 1) {
    exit 0
}
To prevent a Task Sequence failure I can intercept the ‘$LASTEXITCODE’ variable and exit the script with a ‘0’ using an ‘if’ statement.
This will then be picked up by the running Task Sequence and consumed as a ‘success’ which will subsequently allow the Task Sequence to progress without error.
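As a variation, all RoboCopy exit codes below 8 indicate some form of success (files copied, extra files, mismatches), while 8 and above indicate failures. A more general handler – a sketch, not the exact code used above – can translate the whole success range:

```powershell
# Run RoboCopy, then translate its exit code for the calling process:
# codes 0-7 are all success variants, 8 and above indicate failures
& ROBOCOPY "C:\Source" "C:\Destination" /E
if ($LASTEXITCODE -lt 8) {
    exit 0
}
else {
    exit $LASTEXITCODE
}
```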
Sometimes you will have an AD Service Account configured and you might not be sure what the password is – a good example of this that sometimes catches me out is the MCM Network Access Account.
To safely test the account username and password we can use PowerShell with the following simple and safe command:
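(The original command isn’t reproduced here; one simple way to validate a username and password against AD without an interactive logon is the .NET AccountManagement API. This is a sketch with placeholder credentials, not necessarily the exact command from the post:)

```powershell
# Validate AD credentials without logging on as the account.
# NOTE: a failed validation still counts as a bad password attempt,
# so be mindful of account lockout thresholds.
Add-Type -AssemblyName System.DirectoryServices.AccountManagement
$Context = New-Object System.DirectoryServices.AccountManagement.PrincipalContext(
    [System.DirectoryServices.AccountManagement.ContextType]::Domain)

# Returns True if the username/password combination is valid
$Context.ValidateCredentials('svc-NetworkAccess', 'P@ssw0rd!')
```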
Whilst deploying MBAM as part of a Windows 10 OSD Task Sequence in MCM CB, the “MbamClientDeployment.ps1” task was failing and I was getting the error message shown below in the client “smsts.log” file:
HRESULT: 0x803d0006
I logged into one of the failed clients, opened Internet Explorer and attempted to connect to the URL for the MBAM Core Service manually – this took 42 feckin seconds! Obviously this is far too slow for the connection via the PowerShell script to succeed, so the next question was why it was taking so long…
To do this I installed a tool called Fiddler (sounds dodgy but it’s a lightweight freeware tool for monitoring web connections – far simpler to implement and use when compared to WireShark or Microsoft Message Analyzer) on a client and once again accessed the URL via Internet Explorer to see what connection attempts were being made by the client when attempting to access the MBAM service.
Turns out that calls were being made to Windows Update URLs and various “crl.microsoft.com” URLs. Basically, the clients were trying to download the latest Root-level Certificate Revocation Lists/certificates from Microsoft’s servers over the internet. Because firewalls blocked their internet access, the clients eventually timed out trying to connect to Microsoft, which pushed the response time for the MBAM service connection over the allowed limit. The result was a timeout when MbamClientDeployment.ps1 ran.
If the clients had been able to access the internet (and connect to the URLs they were reaching out to) then there wouldn’t have been a problem and the script would have completed without issue.
To make absolutely sure I tested this by unchecking the Internet Explorer option “Internet Options | Advanced | Check for server certificate revocation” on the client – rebooted the client and retried: I was able to hit the MBAM web service immediately with zero delay. Ticked the box again, rebooted, retried and the response was back up to 42 seconds (i.e. buggered again).
I don’t think it’s possible (or desirable) to disable certificate revocation checking for all certificates so another solution had to be found to this problem.
In the end the solution was to disable the automatic updating of the Root CA certificates/CRLs using the “DisableRootAutoUpdate” registry value.
I accomplished this via a PowerShell script running as part of the Task Sequence, with the relevant steps in this order:
1. PowerShell script to set the “DisableRootAutoUpdate” registry key
2. Reboot
3. MbamClientDeployment.ps1
This disables the automatic update of the Root CAs, resulting in no delay in the MBAM service connection, and consequently the MBAM PowerShell script completes successfully.
The PowerShell script to make the registry changes contains the following lines of code:
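(The lines themselves aren’t reproduced here. A reconstruction based on the “DisableRootAutoUpdate” value named above and the documented AuthRoot policy location would look something like this – treat the exact code as an assumption:)

```powershell
# Disable automatic Root CA certificate/CRL updates from Windows Update.
# Key path per the documented AuthRoot policy settings; value name as
# referenced in the Task Sequence step above.
$Path = 'HKLM:\SOFTWARE\Policies\Microsoft\SystemCertificates\AuthRoot'
if (-not (Test-Path -Path $Path)) {
    New-Item -Path $Path -Force | Out-Null
}
Set-ItemProperty -Path $Path -Name 'DisableRootAutoUpdate' -Value 1 -Type DWord
```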
Once the “MbamClientDeployment.ps1” script has completed, Root certificate auto-update needs to be re-enabled (so SSL websites work as expected) by deleting the registry key created by the PowerShell above.
This can be done in your Task Sequence using a “Run Command Line” step called something like “Enable Certificate Checking” with the following command:
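(The exact command isn’t reproduced here. Assuming the “DisableRootAutoUpdate” value lives under the AuthRoot policy key – an assumption based on where that setting is documented – a reg.exe one-liner such as this would remove it:)

```cmd
REG DELETE "HKLM\SOFTWARE\Policies\Microsoft\SystemCertificates\AuthRoot" /v DisableRootAutoUpdate /f
```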
It can be useful to have a PowerShell script which runs as a Windows Scheduled task to perform otherwise manual tasks. Being a lazy bugger I like to automate as many boring, repetitive tasks as I can so PowerShell and Scheduled Tasks are my friends…
A good example of this would be if you needed to run a cleanup of WSUS to remove declined, superseded, expired updates etc.
The script I want to run looks like the following:
<#
.DESCRIPTION
Cleans up WSUS on local server
.EXAMPLE
PowerShell.exe -ExecutionPolicy ByPass -File WSUSCleanup.ps1
.NOTES
Author: Jonathan Conway
Created: 21/07/2016
Version: 1.1
#>
# Set WSUS port number (standard is 8530 on Windows Server 2012 R2 but can be customised)
$WSUSPortNumber = 8530
# Connect to the local WSUS server
$WsusServer = Get-WsusServer -Name $env:COMPUTERNAME -PortNumber $WSUSPortNumber
# Perform the required cleanup commands and log the results
$WsusServer | Invoke-WsusServerCleanup -CleanupObsoleteUpdates -CleanupUnneededContentFiles -CompressUpdates -DeclineExpiredUpdates -DeclineSupersededUpdates |
    Out-File -FilePath 'C:\Tools\Scripts\wsuscleanup.log'
To run this as a Scheduled Task in Windows it needs to run as SYSTEM (NT AUTHORITY\SYSTEM). Also change the “Configure for:” drop-down at the bottom to match the OS you’re using, for compatibility purposes.
Configure a Trigger – once a week should be more than enough for this particular task.
The action should be configured as “Start a Program”, as per the command line example below (the example assumes you have a script called WSUSCleanup.ps1 located in a folder called “C:\Tools\Scripts”):
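(With those assumptions, the “Start a Program” action would look something like this:)

```
Program/script: PowerShell.exe
Add arguments:  -ExecutionPolicy Bypass -File "C:\Tools\Scripts\WSUSCleanup.ps1"
```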
A customer recently had a requirement to deploy a PowerShell script to configure a setting for App-V 5.0.
Normally I’d do this with a Batch file called “Configure.cmd” containing the code displayed below. This works for the majority of tasks:
@echo off
PUSHD %~dp0
PowerShell.exe -ExecutionPolicy Bypass -File ".\PowerShellScriptFileName.ps1"
POPD
As usual, I tested the deployment before adding it into ConfigMgr by using psexec running under the System context (i.e. the same context that ConfigMgr deployments run under) with the command below. Once the command prompt is open you can run the required installer in the System context:
psexec /s cmd.exe
This completed successfully and made the configurations that I’d wanted. Bosh. Bloody awesome I thought…
On this occasion however I needed to reconfigure a 32-bit application (App-V 5.0) on a 64-bit operating system (Windows 7 x64).
This doesn’t play well when deployed via ConfigMgr, it seems, and ends up running under SysWOW64 redirection – the result in my case was that the registry changes made by the PowerShell cmdlet landed in the Wow6432Node area of the registry.
In order to get around this I discovered a new concept to me called “Sysnative” which is a virtual directory and special alias that can be used by applications/scripts to access the 64-bit “System32” folder – which is exactly what I wanted to happen in this instance for my script to produce the required result.
Therefore to use this you need to change the location that PowerShell is called from to %WinDir%\Sysnative (you can’t see it in Windows Explorer btw – but trust me, this little bugger does indeed exist):
@echo off
PUSHD %~dp0
%WinDir%\Sysnative\WindowsPowerShell\v1.0\PowerShell.exe -ExecutionPolicy Bypass -File ".\PowerShellScriptFileName.ps1"
POPD
Once changed to reference the correct PowerShell.exe my script punted out the desired results. Magic 🙂
Since Windows 8.1/Server 2012 R2 you can now test/ping TCP connections over any port using PowerShell and the Test-NetConnection cmdlet.
The syntax for the command is as below. This tests whether RDP (port 3389) is available/open on a server called ‘DC01’ – you can change the hostname to an IP address and you can use any port you like with the ‘-Port’ switch:
Test-NetConnection DC01 -Port 3389
This should present results like the following, with the key result being “TcpTestSucceeded: True” to signify that the port is indeed open:
If you run “Test-NetConnection” without specifying any parameters it will perform a test to determine if the device you are running the command on has access to the internet – this can be quite useful in a number of situations. When run on a client that does have Internet access you will see results similar to this:
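As an aside, Test-NetConnection also has a ‘-CommonTCPPort’ parameter that accepts named services (HTTP, RDP, SMB, WINRM), which saves remembering port numbers:

```powershell
# Equivalent RDP test using the named-port parameter instead of -Port 3389
Test-NetConnection -ComputerName DC01 -CommonTCPPort RDP
```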
2. paping.exe
paping.exe is an invaluable tool for testing network connectivity, especially in a firewalled environment where using telnet isn’t straightforward and standard ICMP ping is blocked.
The tool basically allows you to send a “ping” over any TCP port which means that even if a firewall blocks ICMP pings, the paping.exe packet will be allowed through on a port that is open through the firewall.
A good example of this might be testing connectivity to a ConfigMgr Distribution Point where SMB (TCP port 445) has been allowed through a firewall but standard ping is blocked: simply copy paping.exe onto the machine you wish to initiate the connectivity test from and run the following command from a command prompt:
paping.exe [target hostname or IP address] -p 445
This will send a constant ping on TCP port 445 to the hostname or IP address specified. Brilliant.