Delete Leaf Objects from Active Directory User Object

The Story

A few days ago a colleague of mine came to me with a user migration problem. He wanted to migrate a user between two child domains in an AD forest. Most of the time you would use Microsoft’s ADMT (Active Directory Migration Tool) for this. He went through the whole migration wizard and had the migration fail with an error message like this:

2014-03-17 12:47:27 ERR2:7422 Failed to move source object ‘CN=Joey’. hr=0x8007208c The operation cannot be performed because child objects exist. This operation can only be performed on a leaf object.

That was strange; I was expecting the user object to be a leaf object itself, not to contain leaf objects. Then I remembered it’s 2014: we also use Microsoft Exchange, with ActiveSync on mobile devices. In case you didn’t know, when you configure ActiveSync on your phone, a special object is created under your user object in Active Directory. This object is of type “msExchActiveSyncDevices” and lists each of the mobile devices where you have configured ActiveSync. I used adsiedit.msc to confirm that the leaf objects were indeed these msExchActiveSyncDevices objects.

So that explains what the leaf objects were and how they got there. Since the user is being migrated across domains, it really doesn’t matter whether those ActiveSync leaf objects are there or not (once migrated, users have to reconfigure their devices anyway), so they’re “safe to delete”. To fix this you can either use ADSIEDIT to locate the leaf objects and delete them, use the Exchange Shell to delete the devices, or use PowerShell to delete them from the user object just like you would with ADSIEDIT, which is what I want to share now.
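
For the Exchange Shell route, something along these lines should do it on Exchange 2010 SP1 or later; a sketch, not tested here:

Get-ActiveSyncDevice -Mailbox Joey | Remove-ActiveSyncDevice -Confirm:$false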

The Script

I built this script on a Windows 8.1 computer with PowerShell 4.0 and the RSAT tools for Windows 2012 R2 installed. The AD environment is at Windows 2008 R2 forest functional level, with only Windows 2008 R2 DCs running in it. I didn’t check whether this runs on less than this configuration, so do report back if it doesn’t work on older combinations of RSAT + OS + AD; I’m pretty sure you need at least one 2008 R2 DC in the user’s source domain, otherwise the PowerShell cmdlets won’t work. You can download this script from the link: Delete-AS-ChildItems.

The script takes only one parameter: the SamAccountName of the user with leaf objects. It will show you the leaf objects it wants to delete, then actually delete them, if not canceled.
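
Calling it is a one-liner (the parameter name here is my assumption; check the downloaded script):

.\Delete-AS-ChildItems.ps1 -SamAccountName jdoe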

Learning Points

I’ve made the script “autonomous”, in the sense that it will automatically discover the closest DC running AD Web Services and query it for the SamAccountName. This snippet accomplishes that:

$LocalSite = (Get-ADDomainController -Discover).Site
$NewTargetGC = Get-ADDomainController -Discover -Service ADWS -SiteName $LocalSite
IF (!$NewTargetGC)
    { $NewTargetGC = Get-ADDomainController -Discover -Service ADWS -NextClosestSite }
$NewTargetGCHostName = $NewTargetGC.HostName
$LocalGC = "$NewTargetGCHostName" + ":3268"

Once we have this information, we query the GC for the SamAccountName and domain information. We need the domain to also discover the closest DC for that domain and get the list of leaf objects (lines 14-26 of the script). You want to do this for two reasons: first, the GC partition doesn’t contain all the information you need (the child object information); second, you can’t write to the GC partition, so you have to find the closest DC in the user’s domain anyway.
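
Roughly, that part of the script boils down to something like this (a sketch with assumed variable names, not the verbatim lines 14-26):

#Find the user on the GC (port 3268), then derive their home domain from the DN
$UserObj = Get-ADUser -Identity $SamAccountName -Server $LocalGC
$UserDomain = ($UserObj.DistinguishedName -split ',DC=',2)[1] -replace ',DC=','.'
#Discover the closest DC running ADWS in the user's own domain - that is where we read and write
$UserobjDC = (Get-ADDomainController -Discover -DomainName $UserDomain -Service ADWS).HostName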

The “trick” with this script is to use Get-ADObject with a search base of the user’s DN, against a DC in the user’s domain, and look for the msExchActiveSyncDevices object class, like below:

$UserOjbActSync = Get-ADObject -SearchBase $UserObj.DistinguishedName -filter { ObjectClass -like 'msExchActiveSyncDevices'} -Server $UserobjDC -ErrorAction:SilentlyContinue

Now, to actually fix the problem, we run this command, which deletes all the child items and whatever may be inside them:

Remove-ADObject $UserOjbActSync -Server $UserobjDC -Recursive:$true -Confirm:$false -Verbose -Debug

That about wraps this up. Just wait for replication to occur in your domain and you should be good to finish the migration. As always, use with caution; this does delete things. If you found this useful, share it around 🙂

Quick Tip: Update Resource Records in Microsoft DNS using Powershell

One of the great things I like about the (not so) new Windows Server 2012 PowerShell modules is that we can now more easily manage the core Microsoft networking services (DNS, DHCP). I want to share a little script I built that will add/update host records fed from a CSV file.

The Script

In the past, automating this kind of thing was possible using a combination of WMI, VBS/PowerShell and/or batch scripting with the famous DNSCMD. My script will not work against just any DNS server: you need to be running Windows 2008 or later DNS, and running it against Windows 2003 DNS servers will yield strange/wrong results.

#sample csv file

#DNSName,IP,<other fields not used>
#foo.fabrikam.com,192.168.1.1,<other values, not used>

Param(
 [Parameter(Mandatory=$false)][System.String]$ResourceRecordFile = "C:\Temp\somefile.txt",
 [Parameter(Mandatory=$false)][System.String]$dnsserver = "DNS.contoso.com"
 )
import-module DNSServer

Write-Warning "This script updates DNS resource records in DNS based on information in a CSV file. Details are:`n
Using file $ResourceRecordFile as source file.`nMaking changes on DNS:$dnsserver`n
If you wish to cancel press Ctrl+C, otherwise press Enter`n"
Read-Host

$HostRecordList = Import-csv $ResourceRecordFile

foreach ($dnshost in $HostRecordList) {
 $RR = $dnshost.DNSName.split(".")[0]
 $Zone = $dnshost.DNSName.Remove(0,$RR.length+1)
 [System.Net.IPAddress]$NewIP = [System.Net.IPAddress]($dnshost.IP)
 $OldObj = Get-DnsServerResourceRecord -Name $RR -ZoneName $Zone -RRType "A" -ComputerName $dnsserver -ErrorAction SilentlyContinue
 If ($OldObj -eq $null) {
 write-host -ForegroundColor Yellow "Object does not exist in DNS, creating entry now"
 Add-DnsServerResourceRecord -Name $RR -ZoneName $Zone -A -CreatePtr:$true -ComputerName $dnsserver -IPv4Address $NewIP
 }
 Else {
 $NewObj = Get-DnsServerResourceRecord -Name $RR -ZoneName $Zone -RRType "A" -ComputerName $dnsserver
 $NewObj.RecordData.Ipv4Address = $NewIP
 If ($NewObj.RecordData.IPv4Address -ne $OldObj.RecordData.IPv4Address) {
 write-host -ForegroundColor Yellow "Record data differs, making change in DNS"
 Set-DnsServerResourceRecord -NewInputObject $NewObj -OldInputObject $OldObj -ZoneName $Zone -ComputerName $dnsserver
 }
 }
 $OldObj = $null
 $NewObj = $null
 }

Learning Points

Running this script requires the Windows Server 2012 RSAT installed (which carries the DnsServer module). As you can see, all the script needs is a CSV file with two columns, “DNSName” (the FQDN) and “IP”, plus the name of the DNS server you want to connect to and make the changes on.

Lines 17-18: This is where we extract the short DNS name and the DNS zone name from the FQDN. We also convert the IP address to the format required for entry into DNS:

$RR = $dnshost.DNSName.split(".")[0]
$Zone = $dnshost.DNSName.Remove(0,$RR.length+1)
[System.Net.IPAddress]$NewIP = [System.Net.IPAddress]($dnshost.IP)

Lines 19-21: Here we try to resolve the DNS record, since it may already exist. We will use this information in the next lines…

$OldObj = Get-DnsServerResourceRecord -Name $RR -ZoneName $Zone -RRType "A" -ComputerName $dnsserver -ErrorAction SilentlyContinue

Line 23: To create a new host record (an “A” type record), the command is pretty straightforward:

Add-DnsServerResourceRecord -Name $RR -ZoneName $Zone -A -CreatePtr:$true -ComputerName $dnsserver -IPv4Address $NewIP

Lines 27-31: To update an existing A record. Note that there is a difference in how Set-DnsServerResourceRecord works compared to the Add command: it requires that we get the record, modify the IPv4Address field, then use it to replace the old object. (Comparing the two fetched objects directly would always report a difference, so the script compares the record data instead.)

$NewObj = Get-DnsServerResourceRecord -Name $RR -ZoneName $Zone -RRType "A" -ComputerName $dnsserver
$NewObj.RecordData.Ipv4Address = $NewIP
If ($NewObj.RecordData.IPv4Address -ne $OldObj.RecordData.IPv4Address) {
write-host -ForegroundColor Yellow "Record data differs, making change in DNS"
Set-DnsServerResourceRecord -NewInputObject $NewObj -OldInputObject $OldObj -ZoneName $Zone -ComputerName $dnsserver
}

That’s about it. You can easily modify this script to pass the DNS server name from the CSV file (updating lots of records on multiple DNS servers) or to update multiple record types (A records, CNAME records). As always, C&C is welcome.
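
For example, with a hypothetical third CSV column named DNSServer, the per-record target would be a one-line change:

#DNSName,IP,DNSServer
#foo.fabrikam.com,192.168.1.1,dns1.fabrikam.com
foreach ($dnshost in $HostRecordList) {
 $dnsserver = $dnshost.DNSServer   #per-row target instead of the single $dnsserver parameter
 #...same lookup/add/update logic as above...
}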

Report DHCP Scope Settings using Powershell

It has been a busy time for me lately, but I’m back to write about a script that reports on some basic DHCP scope settings. In my situation I used this script to find out which DHCP scopes had specific DNS servers configured (DNS servers we planned to decommission), so it made sense to replace those IP addresses with valid ones.


I found myself lately working more and more with PowerShell V3, available in Windows Server 2012, and the new “goodies” it brings.

Among those goodies there’s a DhcpServer module, so we can finally breathe a sigh of relief: we can dump netsh and any VBS kludges used to manage DHCP!*

(* lovely as this module is, you cannot use it fully against Windows 2003 Server; some cmdlets will work, others not so much, so Windows 2008 or later it is)

For an overview of what cmdlets are available in this new module, take a look at the TechNet blogs. To get started, simply deploy a Windows 2012 machine, open PowerShell, and type:

import-module DhcpServer

While you are at it, update the help files for all your PowerShell modules with this command:

Update-Help -Module * -Force -Verbose

Mission Statement

I needed a report containing the following info: DHCP server name, scope name, subnet defined, start and end ranges, lease times, description, and the DNS servers configured, whether globally or explicitly defined. As you can imagine, collating all this information from netsh, VBS, or other parsing methods would be time consuming. I’m aware there are third-party DHCP modules out there for PowerShell, but personally I prefer to use a vendor-developed, supported method, even if it takes more effort to put together and understand (you never know when somebody’s PowerShell module starts going out of date, for whatever reason, and then all your scripting work against it is wasted).

The Script

Anyway, I threw this script together. It isn’t much in itself, apart from the error handling that goes on. As I mentioned before, the DhcpServer module doesn’t work 100% unless the target is running Windows 2008 or later.

import-module DHCPServer
#Get all Authorized DCs from AD configuration
$DHCPs = Get-DhcpServerInDC
$filename = "c:\temp\AD\DHCPScopes_DNS_$(get-date -Uformat "%Y%m%d-%H%M%S").csv"

$Report = @()
$k = $null
write-host -foregroundcolor Green "`n`n`n`n`n`n`n`n`n"
foreach ($dhcp in $DHCPs) {
	$k++
	Write-Progress -activity "Getting DHCP scopes:" -status "Percent Done: " `
	-PercentComplete (($k / $DHCPs.Count)  * 100) -CurrentOperation "Now processing $($dhcp.DNSName)"
    $scopes = $null
	$scopes = (Get-DhcpServerv4Scope -ComputerName $dhcp.DNSName -ErrorAction:SilentlyContinue)
    If ($scopes -ne $null) {
        #getting global DNS settings, in case scopes are configured to inherit these settings
        $GlobalDNSList = $null
        $GlobalDNSList = (Get-DhcpServerv4OptionValue -OptionId 6 -ComputerName $dhcp.DNSName -ErrorAction:SilentlyContinue).Value
		$scopes | % {
			$row = "" | select Hostname,ScopeID,SubnetMask,Name,State,StartRange,EndRange,LeaseDuration,Description,DNS1,DNS2,DNS3,GDNS1,GDNS2,GDNS3
			$row.Hostname = $dhcp.DNSName
			$row.ScopeID = $_.ScopeID
			$row.SubnetMask = $_.SubnetMask
			$row.Name = $_.Name
			$row.State = $_.State
			$row.StartRange = $_.StartRange
			$row.EndRange = $_.EndRange
			$row.LeaseDuration = $_.LeaseDuration
			$row.Description = $_.Description
            $ScopeDNSList = $null
            $ScopeDNSList = (Get-DhcpServerv4OptionValue -OptionId 6 -ScopeID $_.ScopeId -ComputerName $dhcp.DNSName -ErrorAction:SilentlyContinue).Value
            #write-host "Q: Use global scopes?: A: $(($ScopeDNSList -eq $null) -and ($GlobalDNSList -ne $null))"
            If (($ScopeDNSList -eq $null) -and ($GlobalDNSList -ne $null)) {
                $row.GDNS1 = $GlobalDNSList[0]
                $row.GDNS2 = $GlobalDNSList[1]
                $row.GDNS3 = $GlobalDNSList[2]
                $row.DNS1 = $GlobalDNSList[0]
                $row.DNS2 = $GlobalDNSList[1]
                $row.DNS3 = $GlobalDNSList[2]
                }
            Else {
                $row.DNS1 = $ScopeDNSList[0]
                $row.DNS2 = $ScopeDNSList[1]
                $row.DNS3 = $ScopeDNSList[2]
                }
			$Report += $row
			}
		}
	Else {
        write-host -foregroundcolor Yellow """$($dhcp.DNSName)"" is either running Windows 2003, or is somehow not responding to queries. Adding to report as blank"
		$row = "" | select Hostname,ScopeID,SubnetMask,Name,State,StartRange,EndRange,LeaseDuration,Description,DNS1,DNS2,DNS3,GDNS1,GDNS2,GDNS3
		$row.Hostname = $dhcp.DNSName
		$Report += $row
		}
	write-host -foregroundcolor Green "Done Processing ""$($dhcp.DNSName)"""
	}

$Report  | Export-csv -NoTypeInformation -UseCulture $filename

Learning Points

As far as learning points go, Get-DhcpServerInDC lets you grab all your authorized DHCP servers in one swift line, which saved me a few lines of coding against the PowerShell AD module.

Get-DhcpServerv4Scope will grab all IPv4 scopes on a server. Nothing fancy, except for the fact that it doesn’t really honor the “-ErrorAction:SilentlyContinue” switch and will light up your console with errors when you run the script.

Get-DhcpServerv4OptionValue can get scope options, either globally (do not specify a ScopeId) or on a per-scope basis (specify a ScopeId). This one does play nice and gives no output when you ask it to SilentlyContinue.

Some Error Messages

I’ve tested this script in my lab and used it in production; it works fine in my environment, but do your own testing.

Unfortunately, the output is not so nice and clean: you do get errors, but the script rolls over them. Below are a couple I’ve seen. The first one is like this:

Get-DhcpServerv4Scope : Failed to get version of the DHCP server dc1.contoso.com.
At C:\Scripts\Get-DHCP-Scopes-2012.ps1:14 char:13
+ $scopes = (Get-DhcpServerv4Scope -ComputerName $dhcp.DNSName -ErrorAction:Silen ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 + CategoryInfo : NotSpecified: (dc1.contoso.com:root/Microsoft/...cpServerv4Scope) [Get-DhcpServerv4Scope], CimException
 + FullyQualifiedErrorId : WIN32 1753,Get-DhcpServerv4Scope

This actually happens because Get-DhcpServerv4Scope has a subroutine to check the DHCP server version, which fails. As you can see, my code does have SilentlyContinue to omit the error, but it still shows up. I dug up the 1753 error code, and the message is “There are no more endpoints available from the endpoint mapper“… which is, I guess, PowerShell’s way of telling us Windows 2003 is not supported. This is what we get for playing with v1 of this module.

Another error I’ve seen is this:

Get-DhcpServerv4Scope : Failed to enumerate scopes on DHCP server dc1.contoso.com.
At C:\Scripts\Get-DHCP-Scopes-2012.ps1:14 char:13
+ $scopes = (Get-DhcpServerv4Scope -ComputerName $dhcp.DNSName -ErrorAction:Silen ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 + CategoryInfo : PermissionDenied: (dc1.contoso.com:root/Microsoft/...cpServerv4Scope) [Get-DhcpServerv4Scope], CimException
 + FullyQualifiedErrorId : WIN32 5,Get-DhcpServerv4Scope

It is just a plain old permission denied: you need to be an admin of the box you are running against, or at least a member of DHCP Administrators, I would think.

As far as setting the correct DNS servers on option 6 goes, you can use the same module to set them. I did it by hand, since there were just a handful of scopes.
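
If you have more than a handful, the fix can be scripted too; a minimal sketch (server name and replacement DNS IPs are assumptions):

#Sketch: replace option 6 (DNS servers) on every scope of one DHCP server
$NewDnsServers = '192.168.1.10','192.168.1.11'   #assumption: your new, valid DNS servers
Get-DhcpServerv4Scope -ComputerName 'dhcp1.contoso.com' |
 ForEach-Object { Set-DhcpServerv4OptionValue -ComputerName 'dhcp1.contoso.com' -ScopeId $_.ScopeId -DnsServer $NewDnsServers }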

Hope this helps someone out there with their DHCP Reporting.

Managing DNS Aging and Scavenging settings using Powershell

Aging and scavenging of DNS records is a topic that is pretty well covered on the web, and I’m not looking to rehash all the information out there with this post. I will, however, put out some resources for whoever wants to do the reading:

  • This post has a good “primer” for DNS aging and scavenging and the steps for implementing it.
  • This post gives a real life example of how unscavenged records impact authentication mechanisms in Windows
  • This post explains how the configuration of aging and scavenging can be done, either via GUI or batch command line.

I’ll paint the bigger picture for the environment I’m working on right now, perhaps a good example of how typical Windows Infrastructure services are setup in global corporations.

  • AD-integrated DNS zones that replicate to all DCs in the forest; zones allow secure updates only. This means that since we…
  • …run local DHCP services in all locations in the infrastructure, we need to standardise the DHCP scope lease time to a single value for the Windows client scopes before enabling DNS aging + scavenging on all our DNS zones (the other scopes we don’t care about: those clients can’t add/update records in DNS, since they’re not domain-joined and the zones only allow secure updates). Link #2 gives us the correlation between DHCP lease time and DNS aging/scavenging of records.
  • We also have the clients register their own DNS records, not the DHCP server itself (this hasn’t come up for change until now).

What I am going to script about is what Josh Jones from link #1 above referred to as “the setup phase”. In this phase we are merely configuring the DNS zones to age DNS records according to our requirements. The guys over at cb5 do a fine job of explaining the various scenarios for changing this via DNSCMD or via the wizard, and all the “bugs” of the GUI wizards.

That may be fine for just a few zones, but when you have tens of DNS zones (most of them reverse DNS) the clicky business starts to sound less fun. Also, working with DNSCMD might not be everyone’s cup of tea. Luckily I’m writing this in 2013, a few months after the release of Windows Server 2012 and the shiny new cmdlets it brings, and yes, there are DNS server ones.

So you will need a client running either Windows 8 with the Windows Server 2012 RSAT, or a Windows Server 2012 box (it doesn’t need to be a domain controller or DNS server; a member server is fine).

Get DNS Aging and Scavenging Settings

If (-not (Get-Module DNSServer -ErrorAction SilentlyContinue)) {
 Import-Module DNSServer
 }

#Report on Existing Server settings
$DnsServer = 'dc1.contoso.com'
$filename = "c:\temp\AD\$($DNSServer)_Before_AgScavConfig_$(get-date -Uformat "%Y%m%d-%H%M%S").csv"
$zones = Get-DnsServerZone -computername $DnsServer
$zones | %{ Get-DnsServerZoneAging -ComputerName $DnsServer -name $_.ZoneName} | Export-Csv -NoTypeInformation $filename

There’s nothing too fancy about this part. We get all the zones we need using Get-DnsServerZone, then pass each one to Get-DnsServerZoneAging. The output returns the following information:

ZoneName – Name of the DNS zone
ScavengeServers – Servers where this zone will be scavenged
AgingEnabled – Flag showing whether records are aged or not
AvailForScavengeTime – Time when the zone becomes eligible for scavenging of stale records
NoRefreshInterval – Interval during which the timestamp attribute cannot be refreshed on a DNS record
RefreshInterval – Interval during which the timestamp attribute can be refreshed on a DNS record

If no one has ever configured scavenging on the servers, the output should be pretty much blank.

Configure Aging of DNS records for all zones

This snippet accomplishes this:

If (-not (Get-Module DNSServer -ErrorAction SilentlyContinue)) {
	Import-Module DNSServer
	}

#Set New values
$DnsServer = 'dc1.contoso.com'
$DNSIP = [System.Net.DNS]::GetHostAddresses($dnsServer).IPAddressToString
$NoRefresh = "3.00:00:00"
$Refresh = "5.00:00:00"
$zones = Get-DnsServerZone -computername $DnsServer | ? {$_.ZoneType -like 'Primary' -and $_.ZoneName -notlike 'TrustAnchors' -and $_.IsDsIntegrated -eq $true}
$zones | % { Set-DnsServerZoneAging -computerName $dnsServer -Name $_.ZoneName -Aging $true -NoRefreshInterval $NoRefresh -RefreshInterval $Refresh -ScavengeServers $DNSIP -passThru}

Learning Points

The $zones variable now contains a filtered list of zones: the primary zones, minus “TrustAnchors”, minus the zones that are not AD-integrated (the auto-created 0.in-addr.arpa, 127.in-addr.arpa and 255.in-addr.arpa zones).

Why do we do this? Well, in our case we only run primary and stub zones, so that explains the “Primary” filter. The “TrustAnchors” zone we have no use for (more info on trust anchors here). Lastly, the filter removes the zones that are not AD-integrated; we will never see client updates in those zones, since they only cover network, loopback and broadcast addresses.

Note: If you fail to filter out the “0, 127 and 255” zones, your last command will spit out an error like the one below. I looked up the Win32 9611 error code in the Windows error code list and it means “Invalid Zone Type”. So filter it, OK?!

Set-DnsServerZoneAging : Failed to set property ScavengeServers for zone 255.in-addr.arpa on server dc1.contoso.com.
At line:1 char:14
+ $zones | % { Set-DnsServerZoneAging -computerName $dnsServer -Name $_.ZoneName - ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 + CategoryInfo : InvalidArgument: (ScavengeServers:root/Microsoft/...ServerZoneAging) [Set-DnsServerZoneAging], CimException
 + FullyQualifiedErrorId : WIN32 9611,Set-DnsServerZoneAging

You should also be careful: the cmdlet expects the refresh/no-refresh intervals in a specific format (a TimeSpan, like “3.00:00:00” for 3 days), and the ScavengeServers parameter needs to be an IP address, not a hostname.

The -PassThru switch displays some output on the console, as by default the cmdlet doesn’t generate output.

The last cmdlet (Set-DnsServerZoneAging) has rather little documentation floating around the web, and I actually found documentation for some non-existent parameters that got me all excited, something like a “SetAllZones”, but the actual parameter doesn’t exist as of this time (February 2013). So I had to use a foreach loop to configure each zone.

Wow. Initially I wanted this to be a short post, but apparently it added up to something not so short. I hope it is useful and helps with your DNS aging configuration. If there are simpler/better ways to accomplish this, I would like to hear about them; just leave a note in the comments.

How to remove a KMS Server from your infrastructure

These days I took a swing at some clean-up I had to do in our KMS servers list. In any large environment you are bound to find configurations you either did not put in place (there is usually more than one person managing it) or that were put in place for testing and forgotten. I’m mainly referring to KMS servers that may once have been used to activate Windows licenses, or that people attempted to set up that way (but failed for one or more reasons). You might have this problem too in your environment and not know about it. Usually any “rogue” or unauthorized KMS server also publishes its KMS service in DNS. This means that when a client tries to activate, it will pick one of the servers offering the _VLMCS service (license activation) in the _tcp node of the DNS suffixes it has configured, or of its own domain name. By default all KMS hosts publish their service record with equal priority and weight, so with few KMS hosts there’s a high chance a client will get sent to the wrong/rogue KMS. If the client picks the correct KMS host, all is well with the world; if not, it gets an error and you get an unneeded support call that users can’t activate their Windows.

To fix this you should first find the rogue KMS hosts. Since the information is published in your DNS, this nslookup query should reveal your servers:

nslookup -q=srv _vlmcs._tcp.contoso.com
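
If you are on Windows 8 / Server 2012, the DnsClient module’s Resolve-DnsName gives the same answer:

Resolve-DnsName -Name _vlmcs._tcp.contoso.com -Type SRV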

Run the query against each of your subdomains’ FQDNs to list all servers. A sample output would be this:

Server: dc1.contoso.com
Address: 192.100.5.10

_vlmcs._tcp.contoso.com SRV service location:
 priority = 0
 weight = 0
 port = 1688
 svr hostname = KMS01.contoso.com
_vlmcs._tcp.contoso.com SRV service location:
 priority = 0
 weight = 0
 port = 1688
 svr hostname = John-Desktop.contoso.com
KMS01.contoso.com internet address = 192.41.5.4
John-Desktop.contoso.com internet address = 192.20.50.20

As you can see, we have two KMS host entries: one seems valid, the other looks like someone attempted to activate their PC the wrong way and ended up publishing KMS service records in DNS. Here’s how to remove this, for good. Some of the steps are taken from the TechNet documentation, some from the social.technet site.

  • Log in/RDP/PsExec to the affected host (John-Desktop) and uninstall the KMS host product key. To do this, run this from an elevated command prompt:
cscript %windir%\system32\slmgr.vbs /upk
  • Install the default KMS client key, found here:
cscript %windir%\system32\slmgr.vbs /IPK [KMS client Setup Key]
  • Activate the computer as a client using the command below. In our case it would go to the KMS01.contoso.com host:
cscript %windir%\system32\slmgr.vbs /ato
  • Now you should stop this record from being published in DNS. You guessed it: just because you uninstalled the KMS host key and put in the client key doesn’t mean the machine stopped advertising KMS in DNS. If you are running Windows 2008 R2, slmgr.vbs has a switch which does this for you:
cscript %windir%\system32\slmgr.vbs /cdns

Important note: If you are running Windows 2008 rather than Windows 2008 R2, there is no /cdns switch. Also, you cannot run slmgr.vbs from a 2008 R2 box against the 2008 machine with that switch; it will say something like this:


Microsoft (R) Windows Script Host Version 5.8
Copyright (C) Microsoft Corporation. All rights reserved.

The remote machine does not support this version of SLMgr.vbs

The registry change below is also a good “failsafe” in case the /cdns switch didn’t work on Windows 2008 R2. Changing this registry key worked for me; other people suggested other fixes (here) along the same lines, but I didn’t test them. You need to run this command from an elevated command prompt:

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SL" /v DisableDnsPublishing /t REG_DWORD /d 1
  • Stop and Start the Software Licensing Service:
net stop SLSVC

net start SLSVC

Update: If running Windows 2008 R2, you should instead restart the Software Protection service:

net stop "Software Protection"

net start "Software Protection"
  • Remove the _vlmcs KMS service record for John-Desktop from the contoso.com _tcp node. You can do this via the dnsmgmt.msc console.
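
If you prefer the command line over the console, dnscmd should be able to delete that SRV record too; an untested sketch (the record data fields are priority, weight, port and host, matching the nslookup output above):

dnscmd dc1.contoso.com /RecordDelete contoso.com _vlmcs._tcp SRV 0 0 1688 John-Desktop.contoso.com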

That’s about it. Hope someone finds this one useful. Any comments are welcome.

Active Directory Domain Controller Backups – Part 3

Time for the last part of the Active Directory backup series. The last two posts (#1 and #2) were about defining what needs to be backed up and the scripts/commands used to do it. This time we will discuss some administrative matters, involving PowerShell and some Group Policy Preferences (GPP) settings. You have all the parts that make the thing “backup”; now how do you put them to work, regularly, automatically, with as little maintenance as possible from your side? This is how I chose to do it, from a high level:

  • I created a central location to store the scripts, then deployed a GPO with GPP settings that copies the scripts over from that central location.
  • For running the backup scripts I used scheduled tasks, created with a PowerShell script on all remote systems.

I admit it is not the most elegant, zero-touch approach, but I had to make a compromise. Some of you might think: “Hey, why didn’t you use GPP to deploy a scheduled task? That would have been easier; no reason for this hassle with a script that creates your scheduled tasks.”

The reason I chose to do it via a script is that the backup scripts need access to the network (they save data to a file share), so using the system account to run the batch jobs is out of the question, as it has no network access. I need a domain account (member of Backup Operators) with access to the share, which means I have to be careful when passing credentials around while configuring the scheduled tasks. As it turns out, when you pass credentials in a GPP item they are merely obscured, not encrypted (more info here and here), so there is no way I am giving Backup Operators credentials away over the SYSVOL share; it is not secure. So I could either create the backup tasks manually (2 * number of scripts * number of domains) or create a script to do it for me. They say “efficiency is the most intelligent form of laziness”, so I wrote a script.

With this out of the way, let’s handle each task, first off….

Distributing The Scripts

Create a folder in a central location where the scripts and files will be located. In case you don’t know what I’m talking about, it’s the scripts from this post.

Create a GPO object, link it to the Domain Controllers OU, and configure its GPP settings to copy the folder you set up in step 1 locally onto the DCs, like in the picture below (I’m showing just one file; make sure your GPP has all 3 files included). The GPO also changes the script execution policy so you can run scripts as batch jobs.
GPP_Settings

I’ve applied this GPO to the Domain Controllers OU but restricted it to only apply to a certain security group in AD (and yes, you guessed it, I put the DCs I want to back up in that group).

Creating the Scheduled Jobs

I found Ryan Dennis’s blog here, where he gives a sample of how to create a scheduled task; he took it from another smart man, over here. I took his sample script and tweaked it a little: I needed to make it a bit more generic and able to accept credentials as parameters. Then I created another script that calls Create-ScheduledTask.ps1 to connect to each DC and create the scheduled tasks. Needless to say, you need to be a Domain/Enterprise Admin to run these scripts.

$BackupTasks = Import-CSV -useculture "C:\Temp\AD\BCP\BackupSource.csv"
$Domains = $BackupTasks | group-object -property Domain
$DomainCreds = @()
foreach ($domain in $domains) {
 $Creds = Get-Credential -Credential "$($Domain.Name)\dom_backup"
 $row = "" | select Domain,UserName,Password
 $row.Domain = $domain.Name
 $row.UserName = $Creds.UserName
 $row.Password = $Creds.Password
 $DomainCreds += $row
 }

#Create_Tasks
Foreach ($BackupTask in $BackupTasks) {
 $curCred = $DomainCreds | ? { $_.domain -eq $BackupTask.Domain }
 $SchedTaskCreds = New-Object System.Management.Automation.PsCredential($curCred.UserName,$curCred.Password)
 #$SchedTaskCreds
 $ScriptFullPath = $BackupTask.ScriptFolder+ "\" + $BackupTask.ScriptName
 .\Create-ScheduledTask.ps1 -HostName $BackupTask.HostName -Description $BackupTask.TaskName -ScriptPath $ScriptFullPath -SchedTaskCreds $SchedTaskCreds
 }

As far as learning points go, I first determine which domains I need credentials for, then ask the user interactively to type the account and password. This saves a lot of password prompts when the scheduled tasks are created.

The 2 scripts I mentioned are included in this rar file: SchedTask_Creation.

This is mostly it. The scheduling script is an initial version; it would be more elegant if it just pulled host names from the AD group, then built the script files and names for each host from the .csv file.

A further refinement and securing of this process would be to sign the scripts using a windows CA and only allow execution of signed scripts on the DCs.
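
For reference, signing is a one-liner once a code-signing certificate from your CA sits in the personal store (a sketch; the certificate location and script name are assumptions). Combined with an AllSigned execution policy on the DCs, only signed scripts will run:

$cert = @(Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert)[0]
Set-AuthenticodeSignature -FilePath .\Backup-GPOs.ps1 -Certificate $cert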

Active Directory Domain Controller Backups – Part 2

Time for part 2 of the “how to back up DCs” story. I’ll try to keep it concise and just deliver the needed info.

In my previous post we established I was going for a backup to disk (a network share). I was also going to back up the system state of 2 DCs per domain, the list of GPOs and their links, and the list of object DNs.

The process explained

I want to set up the backup in such a way that it is automated, so I don’t have to worry about checking that all the bits and pieces are in place, and I also want to be able to update parts of the process without rebuilding everything. The process can therefore be split into these parts:

1. Preparing accounts and permissions

2. Creating and delivering worker scripts (the scripts that actually do the job)

3. Setting up backup schedules (scheduled tasks run scripts from point 2, using credentials and resources setup at point 1)

Accounts and Permissions

You will need some accounts and groups set up so that you can safely transfer the backups from the DCs to the backup share. The steps are outlined below:

  1. Create a universal security group in one of your domains (the forest root domain, preferably); let’s call it “Global AD Backup Operators”. We will use this group below.
  2. Create a network share of your choice for a backup location, where only “Domain Controllers” and “Global AD Backup Operators” have read/write access (Security and Sharing tabs). Refer to my previous post for why this is important. You cannot use the domain’s “BUILTIN\Backup Operators”, since that group is specific to DCs only.
  3. In the network share create a few subfolders, named DistinguishedNameBackup, GroupPolicyObjectBackup and WindowsImageBackup.
  4. Create an account in each domain that will run the backups. Make this account a member of “BUILTIN\Backup Operators” and of the “Domain\Global AD Backup Operators” group you created in step 1. The Backup Operators group is per domain, as you might know.
  5. Create a new GPO and link it to the “Domain Controllers” OU in each of your domains, or change your existing Default Domain Controllers Policy. In the policy you should include “BUILTIN\Backup Operators” in the list of accounts for “Allow logon as a batch job”.

Creating The Backup Scripts

Backing up the DC System State

If you don’t have the Windows Backup feature installed, the snippet below will do that for you:

Import-Module ServerManager

if (!(Get-windowsFeature -Name Backup).Installed) { add-windowsfeature Backup}

Now for the backup itself, you just run wbadmin wrapped in some PowerShell code, like below:

$TargetUNC="\\server.fqdn.com\ForestBKP$"

$WBadmin_cmd = "wbadmin.exe START BACKUP -backupTarget:$TargetUNC -allCritical -include:c: -noVerify -vssFull -quiet"
Invoke-expression $WBadmin_cmd

I used -allCritical instead of -systemState to include all that is necessary for a bare metal recovery; other than that, nothing major to write home about. More info here.

Backing up Group Policy Objects and Links

The next step is to configure the backup of the GPO objects and the GP-links. GP-links must be backed up separately, since they are not stored in the GPO object but in the AD database, on each object where the GPO is linked. This gets even more convoluted when you link GPOs in a different domain than the one they were created in. There are multiple ways to back up the GPOs:

-Using the GPMC sample scripts; there is a script for backing up GPOs

-Using the PowerShell grouppolicy module, running on Windows 2008 R2 – I chose this one.

I also wanted to handle GPO backup history at the script level, so the script Backup-GPOs.ps1 attached to this post contains “delete older than X days” logic to handle accumulating backups. The command to back up a GPO looks like this:

backup-GPO -all -domain $domain -path $path

The options are pretty self-explanatory, I suppose. The command to get all gpLink objects looks like this:

$gpl += get-adobject -filter '(objectCategory -eq "organizationalunit" -or distinguishedname -eq $domdn) -and gplink -like "[ldap://cn=*"' -searchbase $domdn -searchscope subtree -properties gplink,distinguishedname,gpoptions -server $env:ComputerName

Now, I’ve read some people’s experiences online, and it seems that using the wildcard character with the backup GPO command has inconsistent results: past a certain number of GPOs backed up, the cmdlet stops working properly. The solution is to grab all GPOs and back them up in a for-each loop. This ties in pretty well with the fact that we need to map the GPO name to the gpLink information, so the core piece of the GPO backup script looks like this (most of the code is reused from here: http://gallery.technet.microsoft.com/scriptcenter/Backup-Group-Policy-with-a24f1a1b):

import-module grouppolicy
import-module activedirectory

#build a list of GPOs in current domains
$domobj = get-addomain
$dom = $domobj.dnsroot
$domdn = $domobj.distinguishedname
$gpocol += get-gpo -all -domain $dom

$gpl = $null

#build a list of gplink objects across the enterprise
$domains = get-adforest | select -ExpandProperty domains
$domains | % {
$domobj = get-addomain $_
$domdn = $domobj.distinguishedname
$gpl += get-adobject -filter '(objectCategory -eq "organizationalunit" -or distinguishedname -eq $domdn) -and gplink -like "[ldap://cn=*"' -searchbase $domdn -searchscope subtree -properties gplink,distinguishedname,gpoptions -server $domobj.PDCEmulator
}

#backup GPOs, map, GPOs to Target DNs
$section = "backup"
foreach ($gpo in $gpocol) {

$name = $gpo.displayname
new-item $curpath\$name -type directory -erroraction:silentlycontinue | out-null
$id = $gpo.id
$configdn = (get-adrootdse).configurationNamingContext
backup-gpo -guid $id -domain $dom -path $curpath\$name | tee-object -variable msg
write-log
get-gporeport -guid $id -domain $dom -reporttype html -path $curpath\$name\$name.html
$gpl | % {if ($_.gplink -match $id) {$_.distinguishedname + "#" +  $_.gpoptions + "#" + $_.gplink} } | out-file -filepath $curpath\$name\gplinks-$id.txt -append

}

Just a little note here: the script is designed to get the gpLinks outside of the current domain of the account the script runs under. What differs from Frank Czepat’s script is that I added a lookup to the $gpl variable and pointed the get-adobject command at a specific DC (leaving it to go for the default would result in errors).

Backing up DistinguishedNames List

This is fairly easy and straightforward. While I could do this using PowerShell, I decided to go for the old and trusted dsquery, as it is faster than PowerShell code. Here we also have to deal with accumulating backups, as I built the script to output a timestamped file. The command that actually does the backup is this one:

$DomainDNsFile = "DomainDNs_$(get-date -Uformat "%Y%m%d-%H%M%S").txt"
$FilePath = "$curpath\$DomainDNsFile"
$DomainDNList_cmd = "dsquery * domainroot -scope subtree -attr modifytimestamp distinguishedname -limit 0 > $FilePath"
Invoke-expression $DomainDNList_cmd
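
The accumulating-backups housekeeping mentioned for both scripts is nothing fancy; a minimal sketch of the “delete older than X days” logic (folder layout and retention value assumed):

#Sketch: prune timestamped backup files older than $Retention days from the backup folder
$Retention = 30
Get-ChildItem $curpath -Recurse |
 Where-Object { -not $_.PSIsContainer -and $_.LastWriteTime -lt (Get-Date).AddDays(-$Retention) } |
 Remove-Item -Force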

I built these backup scripts as 3 individual files; you can download them from here: Backup-Scripts.

That’s about it for how the backup is done. I guess more than half of these items are trivial to set up; the only tricky part is grabbing the gpLinks and creating a mapping between the GPOs and the DNs in the gpLinks.

Next post in the series will discuss how to deliver and schedule these scripts on your domain controllers.

Discover Missing Subnets in Active Directory

A few days ago I stumbled upon the “regular” Event ID 5807: “During the last xx hours there have been <<lots and lots>> of logons … from computers that are not defined in Active Directory sites”. This is not such a big deal; not that it’s something you should ignore, but usually there are other things to worry about than some IPs connecting to your DCs without being included in an AD site. Most of the time there are “operational” reasons behind it (someone set up a new location in your company and didn’t think to include you in the email chains, so you could adjust your configuration). But this time I wanted to nip this in the bud, since I’m pretty sure no one else had bothered with it until now. Again, I didn’t reinvent the wheel, but I did manage to improve on some of the resources I found and come up with a more scalable and convenient solution, so you could say “I made the wheel get more traction than before” :).

The Problem

The Event Viewer event I’m talking about is described here. A short snippet:

“During the past %1 hours there have been %2 connections to this Domain Controller from client machines whose IP addresses don’t map to any of the existing sites in the enterprise. Those clients, therefore, have undefined sites and may connect to any Domain Controller including those that are in far distant locations from the clients”

The solution to this is to map the client IPs to subnets in AD. To do that, you need to build a report of all unmapped client IPs, on all domain controllers, from all domains in the forest. This information, as the event says, is stored mainly in the file “%systemroot%\debug\netlogon.log“. The content of this file looks like this:

05/03 08:37:06 Contoso: NO_CLIENT_SITE: Marry-PC 172.17.17.3
12/15 13:18:23 Contoso: NO_CLIENT_SITE: Bob-PC 172.172.2.240

What others have tried

From the way the file looks, you can see this is something we can convert to CSV using PowerShell and then process in Excel or a database. This is exactly what this person did. His script works in the current domain and starts hitting a wall when handling too many DCs or big files, because the variables that store the data keep getting bigger and bigger. There is a workaround in the comments section for this, but I wanted the whole data set to look around in at my leisure, and I didn’t think I had time to wait for the script to finish.

Also, the environment I’m working on is spread across 6 continents, increasing the chances my script would take forever, and I really do want to get home in time for dinner. Just for the record, at the time of writing there were 52 domain controllers and the total combined size of the log files was over 180 MB. Assuming each row takes about 70 bytes, that means over 2.5 million entries. I’m really hoping that after I fix this, these numbers will go down significantly.

I also wanted to add some more information to the report, separating the IP address (A.B.C.D) into octet strings, so I could more easily report on the data in Excel. Granted, there are ways to split the IP in Excel, but hey, if Excel can do it, so can my PowerShell script.

My Solution

I took a different approach than Jean Louw on his blog. Mine was this:

  1. Get all Global Catalogs in the forest, using the one-liner in my quick info article.
  2. Copy all netlogon files to a local network share. From here I could unleash PowerShell onto the “unsuspecting log files”.
  3. Go through the files and import each one into a variable as CSV. From each variable take the unique values and add them to a reporting variable.
  4. Add some regex code to find the 1st, 1st-2nd and 1st-3rd octets in the IP string and add them to the report.
  5. Finally, make sure the final report only has unique IPs in it, by filtering the reporting variable and then exporting it to CSV.

Then I put this all together in a script and added some basic error checking; the result you can download here: Report_DebugNetlogon.
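
For illustration, the copy-and-parse part of steps 2 and 3 boils down to something like this (a sketch; the destination folder and column headers are my own naming, not the actual script’s):

#Sketch: pull one DC's netlogon.log over the admin share, then parse it as space-delimited CSV
Copy-Item "\\$GC\admin`$\debug\netlogon.log" "C:\Temp\Netlogon\$GC-netlogon.log" -ErrorAction SilentlyContinue
$entries = Import-Csv "C:\Temp\Netlogon\$GC-netlogon.log" -Delimiter ' ' -Header Date,Time,Domain,Type,Computer,IP
$Report += $entries | Select-Object Computer,IP -Unique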

Learning Points

Using regular expressions to find the network address for /8, /16 and /24 IPs is done with the code below. I used the code for detecting an IPv4 address from here; it can be found in other places on the web, but I stuck with this one:

#Extract Entire IP v4 address (A.B.C.D)
Function ExtractValidIPAddress($String){
$IPregex='(?<Address>((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))'
If ($String -Match $IPregex) {$Matches.Address}
}

For detecting the first / first 2 / first 3 octets of an IP address, you just adjust the {3} quantifier in the $IPregex variable: {2} for the first 3 octets, {1} for the first 2 octets, and {0} for the first octet, or just shorten the regex as below:

#Extract 1st Octet of IP v4 address (A)
Function Extract1IPOctet($String){
$IPregex='(?<Address>((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))'
If ($String -Match $IPregex) {$Matches.Address}
}

#Extract 1st and 2nd Octet of IP v4 address (A.B)
Function Extract2IPOctet($String){
$IPregex='(?<Address>((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){1}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))'
If ($String -Match $IPregex) {$Matches.Address}
}

#Extract 1st, 2nd and 3rd Octet of IP v4 address (A.B.C)
Function Extract3IPOctet($String){
$IPregex='(?<Address>((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){2}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))'
If ($String -Match $IPregex) {$Matches.Address}
}

This approach needs some careful consideration: I first used the ExtractValidIPAddress function, and applied the ExtractxIPOctet functions to that value, since throwing them at the initial string would give incorrect results (more than one first-octet match, for example).
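
To make the call order concrete, here is what the functions return on one of the sample log lines from earlier:

$IP = ExtractValidIPAddress '05/03 08:37:06 Contoso: NO_CLIENT_SITE: Marry-PC 172.17.17.3'   #returns 172.17.17.3
Extract1IPOctet $IP   #returns 172
Extract2IPOctet $IP   #returns 172.17
Extract3IPOctet $IP   #returns 172.17.17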

As far as speed is concerned, processing those 2.5 million entries took 32 minutes: 14 minutes spent copying all files to a central location, and 18 minutes going through all files finding unique IPs.

A final word of advice: for networks defined as A.0.C.D or A.B.0.D, there is a “bug” when opening the CSV straight in Excel: it considers the column to be a number, and the .0 is omitted. To get around this, import the data using the Excel data import feature and specify the octet fields as text fields.

Now your next step should be building a pivot table with all the network prefixes and sending it over to your friendly network admin, to work together on finding out what each subnet does, and then adding them to the Active Directory subnet list.

Active Directory Domain Controller Backups – Part 1

I decided to write down, for posterity and my own forgetfulness, the workflow I developed for backing up domain controllers running Windows 2008 R2. I didn’t really reinvent the wheel; I merely adapted and put together some disparate pieces of code I found on the Internet.

Backup Overview

I guess this is the time we should ask ourselves the 5 Ws:

  • Who is being backed up?
  • What to backup?
  • Why do we need the backups?
  • When will backups run?
  • Where will backups be stored?

Who?

Our backup sources must be at least 2 domain controllers per domain. Why 2? Well, because 2 is better than 1: in the remote case that one of the backups is not working properly, having a second backup at least diminishes the chance of being in trouble. I would also advise that your backup targets be, if possible, among the FSMO role holders in the domain, so that in case of a forest/domain recovery you avoid having to seize FSMO roles. Suppose you are managing a single-forest, multiple-child-domain infrastructure. The forest root domain DCs that make the best candidates for backup are:

  • PDC Emulator
  • RID Master
  • Schema Master

I’m not saying the Domain Naming Master and Infrastructure Master are not important, only that in a forest/domain recovery scenario you must make sure you have the roles above working properly, and not have to go through the trouble of seizing them from dead DCs. The Domain Naming Master is useful for setting up new domains (not something you do in a recovery), and Infrastructure Masters are not used that much if all your DCs are Global Catalogs, a common situation nowadays. On the domain level I would pick the servers that are PDC Emulators and RID Masters, for mostly the same reasons as above.

What?

  • System State (at least), Critical Volume Recommended
  • List of objects (distinguishedNames) – useful for restoring data
  • GPOs (contents)
  • GPO-Links

Why?

Well, the “System State” is an obvious choice, since that is the entire operating system and, in the case of DCs, the Active Directory files (database, logs, etc.). Keep in mind, though, that your restore options for a supported setup are limited, as explained here.

The “list of objects” (object name and DN) is needed because in some restore cases (e.g. restoring an object after accidental deletion) you must provide a DN for the restored object, and that DN is not included in the deleted object’s information in AD.

The “GPO objects” are backed up for convenience’s sake (they are included in the system state backup). In case we need to restore a GPO from backup, it is easier to have it backed up separately than to restore a system state backup, mount it, search for the files, and so on. The GPO links are something special that is not backed up with the usual GPO backup tools. Also, special care must be taken so that gpLinks pointing to GPOs outside of the domain where the GPO exists are also backed up.

When?

In my case I did a backup to a network share which was included in the regular backup policy of the company. So the answer would be: at a time of low activity on the server, before the regular backup runs over your backup location.

Where?

The backup destination in my case was a network share in the same subnet and datacenter as my domain controllers. This obviously lowers the backup duration and any load on the WAN links. I am fortunate enough to have most of the DCs for my entire forest, with at least one member per domain, in a single datacenter, where I set up a network share to store the backups.

A word about security here. Take as much care in securing this network share and the OS that hosts it as you would for a very sensitive system. Remember, this is where all of your domain controller backups are stored; anyone gaining access to this share would be able to mount your AD files, view the contents, and steal password hashes from the backups, so the attack possibilities are quite numerous. If you are setting up this share on a Windows machine, CHANGE the default security settings to only allow the Domain Administrators and Backup Operators access to it.

Tools and Requirements

To setup the workflow that I will describe you need to have these resources/permissions:

1. Domain Admin over all domains/child domains

2. You should be running Windows 2008 R2 with Powershell installed on your DCs (including AD-Modules)

I will discuss specific requirements for the backup itself later on, when the time comes.

Resources

Some links and resources helped me build this workflow are below:

A good deep-dive video explaining a lot of the concepts around domain controller backups and restore techniques, presented at TechEd US this year. I picked up from here what needs to be backed up, and some commands for doing the backup.

This TechNet gallery script for backing up GPO objects; the stuff there is pretty accurate.

This is it for an introductory post giving an overview of how we will get the job done. Part 2 will cover the scripts/commands used to achieve this backup.

Get Basic Information on Active Directory Domain Controllers

Lately I found myself doing a lot of work around AD, since I’m responsible for migrating the forest to the 2008 R2 functional level. As you may already know, in order to raise the forest functional level you have to raise the functional level of all child domains, and to be able to do this, each DC in the child domains must run Windows 2008 R2 or later. To get started you need a list of all systems in the AD infrastructure, and a list of those that need their OS replaced. If your infrastructure is like mine, you have lots of DCs, most of them set up before your time with the company, and lots of them lacking documentation. Many of them probably also run on antiquated hardware, some of which will not support Windows 2008 R2. The most stringent requirement, in my book, for installing Windows 2008 R2 is that the CPU must support 64-bit, since Windows 2008 R2 only comes in a 64-bit flavor.

When I first started inventorying our DCs, I made a list of the basic things that interested me for transitioning to 2008 R2 FL (Functional Level):

  • HostName, Operating System and Domain
  • Site and IPAddress
  • FSMO roles Installed
  • Hardware and Model
  • CPU x64 Readiness and Memory size

The first 3 above are low-hanging fruit; you can extract them using a modified one-liner from my “Quick Tip #1” article.

Hardware, model and memory size are also not so difficult; you can use WMI against each server for this.
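
A minimal sketch of that WMI call (the class and properties are standard; the $row column names mirror my report and are assumptions):

$cs = Get-WmiObject -Class Win32_ComputerSystem -ComputerName $dc
$row.Hardware = $cs.Manufacturer
$row.Model = $cs.Model
$row.MemoryGB = [math]::Round($cs.TotalPhysicalMemory / 1GB, 1)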

The most challenging part is finding out if the CPU supports 64-bit instructions. The first place you will probably think to look is the environment variables within Windows (type “echo %processor_architecture%” in a cmd prompt to see the output; anything not x86 is 64-bit). You are out of luck, because what that variable actually stores is the capability of the operating system, and unless you are running a 64-bit OS on 64-bit hardware (in which case you don’t need this script in the first place) that is of no use. Then I thought: “Hey, there must be a way to find this out via PowerShell/WMI” … and indeed you can find out some information about the CPU data width (wmic cpu get datawidth), but the data is inaccurate; it also refers to the operating system. You can cross-check your results with a tool from the overclocking world (CPU-Z): it will show that the CPU can do 64-bit instructions while WMI says it can’t (because the OS is 32-bit).

Finally my quest brought me to a tool written by this gentleman, called chkcpu32. It was created a long time ago, but I see it is actively maintained; the last update was September 2012. This tool actually queries the CPU for this information rather than WMI. The latest version added XML output support, a real treat for us PowerShell scripters: now we don’t have to do text parsing. Here’s a sample non-XML output from one of my systems:

C:\>chkcpu32 /v

CPU Identification utility v2.10                 (c) 1997-2012 Jan Steunebrink
──────────────────────────────────────────────────────────────────────────────
CPU Vendor and Model: Intel Core i7 Quad M i7-2600QM/2700QM series D2-step
Internal CPU speed  : 2195.0 MHz
System CPU count    : 1 Physical CPU(s), 4 Core(s) per CPU, 8 Thread(s)
CPU-ID Vendor string: GenuineIntel
CPU-ID Name string  : Intel(R) Core(TM) i7-2720QM CPU @ 2.20GHz
CPU-ID Signature    : 0206A7
CPU Features        : Floating-Point Unit on chip  : Yes
Time Stamp Counter           : Yes
Enhanced SpeedStep Technology: Yes
Hyper-Threading Technology   : Yes
Execute Disable protection   : Yes
64-bit support               : Yes
Virtualization Technology    : Yes
Instr set extensions: MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2
Size of L1 cache    : 4 x 64 KB
Integrated L2 cache : 4 x 256 KB
Integrated L3 cache : 6144 KB

I bundled all these information-gathering bits and pieces into this script, and below you can find short learning points from some of the key parts.

Learning Points

First of all, the script assumes that you are running under Enterprise Admin credentials and that all your DCs are GCs; if you don’t have this setup, you will have to come up with another way to list all your domain controllers.

I find that nowadays it is more of a headache not to have all DCs as GCs than to just make sure they all are. By default, dcpromo in Windows 2008 R2 will make a DC a GC and DNS server.

My previous post, on how to get all domain controllers, lists the one-liner to get basic information about DCs (hostname, domain name, site name, IP, FSMO roles). The only real challenge here is how to handle the formatting of the “Roles” property; I used a switch statement to loop through all of the FSMO roles a DC might have.

foreach ($role in $($dc.roles)) {
Switch ($role) {
"PdcRole" { $row.PdcRole = "TRUE"}
"RidRole"  {$row.RidRole = "TRUE"}
"InfrastructureRole"  {$row.InfrastructureRole = "TRUE"}
"NamingRole"  {$row.NamingRole = "TRUE"}
"SchemaRole"  {$row.SchemaRole = "TRUE"}
}
}

Getting the CPU 64-bit support status is done with chkcpu32 via these two lines. You will also need psexec from the Sysinternals toolkit (at least v1.98 of psexec), and you should run it at least once beforehand to get rid of the EULA accept pop-up.

Set-alias pshell-psexec "c:\windows\system32\psexec.exe"
& pshell-psexec \\$dc -n 25 -c -f -h CHKCPU32.exe /v /X > "$($dc)_x64status.log"

If (([xml](Get-Content "$($dc)_x64status.log")).chkcpu32._64bit -eq '1') {
$row.CPUx64Ready = 'True'
}
Else {
$row.CPUx64Ready = 'False'
} 

The rest of the code in the script is just putting all of this together in a nicely formatted CSV file.

This is kind of all of it; nothing too difficult once you find the right tools and use them properly. Any comments or feedback are most welcome!