Delete Leaf Objects from Active Directory User Object

The Story

A few days ago a colleague of mine came to me with a user migration problem. He wanted to migrate a user between two child domains in an AD forest. For this you would normally use Microsoft’s ADMT (Active Directory Migration Tool). He went through the whole migration wizard, and the migration failed with an error message like this:

2014-03-17 12:47:27 ERR2:7422 Failed to move source object ‘CN=Joey’. hr=0x8007208c The operation cannot be performed because child objects exist. This operation can only be performed on a leaf object.

That was strange; I was expecting the user object to be a leaf object itself, not to contain leaf objects! Then I remembered it is 2014: we also use Microsoft Exchange, with ActiveSync on mobile devices. In case you didn’t know, when you configure ActiveSync on your phone, a special object is created under your user object in Active Directory. This object is of type “msExchActiveSyncDevices” and lists each of the mobile phones where you have configured ActiveSync. I used adsiedit.msc to confirm that the leaf objects were indeed these msExchActiveSyncDevices objects.

So that explains what the leaf objects were and how they got there. Since the user is being migrated across domains, it really doesn’t matter whether those leaf ActiveSync objects are there or not; after the migration users have to reconfigure their devices anyway, so they are “safe to delete”. To fix this you can either use ADSIEDIT to locate the leaf objects and delete them, use the Exchange Shell to delete the ActiveSync devices, or use Powershell to delete them from the user object just like you would with ADSIEDIT, which is what I want to share now.

The Script

I built this script on a Windows 8.1 computer with Powershell 4.0 and the RSAT tools for Windows 2012 R2 installed. The AD environment is at Windows 2008 R2 forest functional level, with only Windows 2008 R2 DCs. I didn’t check whether this runs on anything less than this configuration, so do report back if it does not work on older combinations of RSAT + OS + AD, though I’m pretty sure you need at least one 2008 R2 DC in the user’s source domain, otherwise the Powershell cmdlets won’t work. You can download the script from this link: Delete-AS-ChildItems.

The script takes only one parameter, the SamAccountName of the user with leaf objects. It will then show you the leaf objects it intends to delete, and delete them unless you cancel.

Learning Points

I’ve made the script “autonomous”, in the sense that it will automatically discover the closest DC running AD Web Services and query it for the SamAccountName. This snippet accomplishes that:

$LocalSite = (Get-ADDomainController -Discover).Site
$NewTargetGC = Get-ADDomainController -Discover -Service ADWS -SiteName $LocalSite
IF (!$NewTargetGC)
    { $NewTargetGC = Get-ADDomainController -Discover -Service ADWS -NextClosestSite }
$NewTargetGCHostName = $NewTargetGC.HostName
$LocalGC = "$NewTargetGCHostName" + ":3268"

Once we have this information, we query the GC for the SamAccountName and domain information. We need the domain to also discover the closest DC in that domain, and to get the list of leaf objects (lines 14-26 of the script). You want a DC for two reasons: first, the GC partition doesn’t contain all the information you need (the child object information), and second, you can’t write to the GC partition, so you have to find the closest DC in the user’s domain anyway.
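The domain-discovery step between those two queries boils down to string work on the user’s DN. Here is a minimal sketch, using a made-up DN (the script reads the real one from the object returned by the GC query):

```powershell
# Hedged sketch: derive the user's domain from a distinguished name.
# $DN is a hypothetical example; in the script it comes from the GC lookup.
$DN = 'CN=Joey,OU=Staff,DC=child,DC=contoso,DC=com'

# Keep only the DC= components, which form the domain portion of the DN
$DomainDN = ($DN -split ',' | Where-Object { $_ -like 'DC=*' }) -join ','

# Turn 'DC=child,DC=contoso,DC=com' into the dotted FQDN 'child.contoso.com'
$DomainFQDN = ($DomainDN -replace 'DC=', '') -replace ',', '.'
$DomainFQDN
```

With the FQDN in hand, `Get-ADDomainController -Discover -DomainName $DomainFQDN` finds a DC in the user’s own domain. (Note this naive split breaks on DNs containing escaped commas; it is an illustration, not production parsing.)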

The “trick” with this script is to use Get-ADObject with a search base of the User’s DN on a DC in the User’s Domain and look for the respective msExchActiveSyncDevices object type like below:

$UserOjbActSync = Get-ADObject -SearchBase $UserObj.DistinguishedName -filter { ObjectClass -like 'msExchActiveSyncDevices'} -Server $UserobjDC -ErrorAction:SilentlyContinue

Now, to actually fix the problem, we run this command, which deletes all the child items and whatever may be inside them.

Remove-ADObject $UserOjbActSync -Server $UserobjDC -Recursive:$true -Confirm:$false -Verbose -Debug

That about wraps this up. Just wait for replication to occur in your domain and you should be good to finish the migration. As always, use with caution: this does delete things. If you found this useful, share it around 🙂

Quick Tip: Update Resource Records in Microsoft DNS using Powershell

One of the great things I like about the (not so) new Windows 2008 R2 Powershell modules is that we can now more easily manage the core Microsoft networking services (DNS, DHCP). I want to share a little script I built that will add/update host records fed from a CSV file.

The Script

In the past, automating this kind of thing was possible using a combination of WMI, VBS/Powershell and/or batch scripting with the famous DNSCMD. My script will not work against just any DNS server: you need Windows 2008 or later DNS; running it against Windows 2003 DNS servers will yield strange/wrong results.

#sample csv file

#DNSName,IP,<other fields not used>
#foo.fabrikam.com,192.168.1.1,<other values, not used>

Param(
 [Parameter(Mandatory=$false)][System.String]$ResourceRecordFile = "C:\Temp\somefile.txt",
 [Parameter(Mandatory=$false)][System.String]$dnsserver = "dns.contoso.com"
 )
import-module DNSServer

Write-Warning "This script updates DNS resource records in DNS based on information in a CSV file. Details are:`n
Using file $ResourceRecordFile as source file.`nMaking changes on DNS:$dnsserver`n
If you wish to cancel Press Ctrl+C,otherwise press Enter`n"
Read-Host

$HostRecordList = Import-csv $ResourceRecordFile

foreach ($dnshost in $HostRecordList) {
 $RR = $dnshost.DNSName.split(".")[0]
 $Zone = $dnshost.DNSName.Remove(0,$RR.length+1)
 [System.Net.IPAddress]$NewIP = [System.Net.IPAddress]($dnshost.IP)
 $OldObj = Get-DnsServerResourceRecord -Name $RR -ZoneName $Zone -RRType "A" -ComputerName $dnsserver -ErrorAction SilentlyContinue
 If ($OldObj -eq $null) {
 write-host -ForegroundColor Yellow "Object does not exist in DNS, creating entry now"
 Add-DnsServerResourceRecord -Name $RR -ZoneName $Zone -A -CreatePtr:$true -ComputerName $dnsserver -IPv4Address $NewIP
 }
 Else {
 $NewObj = Get-DnsServerResourceRecord -Name $RR -ZoneName $Zone -RRType "A" -ComputerName $dnsserver
 $NewObj.RecordData.Ipv4Address = $NewIP
 If ($NewObj -ne $OldObj) {
 write-host -ForegroundColor Yellow "Object to write different, making change in DNS"
 Set-DnsServerResourceRecord -NewInputObject $NewObj -OldInputObject $OldObj -ZoneName $Zone -ComputerName $dnsserver
 }
 }
 $OldObj = $null
 $NewObj = $null
 }

Learning Points

Running this script requires the Windows 2008 R2 RSAT tools installed. All the script needs is a CSV file with two columns, “DNSName” (containing the FQDN) and “IP”, plus the name of the DNS server you want to connect to and make the changes on.

Lines 17-18: This is where we extract the short DNS name and the DNS zone name from the FQDN. We also convert the IP address to the format required for entry into DNS:

$RR = $dnshost.DNSName.split(".")[0]
$Zone = $dnshost.DNSName.Remove(0,$RR.length+1)
[System.Net.IPAddress]$NewIP = [System.Net.IPAddress]($dnshost.IP)
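To make the splitting logic concrete, here it is run against the sample record from the CSV header; no DNS server is needed for this part:

```powershell
# Worked example of the name splitting above, using the sample row from the CSV header
$DNSName = "foo.fabrikam.com"

$RR   = $DNSName.Split(".")[0]               # short record name: "foo"
$Zone = $DNSName.Remove(0, $RR.Length + 1)   # zone name: "fabrikam.com"
[System.Net.IPAddress]$NewIP = [System.Net.IPAddress]"192.168.1.1"

"$RR / $Zone / $NewIP"
```

Note this simple split assumes single-label hostnames; a record like “web.eu.fabrikam.com” would end up with the zone “eu.fabrikam.com”, which may or may not be what your DNS layout expects.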

Lines 19-21: Here we try to retrieve the DNS record, in case it already exists. We will use this information in the next lines:

$OldObj = Get-DnsServerResourceRecord -Name $RR -ZoneName $Zone -RRType "A" -ComputerName $dnsserver -ErrorAction SilentlyContinue

Line 23: creating a new host record (an “A” type record). The command is pretty straightforward:

Add-DnsServerResourceRecord -Name $RR -ZoneName $Zone -A -CreatePtr:$true -ComputerName $dnsserver -IPv4Address $NewIP

Lines 27-31: updating an existing A record. Note that there is a difference in how Set-DnsServerResourceRecord works compared to the Add command: it requires that we get the record, modify the IPv4Address field, then use it to replace the old object.

$NewObj = Get-DnsServerResourceRecord -Name $RR -ZoneName $Zone -RRType "A" -ComputerName $dnsserver
$NewObj.RecordData.Ipv4Address = $NewIP
If ($NewObj -ne $OldObj) {
write-host -ForegroundColor Yellow "Object to write different, making change in DNS"
Set-DnsServerResourceRecord -NewInputObject $NewObj -OldInputObject $OldObj -ZoneName $Zone -ComputerName $dnsserver
}

That’s about it. You can easily modify this script so that you pass the DNS server name from the CSV file (updating lots of records on multiple DNS servers) or update multiple record types (A records, CNAME records). As always, C&C is welcome.

Create your own Wifi-hotspot Windows 7 / Windows 8

The topic of turning your Windows box into a wireless AP and then sharing its internet connection with the wireless devices connected to it is not new, but I’ve never seen anyone wrap Powershell around it. This script is designed to work on Windows 7; it will also work under Windows 8, where Powershell 3.0 would make some parts easier to script. There are 3 parts to creating your personal Wifi hotspot:

First, allow Windows to control the power state of your wireless card. You can either do this from the GUI or, if you’re a geek, you might be looking to do this via Powershell, which is what I’ve done.

Second, enable the HostedNetwork feature available in Windows 7. Again, the technical bits of how this works and what a hosted network can do are available from Microsoft, here. The good part about the hosted network is that it comes with its own DHCP server, so internet access will pretty much work out of the box.

Finally, enable Internet Connection Sharing (ICS). As much as I would like to automate this in Powershell, it just isn’t possible in Windows 7 (I’ll dig inside Windows 8 to see if it can be done there). To enable ICS, follow these instructions from Microsoft.

The Script

To wrap steps 1 and 2 up, I’ve written a Powershell script that will enable what is needed automatically (you still have to enable ICS by hand, but that’s easy). Click the link to download Enable-Wifi-HotSpot. You must run this script from an elevated Powershell console; it won’t fully work unless you do.

Read on to get some learning points on how I did this.

CAUTION: If you just run the script out of the box, please read the instructions it prints (it runs some commands that temporarily disable your Wifi, so if you are on a wifi-only connection you will get disconnected).

Learning Points

The first step of the script is the Enable_TurnOff function.

I wanted to tick the “Allow the computer to turn off this device to save power” check box programmatically. This tick box corresponds to the following registry key:

HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002bE10318}\<ID>\PnpCapabilities

<ID> is a device ID given by Windows to the network adapter when it is installed. When the tick box is checked, the last hex digit of the PnpCapabilities DWORD value is 0; when it is unchecked, it becomes 8.

[Image: PNP-capabilities]

Also, this value is not picked up automatically by the OS; after changing it you have to reboot (so the internet says)… but I just disabled and re-enabled the WLAN adapter and it worked for me. I figured that since changing the setting from the Windows GUI works with no reboot, there had to be a way around rebooting.

So our first order of business is to find the <ID> parameter that maps each Wifi adapter to its registry key. I had the script find all possible Wifi adapters on the system:

$WifiAdapters = Get-WmiObject -Namespace root\wmi -Class MSNdis_PhysicalMediumType -Filter `
 "(NdisPhysicalMediumType=1 OR NdisPhysicalMediumType=8 OR NdisPhysicalMediumType=9) AND NOT InstanceName LIKE '%virtual%'"

The integer values for NdisPhysicalMediumType are included at the top of the script for reference. They can be obtained from 2 places:

  • Windows SDK or WDK (more info here).
  • You can cheat a little and run this command on a Windows 8 / Windows 2012 machine:
(Get-NetAdapter | Get-Member PhysicalMediaType).Definition

Once we have the list of all Wifi adapters, we take each adapter and check that its configuration is OK and that it is not disabled. I’m using a filter on the ConfigManagerErrorCode property. The possible values for this property can be found here.

$PhysicalAdapter = Get-WmiObject -Class Win32_NetworkAdapter -Filter "Name='$($WifiAdapter.InstanceName)'" -Property * |`
 ? {$_.ConfigManagerErrorCode -eq 0 -and $_.ConfigManagerErrorCode -ne 22}

The ID parameter we are looking for is stored in “$PhysicalAdapter.DeviceID”, but unfortunately not in the format we need (in my case DeviceID = 15, and I needed to transform it into 0015). I did it with this line:

$AdapterDeviceNumber = $("{0:D4}" -f [int]$($PhysicalAdapter.DeviceID))

From here on, things get a little simpler. Once I get the registry key, I just check whether the last hex digit is 0 or 8.

$PnPCapabilitiesValue = (Get-ItemProperty -Path $KeyPath).PnPCapabilities
#convert the decimal value to hex to compare the last digit
$PNPCapHEX = [convert]::ToString($PnPCapabilitiesValue,16)
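A quick worked example of that conversion, using a made-up registry value of 24 (0x18), the value commonly seen when the power-saving box is unticked:

```powershell
# Worked example of the hex comparison, with a hypothetical registry value.
# 24 decimal is 0x18: the last hex digit is 8, i.e. the box is unticked.
$PnPCapabilitiesValue = 24

$PNPCapHEX = [convert]::ToString($PnPCapabilitiesValue, 16)      # "18"
$LastDigit = $PNPCapHEX.Substring($PNPCapHEX.Length - 1)         # "8"
$LastDigit
```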

I check the last digit of PNPCapHEX and decide whether to just do a “disable/enable” of the wifi adapter, or to change the value first and then “disable/enable”. Disabling the NIC is easy once you have the network adapter object.

$PhysicalAdapter.Disable() | out-null
 $PhysicalAdapter.Enable() | out-null

Note that the commands above return no output. If you take out Out-Null, you should see “return value = 0”. If you get “return value = 5”, that is an access denied error, and it means you didn’t run the script from an elevated prompt.

Now that the registry settings are done, all that is left is to build the netsh command to enable the hosted network. What is of interest in this section is how we read in the wifi password (I wanted the script to be a little secure) and how we pass it to netsh (which involves converting from a secure string to plaintext). For this last conversion I used the function described here.
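The linked conversion function is not reproduced in the post; a common implementation (my hedged reconstruction, using the same name the script calls) looks like this:

```powershell
# One common implementation of a SecureString-to-plaintext helper.
# The function name matches the call in the script; the body is a reconstruction.
function ConvertFrom-SecureToPlain {
    param([System.Security.SecureString]$SecurePassword)
    # Copy the SecureString into an unmanaged BSTR
    $bstr = [Runtime.InteropServices.Marshal]::SecureStringToBSTR($SecurePassword)
    try {
        # Read the BSTR back as a managed string
        [Runtime.InteropServices.Marshal]::PtrToStringBSTR($bstr)
    }
    finally {
        # Always zero and free the unmanaged copy of the password
        [Runtime.InteropServices.Marshal]::ZeroFreeBSTR($bstr)
    }
}
```

The try/finally matters: the BSTR holds the password in clear text in unmanaged memory, so it gets zeroed and freed even if the conversion throws.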

#now that WiFi adapter is configured, let's add our hotspot
$WifiPassSec = Read-host -Prompt "Enter password for your Wifi, must be at least 8 chars long, complex" -AsSecureString
#enable hosted network
$WifiPass = ConvertFrom-SecureToPlain $WifiPassSec
$SetupHN = "netsh wlan set hostednetwork mode=allow ssid=Rivnet-Wifi key=`"$WifiPass`" keyUsage=persistent`nnetsh wlan start hostednetwork"
Invoke-expression $SetupHN
$SetupHN = $null
$WifiPass = $null

So now you should be all set; just connect your devices to the Wifi and enjoy internet access via your laptop. Finally, you might want to turn the hosted network off at some point. To do this, run these commands:

netsh wlan stop hostednetwork

netsh wlan set hostednetwork mode=disallow

Hopefully this will help someone out there looking for a scripted way to do this. For me it was quite a learning journey, since I got to dig inside Windows internals while scripting this.

Know thy Hypervisor’s limits: VMware HA, DRS and DPM

Last week I was setting up a vSphere cluster and, like any good admin, I was test-driving all its features, making sure everything was working fine. As a side note, I’m trying to squeeze as much value as possible out of the vSphere licenses we currently have, so I’ve set this cluster up with lots of the bells and whistles ESXi has, like:

  • Distributed virtual Switches
  • SDRS Datastore Clusters
  • NetIOC and SIOC
  • VMware HA, DRS and Distributed Power Management

(In v5.1 a lot of them have gotten better: more mature, less buggy.)

So here I was: I had this 5-host cluster (ESXi1 to ESXi5) set up nicely, and I was testing VMware HA, but in a slightly different scenario, one where DPM said “hey, power down these 3 hosts, you don’t need them, you have enough capacity”. Fine… “Apply DRS Recommendation” I did, and the hosts went to standby.

So there I had my 201 dual-core test VMs running on just 2 hosts (mind you, this cluster is just for testing, so the VMs were mostly idle). Time to do some damage.

Let me tell you what happens if one of the remaining blades goes down, say ESXi1:

  1. First HA kicks in and notices that ESXi1 is not responding to heartbeats via the IP or storage networks, which means the host is down, not isolated.
  2. HA figures out ESXi1 had protected VMs running on it and starts powering them on. DRS will also eventually figure out you need more capacity and start powering on some of your standby hosts.
  3. HA started powering the VMs on just fine, but by the time it finished, DRS still had not brought some standby hosts back online… so I ended up with 1 VM that HA did not manage to power on. Bummer!

I investigated this issue. First stop: the event viewer on the surviving host, which had this to say:

“vSphere HA unsuccessfully failed over TestVM007. vSphere HA will retry if the maximum number of attempts has not been exceeded.

Reason: Failed to power on VM. warning

6/27/2013 1:39:00 PM
TestVM007”

Related events also showed this information:

“Failed to power on VM.
Could not power on VM : Number of running VCPUs limit exceeded.
Max VCPUs limit reached: 400 (7781 worlds)
SharedArea: Unable to find ‘vmkCrossProfShared’ in SHARED_PER_VCPU_VMX area.
SharedArea: Unable to find ‘nmiShared’ in SHARED_PER_VCPU_VMX area.
SharedArea: Unable to find ‘testSharedAreaPtr’ in SHARED_PER_VM_VMX area.”

I then went straight to the ESXi 5.1 configuration maximums… surely an ESXi host can take more than 400 vCPUs, right? And there it was, on page 2:

“Virtual CPUs per host 2048”

Ok… I’m at 400, nowhere near that limit. Then I found this KB article from VMware… no help, since it seems to apply to ESXi 4.x, not 5.1. Also, you can’t find the advanced configuration item they mention in the paper. I looked for it using the Web Client, the vSphere Client and PowerCLI. It’s not there. However, you do see the value listed if you run this from an SSH session on the target host… but I suspect it is not configurable:

# esxcli system settings kernel list | grep maxVCPUsPerCore

maxVCPUsPerCore uint32 Max number of VCPUs should run on a single core. 0 == determine at runtime 0 0 0

I went back to the maximums document and read, also on page 2:

“Virtual CPUs per core 25”

OK… so the message said the 400 vCPU limit was reached:

400/25 = 16 – the exact number of cores I have on my ESXi boxes.
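In other words, the ceiling HA ran into is just the product of the two documented maximums:

```powershell
# The per-host running-vCPU ceiling implied by the "25 virtual CPUs per core" maximum
$CoresPerHost = 16   # cores in my ESXi boxes
$vCPUsPerCore = 25   # ESXi 5.1 configuration maximum quoted above

$MaxRunningVCPUs = $CoresPerHost * $vCPUsPerCore   # 400, matching the error message
$MaxRunningVCPUs
```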

Eureka-Moment

So kids, I managed to reach one of ESXi’s limits with my configuration. Which makes you wonder a little about running high-density ESXi hosts… and VMware’s claim that a host can run 2048 vCPUs: sure it can, if you can fit enough physical cores in one box 🙂

My hosts had 16 pCPUs and 192GB RAM, and half the RAM slots were still empty, so I could in theory double the RAM and stuff each ESXi server with 200 VMs, each with 2 vCPUs and 1-2GB RAM… and then I would not be able to fail over in case of a host failure and other scenarios.

Where else does vCPU to pCPU limit manifest itself?

I also tried to see exactly what other scenarios might cause vCenter and ESXi to act up and misbehave. Here’s what I’ve got so far:

Scenario A: a 5-host DRS cluster, with VMware HA enabled, percentage-based failover, and admission control enabled; 201 VMs running, 2 vCPUs each:

  • Put each host in maintenance mode, one by one, until DRS stops migrating VMs for you and “enter maintenance mode” gets stuck at 2%. Note that no VMs are migrated from the evacuated host, but this is due to admission control, not to hitting the vCPU ceiling.
  • Once you reach that point, DRS will issue a fault saying it has no more resources and will not initiate a single vMotion. The error I got was:

“Insufficient resources to satisfy configured failover level for vSphere HA.

Migrate TestVM007 from ESXi1.contoso.com to any host”

Scenario B: a 5-host DRS cluster, with VMware HA enabled, percentage-based failover, and admission control disabled; 401 VMs running, 2 vCPUs each:

  • Put each host in maintenance mode, one by one, until DRS stops migrating VMs for you and “enter maintenance mode” gets stuck at 2%. Note that VMs are migrated from the evacuated host, but the vMotions stop only once you hit the 400 vCPU limit.
  • The error you get is:

“The VM failed to resume on the destination during early power on.

Failed to power on VM.
Could not power on VM : Number of running VCPUs limit exceeded.
Max VCPUs limit reached: 400 (7781 worlds)
SharedArea: Unable to find ‘vmkCrossProfShared’ in SHARED_PER_VCPU_VMX area.
SharedArea: Unable to find ‘nmiShared’ in SHARED_PER_VCPU_VMX area.
SharedArea: Unable to find ‘testSharedAreaPtr’ in SHARED_PER_VM_VMX area.”

So, a similar message to the one you get when HA can’t power on a VM.

To wrap this up, I think there are some corner cases where people might start to see this behaviour (I’m thinking of VDI environments mostly), and it would be very wise to take a serious look at the vCPU : pCPU ratio in failover scenarios to avoid hitting vSphere ESXi’s maximum values.

vSphere Web Client 5.1 on Windows Server 2012 not starting up – Adobe Flash Player unavailable

I’ve been doing some work the past week setting up a new vSphere-based datacenter for a customer, and I’ve found that working with the latest and greatest versions of Windows and vCenter doesn’t always go without a hitch, as it should. For example, if you try to run the vSphere Client on Windows Server 2012, you will have to jump through quite a number of hoops to get it to run. But wait: the vSphere Client is deprecated as of version 5.0, and although I still think it is better than the Web Client in this day, June 2013, the way forward is the Web Client, and we will not be able to access 5.x features from the vSphere Client from now on. So we had better get with the program and work on using the vSphere Web Client only.

The first step is to connect directly to the web page of vCenter, say https://vcenter.contoso.com. If you are on a default Windows Server 2012 install, with IE Enhanced Security Configuration enabled, you will get a broken web page like the one below:

[Image: vCenter-Broken-Page]

There’s an easy fix to this:

  • First, disable IE Enhanced Security Configuration from the Server Manager GUI. Do not disable IE ESC on a server running production workloads unless you understand and accept the risks involved.
  • Then simply go to Internet Options > Security > Local Intranet > Sites and add your vCenter URL, click OK to close all windows, and then reload your page.

After that the web page opens correctly, and you click the “Log in to vSphere Web Client” link… and surprise, you get this message:

[Image: vSphere-webClient-broken]

Isn’t this great… you now need Adobe Flash Player installed on the system. I should have known better: VMware’s Web Client is built using Adobe Flex technology, so you do need Flash Player to run it.

So, happily, you click the “Get Adobe Flash Player” link and are redirected to Adobe’s page, which says: “Hey, you are running Windows 8, you should have Flash Player installed.” Well, actually I’m running Windows Server 2012, so things might be a little more locked down than that. I followed the troubleshooting FAQ on Adobe’s website only to find out I didn’t have the plugin installed; as I said, this is Windows Server 2012, and not all the bells are turned on by default.

I did a little digging and, as it turns out, on Windows 2012 there is a Windows feature called “Desktop Experience”. Among the things this feature brings, this webpage lists “Adobe Flash Player” in the miscellaneous section. That webpage also guides you through the GUI-based installation of Desktop Experience; it’s basically click, next, next until you get the damn thing installed.

If you want to do it via Powershell, here’s the one-liner that will help you out. Either way you choose to install it, you will need a reboot once it is completed:

import-module ServerManager
Add-WindowsFeature -name Desktop-Experience,qWave `
-IncludeAllSubFeature -IncludeManagementTools

The qWave feature is Quality Windows Audio-Video Experience, just in case you were wondering what that is doing there… someone on MSDN suggested adding it alongside Desktop Experience, so I did; you can leave it out.

After rebooting the box, go to the Web Client page and, lo and behold, the webpage will load correctly. If this is your first time loading the Web Client, after successful authentication you will get one error and a message from Adobe Flash.

The error is something like “The Web Client has encountered an error #Some Number“. On top of it comes Adobe Flash, with a message asking for permission for the webpage to store more than 1MB of Flash data on your hard drive. Click Accept, then on the error message choose to reload the client, and all will be well with the VMware world again; you can manage your infrastructure without further annoyances.

Hope this helps others out there. Let me know whether this worked for you or not.

Report DHCP Scope Settings using Powershell

It has been a busy time for me lately, but I’m back to write about a script to report on some basic DHCP scope settings. In my situation I used this script to find out which DHCP scopes had specific DNS servers configured (DNS servers we planned to decommission), so it made sense to replace those IP addresses with valid ones.

[Image: keep-calm-and-import-module-dhcpserver]

I’ve lately found myself working more and more with Powershell v3, available in Windows Server 2012, and the new “goodies” it brings.

Among those goodies there’s a DhcpServer module, so we can finally breathe a sigh of relief: we can dump netsh and any VBS kludges used to manage DHCP!*

(* lovely as this module is, you cannot use it fully against Windows 2003 Server; some cmdlets will work, others not so much, so Windows 2008 or later it is)

For an overview of the cmdlets available in this new module, take a look at the Technet blogs. To get started, simply deploy a Windows 2012 machine, open Powershell and type:

import-module DhcpServer

While you are at it, update the help files for all your Powershell modules with this command:

Update-Help –Module * –Force –Verbose

Mission Statement

I needed a report containing the following info: DHCP server name, scope name, subnet defined, start and end ranges, lease times, description, and the DNS servers configured, whether globally or explicitly defined per scope. As you can imagine, collating all this information from netsh, VBS or other parsing methods would be quite time consuming. I’m also aware there are third-party DHCP modules out there for Powershell, but personally I prefer to use a vendor-developed, supported method, even if it takes more effort to put together and understand (you never know when someone’s Powershell module starts going out of date, for whatever reason, and all your scripting work with it becomes redundant).

The Script

Anyway, I threw this script together. It isn’t much in itself, apart from the error handling that goes on. As I mentioned before, the DhcpServer module doesn’t work 100% unless the target is running Windows 2008 or later.

import-module DHCPServer
#Get all Authorized DCs from AD configuration
$DHCPs = Get-DhcpServerInDC
$filename = "c:\temp\AD\DHCPScopes_DNS_$(get-date -Uformat "%Y%m%d-%H%M%S").csv"

$Report = @()
$k = $null
write-host -foregroundcolor Green "`n`n`n`n`n`n`n`n`n"
foreach ($dhcp in $DHCPs) {
	$k++
	Write-Progress -activity "Getting DHCP scopes:" -status "Percent Done: " `
	-PercentComplete (($k / $DHCPs.Count)  * 100) -CurrentOperation "Now processing $($dhcp.DNSName)"
    $scopes = $null
	$scopes = (Get-DhcpServerv4Scope -ComputerName $dhcp.DNSName -ErrorAction:SilentlyContinue)
    If ($scopes -ne $null) {
        #getting global DNS settings, in case scopes are configured to inherit these settings
        $GlobalDNSList = $null
        $GlobalDNSList = (Get-DhcpServerv4OptionValue -OptionId 6 -ComputerName $dhcp.DNSName -ErrorAction:SilentlyContinue).Value
		$scopes | % {
			$row = "" | select Hostname,ScopeID,SubnetMask,Name,State,StartRange,EndRange,LeaseDuration,Description,DNS1,DNS2,DNS3,GDNS1,GDNS2,GDNS3
			$row.Hostname = $dhcp.DNSName
			$row.ScopeID = $_.ScopeID
			$row.SubnetMask = $_.SubnetMask
			$row.Name = $_.Name
			$row.State = $_.State
			$row.StartRange = $_.StartRange
			$row.EndRange = $_.EndRange
			$row.LeaseDuration = $_.LeaseDuration
			$row.Description = $_.Description
            $ScopeDNSList = $null
            $ScopeDNSList = (Get-DhcpServerv4OptionValue -OptionId 6 -ScopeID $_.ScopeId -ComputerName $dhcp.DNSName -ErrorAction:SilentlyContinue).Value
            #write-host "Q: Use global scopes?: A: $(($ScopeDNSList -eq $null) -and ($GlobalDNSList -ne $null))"
            If (($ScopeDNSList -eq $null) -and ($GlobalDNSList -ne $null)) {
                $row.GDNS1 = $GlobalDNSList[0]
                $row.GDNS2 = $GlobalDNSList[1]
                $row.GDNS3 = $GlobalDNSList[2]
                $row.DNS1 = $GlobalDNSList[0]
                $row.DNS2 = $GlobalDNSList[1]
                $row.DNS3 = $GlobalDNSList[2]
                }
            Else {
                $row.DNS1 = $ScopeDNSList[0]
                $row.DNS2 = $ScopeDNSList[1]
                $row.DNS3 = $ScopeDNSList[2]
                }
			$Report += $row
			}
		}
	Else {
        write-host -foregroundcolor Yellow """$($dhcp.DNSName)"" is either running Windows 2003, or is somehow not responding to queries. Adding to report as blank"
		$row = "" | select Hostname,ScopeID,SubnetMask,Name,State,StartRange,EndRange,LeaseDuration,Description,DNS1,DNS2,DNS3,GDNS1,GDNS2,GDNS3
		$row.Hostname = $dhcp.DNSName
		$Report += $row
		}
	write-host -foregroundcolor Green "Done Processing ""$($dhcp.DNSName)"""
	}

$Report  | Export-csv -NoTypeInformation -UseCulture $filename

Learning Points

As far as learning points go, Get-DhcpServerInDC lets you grab all your authorized DHCP servers in one swift line; it saved me a few lines of coding against the Powershell AD module.

Get-DhcpServerv4Scope will grab all IPv4 scopes on a server. Nothing fancy, except for the fact that it doesn’t really honor the “-ErrorAction:SilentlyContinue” switch and will light up your console when you run the script.

Get-DhcpServerv4OptionValue can get scope options, either global (do not specify a ScopeId) or per scope (specify a ScopeId). This one does play nice and gives no output when you ask it to SilentlyContinue.
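The way the script combines the global and per-scope queries reduces to a simple fallback rule: prefer the scope-level option 6 list, and fall back to the server-wide list when the scope defines none. A pure sketch, with made-up addresses:

```powershell
# Sketch of the script's DNS option 6 fallback logic; sample lists are hypothetical.
$GlobalDNSList = @('10.0.0.10', '10.0.0.11')   # server-wide option 6
$ScopeDNSList  = $null                          # this scope defines no option 6 of its own

# Prefer the scope value; otherwise inherit the global one
$EffectiveDNS = if ($null -ne $ScopeDNSList) { $ScopeDNSList } else { $GlobalDNSList }
$EffectiveDNS -join ','
```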

Some Error Messages

I’ve tested this script in my lab and used it in production; it works fine in my environment, but do your own testing.

Unfortunately, the output is not so nice and clean: you do get errors, but the script rolls over them. Below are a couple I’ve seen. The first one is like this:

Get-DhcpServerv4Scope : Failed to get version of the DHCP server dc1.contoso.com.
At C:\Scripts\Get-DHCP-Scopes-2012.ps1:14 char:13
+ $scopes = (Get-DhcpServerv4Scope -ComputerName $dhcp.DNSName -ErrorAction:Silen ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 + CategoryInfo : NotSpecified: (dc1.contoso.com:root/Microsoft/...cpServerv4Scope) [Get-DhcpServerv4Scope], CimException
 + FullyQualifiedErrorId : WIN32 1753,Get-DhcpServerv4Scope

This actually happens because Get-DhcpServerv4Scope has a subroutine to check the DHCP server version, which fails. As you can see, my code does have SilentlyContinue to omit the error, but it still shows up. I dug up the 1753 error code; the message is “There are no more endpoints available from the endpoint mapper“, which is, I guess, Powershell’s way of telling us Windows 2003 is not supported. This is what we get for playing with v1 of this module.

Another error I’ve seen is this:

Get-DhcpServerv4Scope : Failed to enumerate scopes on DHCP server dc1.contoso.com.
At C:\Scripts\Get-DHCP-Scopes-2012.ps1:14 char:13
+ $scopes = (Get-DhcpServerv4Scope -ComputerName $dhcp.DNSName -ErrorAction:Silen ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 + CategoryInfo : PermissionDenied: (dc1.contoso.com:root/Microsoft/...cpServerv4Scope) [Get-DhcpServerv4Scope], CimException
 + FullyQualifiedErrorId : WIN32 5,Get-DhcpServerv4Scope

It is just plain old permission denied: you need to be an admin on the box you are running against, or at least a member of the DHCP Administrators group, I would think.

As far as setting the correct DNS servers on option 6 goes, you can use the same module to set them. I did it by hand, since there were just a handful of scopes.
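If you do want to script that part too, a hedged sketch with the same module might look like this; the server name and DNS IPs are made-up examples, so test on a lab scope first.

```powershell
# Sketch: stamp option 6 (DNS servers) on every scope of one DHCP server.
$dhcpServer = 'dhcp1.contoso.com'
$dnsServers = '192.0.2.10', '192.0.2.11'

Get-DhcpServerv4Scope -ComputerName $dhcpServer |
    ForEach-Object {
        # -DnsServer is the named shortcut for option 6
        Set-DhcpServerv4OptionValue -ComputerName $dhcpServer -ScopeId $_.ScopeId `
            -DnsServer $dnsServers
    }
```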

Hope this helps someone out there with their DHCP Reporting.

Automate Replacing of Certificates in vCenter 5.1

A few days ago, VMware launched a much-awaited tool called the SSL Certificate Automation Tool. This tool enables VMware administrators to automate the process of replacing expired/self-signed certificates on all components of the VMware vCenter management suite. As many of you know, this process, especially in the new v5.1 version, is a complete pain to implement, error prone, and has so many steps to follow that you are bound to make a mistake. Compared to, say, VMware vCenter 4.x, version 5.1 has more “standalone” components that need to interact with users or with each other to provide data/visualizations, and must do that over secure connections.

The components I’m talking about are (the ones highlighted in orange are new to version 5.x vs 4.x)

  • InventoryService
  • SSO
  • vCenter
  • WebClient
  • LogBrowser
  • UpdateManager
  • Orchestrator

It might not look like much, but it’s almost double the number of components, double the number of certificates and close to double the number of interactions between the components themselves. To make it “worse”, the workflow for replacing v4.x certificates (I have this post where I automated the whole process for ESXi 4.x, extendable to vCenter 4.x) is different from the workflow for v5.x ones, and much more complicated. Here‘s a document on how to do it manually. But if you value your time read on, there’s an easier way.

The relevant documentation for this automation process is available here from VMware. Essentially it is a two-step process:

Step 1: Generate your certificates using OpenSSL and your internal Windows Enterprise root CA, as per this document: Generating certificates for use with the VMware SSL Certificate Automation Tool (2044696).

Step 2: Use the SSL Certificate Automation Tool to deploy the certificates, as described here.

Automate Certificate Generation for use with the VMware SSL Certificate Automation Tool

I should say this again: your main source of information should be VMware’s KB article. The information here either echoes that or supplements it where needed. Also, things are presented in a different order in my post so the whole process can be automated, unlike the KB, which assumes manual work.

Prerequisites

1. The account you will use in this step must:

  • Be a Local administrator on the computer where the script presented will run
  • Be able to enroll certificates for the Certificate Template that will be used from your PKI infrastructure.
  • Be a Local Administrator on the servers where the vCenter components are installed.

2. Any commands/scripts presented here should be run from an elevated prompt.

3. Name resolution must work correctly on the client where this script will run (all vCenter components must be resolvable via DNS).

4. You must use the OpenSSL version that VMware specifies, not newer, not older; that is OpenSSL 0.9.8 at the time of writing this post.

5. You need to have a certificate template configured according to VMware’s specifications, that means basically duplicating the default web server certificate of the Windows CA, with a few changes:

  • Go to Certificate Manager > Extensions > Key Usage > Allow encryption of user data
  • Also uncheck “allow private key to be exported” as the script will fail if the template allows this.

6. Your CA must have automatic approval activated so you can obtain the certificate using command line.

7. Obtain and save the certificates of your root CA and any intermediate CAs, as described in the KB. As I understand it, you should name your root CA certificate root64.cer (x.509 format, base64 encoded). Any other intermediate or issuing CA certificates should be named root64-#.cer (x.509 format, base64 encoded), where # is a number starting from 1 up to as many intermediates as you have (root64-1 is the intermediate closest to the issued certificate, and root64-N the one closest to the root CA). Place all the certificates from the chain in a folder called “RootCA_chain“.

Note: Naming the files in a certain way is important. The KB does not do a good enough job of explaining how to build the .PEM file (either that, or I’m not doing a good enough job of understanding it), so make sure your CA chain certificates are numbered like this. If you don’t do it this way you will have to rework the piece of code that sorts the certificate files for building the .PEM file.

8. Make yourself a custom openSSL configuration file. The configuration file must include only these lines, not more, not less (it’s a copy paste from VMware’s KB). Save this file as custom_openssl.cfg in the \bin directory where you installed OpenSSL.

[ req ]
default_bits = 2048
default_keyfile = rui.key
distinguished_name = req_distinguished_name
encrypt_key = no
prompt = no
string_mask = nombstr
req_extensions = v3_req

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = DNS:ServerShortName, IP:ServerIPAddress, DNS:server.contoso.com #examples only

[ req_distinguished_name ] # change these settings for your environment
countryName = US
stateOrProvinceName = Change State
localityName = Change City
0.organizationName = Change Company Name
organizationalUnitName = ChangeMe
commonName = Changeserver.contoso.com

9. Build a .csv file like the example below:

Name,DomainName,OUName
CNTvcS,contoso.com,vCenterServer
CNTvcO,contoso.com,VMwareOrchestrator
CNTvcIS,contoso.com,vCenterInventoryService
CNTVUM,contoso.com,VMwareUpdateManager
CNTwc,contoso.com,vCenterWebClient
CNTwc,contoso.com,vCenterLogBrowser
CNTsso,contoso.com,vCenterSSO

As you can see, you need to specify a server name, domain name, and OU name. The OU name is the name of the vCenter component; the value will be written to the “organizationalUnitName” field of the OpenSSL configuration file.

10. Create a folder (the script uses a static c:\temp\certs entry) where you will store all your certificates as they are generated. To this folder copy the RootCA_chain folder created above.

Just like with cooking you now have all the “ingredients” to generate your vCenter certificates.

The Script

You can download the script (Generate_vCenter_Certs_v5.1) from my blog. My previous post on VMware certificates has a little more background information that I won’t repeat here.

Learning Points

Lines 16-50: The script takes a few parameters, and also does some error checking on them.

    • The location of the openSSL directory ($OpenSSLPath)
    • The CSV from step 9, with all the servers and components of vcenter ($vCenterHostsFile)
    • The name\display name combination for your issuing/root Certification authority ($CAMachineName_CAName).
    • The name of the template to be used when issuing the certificates ($TemplateName).

Lines 52-71: More variables are set, and the script checks whether you have the root CA files in the specified location (default: c:\temp\certs\RootCA_chain).

Also this is where you should go in the script and change the Country, Company, State, and City Name. These will replace the values in the custom_openssl.cfg file.

Lines 76-78: Create a folder based on the Name-OUName combination to store all files used in certificate generation (CSR, private key, certificate, copy of the root CA chain certificates).

Lines 81-94: Customize the fields from the custom_openSSL.cfg file to match the specific server-service combination, according to the CSV file.

Lines 96-114: We copy the template .cfg file to the working directory and replace each corresponding row with our values for name, city, state, etc. This is not the cleanest PowerShell string manipulation, but it gets the job done.

Lines 116-131: This is where we generate the CSR to send to the CA. As per VMware’s instructions this is a two-step process: first we generate the CSR, then we convert the private key to RSA format.
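For reference, the two OpenSSL calls this section wraps look roughly like the sketch below; the install path is an assumed example, and the file names follow VMware’s rui.* convention from the KB.

```powershell
# Sketch of the two-step CSR generation with OpenSSL 0.9.8 and the
# custom_openssl.cfg described above. Paths are assumptions.
$openssl = 'C:\OpenSSL-Win32\bin\openssl.exe'
$cfg     = 'C:\OpenSSL-Win32\bin\custom_openssl.cfg'

# Step 1: generate the CSR and an (unencrypted) private key
& $openssl req -new -nodes -out rui.csr -keyout rui-orig.key -config $cfg

# Step 2: convert the private key to RSA format, as VMware requires
& $openssl rsa -in rui-orig.key -out rui.key
```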

Lines 136-146: We use the Windows tool certreq to request and then retrieve the certificate. This is where being a local admin in an elevated prompt comes into play, and also where automatic approval of certificates comes in handy.
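The certreq step boils down to something like the sketch below; the CA config string (in “CAHostName\CADisplayName” form) and the template name are assumed examples, not values from the script.

```powershell
# Sketch: submit the CSR to the enterprise CA and retrieve the certificate.
# With auto-approval enabled on the template, the cert comes straight back.
$caConfig = 'ca1.contoso.com\Contoso-Issuing-CA'   # assumed example
$template = 'VMware-SSL'                           # assumed template name

certreq -submit -attrib "CertificateTemplate:$template" -config $caConfig rui.csr rui.crt
```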

Lines 151-159: We copy over the individual certificates of the Root/Intermediate/Issuing CAs to the working directory. Then we create a file called chain.pem, which will be the .PEM chained certificate file for the given server-component combination specified in the CSV.

Important!!! This is where reading the VMware KB wrong can cost you, and you won’t realize it until you run the SSL Automation Tool. I quote from the KB:

“Open the Root64.cer file in Notepad and paste the contents of the file into the chain.pem file right after the certificate section. Be sure that there is no whitespace in the file in between certificates.

Note: Complete this action for each intermediate certificate authority as well.”

This is where I initially made a mistake, and did it the wrong way.

  • I pasted the certificate in the .pem
  • I pasted the root after the certificate in the .pem
  • I pasted the intermediate/issuing CA certificates in the .pem after the root certificate

It is “do the same for each intermediate certificate” as in “sandwich your intermediate CA certificates between the root and the certificate, in descending order”, THE EXACT OPPOSITE of how Windows displays the certificate chain in the GUI. This is why I asked you in the first place to number your certificates in a specific way.

As a result, if you named your CA certificates like I explained above this code snippet will arrange them in the proper order and chain them in the .pem file correctly.

$chain_pem = "$rootdir\$($strHost.Name)-$($strHost.OUName)\chain.pem"
gc $crt | add-content $chain_pem
$roots = dir "$rootdir\$($strHost.Name)-$($strHost.OUName)" root*.cer | sort name -desc
$roots | % {gc $_.FullName | add-content $chain_pem }

That wraps it up. Needless to say, test this script before you let it loose on your environment. Since the script has a lot of “Read-Host” in it, it is designed to run with pauses, to give you a chance to review the output and cancel if there is a problem. After all, you are running commands against your certificate authority, so handle with care.

Once you have all the files you should follow step 2, and actually use the SSL automation tool.

Replace vCenter Certificates using the SSL Automation Tool

For reference, here is VMware’s KB on using this tool, if you missed it above. I don’t have much to say about this other than stick to the document, but here are a few tips that will make your life easier:

  • If you have multiple servers running each service, it is best to copy ALL the folders generated by the script (for each server-service combination) to each of the individual servers; you can freely dispose of them once you have successfully replaced the certificates.
  • Before you go ahead and copy over the Automation Tool, take the time to modify the SSL_Environment.bat file pointing each variable to its respective file/value. Then copy the SSL_Environment file to each server. This way each time you run the tool to update a certificate it will always know where to pick “inter-component-trust” certificates, user prompts and so on.
  • When you fill in the “set sso_admin_user=” variable, use the “user@domain” format; otherwise some steps will fail with an invalid credentials error.

I know it is a long read, but it is not an easy topic and I hope it helped you in your environment. Let me know in the comments if there is a way to improve this or an easier way to do this whole process.

Managing DNS Aging and Scavenging settings using Powershell

Aging and scavenging of DNS records is a topic that is pretty well covered on the web. I’m not really looking to rehash all the information out there with this post. I will however put out some resources for whoever wants to do the reading:

  • This post has a good “primer” for DNS aging and scavenging and the steps for implementing it.
  • This post gives a real life example of how unscavenged records impact authentication mechanisms in Windows
  • This post explains how the configuration of aging and scavenging can be done, either via GUI or batch command line.

I’ll paint the bigger picture for the environment I’m working on right now, perhaps a good example of how typical Windows Infrastructure services are setup in global corporations.

  • AD integrated DNS zones that replicate to all DCs in forest, zones allow secure updates only. This means that if we…
  • Run local DHCP services on all locations in the infrastructure, we need to standardise the DHCP scope lease time to a single value for Windows client scopes before enabling DNS aging + scavenging on all our DNS zones. (The other scopes we don’t care about: their clients can’t add/update records in DNS, since they’re not domain joined and the zone only allows secure updates.) Link #2 gives us the correlation between DHCP lease time and DNS aging/scavenging of records.
  • We also have clients register their DNS records, not the DHCP server itself (this hasn’t come up for change until now).

What I am going to script is what Josh Jones from link #1 above referred to as “the setup phase”. In this phase we are merely configuring the DNS zones to age DNS records according to our requirements. The guys over at cb5 do a fine job of explaining the various scenarios to change this via DNSCMD, via the wizard, and all the “bugs” of the GUI wizards.

That may be fine for just a few zones, but when you have tens of DNS zones (most of them reverse DNS) the clicky business starts to sound less fun. Also working with DNSCMD might not be everyone’s cup of tea. Luckily I’m writing this in 2013, a few months after the release of Windows Server 2012 and the shiny new cmdlets it brings, and yes, there are DNS server ones.

So you will need a client running either Windows 8 with the Windows Server 2012 RSAT, or a Windows Server 2012 box (it doesn’t need to be a domain controller or DNS server; a member server is fine).

Get DNS Aging and Scavenging Settings

If (-not (Get-Module DNSServer -ErrorAction SilentlyContinue)) {
 Import-Module DNSServer
 }

#Report on Existing Server settings
$DnsServer = 'dc1.contoso.com'
$filename = "c:\temp\AD\$($DNSServer)_Before_AgScavConfig_$(get-date -Uformat "%Y%m%d-%H%M%S").csv"
$zones = Get-DnsServerZone -computername $DnsServer
$zones | %{ Get-DnsServerZoneAging -ComputerName $DnsServer -name $_.ZoneName} | Export-Csv -NoTypeInformation $filename

There’s nothing too fancy about this part. We get all the zones we need using Get-DnsServerZone, then pass each one to Get-DnsServerZoneAging. The output returns the following information:

  • ZoneName: name of the DNS zone
  • ScavengeServers: servers where this zone will be scavenged
  • AgingEnabled: flag indicating whether records are aged or not
  • AvailForScavengeTime: time when the zone becomes eligible for scavenging of stale records
  • NoRefreshInterval: interval during which the Timestamp attribute cannot be refreshed on a DNS record
  • RefreshInterval: interval during which the Timestamp attribute can be refreshed on a DNS record

If no one ever configured Scavenging on the servers, the output should be pretty much blank.

Configure Aging of DNS records for all zones

This snippet accomplishes this:

If (-not (Get-Module DNSServer -ErrorAction SilentlyContinue)) {
	Import-Module DNSServer
	}

#Set New values
$DnsServer = 'dc1.contoso.com'
$DNSIP = [System.Net.DNS]::GetHostAddresses($dnsServer).IPAddressToString
$NoRefresh = "3.00:00:00"
$Refresh = "5.00:00:00"
$zones = Get-DnsServerZone -computername $DnsServer | ? {$_.ZoneType -like 'Primary' -and $_.ZoneName -notlike 'TrustAnchors' -and $_.IsDsIntegrated -eq $true}
$zones | % { Set-DnsServerZoneAging -computerName $dnsServer -Name $_.ZoneName -Aging $true -NoRefreshInterval $NoRefresh -RefreshInterval $Refresh -ScavengeServers $DNSIP -passThru}

Learning Points

The $Zones variable now contains a filtered list of zones: the primary zones, minus the “TrustAnchors” zone and minus the zones that are not AD integrated (the auto-created 0.in-addr.arpa, 127.in-addr.arpa and 255.in-addr.arpa zones).

Why do we do this? Well, in our case we only run primary and stub zones, so that explains the “Primary” filter. The “TrustAnchors” zone we have no use for (more info on trust anchors here). Lastly, the filter removes the zones that are not AD integrated (we will never be able to get an IP from those zones, since they cover network, loopback or broadcast addresses).

Note: If you fail to filter out the “0, 127 and 255” zones, your last command will spit out an error like the one below. I looked up the Win32 9611 error code in the Windows error code list and it means “Invalid Zone Type”. So filter it, OK?!

Set-DnsServerZoneAging : Failed to set property ScavengeServers for zone 255.in-addr.arpa on server dc1.contoso.com.

At line:1 char:14
+ $zones | % { Set-DnsServerZoneAging -computerName $dnsServer -Name $_.ZoneName - ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 + CategoryInfo : InvalidArgument: (ScavengeServers:root/Microsoft/...ServerZoneAging) [Set-DnsServerZoneAging], CimException
 + FullyQualifiedErrorId : WIN32 9611,Set-DnsServerZoneAging

Also be careful: the cmdlet expects the refresh/no-refresh intervals in a specific TimeSpan format, and the ScavengeServers parameter needs to be an IP address, not a hostname.
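A quick illustration of those two requirements (the host name is just an example):

```powershell
# The intervals must parse as TimeSpans, in "days.hours:minutes:seconds" form.
$NoRefresh = [TimeSpan]'3.00:00:00'   # 3 days
$Refresh   = [TimeSpan]'5.00:00:00'   # 5 days

# -ScavengeServers wants an IP, not a host name; resolve it first, e.g.:
# $DNSIP = [System.Net.Dns]::GetHostAddresses('dc1.contoso.com').IPAddressToString
```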

The -PassThru switch displays some output on the console, as by default the cmdlet doesn’t generate any.

The last cmdlet (Set-DnsServerZoneAging) has little documentation floating around the web, and I actually found references to non-existent parameters that got me all excited, something like a “SetAllZones”, but that parameter doesn’t exist as of this time (February 2013). So I had to use a foreach loop to configure each zone.

Wow, initially I wanted this to be a short post, but apparently it added up to something not so short. I hope it is useful and helps with your DNS aging configuration. If there are simpler/better ways to accomplish this, I would like to hear about them; just leave a note in the comments.

How to remove a KMS Server from your infrastructure

These days I took a swing at some clean-up I had to do in our KMS servers list. In any large environment you are bound to find configurations you either did not put in place (there is usually more than one person managing it) or put in place for testing and forgot to remove. I’m mainly referring to KMS servers that may once have been used to activate Windows licenses, or that people attempted to set up that way (but failed for one or more reasons). You might have this problem in your environment and not know about it. Usually any “rogue” or unauthorized KMS server also publishes its KMS service in DNS. This means that when a client tries to activate, it will pick one of the servers offering the _VLMCS (license activation) service in the _tcp node of its configured DNS suffixes or its own domain name. By default all KMS hosts publish their service record with equal priority and weight, so with few KMS hosts there’s a high chance a client will be sent to the wrong/rogue KMS. If the client picks the correct KMS host, all is well with the world; if not, it gets an error and you get an unneeded support call that users can’t activate their Windows.

To fix this you should first find the rogue KMS hosts. Since the information is published in your DNS, this nslookup query should reveal your servers:

nslookup -q=srv _vlmcs._tcp.contoso.com

Run this for each of your subdomains’ FQDNs to list all servers. A sample output would be this:
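If you prefer PowerShell over nslookup, a hedged sketch with Resolve-DnsName (available on Windows 8 / Server 2012 and later) can loop over every domain in the forest for you; the ActiveDirectory module is assumed for the domain list.

```powershell
# Sketch: query the _vlmcs SRV record in every domain of the forest.
Import-Module ActiveDirectory

foreach ($domain in (Get-ADForest).Domains) {
    Resolve-DnsName -Name "_vlmcs._tcp.$domain" -Type SRV -ErrorAction SilentlyContinue |
        Where-Object { $_.Type -eq 'SRV' } |
        Select-Object Name, NameTarget, Port, Priority, Weight
}
```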

Server: dc1.contoso.com
Address: 192.100.5.10

_vlmcs._tcp.contoso.com SRV service location:
 priority = 0
 weight = 0
 port = 1688
 svr hostname = KMS01.contoso.com
_vlmcs._tcp.contoso.com SRV service location:
 priority = 0
 weight = 0
 port = 1688
 svr hostname = John-Desktop.contoso.com
KMS01.contoso.com internet address = 192.41.5.4
John-Desktop.contoso.com internet address = 192.20.50.20

As you can see, we have two KMS host entries: one seems valid, the other looks like someone attempted to activate his PC the wrong way and ended up publishing KMS service records in DNS. Here’s how to remove this, for good. Some of the steps are taken from TechNet documentation, some from the social.technet site.

  • Log in / RDP / PsExec to the affected host (John-Desktop) and uninstall the KMS host product key. To do this, run this from an elevated command prompt:
cscript %windir%\system32\slmgr.vbs /upk
  • Install the default KMS client key, found here:
cscript %windir%\system32\slmgr.vbs /ipk [KMS Client Setup Key]
  • Activate the computer as a client using the command below. In our case it would go to the KMS01.contoso.com host:
cscript %windir%\system32\slmgr.vbs /ato
  • Now you should stop this record from being published in DNS. You guessed it: just because you uninstalled the KMS host key and put in the client key doesn’t mean the machine stopped advertising KMS in DNS. If you are running Windows 2008 R2, slmgr.vbs has a switch which does this for you:
cscript %windir%\system32\slmgr.vbs /cdns

Important Note: If you are running Windows 2008, not Windows 2008 R2, there is no /cdns switch. Also, you cannot run slmgr.vbs from a 2008 R2 box against the 2008 machine with that switch; it will say something like this:


Microsoft (R) Windows Script Host Version 5.8
Copyright (C) Microsoft Corporation. All rights reserved.

The remote machine does not support this version of SLMgr.vbs

The registry change below is also a good “failsafe” in case the /cdns switch didn’t work for Windows 2008 R2. Changing this registry key worked for me; other people suggested other fixes (here) along the same lines, but I didn’t test them. You need to run this command from an elevated command prompt:

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SL" /v DisableDnsPublishing /t REG_DWORD /d 1
  • Stop and Start the Software Licensing Service:
net stop SLSVC

net start SLSVC

Update: If running Windows 2008 R2 you should look at restarting Software Protection Service

net stop "Software Protection"

net start "Software Protection"
  • Remove the _vlmcs KMS service record for John-Desktop from the contoso.com _tcp node. You can do this via dnsmgmt.msc console
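If you would rather script that last step than click through the console, a hedged sketch with the 2012-era DnsServer module might look like this; the server, zone and host names are examples from this post, not universal values.

```powershell
# Sketch: delete only the rogue host's _vlmcs SRV record from the zone.
Import-Module DnsServer

Get-DnsServerResourceRecord -ComputerName 'dc1.contoso.com' -ZoneName 'contoso.com' `
        -Name '_vlmcs._tcp' -RRType Srv |
    Where-Object { $_.RecordData.DomainName -like 'John-Desktop*' } |
    Remove-DnsServerResourceRecord -ComputerName 'dc1.contoso.com' -ZoneName 'contoso.com' -Force
```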

That’s about it. Hope someone finds this one useful. Any comments are welcome.

Active Directory Domain Controller Backups – Part 3

Time for the last part of the Active Directory backup series. The last two posts (#1 and #2) have been about defining what needs to be backed up and the scripts/commands used to back it up. This time we will discuss some administrative things, involving PowerShell and some Group Policy Preferences (GPP) settings. So you have all the parts that make the thing “backup”; now how do you put them to work, regularly, automatically, with as little maintenance as possible from your side? This is how I chose to do it, from a high level:

  • I created a central location to store the scripts, then deployed a GPO with a GPP setting that copies the scripts from that central location.
  • For running the backup scripts I used scheduled tasks, created with a PowerShell script on all remote systems.

I admit it is not the most elegant, zero-touch approach, but I had to make a compromise. Some of you might think, “hey, why didn’t you use GPP to deploy a scheduled task? That would have been easier, no reason to have this hassle with a script that creates your scheduled tasks.”

The reason why I chose to do it via a script is that the actual scripts need network access when they run (they save data to a file share), so using the SYSTEM account to run the batch jobs is out of the question, as it doesn’t have network access. I need a domain account (member of Backup Operators) with access to the share. This means I have to be careful when passing credentials to configure the scheduled tasks. As it turns out, when passing credentials in a GPP they are just obscured, not encrypted (more info here and here), so no way am I giving Backup Operators credentials away over the SYSVOL share; it is not secure. So I either create the backup tasks manually (2 × number of scripts × number of domains) or create a script to do it for me. They say “efficiency is the most intelligent form of laziness”, so I wrote a script.

With this out of the way, let’s handle each task, first off….

Distributing The Scripts

Create a folder in a central location where the scripts and files will be located. In case you don’t know what I’m talking about, it’s the scripts from this post.

Create a GPO object, link it to the Domain Controllers OU, and configure its GPP settings to copy the folder you set up in step 1 locally onto the DCs, like in the picture below (I’m showing just one file; make sure your GPP has all three files included). The GPO also changes the script execution policy so you can run scripts as batch jobs.
GPP_Settings

I’ve applied this GPO to the Domain Controllers OU but restricted it to apply only to a certain security group in AD (and yes, you guessed it, I put the DCs I want to back up in that group).

Creating the Scheduled Jobs

I found Ryan Dennis’s blog here, where he gives a sample of how to create a scheduled task, which he took from another smart man over here. I took his sample script and tweaked it a little; I needed to make it a bit more generic and able to accept credentials as parameters. Then I created another script that calls Create-ScheduledTask.ps1 to connect to each DC and create the scheduled tasks. Needless to say, you need to be a Domain/Enterprise Admin to run these scripts.

$BackupTasks = Import-CSV -useculture "C:\Temp\AD\BCP\BackupSource.csv"
$Domains = $BackupTasks | group-object -property Domain
$DomainCreds = @()
foreach ($domain in $domains) {
 $Creds = Get-Credential -Credential "$($Domain.Name)\dom_backup"
 $row = "" | select Domain,UserName,Password
 $row.Domain = $domain.Name
 $row.UserName = $Creds.UserName
 $row.Password = $Creds.Password
 $DomainCreds += $row
 }

#Create_Tasks
Foreach ($BackupTask in $BackupTasks) {
 $curCred = $DomainCreds | ? { $_.domain -eq $BackupTask.Domain }
 $SchedTaskCreds = New-Object System.Management.Automation.PsCredential($curCred.UserName,$curCred.Password)
 #$SchedTaskCreds
 $ScriptFullPath = $BackupTask.ScriptFolder+ "\" + $BackupTask.ScriptName
 .\Create-ScheduledTask.ps1 -HostName $BackupTask.HostName -Description $BackupTask.TaskName -ScriptPath $ScriptFullPath -SchedTaskCreds $SchedTaskCreds
 }
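For reference, the script above imports a BackupSource.csv; based on the properties it references ($BackupTask.Domain, HostName, TaskName, ScriptFolder, ScriptName), a hypothetical layout would look like this (all names are examples):

Domain,HostName,TaskName,ScriptFolder,ScriptName
contoso.com,DC1,AD-SystemState-Backup,C:\Scripts\ADBackup,Backup-SystemState.ps1
contoso.com,DC1,AD-GPO-Backup,C:\Scripts\ADBackup,Backup-GPOs.ps1
child.contoso.com,CDC1,AD-SystemState-Backup,C:\Scripts\ADBackup,Backup-SystemState.ps1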

As far as learning points go, I first determined which domains I needed credentials for, then asked the user interactively to type the account and password. This saves a lot of password prompts when the scheduled tasks are created.

The two scripts I mentioned are included in this rar file: SchedTask_Creation.

This is mostly it. The scheduling script is the initial version; it would be more elegant if it just pulled the host names from the AD group, then built the script files and names for each host from the .csv file.
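A hedged sketch of that refinement might look like the following; the group name, CSV layout and the $SchedTaskCreds variable (built as in the script above) are all assumptions.

```powershell
# Sketch: drive the task creation from the AD group the backup GPO is
# filtered to, pairing every DC with every script row from a per-script CSV.
Import-Module ActiveDirectory

$dcs     = Get-ADGroupMember -Identity 'AD-Backup-DCs' | Select-Object -ExpandProperty Name
$scripts = Import-Csv -UseCulture 'C:\Temp\AD\BCP\BackupScripts.csv'  # TaskName,ScriptFolder,ScriptName

foreach ($dc in $dcs) {
    foreach ($script in $scripts) {
        .\Create-ScheduledTask.ps1 -HostName $dc -Description $script.TaskName `
            -ScriptPath (Join-Path $script.ScriptFolder $script.ScriptName) `
            -SchedTaskCreds $SchedTaskCreds
    }
}
```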

A further refinement, and additional securing of this process, would be to sign the scripts using a Windows CA and only allow execution of signed scripts on the DCs.