Delete Leaf Objects from Active Directory User Object

The Story

A few days ago a colleague of mine came to me with a user migration problem. He wanted to migrate a user between two child domains in an AD forest, which is usually done with Microsoft's ADMT (Active Directory Migration Tool). He went through the whole migration wizard, only to have the migration fail with an error message like this:

2014-03-17 12:47:27 ERR2:7422 Failed to move source object ‘CN=Joey’. hr=0x8007208c The operation cannot be performed because child objects exist. This operation can only be performed on a leaf object.

That was strange; I was expecting the user object to be a leaf object itself, not to contain leaf objects. Then I remembered it is 2014: we also run Microsoft Exchange and use ActiveSync on mobile devices. In case you didn't know, when you configure ActiveSync on your phone, a special object is created under your user object in Active Directory. This object is of type "msExchActiveSyncDevices" and lists each of the mobile phones on which you have configured ActiveSync. I used adsiedit.msc to confirm that the leaf objects were indeed these msExchActiveSyncDevices objects.
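If you want to confirm this for yourself without opening adsiedit.msc, a quick check (a sketch, assuming the ActiveDirectory RSAT module is available; "jdoe" is a placeholder SamAccountName) is to list everything one level below the user object:

```powershell
Import-Module ActiveDirectory

# 'jdoe' is a placeholder; use the SamAccountName of the failing user
$user = Get-ADUser jdoe

# SearchScope OneLevel returns only direct children of the user object;
# a true leaf object returns nothing here
Get-ADObject -SearchBase $user.DistinguishedName -SearchScope OneLevel -Filter * |
    Select-Object Name, ObjectClass
```

If any msExchActiveSyncDevices objects show up, that is what ADMT is tripping over.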

So that explains what the leaf objects were and how they got there. Since the user is being migrated across domains, it really doesn't matter whether those ActiveSync leaf objects are there or not: after the migration users have to reconfigure their devices anyway, so they are "safe to delete". To fix this you can either use ADSIEDIT to locate and delete the leaf objects, use the Exchange Management Shell to delete the ActiveSync devices, or use PowerShell to delete them from the user object just as you would with ADSIEDIT, which is what I want to share now.

The Script

I built this script on a Windows 8.1 computer with PowerShell 4.0 and the RSAT tools for Windows 2012 R2 installed. The AD environment is at the Windows 2008 R2 forest functional level, with only Windows 2008 R2 DCs. I didn't check whether this runs on lesser configurations, so do report back if it doesn't work on older combinations of RSAT + OS + AD, though I'm pretty sure you need at least one 2008 R2 DC in the user's source domain, otherwise the PowerShell cmdlets won't work. You can download the script from this link: Delete-AS-ChildItems.

The script takes only one parameter, the SamAccountName of the user with leaf objects. It will show you the leaf objects it wants to delete, and then actually delete them unless you cancel.

Learning Points

I've made the script "autonomous", in the sense that it will automatically discover the closest DC running AD Web Services and query it for the SamAccountName. This snippet accomplishes that:

# find a DC running AD Web Services in our site, else in the next closest site
$LocalSite = (Get-ADDomainController -Discover).Site
$NewTargetGC = Get-ADDomainController -Discover -Service ADWS -SiteName $LocalSite
If (!$NewTargetGC)
    { $NewTargetGC = Get-ADDomainController -Discover -Service ADWS -NextClosestSite }
# talk to the Global Catalog port on the discovered DC
$NewTargetGCHostName = $NewTargetGC.HostName
$LocalGC = "$NewTargetGCHostName" + ":3268"

Once we have this information, we query the GC for the SamAccountName and domain information. We need the domain so we can also discover the closest DC in that domain and get the list of leaf objects (lines 14-26 of the script). You want to do this for two reasons: first, the GC partition doesn't contain all the information you need (the child object information), and second, you can't write to the GC partition, so you have to find the closest DC in the user's domain anyway.
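Sketched out, that lookup chain looks roughly like this (variable names follow the snippets in this post, but treat it as an illustration rather than the exact script):

```powershell
# Query the GC (read-only, partial attribute set) for the user by SamAccountName;
# $LocalGC and $SamAccountName come from earlier in the script
$UserObj = Get-ADObject -Filter { SamAccountName -eq $SamAccountName } `
    -Server $LocalGC -Properties CanonicalName

# The canonical name starts with the user's DNS domain, e.g. 'child.contoso.com/Users/Joey'
$UserDomain = ($UserObj.CanonicalName -split '/')[0]

# Discover a DC running ADWS in the user's own domain - we need a writable,
# full copy of the user's partition, which the GC cannot give us
$UserobjDC = (Get-ADDomainController -Discover -Service ADWS -DomainName $UserDomain).HostName
```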

The "trick" in this script is to use Get-ADObject with a search base of the user's DN, run against a DC in the user's domain, looking for the msExchActiveSyncDevices object type like below:

$UserOjbActSync = Get-ADObject -SearchBase $UserObj.DistinguishedName -filter { ObjectClass -like 'msExchActiveSyncDevices'} -Server $UserobjDC -ErrorAction:SilentlyContinue

Now, to actually fix the problem, we run this command, which deletes all the child items and whatever may be inside them:

Remove-ADObject $UserOjbActSync -Server $UserobjDC -Recursive:$true -Confirm:$false -Verbose -Debug

That about wraps this up. Just wait for replication to occur in your domain and you should be good to finish the migration. As always, use with caution; this does delete things. If you found this useful, share it around 🙂

Quick Tip: Update Resource Records in Microsoft DNS using Powershell

One of the great things about the (not so) new Windows 2008 R2 PowerShell modules is that we can now more easily manage the core Microsoft networking services (DNS, DHCP). I want to share a little script I built that will add or update host records fed from a CSV file.

The Script

In the past, automating this kind of thing was possible using a combination of WMI, VBS/PowerShell and/or batch scripting, and the famous DNSCMD. My script will not work against just any DNS server: you need to run Windows 2008 or later DNS, and running it against Windows 2003 DNS servers will yield strange/wrong results.

#sample csv file

#DNSName,IP,<other fields not used>
#foo.fabrikam.com,192.168.1.1,<other values, not used>

Param(
 [Parameter(Mandatory=$false)][System.String]$ResourceRecordFile = "C:\Temp\somefile.txt",
 [Parameter(Mandatory=$false)][System.String]$dnsserver = "DNS.contoso.com"
 )
import-module DNSServer

Write-Warning "This script updates DNS resource records in DNS based on information in a CSV file. Details are:`n
Using file $ResourceRecordFile as source file.`nMaking changes on DNS:$dnsserver`n
If you wish to cancel Press Ctrl+C,otherwise press Enter`n"
Read-Host

$HostRecordList = Import-csv $ResourceRecordFile

foreach ($dnshost in $HostRecordList) {
 $RR = $dnshost.DNSName.split(".")[0]
 $Zone = $dnshost.DNSName.Remove(0,$RR.length+1)
 [System.Net.IPAddress]$NewIP = [System.Net.IPAddress]($dnshost.IP)
 $OldObj = Get-DnsServerResourceRecord -Name $RR -ZoneName $Zone -RRType "A" -ComputerName $dnsserver -ErrorAction SilentlyContinue
 If ($OldObj -eq $null) {
 write-host -ForegroundColor Yellow "Object does not exist in DNS, creating entry now"
 Add-DnsServerResourceRecord -Name $RR -ZoneName $Zone -A -CreatePtr:$true -ComputerName $dnsserver -IPv4Address $NewIP
 }
 Else {
 $NewObj = Get-DnsServerResourceRecord -Name $RR -ZoneName $Zone -RRType "A" -ComputerName $dnsserver
 $NewObj.RecordData.Ipv4Address = $NewIP
 If ($NewObj -ne $OldObj) {
 write-host -ForegroundColor Yellow "Object to write different, making change in DNS"
 Set-DnsServerResourceRecord -NewInputObject $NewObj -OldInputObject $OldObj -ZoneName $Zone -ComputerName $dnsserver
 }
 }
 $OldObj = $null
 $NewObj = $null
 }

Learning Points

Running this script requires the Windows 2008 R2 RSAT tools. As you can see, all the script needs is a CSV file with two columns, "DNSName" (containing the FQDN) and "IP", plus the DNS server you want to connect to and make the changes on.

Lines 17-18: This is where we extract the short DNS name and the DNS zone name from the FQDN. We also convert the IP address to the format required for entry into DNS:

$RR = $dnshost.DNSName.split(".")[0]
$Zone = $dnshost.DNSName.Remove(0,$RR.length+1)
[System.Net.IPAddress]$NewIP = [System.Net.IPAddress]($dnshost.IP)

Lines 19-21: Here we try to resolve the DNS record, since it may already exist. We will use this information in the next lines:

$OldObj = Get-DnsServerResourceRecord -Name $RR -ZoneName $Zone -RRType "A" -ComputerName $dnsserver -ErrorAction SilentlyContinue

Line 23: To create a new host record (an "A" type record), the command is pretty straightforward:

Add-DnsServerResourceRecord -Name $RR -ZoneName $Zone -A -CreatePtr:$true -ComputerName $dnsserver -IPv4Address $NewIP

Lines 27-31: To update an existing A record. Note that there is a difference in how Set-DnsServerResourceRecord works compared to the Add command: it requires that we get the record, modify the IPv4Address field, then use it to replace the old object.

$NewObj = Get-DnsServerResourceRecord -Name $RR -ZoneName $Zone -RRType "A" -ComputerName $dnsserver
$NewObj.RecordData.Ipv4Address = $NewIP
If ($NewObj -ne $OldObj) {
write-host -ForegroundColor Yellow "Object to write different, making change in DNS"
Set-DnsServerResourceRecord -NewInputObject $NewObj -OldInputObject $OldObj -ZoneName $Zone -ComputerName $dnsserver
}

That's about it. You can easily modify this script so that you pass the DNS server name from the CSV file (updating lots of records on multiple DNS servers) or update multiple record types (A records, CNAME records). As always, C&C is welcome.
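For example, to drive the target DNS server from the CSV instead of a single parameter, you could add a third column and read it per row. A minimal sketch (the extra DNSServer column and the sample values are my own invention):

```powershell
# Hypothetical CSV now carries the target DNS server per row
$csvText = @"
DNSName,IP,DNSServer
foo.fabrikam.com,192.168.1.1,dns1.fabrikam.com
"@
$HostRecordList = $csvText | ConvertFrom-Csv

foreach ($dnshost in $HostRecordList) {
    $dnsserver = $dnshost.DNSServer                       # per-row server instead of one global value
    $RR   = $dnshost.DNSName.Split('.')[0]                # short host name, e.g. foo
    $Zone = $dnshost.DNSName.Remove(0, $RR.Length + 1)    # zone name, e.g. fabrikam.com
    # ...then the same Get-/Add-/Set-DnsServerResourceRecord logic as above,
    # each call pointed at -ComputerName $dnsserver
}
```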

Create your own Wifi-hotspot Windows 7 / Windows 8

The topic of turning your Windows box into a wireless AP and then sharing its internet connection with the wireless devices connected to it is not new, but I've never seen anyone wrap PowerShell around it. This script is designed to work on Windows 7; it will also work under Windows 8, where PowerShell 3.0 makes some parts easier to script. There are three parts to creating your personal Wifi hotspot:

First, allow Windows to control the power state of your wireless card. You can either do this from the GUI or, if you're a geek, via PowerShell, which is what I've done.

Second, enable the HostedNetwork feature available in Windows 7. The technical bits of how this works, and what a hosted network can do, are available from Microsoft, here. The good part about the hosted network is that it comes with its own DHCP, so internet access will pretty much work out of the box.

Finally, enable Internet Connection Sharing (ICS). As much as I would like to automate this in PowerShell, it just isn't possible in Windows 7 (I'll dig inside Windows 8 and see if it can be done there). To enable ICS, follow these instructions from Microsoft.

The Script

To wrap steps 1 and 2 up, I've written a PowerShell script that will enable what is needed automatically (you still have to enable ICS by hand, but that's easy). Click the link to download Enable-Wifi-HotSpot. You must run this script from an elevated PowerShell console; it won't fully work unless you do.

Read on to get some learning points on how I did this.

CAUTION: If you just run the script out of the box, please read the instructions it prints (it runs some commands that temporarily disable your Wifi, so if you are on a Wifi-only connection you will get disconnected).

Learning Points

First step: Enable_TurnOff

I wanted to tick the "Allow the computer to turn off this device to save power" check box programmatically. This tick box corresponds to the following registry value:

HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002bE10318}\<ID>\PnpCapabilities

<ID> is a device number given by Windows to the network adapter when it is installed. When the tick box is checked, the low-order hex digit of the PnPCapabilities DWORD value is 0. If the tick box is unchecked, that digit becomes 8.


This value is also not picked up automatically by the OS: after changing it, you have to reboot (so the internet says). But I just went with disabling and re-enabling the WLAN adapter, and that worked for me. I figured that since changing the setting from the Windows GUI works with no reboot, there had to be a way around rebooting.

So our first order of business is to find the <ID> parameter that maps each Wifi adapter to its registry key. I had the script find all possible Wifi adapters on the system:

$WifiAdapters = Get-WmiObject -Namespace root\wmi -Class MSNdis_PhysicalMediumType -Filter `
 "(NdisPhysicalMediumType=1 OR NdisPhysicalMediumType=8 OR NdisPhysicalMediumType=9) AND NOT InstanceName LIKE '%virtual%'"

The integer values for NdisPhysicalMediumType are included at the top of the script. For reference, they can be obtained from two places:

  • Windows SDK or WDK (more info here).
  • You can cheat a little and run this command on Windows 8 / Windows 2012:
(Get-NetAdapter | Get-Member PhysicalMediaType).Definition

Once we have the list of all Wifi adapters, we take each adapter and check that its configuration is OK and that it is not disabled, using a filter on the ConfigManagerErrorCode property. The possible values for this property can be found here.

$PhysicalAdapter = Get-WmiObject -Class Win32_NetworkAdapter -Filter "Name='$($WifiAdapter.InstanceName)'" -Property * |`
 ? {$_.ConfigManagerErrorCode -eq 0 -and $_.ConfigManagerErrorCode -ne 22}

The ID parameter we are looking for is stored in $PhysicalAdapter.DeviceID, but unfortunately not in the format we need (in my case DeviceID = 15, and I needed to transform it into 0015). I did that with this line:

$AdapterDeviceNumber = $("{0:D4}" -f [int]$($PhysicalAdapter.DeviceID))

From here on, things get a little simpler. Once we have the registry key, I just check whether the last hex digit is 0 or 8:

$PnPCapabilitiesValue = (Get-ItemProperty -Path $KeyPath).PnPCapabilities
 #convert decimal string to HEX to compare first bit
 $PNPCapHEX = [convert]::tostring($PnPCapabilitiesValue,16)

I compare the PNPCapHEX value to see what that last digit is, and decide either to just do a disable/enable of the Wifi adapter, or to change the value first and then disable/enable. Disabling the NIC is easy once you have the network adapter object:

$PhysicalAdapter.Disable() | out-null
 $PhysicalAdapter.Enable() | out-null

Note that the commands above return no output because of Out-Null. If you take out the Out-Null, you should see ReturnValue = 0. If you get ReturnValue = 5, that is an access denied, and it means you didn't run the script from an elevated prompt.
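For completeness, the registry write itself (which the disable/enable cycle then picks up) would look something like this sketch; the '0015' value is illustrative, standing in for the zero-padded ID derived earlier:

```powershell
# '0015' is illustrative; $AdapterDeviceNumber is computed from DeviceID earlier in the script
$AdapterDeviceNumber = '0015'
$KeyPath = 'HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002bE10318}\' +
    $AdapterDeviceNumber

# A low-order hex digit of 0 means "Allow the computer to turn off this device" is ticked;
# 8 in that digit means it is unticked
Set-ItemProperty -Path $KeyPath -Name PnPCapabilities -Value 0 -Type DWord
```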

Now that the registry settings are done, all that is left is to build the netsh command to enable the hosted network. Of interest in this section is how we read in the Wifi password (I wanted the script to be a little secure) and how we then pass it to netsh, which involves converting from a secure string to plain text. For this last conversion I used the function described here.

#now that WiFi adapter is configured, let's add our hotspot
$WifiPassSec = Read-host -Prompt "Enter password for your Wifi, must be at least 8 chars long, complex" -AsSecureString
#enable hosted network
$WifiPass = ConvertFrom-SecureToPlain $WifiPassSec
$SetupHN = "netsh wlan set hostednetwork mode=allow ssid=Rivnet-Wifi key=`"$WifiPass`" keyUsage=persistent`nnetsh wlan start hostednetwork"
Invoke-expression $SetupHN
$SetupHN = $null
$WifiPass = $null

So now you should be all set; just connect your devices to the Wifi and enjoy internet access via your laptop. Finally, you might want to turn off the hosted network at some point. To do this, run these commands:

netsh wlan stop hostednetwork

netsh wlan set hostednetwork mode=disallow

Hopefully this will help someone out there looking for a scripted way to do this. For me it was quite a learning journey, since I got to dig inside Windows internals while scripting it.

Report DHCP Scope Settings using Powershell

It has been a busy time for me lately, but I'm back to write about a script that reports on some basic DHCP scope settings. In my case, I used this script to find out which DHCP scopes had specific DNS servers configured (DNS servers that we planned to decommission), so it made sense to replace those IP addresses with valid ones.


I have found myself lately working more and more with PowerShell 3.0, available in Windows Server 2012, and the new "goodies" it brings.

Among those goodies there's a DhcpServer module, so we can finally breathe a sigh of relief: we can dump netsh and any VBS kludges used to manage DHCP!*

(* Lovely as this module is, you cannot use it fully against Windows 2003 Server; some cmdlets will work, others not so much, so Windows 2008 or later it is.)

For an overview of what cmdlets are available in this new module, take a look at the TechNet blogs. To get started, simply deploy a Windows 2012 machine, open PowerShell, and type:

import-module DhcpServer

While you are at it, update the help files for all your PowerShell modules with this command:

Update-Help –Module * –Force –Verbose

Mission Statement

I needed a report containing the following info: DHCP server name, scope name, subnet defined, start and end ranges, lease times, description, and the DNS servers configured, whether globally or explicitly per scope. As you can imagine, collating all this information from netsh, VBS, or other parsing methods would be time consuming. I'm aware there are third-party DHCP modules out there for PowerShell, but I personally prefer a vendor-supported method, even if it takes more effort to put together and understand (you never know when someone's PowerShell module stops being maintained, for whatever reason, and all your scripting work against it becomes redundant).

The Script

Anyway, I threw this script together. There isn't much to it apart from the error handling. As I mentioned before, the DhcpServer module doesn't work 100% unless the target is running Windows 2008 or later.

import-module DHCPServer
#Get all Authorized DCs from AD configuration
$DHCPs = Get-DhcpServerInDC
$filename = "c:\temp\AD\DHCPScopes_DNS_$(get-date -Uformat "%Y%m%d-%H%M%S").csv"

$Report = @()
$k = $null
write-host -foregroundcolor Green "`n`n`n`n`n`n`n`n`n"
foreach ($dhcp in $DHCPs) {
	$k++
	Write-Progress -activity "Getting DHCP scopes:" -status "Percent Done: " `
	-PercentComplete (($k / $DHCPs.Count)  * 100) -CurrentOperation "Now processing $($dhcp.DNSName)"
    $scopes = $null
	$scopes = (Get-DhcpServerv4Scope -ComputerName $dhcp.DNSName -ErrorAction:SilentlyContinue)
    If ($scopes -ne $null) {
        #getting global DNS settings, in case scopes are configured to inherit these settings
        $GlobalDNSList = $null
        $GlobalDNSList = (Get-DhcpServerv4OptionValue -OptionId 6 -ComputerName $dhcp.DNSName -ErrorAction:SilentlyContinue).Value
		$scopes | % {
			$row = "" | select Hostname,ScopeID,SubnetMask,Name,State,StartRange,EndRange,LeaseDuration,Description,DNS1,DNS2,DNS3,GDNS1,GDNS2,GDNS3
			$row.Hostname = $dhcp.DNSName
			$row.ScopeID = $_.ScopeID
			$row.SubnetMask = $_.SubnetMask
			$row.Name = $_.Name
			$row.State = $_.State
			$row.StartRange = $_.StartRange
			$row.EndRange = $_.EndRange
			$row.LeaseDuration = $_.LeaseDuration
			$row.Description = $_.Description
            $ScopeDNSList = $null
            $ScopeDNSList = (Get-DhcpServerv4OptionValue -OptionId 6 -ScopeID $_.ScopeId -ComputerName $dhcp.DNSName -ErrorAction:SilentlyContinue).Value
            #write-host "Q: Use global scopes?: A: $(($ScopeDNSList -eq $null) -and ($GlobalDNSList -ne $null))"
            If (($ScopeDNSList -eq $null) -and ($GlobalDNSList -ne $null)) {
                $row.GDNS1 = $GlobalDNSList[0]
                $row.GDNS2 = $GlobalDNSList[1]
                $row.GDNS3 = $GlobalDNSList[2]
                $row.DNS1 = $GlobalDNSList[0]
                $row.DNS2 = $GlobalDNSList[1]
                $row.DNS3 = $GlobalDNSList[2]
                }
            Else {
                $row.DNS1 = $ScopeDNSList[0]
                $row.DNS2 = $ScopeDNSList[1]
                $row.DNS3 = $ScopeDNSList[2]
                }
			$Report += $row
			}
		}
	Else {
        write-host -foregroundcolor Yellow """$($dhcp.DNSName)"" is either running Windows 2003, or is somehow not responding to queries. Adding to report as blank"
		$row = "" | select Hostname,ScopeID,SubnetMask,Name,State,StartRange,EndRange,LeaseDuration,Description,DNS1,DNS2,DNS3,GDNS1,GDNS2,GDNS3
		$row.Hostname = $dhcp.DNSName
		$Report += $row
		}
	write-host -foregroundcolor Green "Done Processing ""$($dhcp.DNSName)"""
	}

$Report  | Export-csv -NoTypeInformation -UseCulture $filename

Learning Points

As far as learning points go, Get-DhcpServerInDC lets you grab all your authorized DHCP servers in one swift line, which saved me a few lines of coding against the PowerShell AD module.

Get-DhcpServerv4Scope will grab all IPv4 server scopes. Nothing fancy, except that it doesn't really honor the "-ErrorAction:SilentlyContinue" switch and will light up your console when you run the script.

Get-DhcpServerv4OptionValue can get scope options, either globally (do not specify a ScopeId) or on a per-scope basis (specify a ScopeId). This one does play nice and gives no output when you ask it to SilentlyContinue.

Some Error Messages

I've tested the script in my lab and used it in production, and it works fine in my environment, but do your own testing.

Unfortunately, the output is not so nice and clean: you do get errors, but the script rolls over them. Below are a couple I've seen. The first one looks like this:

Get-DhcpServerv4Scope : Failed to get version of the DHCP server dc1.contoso.com.
At C:\Scripts\Get-DHCP-Scopes-2012.ps1:14 char:13
+ $scopes = (Get-DhcpServerv4Scope -ComputerName $dhcp.DNSName -ErrorAction:Silen ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 + CategoryInfo : NotSpecified: (dc1.contoso.com:root/Microsoft/...cpServerv4Scope) [Get-DhcpServerv4Scope], CimException
 + FullyQualifiedErrorId : WIN32 1753,Get-DhcpServerv4Scope

This happens because Get-DhcpServerv4Scope has a subroutine that checks the DHCP server version, and that check fails. As you can see, my code does have SilentlyContinue to omit the error, but it still shows up. I dug up the 1753 error code, and its message is "There are no more endpoints available from the endpoint mapper", which I guess is PowerShell's way of telling us that Windows 2003 is not supported. This is what we get for playing with v1 of this module.

Another error I’ve seen is this:

Get-DhcpServerv4Scope : Failed to enumerate scopes on DHCP server dc1.contoso.com.
At C:\Scripts\Get-DHCP-Scopes-2012.ps1:14 char:13
+ $scopes = (Get-DhcpServerv4Scope -ComputerName $dhcp.DNSName -ErrorAction:Silen ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 + CategoryInfo : PermissionDenied: (dc1.contoso.com:root/Microsoft/...cpServerv4Scope) [Get-DhcpServerv4Scope], CimException
 + FullyQualifiedErrorId : WIN32 5,Get-DhcpServerv4Scope

It is just plain old permission denied: you need to be an admin of the box you are running against, or at least a member of DHCP Administrators, I would think.

As far as setting the correct DNS servers in option 6 goes, you can use the same module to set them; I did it by hand, since there were just a handful of scopes.
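If you would rather script that fix too, the same module can write option 6 back, globally or per scope. A sketch with placeholder names and addresses:

```powershell
Import-Module DhcpServer

# Replace the DNS servers (option 6) on a single scope; server, scope and IPs are placeholders
Set-DhcpServerv4OptionValue -ComputerName dhcp1.contoso.com -ScopeId 192.168.1.0 `
    -DnsServer 10.0.0.10, 10.0.0.11

# Omit -ScopeId to change the server-wide (global) option instead
```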

Hope this helps someone out there with their DHCP Reporting.

Automate Replacing of Certificates in vCenter 5.1

A few days ago, VMware released a much-awaited tool called the SSL Certificate Automation Tool. It enables VMware administrators to automate the process of replacing expired or self-signed certificates on all components of the VMware vCenter management suite. As many of you know, this process, especially in the new 5.1 version, is a complete pain to implement: error prone, with so many steps to follow that you are bound to make a mistake. Compared to, say, vCenter 4.x, version 5.1 has more "standalone" components that need to interact with users or with each other to provide data and visualizations, and they must do so over secure connections.

The components I'm talking about are (the ones highlighted in orange are new in version 5.x vs 4.x):

  • InventoryService
  • SSO
  • vCenter
  • WebClient
  • LogBrowser
  • UpdateManager
  • Orchestrator

It might not look like much, but it's almost double the number of components, double the number of certificates, and close to double the number of interactions between the components themselves. To make it "worse", the workflow for replacing v5.x certificates is different from, and much more complicated than, the one for v4.x (I have this post where I automated the whole process for ESXi 4.x, extendable to vCenter 4.x). Here's a document describing how to do it manually. But if you value your time, read on; there's an easier way.

The relevant documentation for this automation process is available here from VMware. Essentially it is a two-step process:

Step 1: Generate your certificates using OpenSSL and your internal Windows enterprise root CA, as per this document: Generating certificates for use with the VMware SSL Certificate Automation Tool (2044696).

Step 2: Use the SSL Certificate Automation Tool to deploy the certificates, as described here.

Automate Certificate Generation for use with the VMware SSL Certificate Automation Tool

I should say this again: your main source of information should be VMware's KB article. The information here either echoes it or supplements it where needed. Things are also presented in a different order in my post, so that the whole process can be automated, unlike the KB, which assumes manual work.

Prerequisites

1. The account you will use in this step must:

  • Be a Local administrator on the computer where the script presented will run
  • Be able to enroll certificates for the Certificate Template that will be used from your PKI infrastructure.
  • Be a Local Administrator on the servers where the vCenter components are installed.

2. Any commands/scripts presented here should be run from an elevated prompt.

3. Name resolution must work correctly on the client where this script will run (all vCenter components must be resolvable via DNS).

4. You must use the OpenSSL version that VMware specifies, no newer, no older; that is OpenSSL 0.9.8 at the time of writing this post.

5. You need a certificate template configured according to VMware's specifications. That basically means duplicating the default Web Server certificate template of the Windows CA, with a few changes:

  • Go to Certificate Manager > Extensions > Key Usage > Allow encryption of user data
  • Also uncheck "allow private key to be exported", as the script will fail if the template allows it.

6. Your CA must have automatic approval activated, so that you can obtain the certificate from the command line.

7. Obtain and save the certificates of your root CA and any intermediate CAs, as described in the KB. As I understand the KB, you should name your root CA certificate root64.cer (x.509 format, Base64 encoded). Any other intermediate or issuing CA certificates should be named root64-#.cer (x.509 format, Base64 encoded), where # is a number starting from 1 up to as many intermediates as you have (root64-1 being the intermediate closest to the issued certificate, and root64-N the one closest to the root CA). Place all the certificates from the chain in a folder called "RootCA_chain".

Note: Naming the files this way is important. The KB does not do a good enough job of explaining how to build the .PEM file (either that, or I'm not doing a good enough job of understanding it), so make sure your CA chain certificates are numbered like this. If you don't do it this way, you will have to rework the piece of code that sorts the certificate files when building the .PEM file.

8. Make yourself a custom OpenSSL configuration file. The configuration file must include only these lines, no more, no less (it's a copy-paste from VMware's KB). Save this file as custom_openssl.cfg in the \bin directory where you installed OpenSSL.

[ req ]
default_bits = 2048
default_keyfile = rui.key
distinguished_name = req_distinguished_name
encrypt_key = no
prompt = no
string_mask = nombstr
req_extensions = v3_req

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = digitalSignature, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = DNS:ServerShortName, IP:ServerIPAddress, DNS:server.contoso.com #examples only

[ req_distinguished_name ] # change these settings for your environment
countryName = US
stateOrProvinceName = Change State
localityName = Change City
0.organizationName = Change Company Name
organizationalUnitName = ChangeMe
commonName = Changeserver.contoso.com

9. Build a .csv file like this example below:

Name,DomainName,OUName
CNTvcS,contoso.com,vCenterServer
CNTvcO,contoso.com,VMwareOrchestrator
CNTvcIS,contoso.com,vCenterInventoryService
CNTVUM,contoso.com,VMwareUpdateManager
CNTwc,contoso.com,vCenterWebClient
CNTwc,contoso.com,vCenterLogBrowser
CNTsso,contoso.com,vCenterSSO

As you can see, you need to specify a server name, domain name, and OU name. The OU name is the name of the vCenter component; its value will be written into the "organizationalUnitName" field of the OpenSSL configuration file.

10. Create a folder (the script uses a static c:\temp\certs entry) where you will store all your certificates as they are generated. To this folder copy the RootCA_chain folder created above.

Just like with cooking, you now have all the "ingredients" to generate your vCenter certificates.

The Script

You can download the script (Generate_vCenter_Certs_v5.1) from my blog. My previous post on VMware certificates has a little more background information that I won't repeat here.

Learning Points

Lines 16-50: The script takes a few parameters, and also does some error checking on them.

    • The location of the openSSL directory ($OpenSSLPath)
    • The CSV from step 9, with all the servers and components of vcenter ($vCenterHostsFile)
    • The name\display name combination for your issuing/root Certification authority ($CAMachineName_CAName).
    • The name of the template to be used when issuing the certificates ($TemplateName).

Lines 52-71: More variables are set, and the script checks whether you have the root CA files in the specified location (default: c:\temp\certs\RootCA_chain).

Also this is where you should go in the script and change the Country, Company, State, and City Name. These will replace the values in the custom_openssl.cfg file.

Lines 76-78: Create a folder based on the Name-OUName combination to store all files used in certificate generation (CSR, private key, certificate, copy of the root CA chain certificates).

Lines 81-94: Customize the fields from the custom_openSSL.cfg file to match the specific server-service combination, according to the CSV file.

Lines 96-114: We copy the template .cfg file to the working directory and replace each corresponding row with our values for name, city, state, etc. This is not the cleanest PowerShell string manipulation, but it gets the job done.
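To give an idea of what that substitution looks like, here is a simplified, self-contained sketch of the string replacement (the hashtable stands in for one parsed CSV row; a real run would also fill in the server's IP address):

```powershell
# One row from the CSV in step 9, as a hashtable for illustration
$strHost = @{ Name = 'CNTvcS'; DomainName = 'contoso.com'; OUName = 'vCenterServer' }
$fqdn = "$($strHost.Name).$($strHost.DomainName)"

# The lines of custom_openssl.cfg that vary per server-service pair
$cfg = @"
subjectAltName = DNS:ServerShortName, IP:ServerIPAddress, DNS:server.contoso.com
organizationalUnitName = ChangeMe
commonName = Changeserver.contoso.com
"@

# Substitute the per-row values; replace the longer placeholder first so that
# 'Changeserver.contoso.com' is not half-eaten by the 'server.contoso.com' rule
$cfg = $cfg -replace 'Changeserver\.contoso\.com', $fqdn `
            -replace 'server\.contoso\.com', $fqdn `
            -replace 'ServerShortName', $strHost.Name `
            -replace 'ChangeMe', $strHost.OUName
```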

Lines 116-131: This is where we generate the CSR to send to the CA. As per VMware's instructions this is a two-step process: first we generate the CSR, then we convert the private key to RSA format.

Lines 136-146: We use the Windows tool certreq to request and retrieve the certificate. This is where being a local admin in an elevated prompt comes into play, and where automatic issuance of certificates comes in handy.
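The certreq operation boils down to something like this (a sketch; the file names, CA config string, and template name are placeholders for the script's parameters):

```powershell
# Submit the CSR and, because the CA auto-approves (prerequisite 6), retrieve
# the issued certificate in the same call; run from an elevated prompt
certreq -submit -attrib "CertificateTemplate:VMware-SSL" `
    -config "CA01.contoso.com\Contoso-Issuing-CA" `
    rui.csr rui.crt
```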

Lines 151-159: We copy the individual certificates of the root/intermediate/issuing CAs to the working directory. Then we create a file called chain.pem, which will be the chained .PEM certificate file for the given server-component combination specified in the CSV.

Important!!! This is where reading the VMware KB wrong can cost you, and you won’t realize it until you run the SSL Automation Tool. I quote from the KB:

“Open the Root64.cer file in Notepad and paste the contents of the file into the chain.pem file right after the certificate section. Be sure that there is no whitespace in the file in between certificates.

Note: Complete this action for each intermediate certificate authority as well.”

This is where I initially made a mistake, and did it the wrong way.

  • I pasted the certificate in the .pem
  • I pasted the root after the certificate in the .pem
  • I pasted the intermediate/issuing CA certificates in the .pem after the root certificate

It is "do the same for each intermediate certificate" as in "sandwich your intermediate CA certificates between the root and the certificate, in descending order", THE EXACT OPPOSITE of how Windows displays the certificate chain in the GUI. This is why I asked you at the start to number your certificates in a specific way.

As a result, if you named your CA certificates like I explained above, this code snippet will arrange them in the proper order and chain them correctly in the .pem file.

$chain_pem = "$rootdir\$($strHost.Name)-$($strHost.OUName)\chain.pem"
gc $crt | add-content $chain_pem
$roots = dir "$rootdir\$($strHost.Name)-$($strHost.OUName)" root*.cer | sort name -desc
$roots | % {gc $_.FullName | add-content $chain_pem }
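To make the ordering concrete, here is what the finished chain.pem should end up looking like, assuming (purely as an illustration of the naming scheme) that root1.cer is the root CA and root3.cer the issuing CA, so the descending sort appends them issuing-first:

```
-----BEGIN CERTIFICATE-----    <- the server certificate ($crt)
...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----    <- root3.cer (issuing CA)
...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----    <- root2.cer (intermediate CA)
...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----    <- root1.cer (root CA)
...
-----END CERTIFICATE-----
```

No whitespace between the certificate blocks, exactly as the KB warns.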

That wraps it up. Needless to say, test this script before you let it loose on your environment. Since the script has a lot of “Read-Host” in it, it is designed to run with pauses, giving you a chance to review the output and cancel if there is a problem. After all, you are running commands against your Certificate Authority, so handle with care.

Once you have all the files you should follow step 2, and actually use the SSL automation tool.

Replace vCenter Certificates using the SSL Automation Tool

For reference here is VMware’s KB if you missed it above for using this tool. I don’t have much to say about this, other than stick to the document, but here are a few tips that will make your life easier:

  • If you have multiple servers running each service, it is best to copy ALL the folders generated by the script (for each server-service combination) to each of the individual servers; you can freely dispose of them once you have successfully replaced the certificates.
  • Before you go ahead and copy over the Automation Tool, take the time to modify the SSL_Environment.bat file, pointing each variable to its respective file/value. Then copy the SSL_Environment file to each server. This way, each time you run the tool to update a certificate it will always know where to pick up “inter-component-trust” certificates, user prompts and so on.
  • When you fill in the “set sso_admin_user=” variable, use the format “user@domain”; otherwise you will get an invalid-credentials error when you run some of the steps.

I know it is a long read, but it is not an easy topic and I hope it helped you in your environment. Let me know in the comments if there is a way to improve this or an easier way to do this whole process.

Managing DNS Aging and Scavenging settings using Powershell

Aging and scavenging of DNS records is a topic that is pretty well covered on the web. I’m not really looking to rehash all the information out there with this post. I will however put out some resources for whoever wants to do the reading:

  • This post has a good “primer” for DNS aging and scavenging and the steps for implementing it.
  • This post gives a real-life example of how unscavenged records impact authentication mechanisms in Windows.
  • This post explains how the configuration of aging and scavenging can be done, either via GUI or batch command line.

I’ll paint the bigger picture for the environment I’m working on right now, perhaps a good example of how typical Windows infrastructure services are set up in global corporations.

  • AD-integrated DNS zones replicate to all DCs in the forest, and the zones allow secure updates only.
  • Local DHCP services run in all locations of the infrastructure, so before enabling DNS aging + scavenging on all our DNS zones we need to standardise the lease time of the Windows client DHCP scopes to a single value (the other scopes we don’t care about: those clients are not domain joined and cannot add/update records in a zone that allows only secure updates). Link #2 gives us the correlation between DHCP lease time and DNS aging/scavenging of records.
  • We also have clients register their own DNS records, rather than the DHCP server doing it for them (this hasn’t come up for change until now).

What I am going to script about is what Josh Jones from link #1 above referred to as “the setup phase”. In this phase we are merely configuring the DNS zones to age DNS records according to our requirements. The guys over at cb5 do a fine job of explaining the various scenarios to change this via DNSCMD, via the wizard and all the “bugs” of the GUI wizards.

That may be fine for just a few zones, but when you have tens of DNS zones (most of them reverse DNS) the clicky business starts to sound less fun. Also working with DNSCMD might not be everyone’s cup of tea. Luckily I’m writing this in 2013, a few months after the release of Windows Server 2012 and the shiny new cmdlets it brings, and yes, there are DNS server ones.

So you will need a client running either Windows 8 + Windows Server 2012 RSAT or a Windows Server 2012 box (doesn’t need to be domain controller or DNS server, a member server is fine).

Get DNS Aging and Scavenging Settings

If (-not (Get-Module DNSServer -ErrorAction SilentlyContinue)) {
 Import-Module DNSServer
 }

#Report on Existing Server settings
$DnsServer = 'dc1.contoso.com'
$filename = "c:\temp\AD\$($DNSServer)_Before_AgScavConfig_$(get-date -Uformat "%Y%m%d-%H%M%S").csv"
$zones = Get-DnsServerZone -computername $DnsServer
$zones | %{ Get-DnsServerZoneAging -ComputerName $DnsServer -name $_.ZoneName} | Export-Csv -NoTypeInformation $filename

There’s nothing too fancy about this part. We get all the zones we need using Get-DnsServerZone, then pass each zone name to Get-DnsServerZoneAging. The output returns the following information:

  • ZoneName - name of the DNS zone
  • ScavengeServers - servers where this zone will be scavenged
  • AgingEnabled - flag indicating whether records are aged or not
  • AvailForScavengeTime - time when the zone becomes eligible for scavenging of stale records
  • NoRefreshInterval - interval during which the Timestamp attribute cannot be refreshed on a DNS record
  • RefreshInterval - interval during which the Timestamp attribute can be refreshed on a DNS record

If no one ever configured Scavenging on the servers, the output should be pretty much blank.

Configure Aging of DNS records for all zones

This snippet accomplishes this:

If (-not (Get-Module DNSServer -ErrorAction SilentlyContinue)) {
	Import-Module DNSServer
	}

#Set New values
$DnsServer = 'dc1.contoso.com'
$DNSIP = [System.Net.DNS]::GetHostAddresses($dnsServer).IPAddressToString
$NoRefresh = "3.00:00:00"
$Refresh = "5.00:00:00"
$zones = Get-DnsServerZone -computername $DnsServer | ? {$_.ZoneType -like 'Primary' -and $_.ZoneName -notlike 'TrustAnchors' -and $_.IsDsIntegrated -eq $true}
$zones | % { Set-DnsServerZoneAging -computerName $dnsServer -Name $_.ZoneName -Aging $true -NoRefreshInterval $NoRefresh -RefreshInterval $Refresh -ScavengeServers $DNSIP -passThru}

Learning Points

The $zones variable now contains a filtered list of zones: the primary zones, minus “TrustAnchors”, minus the zones that are not AD integrated (the 0.in-addr.arpa, 127.in-addr.arpa and 255.in-addr.arpa zones).

Why do we do this? Well, in our case we only run primary and stub zones, so that explains the “Primary” filter. The “TrustAnchors” zone we have no use for (more info on trust anchors here). Lastly, the filter removes the zones that are not AD integrated (we will never be able to get an IP from those zones, since they are either network addresses, loopback addresses or broadcast addresses).

Note: If you fail to filter out the “0, 127 and 255” zones, your last command will spit out an error like the one below. I looked the Win32 9611 error code up in the Win32 error code list and it means “Invalid Zone Type”. So filter them, OK?!

Set-DnsServerZoneAging : Failed to set property ScavengeServers for zone 255.in-addr.arpa on server dc1.contoso.com.

At line:1 char:14
+ $zones | % { Set-DnsServerZoneAging -computerName $dnsServer -Name $_.ZoneName - ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 + CategoryInfo : InvalidArgument: (ScavengeServers:root/Microsoft/...ServerZoneAging) [Set-DnsServerZoneAging], CimException
 + FullyQualifiedErrorId : WIN32 9611,Set-DnsServerZoneAging

You should also be careful that the cmdlet expects the Refresh/No-Refresh intervals in a specific format, and that the ScavengeServers parameter needs to be an IP address, not a hostname.
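As a quick illustration (the hostname is this article’s example DC; the values are just samples), the intervals use the .NET TimeSpan notation of days.hours:minutes:seconds, and a hostname can be resolved to the IPv4 address that -ScavengeServers expects like this:

```powershell
# TimeSpan format: days.hours:minutes:seconds
$NoRefresh = [TimeSpan]"3.00:00:00"   # 3 days
$Refresh   = [TimeSpan]"5.00:00:00"   # 5 days

# -ScavengeServers wants an IP address, not a name, so resolve it first
$DNSIP = [System.Net.Dns]::GetHostAddresses('dc1.contoso.com') |
    Where-Object { $_.AddressFamily -eq 'InterNetwork' } |
    Select-Object -ExpandProperty IPAddressToString
```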

The -PassThru switch displays some output on the console, as by default the cmdlet doesn’t generate output.

The last cmdlet (Set-DnsServerZoneAging) has very little documentation floating around the web, and I actually found references to non-existent parameters that got me all excited, something like a “SetAllZones”, but that parameter does not exist as of this writing (February 2013). So I had to use a foreach loop to configure each zone.

Wow, initially I wanted this to be a short post, but apparently it added up to something not so short. I hope it is useful and helps with your DNS aging configuration. If there are simpler or better ways to accomplish this, I would like to hear about them; just leave a note in the comments.

Active Directory Domain Controller Backups – Part 2

Time for part 2 of the “how to backup DCs” story. I’ll try to keep it more concise and just deliver the needed info.

In my previous post we established I was going for a backup to disk (another network share). I was also going to back up the system state of 2 DCs /domain, the list of GPOs and their links and the list of object DNs.

The process explained

I want to set up the backup in such a way that it is more automated, so I don’t have to worry about checking that all the bits and pieces are in place, and I also want to be able to update parts of the process without rebuilding everything. The process can be split into these parts:

1. Preparing accounts and permissions

2. Creating and delivering worker scripts (the scripts that actually do the job)

3. Setting up backup schedules (scheduled tasks run scripts from point 2, using credentials and resources setup at point 1)

Accounts and Permissions

You will need some accounts and groups setup so that you can safely transfer the backups from the DC to the backup share. The steps are outlined below:

  1. Create a Universal Security Group in one of your domains (the forest root domain, preferably); let’s call it “Global AD Backup Operators”. We will use this group below.
  2. Create a network share of your choice for a backup location, where only “Domain Controllers” and “Global AD Backup Operators” have read/write access (Security and Sharing tabs). Refer to my previous post for why this is important. You cannot use the domain’s “BUILTIN\Backup Operators” since that group is specific to DCs only.
  3. In the network share create a few subfolders, named DistinguishedNameBackup, GroupPolicyObjectBackup and WindowsImageBackup.
  4. Create an account in each domain that will run the backups. Make this account a member of “BUILTIN\Backup Operators” and of the “Domain\Global AD Backup Operators” group you created in step 1. The Backup Operators group is per domain, as you might know.
  5. Create a new GPO and link it to the “Domain Controllers” OU in each of your domains, or change your existing Default Domain Controllers Policy. In the policy you should include “BUILTIN\Backup Operators” in the list of accounts for “Log on as a batch job”.

Creating The Backup Scripts

Backing up the DC System State

If you don’t have the Windows Backup feature installed the snippet below will do that for you:

Import-Module ServerManager

if (!(Get-windowsFeature -Name Backup).Installed) { add-windowsfeature Backup}

Now for the backup itself you just run wbadmin wrapped up in some powershell code like below:

$TargetUNC="\\server.fqdn.com\ForestBKP$"

$WBadmin_cmd = "wbadmin.exe START BACKUP -backupTarget:$TargetUNC -allCritical -include:c: -noVerify -vssFull -quiet"
Invoke-expression $WBadmin_cmd

I used -allCritical instead of -systemState to include everything necessary for a bare-metal recovery; other than that, nothing major to write home about. More info here.
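Since wbadmin reports success or failure through its exit code, a small hedged addition (the log path is illustrative, and $WBadmin_cmd comes from the snippet above) lets an unattended run record failures instead of silently moving on:

```powershell
Invoke-Expression $WBadmin_cmd
If ($LASTEXITCODE -ne 0) {
    # wbadmin returns non-zero on failure; keep a trace for later review
    "$(Get-Date -Format s) wbadmin failed, exit code $LASTEXITCODE" |
        Add-Content "C:\Backup\dc-backup-errors.log"
}
```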

Backing up Group Policy Objects and Links

Next step is to configure the backup of the GPO objects and the gP-links. gP-links must be backed up separately since they are not stored in the GPO object, but in the AD database, on each object where the GPO is linked. This gets even more convoluted when you link GPOs in a different domain than the one they were created in. There are multiple ways to back up the GPOs:

-Using GPMC sample scripts, there is a script for backing up GPOs

-Using powershell module grouppolicy, running on windows 2008 R2 – I chose this one.

I also wanted to handle GPO backup history at script level, so the script Backup-GPOs.ps1, attached to this post, contains “delete older than x days” logic to handle accumulating backups. The command to back up a GPO looks like this:

backup-GPO -all  -domain $domain -path $path

The options are pretty self-explanatory, I suppose. The command to get all gpLink objects looks like this:

$gpl += get-adobject -filter '(objectCategory -eq "organizationalunit" -or distinguishedname -eq $domdn) -and gplink -like "[ldap://cn=*"' -searchbase $domdn -searchscope subtree -properties gplink,distinguishedname,gpoptions -server $env:ComputerName

Now, I've read some people's experiences online and it seems that using the wildcard for the Backup-GPO command has inconsistent results, meaning past a certain number of GPOs backed up, the cmdlet stops working properly. The solution is to grab all GPOs and back them up in a for-each loop. This ties in pretty well with the fact that we need to map the GPO name to the gP-link information, so the core piece of the GPO backup script looks like this (most of the code is reused from here: http://gallery.technet.microsoft.com/scriptcenter/Backup-Group-Policy-with-a24f1a1b):

import-module grouppolicy
import-module activedirectory

#build a list of GPOs in current domains
$domobj = get-addomain
$dom = $domobj.dnsroot
$domdn = $domobj.distinguishedname
$gpocol += get-gpo -all -domain $dom

$gpl = $null

#build a list of gplink objects across the enterprise
$domains = get-adforest | select -ExpandProperty domains
$domains | % {
$domobj = get-addomain $_
$domdn = $domobj.distinguishedname
$gpl += get-adobject -filter '(objectCategory -eq "organizationalunit" -or distinguishedname -eq $domdn) -and gplink -like "[ldap://cn=*"' -searchbase $domdn -searchscope subtree -properties gplink,distinguishedname,gpoptions -server $domobj.PDCEmulator
}

#backup GPOs, map GPOs to target DNs
$section = "backup"
foreach ($gpo in $gpocol) {

$name = $gpo.displayname
new-item $curpath\$name -type directory -erroraction:silentlycontinue | out-null
$id = $gpo.id
$configdn = (get-adrootdse).configurationNamingContext
backup-gpo -guid $id -domain $dom -path $curpath\$name | tee-object -variable msg
write-log
get-gporeport -guid $id -domain $dom -reporttype html -path $curpath\$name\$name.html
$gpl | % {if ($_.gplink -match $id) {$_.distinguishedname + "#" +  $_.gpoptions + "#" + $_.gplink} } | out-file -filepath $curpath\$name\gplinks-$id.txt -append

}

Just a little note here: the script is designed to get the gP-links outside of the current domain of the account the script is running under. What differs from Frank Czepat’s script is that I added a lookup to the $gpl variable and pointed the get-adobject command at a specific DC (leaving it at the default would result in errors).

Backing up DistinguishedNames List

This is fairly easy and straightforward. While I could do this using powershell, I decided to go for the old and trusted dsquery, as it is faster than the equivalent powershell code. Here we also have to deal with accumulating backups, as I built the script to output a timestamped file. The command that actually does the backup is this one:

$DomainDNsFile = "DomainDNs_$(get-date -Uformat "%Y%m%d-%H%M%S").txt"
$FilePath = "$curpath\$DomainDNsFile"
$DomainDNList_cmd = "dsquery * domainroot -scope subtree -attr modifytimestamp distinguishedname -limit 0 > $FilePath"
Invoke-expression $DomainDNList_cmd
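The “delete older than x days” logic for these timestamped files can be sketched roughly like this ($curpath comes from the script; the 30-day window is just an example value):

```powershell
# prune DN backup files older than the retention window
$MaxAgeDays = 30
Get-ChildItem "$curpath\DomainDNs_*.txt" |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-$MaxAgeDays) } |
    Remove-Item -Force
```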

I built these backup scripts as 3 individual files; you can download them from here: Backup-Scripts.

That’s about it for how the backup is done. I guess more than half of these items are fairly trivial to set up; the only tricky part is grabbing the gP-links and creating a mapping between the GPOs and the DNs in the gP-links.

Next post in the series will discuss how to deliver and schedule these scripts on your domain controllers.

Discover Missing Subnets in Active Directory

The past days I stumbled upon the “regular” Event ID 5807: “During the last xx hours there have been <<lots and lots>> of logons … from computers that are not defined in Active Directory Sites”. This is not such a big deal; not that it’s something you should ignore, but usually there are other things to worry about than some IPs connecting to your DCs without being included in an AD site. Most of the time there are “operational” reasons behind this (someone set up a new location in your company and didn’t think to include you in the email chains, so you could adjust your configuration). But this time I wanted to nip this in the bud, since I’m pretty sure no one else had bothered with it until now. Again, I didn’t reinvent the wheel, but I did manage to improve on some of the resources I found and come up with a more scalable and convenient solution, so you could say “I made the wheel get more traction than before” :).

The Problem

The Event Viewer event I’m talking about is described here. A short snippet:

“During the past %1 hours there have been %2 connections to this Domain Controller from client machines whose IP addresses don’t map to any of the existing sites in the enterprise. Those clients, therefore, have undefined sites and may connect to any Domain Controller including those that are in far distant locations from the clients”

The solution to this is to map the client IPs to subnets in AD. To do this, you need to build a report of all unmapped client IPs, on all domain controllers, from all domains in the forest. This information, like the event says, is stored mainly in the file “%systemroot%\debug\netlogon.log“. The output of this file looks like this:

05/03 08:37:06 Contoso: NO_CLIENT_SITE: Marry-PC 172.17.17.3
12/15 13:18:23 Contoso: NO_CLIENT_SITE: Bob-PC 172.172.2.240

What others have tried

From the way the file looks, you can see this is something we can convert to CSV using powershell and then process in Excel or a database. This is exactly what this person here did. His script works in the current domain, but starts hitting a wall when handling many DCs and big files, because the variables that store the data keep getting bigger and bigger. There is a workaround for this in the comments section, but I wanted the whole data set to look around at my leisure, and I didn’t think I had time to wait for the script to finish.

Also, the environment I’m working on is spread across 6 continents, increasing the chances my script would take forever, and I really do want to get home in time for dinner. Just for the record, at the time of writing there were 52 domain controllers, and the total combined size of the log files was over 180 MB. Assuming each row is about 70 bytes, that means over 2.5 million entries. I’m really hoping that after I fix this, these numbers will go down significantly.

I also wanted to add some more information to the report, separating the IP address (A.B.C.D) into octet strings, so I could more easily report on the data in Excel. Granted, there are some ways to split the IP in Excel, but hey, if Excel can do it, so can my powershell script.

My Solution

I took a different approach than Jean Louw did on his blog. My approach was this:

  1. Get all Global Catalogs in the forest, using the one liner in my quick info article.
  2. Copy all netlogon files to a local network share. From here I could unleash powershell onto the “unsuspecting log files”.
  3. Go through each file and import it into a variable as CSV. From each variable get the unique values and add them to a reporting variable.
  4. Add some regex code to find the 1st, 1st-2nd and 1st-3rd octets in the IP string and add them to the report.
  5. Finally, make sure the final report only has unique IPs in it by filtering the reporting variable, then exporting it to CSV.
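Steps 3-5 above can be sketched like this (paths are illustrative, and the field layout follows the netlogon.log sample earlier; treat this as a rough outline rather than the finished script):

```powershell
$LogDir = 'C:\temp\netlogon'
$report = @()
Get-ChildItem "$LogDir\*.log" | ForEach-Object {
    # netlogon.log rows: date time domain: NO_CLIENT_SITE: host IP
    $rows = Import-Csv $_.FullName -Delimiter ' ' `
        -Header Date,Time,Domain,Type,Host,IP
    $report += $rows | Where-Object { $_.Type -eq 'NO_CLIENT_SITE:' } |
        Select-Object Host,IP
}
# keep only unique IPs in the final report
$report | Sort-Object IP -Unique |
    Export-Csv "$LogDir\UnmappedClients.csv" -NoTypeInformation
```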

Then I put this all together in a script and added some basic error checking; the result you can download here: Report_DebugNetlogon.

Learning Points

Finding the network address for /8, /16 and /24 IPs is done with regular expressions, using the code below. I used the code for detecting an IPv4 address from here; it can be found in other places on the web, but I stuck with this one:

#Extract Entire IP v4 address (A.B.C.D)
Function ExtractValidIPAddress($String){
$IPregex='(?<Address>((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))'
If ($String -Match $IPregex) {$Matches.Address}
} 

For detecting the first/first 2/first 3 octets of an IP address you just adjust the {3} quantifier in $IPregex: {2} gives the first 3 octets, {1} the first 2 octets and {0} the first octet, or just shorten the regex as below:

#Extract 1st Octet of IP v4 address (A)
Function Extract1IPOctet($String){
$IPregex='(?<Address>((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))'
If ($String -Match $IPregex) {$Matches.Address}
}

#Extract 1st and 2nd Octet of IP v4 address (A.B)
Function Extract2IPOctet($String){
$IPregex='(?<Address>((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){1}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))'
If ($String -Match $IPregex) {$Matches.Address}
}

#Extract 1st, 2nd and 3rd Octet of IP v4 address (A.B.C)
Function Extract3IPOctet($String){
$IPregex='(?<Address>((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){2}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))'
If ($String -Match $IPregex) {$Matches.Address}
}

This approach needs some careful consideration: I first used the ExtractValidIPAddress function, and applied the ExtractxIPOctet functions to that value, since running them on the initial string would give incorrect results (more than one “first octet”, for example).
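For example, with the functions above (the log line is taken from the sample earlier), extract the full address first and only then take the prefixes from that clean value:

```powershell
$line = '05/03 08:37:06 Contoso: NO_CLIENT_SITE: Marry-PC 172.17.17.3'
$IP   = ExtractValidIPAddress $line   # 172.17.17.3
$Oct1 = Extract1IPOctet $IP           # 172
$Oct2 = Extract2IPOctet $IP           # 172.17
$Oct3 = Extract3IPOctet $IP           # 172.17.17
```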

As far as speed is concerned, processing those 2.5 million entries took 32 minutes: 14 minutes spent copying all files to a central location, and 18 minutes going through all the files finding unique IPs.

A final word of advice: for networks defined as A.0.C.D or A.B.0.D there is a “bug” when opening the CSV straight in Excel, in that it considers the column to be a number and the .0 is omitted. To get around this, import the data using Excel’s data import and specify the octet fields as Text fields.

Now your next step should be building a pivot table with all the network prefixes, sending it over to your friendly network admin, and working together to find out what each subnet does, then adding them to the Active Directory subnet list.

Get Basic Information on Active Directory Domain Controllers

Lately I found myself doing a lot of work around AD, since I’m responsible for migrating the forest to 2008 R2 Functional Level. As you may already know, in order to raise forest Functional Level you have to raise the Functional Level of all child domains. To be able to do this, each DC in child domains must run on Windows 2008 R2 or later. To get started you need a list of all systems in the AD infrastructure, and a list of those that need replacing their OS. If your infrastructure is like mine, you have lots of DCs, most of them setup before your time with the company, and lots of them lacking documentation. Also lots of them probably run on antiquated hardware, some of which probably will not support Windows 2008 R2. The most stringent requirement, in my book, for installing Windows 2008 R2 is that the CPU must support 64bit, since Windows 2008 R2 only comes in 64bit flavor.

When I first started inventorying our DCs, I made a list of the basic things that interested me for transitioning to 2008 R2 FL (Functional Level):

  • HostName, Operating System and Domain
  • Site and IPAddress
  • FSMO roles Installed
  • Hardware and Model
  • CPU x64 Readiness and Memory size

The first 3 above are low-hanging fruit; you can extract them using a modified one-liner from my “Quick Tip #1” article.

Hardware, model and memory size are also not so difficult; you can query WMI on each server for this.

The most challenging part is finding out if the CPU supports 64-bit instructions. The first place you will probably think to look is the environment variables within Windows (type “echo %processor_architecture%” in a cmd prompt to see the output; anything not x86 is 64-bit). You are out of luck, because what that variable actually stores is the capability of the operating system, and unless you are running a 64-bit OS on 64-bit hardware (in which case you don’t need this script in the first place) that is of no use. Then I thought: “Hey, there must be a way to find this out via Powershell/WMI” … indeed you can find out some information about the CPU data width (wmic cpu get datawidth) … however the data is inaccurate; it also refers to the operating system. You can crosscheck your results with a tool from the overclocking world (CPU-Z): you will see it shows the CPU can do 64-bit instructions while WMI says it can’t (because the OS is 32-bit).

Finally my quest brought me to a tool written by this gentleman; the tool is called chkcpu32. It was created a long time ago, but I see it is actively maintained; the last update was September 2012. This tool actually queries the CPU for this information rather than WMI. The latest version added XML output, a real treat for us powershell scripters, as now we don’t have to do text parsing. Here’s a sample non-XML output from one of my systems:

C:\>chkcpu32 /v

CPU Identification utility v2.10                 (c) 1997-2012 Jan Steunebrink
──────────────────────────────────────────────────────────────────────────────
CPU Vendor and Model: Intel Core i7 Quad M i7-2600QM/2700QM series D2-step
Internal CPU speed  : 2195.0 MHz
System CPU count    : 1 Physical CPU(s), 4 Core(s) per CPU, 8 Thread(s)
CPU-ID Vendor string: GenuineIntel
CPU-ID Name string  : Intel(R) Core(TM) i7-2720QM CPU @ 2.20GHz
CPU-ID Signature    : 0206A7
CPU Features        : Floating-Point Unit on chip  : Yes
Time Stamp Counter           : Yes
Enhanced SpeedStep Technology: Yes
Hyper-Threading Technology   : Yes
Execute Disable protection   : Yes
64-bit support               : Yes
Virtualization Technology    : Yes
Instr set extensions: MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2
Size of L1 cache    : 4 x 64 KB
Integrated L2 cache : 4 x 256 KB
Integrated L3 cache : 6144 KB

I bundled all these information-gathering bits and pieces into this script, and below you will find short learning points from some of the key parts.

Learning Points

First of all, the script assumes that you are running under enterprise admin credentials and that all your DCs are GCs; if you don’t have this setup, you will have to come up with another way to list all your domain controllers.

I find that nowadays it is more of a headache to not have all DCs as GCs than to just make sure they all are. By default, dcpromo in Windows 2008 R2 will make a DC a GC and a DNS server.

My previous post, on how to get all domain controllers, lists the one-liner to get basic information about DCs (hostname, domain name, sitename, IP, FSMO roles). The only real challenge here is handling the formatting of the “Roles” property; I used a switch statement to loop through all the FSMO roles a DC might hold.

foreach ($role in $($dc.roles)) {
Switch ($role) {
"PdcRole" { $row.PdcRole = "TRUE"}
"RidRole"  {$row.RidRole = "TRUE"}
"InfrastructureRole"  {$row.InfrastructureRole = "TRUE"}
"NamingRole"  {$row.NamingRole = "TRUE"}
"SchemaRole"  {$row.SchemaRole = "TRUE"}
}
}

As far as getting the CPU 64-bit support goes, this is done with chkcpu32 in the lines below. You will also need psexec from the Sysinternals toolkit (at least v1.98 of psexec), and you should run it at least once beforehand to get rid of the EULA accept pop-up.

Set-alias pshell-psexec "c:\windows\system32\psexec.exe"
& pshell-psexec \\$dc -n 25 -c -f -h CHKCPU32.exe /v /X > "$($dc)_x64status.log"

If (([xml](Get-Content "$($dc)_x64status.log")).chkcpu32._64bit -eq '1') {
$row.CPUx64Ready = 'True'
}
Else {
$row.CPUx64Ready = 'False'
} 

The rest of the code in the script is just putting all of this together in a nicely formatted CSV file.

This is kind of all of it, nothing too difficult once you find the right tools and use them properly. Any comments or feedback are most welcome!

Automate vSphere Certificate Generation

A couple of weeks ago I was working on an internal audit, and I discovered we had some vSphere servers running with self-generated certificates. While these were un-managed servers (free-license ESXi servers), they still needed certificates; as is the case with such servers, they are “critical”, just not critical enough to warrant licenses :).

The “problem” with vSphere certificates is that they have to be generated using OpenSSL; you cannot generate them using Windows tools like certreq, which could have made this process much easier. There is also an issue with using the request files produced by OpenSSL: they have no template information written in them, and the Windows CA cannot generate a certificate if it does not know which kind of certificate you want.

I trawled the internet for ways to automate this, and I didn’t find an end-to-end solution for certificate generation, only bits and pieces, with people describing how to do each certificate one by one. This didn’t sit well with me, and looking at the workflows I saw no reason not to have a script that does “it” automatically. I will define what “it” is with a short description of the steps required for generating a vSphere certificate:

  1. Generate CSR file and key file using OpenSSL
  2. Submit CSR file to certification authority
  3. Retrieve response from certification authority
  4. Rename the certificate file and key file and upload them to the vSphere host

Some notes regarding the setup in which this would work:

  • I used Powershell to automate this, so this won’t work on other platforms.
  • I used a Windows 2008 R2 PKI CA with a “Web Server” Template.
  • The CA also had automatic approval for this type of certificate (which made automating the response retrieval easier)
  • The user running this script needs the right to request/issue the given certificate template, and should be local admin on the box running the script; otherwise you would have to modify the script to run some of the commands with “runas”.

The script

I used a preexisting script to get started, the one for certificate mass generation from valcolabs.com, found here.

What differs from the way they did it is that I’ve changed the way variables are passed for building the “config file”, and each CSR now has its own config file, specified on the command line. This will help you track your work better for troubleshooting purposes. Something that should be noted is that their script, and also mine, uses a special openssl config file, in the sense that the lines to be modified by the script are addressed by line number, not searched for in the file, so beware of making changes to the “custom_openssl.cfg” file. It would probably have been more elegant to search for the lines in the file, but I didn’t want to spend time getting that to work.

The download link for the script I built is this one; Generate-vSphere-Cert, below you will find some explanations on how it works.

Learning points

The script takes some parameters as input (get some of them wrong and the script might not work as intended, or quit):

a) vSphereHostFile – is a CSV file that must contain the host name and domain name in 2 separate columns.

b) CAMachineName_CAName is the name of your CA in the format (hostname\display name)

c) TemplateName is the name of the certificate template you want to use for certificate generation, as defined on your CA

Lines 32 – 44 – change the variables there to match your requirements (different paths, location, country, email, company, etc.). There is room for improvement here: you could include this info in the CSV file, which would be useful for creating certificates for multiple companies with different contact information.
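As a rough sketch, that variable block looks something like the following – the names and values here are illustrative, not necessarily the ones used in the actual script:

```powershell
# Illustrative only - match these to the variable names actually used in lines 32-44
$openssldir = "C:\OpenSSL-Win32\bin"   # folder containing openssl.exe
$outputroot = "C:\Certs"               # root folder for per-host output
$country    = "US"
$state      = "CA"
$locality   = "Palo Alto"
$company    = "Example Corp"
$orgunit    = "IT"
$email      = "admin@example.com"
```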

Lines 49 – 73 – build out a folder structure, one folder per host, where all of that host’s files will be stored. Also build the CN and SANs (Subject Alternative Names) – you may wish to customize what you add here. I added the short name and FQDN; I left out the IP address, as that can change more easily than the name.
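The resulting subjectAltName entry written into the per-host config file would look roughly like this (standard openssl config syntax; the exact section layout depends on custom_openssl.cfg):

```
subjectAltName = DNS:esx01, DNS:esx01.corp.example.com
```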

Lines 80-97 – build a temporary file from the original openssl config file, filling in the parameters we have set up until now. This piece of code addresses lines by number, so if you make changes to the original file, change the line numbers here too.
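The line-number-replacement approach amounts to something like this – a minimal sketch, not the script’s exact code, and the array indexes are placeholders that must match your own custom_openssl.cfg:

```powershell
# Read the template config, overwrite specific (0-based) lines, write a per-host copy.
# The indexes below are assumptions - they must match the layout of custom_openssl.cfg.
$cfgLines = Get-Content "custom_openssl.cfg"
$cfgLines[5]  = "commonName = $fqdn"                          # assumed index for CN
$cfgLines[12] = "subjectAltName = DNS:$shortname, DNS:$fqdn"  # assumed index for SANs
$cfgLines | Set-Content (Join-Path $hostFolder "openssl.cfg")
```

This is exactly why edits to the template file are risky: shift a line and the wrong setting gets overwritten silently.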

Lines 99-104 – build out the file paths and generate a CSR with openssl. The command I used is slightly different from the ones on the internet; I needed a specific RSA key length, so I used:

"$openssldir\openssl.exe req -newkey rsa:2048 -out $csr -keyout $key -config $config"

Lines 109-114 – build paths for the files to send to/receive from the Windows CA. I also used something “unusual” (as in, not your first-page results on a Google search), which is specifying the CA name and the template name.

The CA name is needed so you do not get a prompt each time certreq is invoked.

The certificate template is specified using the attrib parameter – the missing piece of my “how to automate” CSR submission – see below:

$ConfigString = """$CAMachineName_CAName"""
$attrib = "CertificateTemplate:$TemplateName"
$issuecerts_cmd = "certreq -submit -attrib $attrib -config $ConfigString $csr $crt $p7b $rsp "

Lines 117-122 – unless you are using this script to automate creation of vCenter certificates, you can comment these lines out. They generate a PFX certificate, which is required for vCenter; PFX certificates are not required for vSphere host certificates.
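For reference, those lines presumably wrap an openssl pkcs12 export along these lines – a sketch only; the file variable names and the password handling are my assumptions:

```powershell
# Bundle the issued certificate and its private key into a PFX for vCenter
# (password is illustrative - use whatever your vCenter version expects)
$pfx = Join-Path $hostFolder "rui.pfx"
$pfx_cmd = "$openssldir\openssl.exe pkcs12 -export -in $crt -inkey $key -out $pfx -passout pass:ChangeMe"
Invoke-Expression $pfx_cmd
```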

The next step in the automation would be to upload these files to your vSphere host. I used this script here and changed some paths to suit my folder structure. You can also use SCP or other methods to upload the files. After the files are uploaded you need to reboot the host for the certificates to take effect.
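The rename in step 4 at the top can be sketched like this – ESXi expects the host certificate and key as rui.crt and rui.key in /etc/vmware/ssl, so the generated files get renamed before the upload (the $crt/$key variable names are assumed from the earlier steps):

```powershell
# Rename the generated files to the names ESXi expects before uploading
Copy-Item $crt (Join-Path $hostFolder "rui.crt")
Copy-Item $key (Join-Path $hostFolder "rui.key")
# Upload rui.crt / rui.key to /etc/vmware/ssl on the host (e.g. via SCP), then reboot
```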

As always with these scripts, do your best to try them in a test environment before unleashing them into production. You are dealing with Certification Authorities and your vSphere hosts. Failure to upload a correct certificate to a host will result in you not being able to connect with the vSphere Client, and having to go to the console (NOT SSH) to regenerate a self-signed certificate.

I hope this was a useful read, comments and critique are open, as always.