Delete Leaf Objects from Active Directory User Object

The Story

A few days ago a colleague of mine came to me with a user migration problem. He wanted to migrate a user between two child domains in an AD forest. For this you would most of the time use Microsoft’s ADMT (Active Directory Migration Tool). He went through the whole migration wizard, only to have the migration fail with an error message like this:

2014-03-17 12:47:27 ERR2:7422 Failed to move source object ‘CN=Joey’. hr=0x8007208c The operation cannot be performed because child objects exist. This operation can only be performed on a leaf object.

That was strange; I was expecting the user object to be a leaf object itself, not to contain leaf objects. Then I remembered we are in 2014: we also use Microsoft Exchange, with ActiveSync on mobile devices. In case you didn’t know, when you configure ActiveSync on your phone, a special object is created under your user object in Active Directory. This object is of type “msExchActiveSyncDevices” and lists each of the mobile phones where you have configured ActiveSync. I used adsiedit.msc to confirm that the leaf objects were indeed these msExchActiveSyncDevices objects.

So that explains what the leaf objects were and how they got there. Since the user is being migrated across domains, it really doesn’t matter whether those leaf ActiveSync objects are there or not, because after USMT users have to reconfigure their devices anyway, so they are “safe to delete”. To fix this you can either use ADSIEDIT to locate the leaf objects and delete them, use the Exchange Shell to delete the ActiveSync devices, or use PowerShell to delete them from the user object just like you would with ADSIEDIT, which is what I want to share now.
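For the Exchange Shell route, something along these lines should work. This is a sketch, not the script from this post; it assumes the Exchange 2010 SP1 Management Shell, and the mailbox identity is a placeholder:

```powershell
# Sketch: remove a user's ActiveSync device objects via the Exchange
# Management Shell (Exchange 2010 SP1 cmdlets; Exchange 2013+ uses
# Get-MobileDevice / Remove-MobileDevice instead).
Get-ActiveSyncDevice -Mailbox "childdomain\joey" |
    Remove-ActiveSyncDevice -Confirm:$false
```

This achieves the same end result as deleting the msExchActiveSyncDevices objects by hand in ADSIEDIT.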

The Script

I built this script on a Windows 8.1 computer with PowerShell 4.0 and the RSAT tools for Windows 2012 R2 installed. The AD environment is at Windows 2008 R2 FFL, with only Windows 2008 R2 DCs running in it. I didn’t check whether this runs on anything less than this configuration, so do report back if it does not work on older combinations of RSAT + OS + AD, though I’m pretty sure you need at least one 2008 R2 DC in the user’s source domain, otherwise the PowerShell cmdlets won’t work. You can download this script from the link: Delete-AS-ChildItems.

The script takes only one parameter: the SamAccountName of the user with leaf objects. From there on it will show you the leaf objects it wants to delete, and then actually delete them, if not canceled.

Learning Points

I’ve made the script “autonomous”, in the sense that it will automatically discover the closest DC running AD Web Services and query it for the SamAccountName. This snippet accomplishes that.

$LocalSite = (Get-ADDomainController -Discover).Site
$NewTargetGC = Get-ADDomainController -Discover -Service ADWS -SiteName $LocalSite
IF (!$NewTargetGC)
    { $NewTargetGC = Get-ADDomainController -Discover -Service ADWS -NextClosestSite }
$NewTargetGCHostName = $NewTargetGC.HostName
$LocalGC = "$NewTargetGCHostName" + ":3268"

Once we have this information, we query the GC for the SamAccountName and the domain information. We need the domain in order to also discover the closest DC for that domain and get the list of leaf objects (lines 14-26 of the script). You will want to do this for two reasons: first, the GC partition doesn’t contain all the information you want (the child object information), and second, you can’t write to the GC partition, so you have to find the closest respective DC anyway.
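As a rough sketch of that lookup (hedged: the variable names and exact calls are illustrative, not necessarily those in the downloadable script), it could look like this:

```powershell
# Illustrative sketch of the GC query and per-domain DC discovery
# described above; names here are mine, not the script's.
$UserObj = Get-ADUser -Identity $SamAccountName -Server $LocalGC `
    -Properties DistinguishedName,CanonicalName
# The first element of the canonical name is the user's home domain
$UserDomain = ($UserObj.CanonicalName -split '/')[0]
# Find a writable DC running ADWS in that domain
$UserobjDC = (Get-ADDomainController -Discover -DomainName $UserDomain `
    -Service ADWS -Writable).HostName
```

The writable-DC requirement matters because the deletion in the next step has to happen against a DC in the user’s own domain, not against the GC.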

The “trick” with this script is to use Get-ADObject with a search base of the user’s DN, against a DC in the user’s domain, looking for the msExchActiveSyncDevices object type, like below:

$UserOjbActSync = Get-ADObject -SearchBase $UserObj.DistinguishedName -filter { ObjectClass -like 'msExchActiveSyncDevices'} -Server $UserobjDC -ErrorAction:SilentlyContinue

Now, to actually fix the problem, we run this command, which deletes all the child items and whatever may be inside them.

Remove-ADObject $UserOjbActSync -Server $UserobjDC -Recursive:$true -Confirm:$false -Verbose -Debug
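To double-check before handing the user back to ADMT, you can simply re-run the same query; an empty result means the user object is a leaf again (a quick sketch reusing the variables from above):

```powershell
# Re-query for leftover child objects; an empty result means the user
# object is now a leaf and the ADMT move should succeed.
$LeftOvers = Get-ADObject -SearchBase $UserObj.DistinguishedName `
    -Filter { ObjectClass -like 'msExchActiveSyncDevices' } -Server $UserobjDC
If (-not $LeftOvers) { Write-Host "No leaf objects remain under $($UserObj.DistinguishedName)" }
```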

That about wraps this up. Just wait for replication to occur in your domain and you should be good to finish the migration. As always, use with caution, this does delete things. If you found this useful, share it around 🙂

Quick Tip: Update Resource Records in Microsoft DNS using Powershell

One of the great things I like about the (not so) new Windows 2008 R2 PowerShell modules is that we can now more easily manage the core Microsoft networking services (DNS, DHCP). I want to share a little script I built that will add/update host records fed from a CSV file.

The Script

In the past, automating this kind of thing was possible using a combination of WMI, VBS/PowerShell and/or batch scripting with the famous DNSCMD. My script will not work against just any DNS server: you need to be running Windows 2008 or later DNS; running it against Windows 2003 DNS servers will yield strange/wrong results.

#sample csv file
#DNSName,IP,<other fields not used>,,<other values, not used>

param(
 [Parameter(Mandatory=$false)][System.String]$ResourceRecordFile = "C:\Temp\somefile.txt",
 [Parameter(Mandatory=$false)][System.String]$dnsserver = ""
)
import-module DNSServer

Write-Warning "This script updates DNS resource records in DNS based on information in a CSV file. Details are:`n
Using file $ResourceRecordFile as source file.`nMaking changes on DNS:$dnsserver`n
If you wish to cancel Press Ctrl+C,otherwise press Enter`n"
Read-Host

$HostRecordList = Import-csv $ResourceRecordFile

foreach ($dnshost in $HostRecordList) {
 $RR = $dnshost.DNSName.split(".")[0]
 $Zone = $dnshost.DNSName.Remove(0,$RR.length+1)
 [System.Net.IPAddress]$NewIP = [System.Net.IPAddress]($dnshost.IP)
 $OldObj = Get-DnsServerResourceRecord -Name $RR -ZoneName $Zone -RRType "A" -ComputerName $dnsserver -ErrorAction SilentlyContinue
 If ($OldObj -eq $null) {
  write-host -ForegroundColor Yellow "Object does not exist in DNS, creating entry now"
  Add-DnsServerResourceRecord -Name $RR -ZoneName $Zone -A -CreatePtr:$true -ComputerName $dnsserver -IPv4Address $NewIP
 }
 Else {
  $NewObj = Get-DnsServerResourceRecord -Name $RR -ZoneName $Zone -RRType "A" -ComputerName $dnsserver
  $NewObj.RecordData.Ipv4Address = $NewIP
  If ($NewObj -ne $OldObj) {
   write-host -ForegroundColor Yellow "Object to write different, making change in DNS"
   Set-DnsServerResourceRecord -NewInputObject $NewObj -OldInputObject $OldObj -ZoneName $Zone -ComputerName $dnsserver
  }
 }
 $OldObj = $null
 $NewObj = $null
}

Learning Points

Running this script requires the Windows 2008 R2 RSAT tools installed. As you can see, all the script needs is a CSV file with two columns, “DNSName” and “IP”, containing the FQDN and the IP address, plus the name of the DNS server you want to connect to and make the changes on.

Lines 17-18: This is where we extract the short DNS name and the DNS zone name from the FQDN. We also convert the IP address to the format required for entry into DNS:

$RR = $dnshost.DNSName.split(".")[0]
$Zone = $dnshost.DNSName.Remove(0,$RR.length+1)
[System.Net.IPAddress]$NewIP = [System.Net.IPAddress]($dnshost.IP)

Lines 19-21: Here we try to resolve the DNS record, since perhaps it already exists. We will use this information in the next lines…

$OldObj = Get-DnsServerResourceRecord -Name $RR -ZoneName $Zone -RRType "A" -ComputerName $dnsserver -ErrorAction SilentlyContinue

Line 23: To create a new host record (“A” type record). The command is pretty straightforward:

Add-DnsServerResourceRecord -Name $RR -ZoneName $Zone -A -CreatePtr:$true -ComputerName $dnsserver -IPv4Address $NewIP

Lines 27-31: To update an existing A record. Note that there is a difference in how Set-DnsServerResourceRecord works compared to the Add command. This one requires that we get the record, modify the IPv4Address field, then use it to replace the old object.

$NewObj = Get-DnsServerResourceRecord -Name $RR -ZoneName $Zone -RRType "A" -ComputerName $dnsserver
$NewObj.RecordData.Ipv4Address = $NewIP
If ($NewObj -ne $OldObj) {
 write-host -ForegroundColor Yellow "Object to write different, making change in DNS"
 Set-DnsServerResourceRecord -NewInputObject $NewObj -OldInputObject $OldObj -ZoneName $Zone -ComputerName $dnsserver
}

That’s about it. You can easily modify this script so that you pass the DNS server name from the CSV file (updating lots of records on multiple DNS servers) or update multiple record types (A records, CNAME records). As always, comments and criticism are welcome.
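As a hedged sketch of that modification (the “DNSServer”, “Type” and “Target” CSV columns are my invention, not part of the original file format):

```powershell
# Sketch: per-row DNS server and record type, assuming extra CSV columns
# "DNSServer", "Type" and (for CNAMEs) "Target" that the original lacked.
foreach ($dnshost in $HostRecordList) {
    $RR   = $dnshost.DNSName.Split(".")[0]
    $Zone = $dnshost.DNSName.Remove(0, $RR.Length + 1)
    switch ($dnshost.Type) {
        "A"     { Add-DnsServerResourceRecord -Name $RR -ZoneName $Zone -A `
                      -IPv4Address $dnshost.IP -ComputerName $dnshost.DNSServer }
        "CNAME" { Add-DnsServerResourceRecord -Name $RR -ZoneName $Zone -CName `
                      -HostNameAlias $dnshost.Target -ComputerName $dnshost.DNSServer }
    }
}
```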

Report DHCP Scope Settings using Powershell

It has been a busy time for me lately, but I’m back to write about a script that reports on some basic DHCP scope settings. In my case I used this script to find out which DHCP scopes had specific DNS servers configured, servers we planned to decommission, so it made sense to replace those IP addresses with valid ones.


I have found myself lately working more and more with PowerShell v3, available in Windows Server 2012, and the new “goodies” it brings.

Among those goodies there is a DhcpServer module, so we can finally breathe a sigh of relief: we can dump netsh and any VBS kludges used to manage DHCP!*

(* lovely as this module is, you cannot use it fully against Windows 2003 servers; some cmdlets will work, others not so much, so Windows 2008 or later it is)

For an overview of what cmdlets are available in this new module, take a look at the TechNet blogs. To get started, simply deploy a Windows 2012 machine, open PowerShell, and type:

import-module DhcpServer
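You can also list the module’s cmdlets straight from your own session, for example:

```powershell
# List everything the DhcpServer module exposes, and count it
Import-Module DhcpServer
Get-Command -Module DhcpServer | Sort-Object Name
(Get-Command -Module DhcpServer).Count
```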

While you are at it, update the help files for all your PowerShell modules with this command:

Update-Help -Module * -Force -Verbose

Mission Statement

I needed a report that would contain the following info: DHCP server name, scope name, subnet defined, start and end ranges, lease times, description, and the DNS servers configured, whether globally or explicitly defined. As you can imagine, collating all this information from netsh, VBS, or other parsing methods would be rather time consuming. I’m aware there are third-party DHCP modules out there for PowerShell, but personally I prefer a vendor-developed, supported method, even if it takes more effort to put together and understand (you never know when somebody’s PowerShell module starts going out of date, for whatever reason, and then all your scripting work built on it is redundant).

The Script

Anyway, I threw this script together, which isn’t much in itself apart from the error handling that goes on. As I mentioned before, the DhcpServer module doesn’t work 100% unless you are running against Windows 2008 or later.

import-module DHCPServer
#Get all authorized DHCP servers from the AD configuration
$DHCPs = Get-DhcpServerInDC
$filename = "c:\temp\AD\DHCPScopes_DNS_$(get-date -Uformat "%Y%m%d-%H%M%S").csv"

$Report = @()
$k = 0
write-host -foregroundcolor Green "`n`n`n`n`n`n`n`n`n"
foreach ($dhcp in $DHCPs) {
	Write-Progress -activity "Getting DHCP scopes:" -status "Percent Done: " `
	-PercentComplete (($k / $DHCPs.Count)  * 100) -CurrentOperation "Now processing $($dhcp.DNSName)"
	$scopes = $null
	$scopes = (Get-DhcpServerv4Scope -ComputerName $dhcp.DNSName -ErrorAction:SilentlyContinue)
	If ($scopes -ne $null) {
		#getting global DNS settings, in case scopes are configured to inherit these settings
		$GlobalDNSList = $null
		$GlobalDNSList = (Get-DhcpServerv4OptionValue -OptionId 6 -ComputerName $dhcp.DNSName -ErrorAction:SilentlyContinue).Value
		$scopes | % {
			$row = "" | select Hostname,ScopeID,SubnetMask,Name,State,StartRange,EndRange,LeaseDuration,Description,DNS1,DNS2,DNS3,GDNS1,GDNS2,GDNS3
			$row.Hostname = $dhcp.DNSName
			$row.ScopeID = $_.ScopeID
			$row.SubnetMask = $_.SubnetMask
			$row.Name = $_.Name
			$row.State = $_.State
			$row.StartRange = $_.StartRange
			$row.EndRange = $_.EndRange
			$row.LeaseDuration = $_.LeaseDuration
			$row.Description = $_.Description
			$ScopeDNSList = $null
			$ScopeDNSList = (Get-DhcpServerv4OptionValue -OptionId 6 -ScopeID $_.ScopeId -ComputerName $dhcp.DNSName -ErrorAction:SilentlyContinue).Value
			#write-host "Q: Use global scopes?: A: $(($ScopeDNSList -eq $null) -and ($GlobalDNSList -ne $null))"
			If (($ScopeDNSList -eq $null) -and ($GlobalDNSList -ne $null)) {
				$row.GDNS1 = $GlobalDNSList[0]
				$row.GDNS2 = $GlobalDNSList[1]
				$row.GDNS3 = $GlobalDNSList[2]
				$row.DNS1 = $GlobalDNSList[0]
				$row.DNS2 = $GlobalDNSList[1]
				$row.DNS3 = $GlobalDNSList[2]
			}
			Else {
				$row.DNS1 = $ScopeDNSList[0]
				$row.DNS2 = $ScopeDNSList[1]
				$row.DNS3 = $ScopeDNSList[2]
			}
			$Report += $row
		}
	}
	Else {
		write-host -foregroundcolor Yellow """$($dhcp.DNSName)"" is either running Windows 2003, or is somehow not responding to queries. Adding to report as blank"
		$row = "" | select Hostname,ScopeID,SubnetMask,Name,State,StartRange,EndRange,LeaseDuration,Description,DNS1,DNS2,DNS3,GDNS1,GDNS2,GDNS3
		$row.Hostname = $dhcp.DNSName
		$Report += $row
	}
	write-host -foregroundcolor Green "Done Processing ""$($dhcp.DNSName)"""
	$k++
}

$Report | Export-csv -NoTypeInformation -UseCulture $filename

Learning Points

As far as learning points go, Get-DhcpServerInDC lets you grab all your authorized DHCP servers in one swift line, which saved me a few lines of coding against the PowerShell AD module.

Get-DhcpServerv4Scope will grab all IPv4 scopes on a server, nothing fancy, except for the fact that it doesn’t really honor the “-ErrorAction:SilentlyContinue” switch and will light up your console with errors when you run the script.

Get-DhcpServerv4OptionValue can get scope options, either globally (do not specify a ScopeID) or per scope (specify a ScopeID). This one does play nice and gives no output when you ask it to SilentlyContinue.

Some Error Messages

I’ve tested the script in my lab and used it in production; it works fine in my environment, but do your own testing.

Unfortunately, the output is not so nice and clean: you do get errors, but the script rolls over them. Below are a couple of them I’ve seen. The first one is like this:

Get-DhcpServerv4Scope : Failed to get version of the DHCP server
At C:\Scripts\Get-DHCP-Scopes-2012.ps1:14 char:13
+ $scopes = (Get-DhcpServerv4Scope -ComputerName $dhcp.DNSName -ErrorAction:Silen ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 + CategoryInfo : NotSpecified: ( [Get-DhcpServerv4Scope], CimException
 + FullyQualifiedErrorId : WIN32 1753,Get-DhcpServerv4Scope

This actually happens because Get-DhcpServerv4Scope has a subroutine to check the DHCP server version, which fails. As you can see, my code does have SilentlyContinue to omit the error, but it still shows up. I dug up the 1753 error code, and the error message is “There are no more endpoints available from the endpoint mapper“… which is, I guess, a PowerShell way of telling us Windows 2003 is not supported. This is what we get for playing with v1 of this module.

Another error I’ve seen is this:

Get-DhcpServerv4Scope : Failed to enumerate scopes on DHCP server
At C:\Scripts\Get-DHCP-Scopes-2012.ps1:14 char:13
+ $scopes = (Get-DhcpServerv4Scope -ComputerName $dhcp.DNSName -ErrorAction:Silen ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 + CategoryInfo : PermissionDenied: ( [Get-DhcpServerv4Scope], CimException
 + FullyQualifiedErrorId : WIN32 5,Get-DhcpServerv4Scope

It is just a plain old permission denied: you need to be an admin of the box you are running against, or at least a member of DHCP Administrators, I would think.

As far as setting the correct DNS servers in option 6 goes, you can use the same module to set them. I did it by hand, since there were just a handful of scopes.
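If you have many scopes to fix, something like this should do the writing too (a sketch, untested in my environment; the server name and DNS IPs are placeholders):

```powershell
# Sketch: replace option 6 (DNS servers) on one scope, then server-wide.
# "dhcp01" and the 10.1.1.x addresses are placeholders, not from this post.
Set-DhcpServerv4OptionValue -ComputerName "dhcp01" -ScopeId 10.20.0.0 `
    -DnsServer 10.1.1.10,10.1.1.11
# Omitting -ScopeId changes the global (server-level) value instead
Set-DhcpServerv4OptionValue -ComputerName "dhcp01" -DnsServer 10.1.1.10,10.1.1.11
```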

Hope this helps someone out there with their DHCP Reporting.

Active Directory Domain Controller Backups – Part 3

Time for the last part of the Active Directory backup series. The last two posts (#1 and #2) were about defining what needs to be backed up and the scripts/commands used to do it. This time we will discuss some administrative matters, involving PowerShell and some Group Policy Preferences (GPP) settings. So you have all the parts that make the thing “back up”; now how do you put them to work, regularly, automatically, with as little maintenance as possible on your side? This is how I chose to do it, from a high level:

  • I created a central location to store the scripts, then deployed a GPO with a GPP setting that copies the scripts over from that central location.
  • For running the backup scripts I used scheduled tasks, created with a PowerShell script on all the remote systems.

I admit it is not the most elegant, zero-touch approach, but I had to make a compromise. Some of you might think: “Hey, why didn’t you use GPP to deploy a scheduled task? That would have been easier, no reason for this hassle with a script that creates your scheduled tasks.”

The reason I chose to do it via a script is that I need network access to run the actual scripts (they save data to a file share), so using the System account to run the batch jobs is out of the question, as it has no network access; I need a domain account (a member of Backup Operators) with access to the share. This means I have to be careful when passing credentials to configure the scheduled tasks. As it turns out, when you pass credentials in a GPP they are merely obscured, not encrypted (more info here and here), so no way am I giving out Backup Operators credentials over the SYSVOL share; it is not secure. So I either create the backup tasks manually (2 * number of scripts * number of domains) or create a script to do it for me. They say “efficiency is the most intelligent form of laziness”, so I wrote a script.

With this out of the way, let’s handle each task, first off….

Distributing The Scripts

First, create a folder in a central location where the scripts and files will be located. In case you don’t know what I’m talking about, it’s the scripts from this post.

Then create a GPO object, link it to the Domain Controllers OU, and configure its GPP settings to copy the folder you set up at step 1 locally onto the DCs, like in the picture below (I’m showing you just one file; make sure your GPP has all 3 files included). The GPO also changes the script execution policy so you can run scripts as batch jobs.

I’ve applied this GPO to the Domain Controllers OU but restricted it to only apply to a certain security group in AD (and yes, you guessed it, I put the DCs I want to back up in that group).

Creating the Scheduled Jobs

I found Ryan Dennis’s blog here, where he gives a sample of how to create a scheduled task; he took it from another smart man, over here. I took his sample script and tweaked it a little: I needed to make it a little more generic, and able to accept credentials as parameters. Then I created another script that calls Create-ScheduledTask.ps1 to connect to each DC and create the scheduled tasks. Needless to say, you need to be a Domain/Enterprise Admin to run these scripts.

$BackupTasks = Import-CSV -useculture "C:\Temp\AD\BCP\BackupSource.csv"
$Domains = $BackupTasks | group-object -property Domain
$DomainCreds = @()
foreach ($domain in $domains) {
 $Creds = Get-Credential -Credential "$($Domain.Name)\dom_backup"
 $row = "" | select Domain,UserName,Password
 $row.Domain = $domain.Name
 $row.UserName = $Creds.UserName
 $row.Password = $Creds.Password
 $DomainCreds += $row
}

Foreach ($BackupTask in $BackupTasks) {
 $curCred = $DomainCreds | ? { $_.domain -eq $BackupTask.Domain }
 $SchedTaskCreds = New-Object System.Management.Automation.PsCredential($curCred.UserName,$curCred.Password)
 $ScriptFullPath = $BackupTask.ScriptFolder + "\" + $BackupTask.ScriptName
 .\Create-ScheduledTask.ps1 -HostName $BackupTask.HostName -Description $BackupTask.TaskName -ScriptPath $ScriptFullPath -SchedTaskCreds $SchedTaskCreds
}

As far as learning points go, I first determine which domains I need credentials for, then ask the user interactively to type the account and password. This saves a lot of password prompts while the scheduled tasks are created.

The two scripts I mentioned are included in this rar file: SchedTask_Creation.

This is mostly it. The scheduling script is an initial version; it would be more elegant if it just pulled the host names from the AD group, then built the script files and task names for each host from the .csv file.

A further refinement, and a way to secure this process, would be to sign the scripts using a Windows CA and only allow execution of signed scripts on the DCs.
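As a sketch of what that could look like (hedged: it assumes a code-signing certificate from your CA is already in the current user’s store, and the share path is illustrative):

```powershell
# Sign the worker scripts with a CA-issued code-signing certificate, then
# require signed scripts on the DCs. Path and cert selection are illustrative.
$cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1
Get-ChildItem "\\server\ADBackup\Scripts\*.ps1" |
    ForEach-Object { Set-AuthenticodeSignature -FilePath $_.FullName -Certificate $cert }
# On the DCs (locally or via the GPO that sets execution policy):
Set-ExecutionPolicy AllSigned -Scope LocalMachine
```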

Active Directory Domain Controller Backups – Part 2

Time for part 2 of the “how to back up DCs” story. I’ll try to keep it concise and just deliver the needed info.

In my previous post we established I was going for backup to disk (another network share). I was also going to back up the system state of two DCs per domain, the list of GPOs and their links, and the list of object DNs.

The process explained

I want to set up the backup in such a way that it is automated and I don’t have to worry about checking that all the bits and pieces are in place, and I also want to be able to update parts of the process without rebuilding everything. The process can therefore be split into these parts:

1. Preparing accounts and permissions

2. Creating and delivering worker scripts (the scripts that actually do the job)

3. Setting up backup schedules (scheduled tasks run scripts from point 2, using credentials and resources setup at point 1)

Accounts and Permissions

You will need some accounts and groups setup so that you can safely transfer the backups from the DC to the backup share. The steps are outlined below:

  1. Create a Universal security group in one of your domains (the forest root domain, preferably); let’s call it “Global AD Backup Operators”. We will use this group below.
  2. Create a network share of your choice for a backup location, where only “Domain Controllers” and “Global AD Backup Operators” have read/write access (Security and Sharing tabs). Refer to my previous post for why this is important. You cannot use the domain’s “BUILTIN\Backup Operators” group, since that group is specific to DCs only.
  3. In the network share create a few subfolders, named DistinguishedNameBackup, GroupPolicyObjectBackup and WindowsImageBackup.
  4. Create an account in each domain that will run the backups. Make this account a member of “BUILTIN\Backup Operators” and of the “Global AD Backup Operators” group you created in step 1. The Backup Operators group is per domain, as you might know.
  5. Create a new GPO and link it to the “Domain Controllers” OU in each of your domains, or change your existing Default Domain Controllers Policy. In the policy you should include “BUILTIN\Backup Operators” in the list of accounts for “Allow log on as a batch job”.
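Steps 1 and 4 can also be scripted with the AD module if you prefer. A rough sketch (the group and account names match the ones used in this series; the OU path and password handling are illustrative):

```powershell
Import-Module ActiveDirectory
# Step 1: universal group in the forest root domain (OU path is illustrative)
New-ADGroup -Name "Global AD Backup Operators" -GroupScope Universal `
    -Path "OU=Groups,DC=root,DC=local"
# Step 4: per-domain backup account, added to both groups
New-ADUser -Name "dom_backup" -AccountPassword (Read-Host -AsSecureString) -Enabled $true
Add-ADGroupMember -Identity "Backup Operators" -Members "dom_backup"
Add-ADGroupMember -Identity "Global AD Backup Operators" -Members "dom_backup"
```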

Creating The Backup Scripts

Backing up the DC System State

If you don’t have the Windows Backup feature installed the snippet below will do that for you:

Import-Module ServerManager

if (!(Get-windowsFeature -Name Backup).Installed) { add-windowsfeature Backup}

Now, for the backup itself, you just run wbadmin wrapped in some PowerShell code like below:


$WBadmin_cmd = "wbadmin.exe START BACKUP -backupTarget:$TargetUNC -allCritical -include:c: -noVerify -vssFull -quiet"
Invoke-expression $WBadmin_cmd

I used -allCritical instead of -systemState to include everything necessary for a bare metal recovery; other than that, nothing major to write home about. More info here.
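To check that the backups are actually landing on the share, wbadmin can list what is on the target, for instance:

```powershell
# List the backup versions present on the target share; $TargetUNC is the
# same variable used in the backup snippet above.
Invoke-Expression "wbadmin.exe GET VERSIONS -backupTarget:$TargetUNC"
```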

Backing up Group Policy Objects and Links

The next step is to configure the backup of the GPOs and the gPLinks. The gPLinks must be backed up separately since they are not stored in the GPO object, but in the AD database, on each object where the GPO is linked. This gets even more convoluted when you link GPOs in domains other than the one they were created in. There are multiple ways to back up the GPOs:

- Using the GPMC sample scripts; there is a script for backing up GPOs

- Using the PowerShell grouppolicy module, running on Windows 2008 R2 (I chose this one)

I also wanted to handle GPO backup history at the script level, so the script Backup-GPOs.ps1, attached to this post, contains “delete older than x days” logic to handle accumulating backups. The command to back up a GPO looks like this:

backup-GPO -all  -domain $domain -path $path

The options are pretty self-explanatory, I suppose. The command to get all gpLink objects looks like this:

$gpl += get-adobject -filter '(objectCategory -eq "organizationalunit" -or distinguishedname -eq $domdn) -and gplink -like "[ldap://cn=*"' -searchbase $domdn -searchscope subtree -properties gplink,distinguishedname,gpoptions -server $env:ComputerName

Now, I’ve read some people’s experiences online, and it seems that using the wildcard character for the backup-GPO command has some inconsistent results, meaning that past a certain number of GPOs backed up, the cmdlet stops working properly. The solution is to grab all GPOs and back them up in a for-each loop. This ties in pretty well with the fact that we need to map the GPO name to the gPLink information, so the core piece of the GPO backup script looks like this (most of the code is reused from here):

import-module grouppolicy
import-module activedirectory

#build a list of GPOs in the current domain
$domobj = get-addomain
$dom = $domobj.dnsroot
$domdn = $domobj.distinguishedname
$gpocol += get-gpo -all -domain $dom

$gpl = $null

#build a list of gplink objects across the enterprise
$domains = get-adforest | select -ExpandProperty domains
$domains | % {
$domobj = get-addomain $_
$domdn = $domobj.distinguishedname
$gpl += get-adobject -filter '(objectCategory -eq "organizationalunit" -or distinguishedname -eq $domdn) -and gplink -like "[ldap://cn=*"' -searchbase $domdn -searchscope subtree -properties gplink,distinguishedname,gpoptions -server $domobj.PDCEmulator
}

#backup GPOs, map GPOs to target DNs
$section = "backup"
foreach ($gpo in $gpocol) {
$name = $gpo.displayname
new-item $curpath\$name -type directory -erroraction:silentlycontinue | out-null
$id = $gpo.id
$configdn = (get-adrootdse).configurationNamingContext
backup-gpo -guid $id -domain $dom -path $curpath\$name | tee-object -variable msg
get-gporeport -guid $id -domain $dom -reporttype html -path $curpath\$name\$name.html
$gpl | % {if ($_.gplink -match $id) {$_.distinguishedname + "#" +  $_.gpoptions + "#" + $_.gplink} } | out-file -filepath $curpath\$name\gplinks-$id.txt -append
}

Just a little note here: the script is designed to get the gPLinks outside of the current domain of the account the script is running under. What differs from Frank Czepat’s script is that I added a lookup to the $gpl variable and pointed the get-adobject command at a specific DC (leaving it at the default would result in errors).

Backing up DistinguishedNames List

This is fairly easy and straightforward. While I could do this using PowerShell, I decided to go for the old and trusted dsquery, as it is faster than PowerShell code. Here we also have to deal with accumulating backups, as I built the script to output a timestamped file. The command that actually does the backup is this one:

$DomainDNsFile = "DomainDNs_$(get-date -Uformat "%Y%m%d-%H%M%S").txt"
$FilePath = "$curpath\$DomainDNsFile"
$DomainDNList_cmd = "dsquery * domainroot -scope subtree -attr modifytimestamp distinguishedname -limit 0 > $FilePath"
Invoke-expression $DomainDNList_cmd
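The “delete older than x days” logic for these timestamped files can be as simple as this (a sketch; the 30-day retention value is illustrative, the scripts attached to this post may use a different one):

```powershell
# Prune timestamped DN dumps older than 30 days (retention value is
# illustrative, adjust to taste)
Get-ChildItem -Path $curpath -Filter "DomainDNs_*.txt" |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } |
    Remove-Item -Force
```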

I built these backup scripts as 3 individual files, you can download them from here Backup-Scripts.

That’s about it for how the backup is done. I guess more than half of these items are sort of trivial to set up; the only tricky part is grabbing the gPLinks and creating a mapping between the GPOs and the DNs in the gPLinks.

Next post in the series will discuss how to deliver and schedule these scripts on your domain controllers.

Get Basic Information on Active Directory Domain Controllers

Lately I found myself doing a lot of work around AD, since I’m responsible for migrating the forest to 2008 R2 Functional Level. As you may already know, in order to raise the forest Functional Level you have to raise the Functional Level of all child domains. To be able to do this, each DC in the child domains must run Windows 2008 R2 or later. To get started you need a list of all systems in the AD infrastructure, and a list of those that need their OS replaced. If your infrastructure is like mine, you have lots of DCs, most of them set up before your time with the company, and lots of them lacking documentation. Also, lots of them probably run on antiquated hardware, some of which will not support Windows 2008 R2. The most stringent requirement, in my book, for installing Windows 2008 R2 is that the CPU must support 64-bit, since Windows 2008 R2 only comes in a 64-bit flavor.

When I first started inventorying our DCs, I made a list of the basic things that interested me for transitioning to 2008 R2 FL (Functional Level):

  • HostName, Operating System and Domain
  • Site and IPAddress
  • FSMO roles Installed
  • Hardware and Model
  • CPU x64 Readiness and Memory size

The first 3 above are low-hanging fruit; you can extract them using a modified one-liner from my “Quick Tip #1” article.

Hardware, model, and memory size are also not so difficult; you have WMI to query each server for this.

The most challenging part is finding out whether the CPU supports 64-bit instructions. The first place you will probably think to look is the environment variables within Windows (type “echo %processor_architecture%” in a cmd prompt to see the output; anything other than x86 is 64-bit). You are out of luck, because what that variable actually stores is the capability of the operating system, and unless you are running a 64-bit OS on 64-bit hardware (in which case you don’t need this script in the first place) it is of no use. Then I thought: “Hey, there must be a way to find this out via PowerShell/WMI” … indeed you can find out some information about the CPU data width (wmic cpu get datawidth) … however the data is inaccurate, as it also refers to the operating system. You can crosscheck your results with a tool from the overclocking world (CPU-Z): you will see it shows the CPU can do 64-bit instructions, while WMI says it can’t (because the OS is 32-bit).

Finally my quest brought me to a tool written by this gentleman; the tool is called chkcpu32. It was created a long time ago, but I see it is actively maintained; the last update was September 2012. This tool actually queries the CPU for this information rather than WMI. The latest version added XML output support, a real treat for us PowerShell scripters: now we don’t have to do text parsing. Here’s a sample non-XML output from one of my systems:

C:\>chkcpu32 /v

CPU Identification utility v2.10                 (c) 1997-2012 Jan Steunebrink
CPU Vendor and Model: Intel Core i7 Quad M i7-2600QM/2700QM series D2-step
Internal CPU speed  : 2195.0 MHz
System CPU count    : 1 Physical CPU(s), 4 Core(s) per CPU, 8 Thread(s)
CPU-ID Vendor string: GenuineIntel
CPU-ID Name string  : Intel(R) Core(TM) i7-2720QM CPU @ 2.20GHz
CPU-ID Signature    : 0206A7
CPU Features        : Floating-Point Unit on chip  : Yes
Time Stamp Counter           : Yes
Enhanced SpeedStep Technology: Yes
Hyper-Threading Technology   : Yes
Execute Disable protection   : Yes
64-bit support               : Yes
Virtualization Technology    : Yes
Instr set extensions: MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2
Size of L1 cache    : 4 x 64 KB
Integrated L2 cache : 4 x 256 KB
Integrated L3 cache : 6144 KB

I bundled all these information-gathering bits and pieces into this script; below you can find short learning points from some of the key parts.

Learning Points

First of all, the script assumes that you are running under Enterprise Admin credentials and that all your DCs are GCs. If you don't have this setup, you will have to come up with another way to list all your domain controllers.

I find that nowadays it is more of a headache to not have all DCs as GCs than to just make sure they all are. By default, dcpromo in Windows 2008 R2 will make a new DC both a GC and a DNS server.
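If you want to verify that assumption first, here is a quick sketch (assuming the ActiveDirectory RSAT module is available) that lists any DCs in the forest that are not GCs:

```powershell
# List DCs in each domain of the forest that are NOT Global Catalogs
Import-Module ActiveDirectory
(Get-ADForest).Domains | ForEach-Object {
    Get-ADDomainController -Filter {IsGlobalCatalog -eq $false} -Server $_
} | Select-Object HostName, Domain, Site
```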

My previous post, on how to get all domain controllers, lists the one-liner to get basic information about DCs (hostname, domain name, site name, IP, FSMO roles). The only real challenge here is how to handle the formatting of the "Roles" property; I used a switch statement to loop through all of the FSMO roles a DC might hold.

foreach ($role in $($dc.Roles)) {
    Switch ($role) {
        "PdcRole"            { $row.PdcRole = "TRUE" }
        "RidRole"            { $row.RidRole = "TRUE" }
        "InfrastructureRole" { $row.InfrastructureRole = "TRUE" }
        "NamingRole"         { $row.NamingRole = "TRUE" }
        "SchemaRole"         { $row.SchemaRole = "TRUE" }
    }
}
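For context, $row in the snippet above is a custom object, one per DC, whose role columns start out as FALSE so the switch only has to flip the roles a DC actually holds. A minimal sketch of how such an object could be built (the exact property set in the real script may differ):

```powershell
# Hypothetical sketch of how a $row object could be initialized per DC
$row = New-Object PSObject -Property @{
    Name = $dc.Name; Domain = $dc.Domain; Site = $dc.Site
    PdcRole = "FALSE"; RidRole = "FALSE"; InfrastructureRole = "FALSE"
    NamingRole = "FALSE"; SchemaRole = "FALSE"
}
```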

As far as getting the CPU 64-bit support, this is done with chkcpu32 via these two lines. You will also need psexec from the Sysinternals toolkit (at least v1.98 of psexec), and you should run it at least once beforehand to get rid of the EULA accept pop-up.

Set-Alias pshell-psexec "c:\windows\system32\psexec.exe"
& pshell-psexec \\$dc -n 25 -c -f -h CHKCPU32.exe /v /X > "$($dc)_x64status.log"

If (([xml](Get-Content "$($dc)_x64status.log")).chkcpu32._64bit -eq '1') {
    $row.CPUx64Ready = 'True'
} Else {
    $row.CPUx64Ready = 'False'
}

The rest of the code in the script is just putting all of this together in a nicely formatted CSV file.

This is about all of it; nothing too difficult once you find the right tools and use them properly. Any comments or feedback are most welcome!

Restoring mailboxes in Exchange 2007 (part 1)

Lately I’ve been doing a number of mailbox restore procedures on Exchange 2007, so I thought it would be a good idea to write my own posts about it (yes, it involves scripting), because things are not always as straightforward as Microsoft or TechNet say. This is going to be a multi-part post: create the RSG and mount the DB to be restored, restore the mailbox(es), then remove the restored DB and the RSG. Before you think about it, I’m going to answer it for you:

Q: But why don’t we use the nice GUI Tool from Exchange Management Console (Extra.exe) and do it from there, “we don’t need no scripting”?

A: My experience tells me the scripted method is safer and works “as expected”, unlike the GUI, which says it did something when it didn’t (I’ve spent days trying to figure out why an RSG database won’t actually dismount when the GUI said “Completed Successfully”).

OK, let’s get on with it. All that I am about to explain requires Exchange Administrator privileges on the Exchange servers.

We will be creating a Recovery Storage Group; this is the first step in the restore process. To create the RSG you need the following:

  • Adequate disk-space to restore the mailbox database, locally on the Exchange Server where the DB was residing
  • Exchange Management Shell running as Administrator (especially on CCR clusters)
  • No other Recovery Storage Group already created on that server with an existing RSG database (you can only have 1 RSG with 1 DB in the RSG). It is best to remove any previous RSG completely then recreate it for your needs.
  • Specific information like which DB to link to the RSG and the list of mailboxes to restore.

Creating a Recovery Storage Group can be as easy as this:

New-StorageGroup -Server <MBX Role Server Name> -Name <StorageGroup Name> -LogFolderPath <Logs Folder> -SystemFolderPath <SystemFiles Path> -Recovery -Verbose

The command is very similar to creating a new SG, except for the -Recovery switch, designating it as a Recovery Storage Group. I added the -Verbose switch so you can see what is going on behind the scenes.

New-MailboxDatabase -MailboxDatabaseToRecover <Mailbox Name> -StorageGroup <Recovery Storage Group Name> -EdbFilePath <path to store edb file> -Verbose

Here it is just as easy as creating a new mailbox database, only you are creating it in the recovery storage group you created with the previous command. The key thing to remember is that the value of the -MailboxDatabaseToRecover parameter must be the exact name of the mailbox DB you want to recover from. If the name is different you will not be able to run any restore commands, because Exchange will not find any mailboxes when it searches the recovered database.

A working script for creating the RSG

Below I’m sharing a working snippet that should help in creating a recovery storage group and DB. In short, here is what the code does:

Using a given UserPrincipalName…

  • Attempts to retrieve the mailbox for the UPN (it is “forest friendly” coding for retrieving the mailbox). If it fails, it quits.
  • Checks if a folder structure for placing logs, system files and the edb file exists (I used a location called d:, use a variable if you like).
  • If the folders already exist, it will quit; otherwise it will create a folder with the MDB name, with DB and Logs subfolders.
  • Next it checks if a Recovery Storage Group already exists; unless you cancel the script, it will continue to use this RSG with the given details. Otherwise it will create an RSG on its own.
  • It will then create a mailbox database where you / your backup admin will restore your Exchange backup.
$MBX_UPN = Read-Host "Enter the UserPrincipalName of the mailbox to recover"
$Filter = "UserPrincipalName -like '$MBX_UPN'"
$SourceMBX = get-mailbox -IgnoreDefaultScope -Filter $Filter
If ($SourceMBX -eq $null) {
	Write-Host -ForegroundColor Red "No mailbox for $MBX_UPN found`nScript will quit"
	exit
}
Write-Host -ForegroundColor Green "Source mailbox is`n $SourceMBX"

$LinkedMDB = Get-MailboxDatabase -Identity $SourceMBX.Database
Write-Host -ForegroundColor Green "OK, Database ($($LinkedMDB.StorageGroup.Name)) is grabbed, now creating RSG folders and RSG`nPress Enter to continue or Ctrl+C to Cancel"
Read-Host

#Checking if the RSG folders already exist; if so quit, otherwise attempt to create them
If (Test-Path "d:\$($LinkedMDB.StorageGroup.Name)") {
	Write-Host -ForegroundColor Red "Folder already exists. Please remove d:\$($LinkedMDB.StorageGroup.Name) before running this script again.`nScript will quit"
	exit
}
$SysPath  = New-Item -Type Directory -Path d:\ -Name $LinkedMDB.StorageGroup.Name | Get-Item
$DBPath   = New-Item -Type Directory -Path $SysPath -Name DB | Get-Item
$LogsPath = New-Item -Type Directory -Path $SysPath -Name Logs | Get-Item

#If folders were created successfully we can continue
If ((Test-Path $SysPath) -and (Test-Path $DBPath) -and (Test-Path $LogsPath)) {
	#Checking if an RSG already exists
	$RSG_check = Get-StorageGroup -Server $LinkedMDB.ServerName | where {$_.Recovery -eq $true}
	If ($RSG_check -ne $null) {
		Write-Host -ForegroundColor Magenta "An RSG was found on $($RSG_check.ServerName). Here are the RSG details:"
		$RSG_check | select-object Name,Identity,Recovery,LogFolderPath,SystemFolderPath | fl
		Write-Host -ForegroundColor Magenta "To use this RSG press Enter, to cancel press Ctrl+C"
		Read-Host
	} Else {
		Write-Host -ForegroundColor Green "No RSG found. Now creating Recovery Storage Group..."
		New-StorageGroup -Server $LinkedMDB.Server -Name "Recovery Storage Group" -LogFolderPath $LogsPath.FullName -SystemFolderPath $SysPath.FullName -Recovery -Verbose
	}
	Write-Host -ForegroundColor Green "Now creating RSG Database..."
	New-MailboxDatabase -MailboxDatabaseToRecover $LinkedMDB.AdminDisplayName -StorageGroup "$($LinkedMDB.ServerName)\Recovery Storage Group" -EdbFilePath "$($DBPath.FullName)\$($LinkedMDB.Name).edb" -Verbose
} Else {
	Write-Host -ForegroundColor Red "Could not create the folder structure in d:\$($LinkedMDB.StorageGroup.Name). Check messages above for errors! Script will quit."
}

That’s about it for creating a Recovery Storage Group; it is actually not difficult, just remember to name the MDB inside the RSG with the same name as the source MDB (this was also required on Exchange 2003, as far as I know). Also, you cannot have more than one RSG per Mailbox server, so it is best to remove any RSG after you are finished recovering data. In the next post we will discuss how to restore data from the MDB and how to remove the RSG.

As always I value your feedback and hope you found this post useful.

Log Battery and Power Levels using Powershell

This is, let’s say, a lighter post that I came up with while comparing the battery life of my laptop with some buddies of mine. I wanted to know how fast my battery depleted using different settings, use profiles and power saving modes. I did some digging around Microsoft’s MSDN site and found some interesting WMI classes that provide a lot of “power related data”. I also wanted a way to log this data, and that’s how I ended up learning how to create a new event log and write data to it. So this is what I will try to show: get power related data and write it to the event log.

“Energy” Related WMI Classes

Here are a few interesting classes I stumbled upon. Some of them are only available under Windows 7, probably also Vista, but I’m not sure.

  • WmiMonitorBrightness – gives information about monitor brightness. For example, these lines give the maximum and current brightness values:
$MaxBrightness = ((get-wmiobject -class WmiMonitorBrightness -Namespace root/wmi).Level | measure-object -Maximum).Maximum
$CrtBrightness = "{0:P0}" -f ((get-wmiobject -class WmiMonitorBrightness -Namespace root/wmi).CurrentBrightness/$MaxBrightness)
  • Win32_PowerPlan – provides information and identifiers about the power plans defined. In this class ALL power plans are listed, and only the active plan has an “IsActive” flag attached to it; here’s how to get it:
$powerplan = (Get-WmiObject -Class win32_powerplan -Namespace 'root/cimv2/power' | where {$_.IsActive -eq $true}).ElementName
  • Win32_Processor – gets information about the CPU (I was interested in the CPU load for statistical purposes). This one was pretty easy to find; the value was written in plain sight. Take a look:
$cpu = (Get-WmiObject Win32_Processor).LoadPercentage
  • Win32_Battery – provides information about the battery itself (estimated time, remaining charge, power status). Run “Get-WmiObject -Class Win32_Battery | gm” and take a closer look at these members:
    • BatteryStatus – toggles between ‘1’, meaning on battery, and ‘2’, meaning on AC power
    • EstimatedRuntime – the number of minutes remaining on battery, as the OS estimates it; if you get a very high value (tens of thousands) when you plug in AC power, it means the battery is charging
    • EstimatedChargeRemaining – percentage representation of the battery charge remaining
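Putting those three members together, here is a small sketch of how they could be read into variables for logging (the variable names are just illustrative):

```powershell
# Read battery state from Win32_Battery (illustrative variable names)
$batt = Get-WmiObject -Class Win32_Battery
if ($batt.BatteryStatus -eq 2) { $PowStatMsg = 'On AC Power' }
else                           { $PowStatMsg = 'On Battery' }
$ChargeRemMsg = "$($batt.EstimatedChargeRemaining)%"   # percent charge left
$RemTimeMsg   = "$($batt.EstimatedRuntime) minutes"    # OS estimate on battery
```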

Powershell + Event-Log “101”

I used this battery and power experiment to learn more about writing data to the event log. I wanted to create a new event log in Windows (Windows 7, as you probably know, allows for a lot of application logs) and then write events to it; at any point you can then export the event log to CSV. The following creates an event log named “BatteryMonitor” (for my uses “Source” was not needed, but it is a required parameter):

New-EventLog -LogName BatteryMonitor -Source BattMon

You can also check whether an event log already exists with this scriptlet (the answer lies in WMI this time; I didn’t find a cmdlet that does it faster):

(get-wmiobject -class "Win32_NTEventlogFile" | where {$_.LogFileName -like 'BatteryMonitor'} | measure-object ).count -eq '1'
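Combining that check with the creation step, you could guard it like this (a sketch):

```powershell
# Create the BatteryMonitor event log only if it does not exist yet
$logExists = (Get-WmiObject -Class "Win32_NTEventlogFile" |
    Where-Object { $_.LogFileName -like 'BatteryMonitor' } |
    Measure-Object).Count -eq 1
if (-not $logExists) {
    New-EventLog -LogName BatteryMonitor -Source BattMon
}
```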

Finally, here’s how to write a new event to the event log. I used this bit in a script to mark each execution of the script in the log:

Write-EventLog -LogName BatteryMonitor -Source BattMon -EventID 65533 -Message 'Starting new Execution of BatteryCharge Monitor Script. The script will pump here CSV values. Values are listed in this order, as CSV: PowerPlan,PowStatMsg,ChargeRemMsg,RemTimeMsg,RAM,CPU,CrtBrightness' -EntryType Information -ComputerName $env:computername -ErrorAction:SilentlyContinue
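And since the whole point was to get the data out for comparison, exporting the collected events to CSV later is a one-liner (a sketch; the output path is just an example):

```powershell
# Export the logged events for analysis in Excel or elsewhere
Get-EventLog -LogName BatteryMonitor |
    Select-Object TimeGenerated, Message |
    Export-Csv "C:\Temp\BatteryMonitor.csv" -NoTypeInformation
```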

So that is about it. As usual I tried to tie all of these scriptlets into a usable script; you can download it from here.

Removing specific message(s) from multiple Exchange 2007 mailboxes

I seem to be doing quite a bit of PowerShell scripting these days, some of it related to MS Exchange 2007. One issue we had recently was that loose permissions on Distribution Lists with hundreds of users, plus too much spare time for some users, generated a lot of unwanted message traffic. I don’t want to discuss prevention measures like restricting who can send emails to big DLs or using Microsoft AD RMS to restrict what can be done with emails. Our goal here is to clean up the mess ;).

Essentially you can get some info about the message and mailboxes, and use it with Export-Mailbox to remove the data. That is how I initially found this link, but what is not written there is that you need all the prerequisites for running Export-Mailbox, and running it on hundreds of mailboxes may take a while. I decided to do it my way, by building on what I found on that blog.

This is “Mass remove message(s) from mailboxes – my way”. Depending on your situation you can apply these steps multiple times:

  1. Identify the message that started it all
  2. Track the message on the Exchange Servers and compile a list of unique recipients of the message.
  3. Remove the message from the offended mailboxes (there may be special requirements to perform the task, see here)

Identify Message

Getting the information should be pretty easy; someone probably forwarded you a copy of the message(s) to be dealt with. As a minimum you want the subject, the sender, and the date and time the message(s) was/were sent. When you have enough info, open the Exchange Management Console > Tools > Message Tracking and identify which of the events represents the time the message originally arrived on the transport servers. For that event, grab the “MessageID” value. We will use it in the following steps to find all events relating to that specific MessageID.
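If you prefer to stay in the shell, you can usually dig up the MessageID there too; here is a sketch assuming you know the sender and a rough time window (all values below are placeholders):

```powershell
# Find the MessageID from the tracking log instead of the GUI
Get-MessageTrackingLog -Server <hub server> -Sender '<sender address>' `
    -Start '17/03/2014 09:00' -End '17/03/2014 11:00' -ResultSize Unlimited |
    Select-Object Timestamp, EventId, MessageSubject, MessageId
```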

Track Message

Assuming the worst-case scenario, you have to track across all Exchange transport servers. The speed of the process depends on how close to your Exchange Transport role servers you run the tracking; I suggest you make sure this process runs in the same LAN as Exchange, especially the Export-Mailbox part. Anyhow, to find every delivery of the message across all transport servers in your organization, run this:

$TrackingLogResults = get-transportserver | where {$_.Name -like "<optional filter>"} | foreach-object  {Get-MessageTrackingLog -EventId DELIVER -MessageID <MessageID from Step1> -ResultSize Unlimited -server $_}
  • Get-TransportServer gives you all the transport servers in the organization.
  • The Where clause filters the server list; you can leave it out. It is helpful if your Hub Transport servers are named in a specific way and you know the message did not leave the organization, so you can skip searching the Edge servers.
  • Foreach-Object cycles through all servers and performs the search.
  • Get-MessageTrackingLog searches each transport server tracking log for DELIVER events that correspond to messages with that specific MessageID, returning unlimited results. The server being searched is piped in from Foreach-Object.
  • If you run the last cmdlet without the EventID filter, you will get lots of other event IDs like FAIL, SEND, RECEIVE, ROUTING and EXPAND. You just need DELIVER; it is important because it basically says “OK, this message passed all of my checks, I am now handing it to the Mailbox server so it can submit it to the mailbox store”, so you get a list of just the actually affected mailboxes.

This may take a while to run. Once it is finished we have to get the list of people the message was sent to. The easy answer would be: why not just do $TrackingLogResults | select-object Recipients and pipe it along to something else?

Well, you can do that, but Recipients is actually a collection of addresses, and each recipient may appear multiple times in the entire list.

Having duplicates is inefficient; everything will take longer in the next steps. What I wanted was a list without duplicates, plus I get to show you some more “nice” scripting stuff 😉

Compile Recipients List

I spent quite some time figuring this out, so someone out there better find it useful :). The next step involved a “google shovel” to “dig up” how to break up those objects into one big list. Then the plan was to have a list that just had the unique email addresses – ideally. So here’s the “magic”:

$RecipientsExpanded = @()
$TrackingLogResults | foreach-object {$RecipientsExpanded += $_.Recipients}
$RecipientsGrouped = $RecipientsExpanded | group-object
$UniqueRecipients = $RecipientsGrouped | select-object Name | sort-object -Property Name
  • We created a blank array object that will host all recipients addresses in “expanded form”.
  • For each result from the TrackingLog we added the array ($_.Recipients) to the $RecipientsExpanded array. At the end of this we have a single array with all the addresses, each an individual element in the array.
  • The Group-Object cmdlet is used to group all addresses by their name and in the end you have the list of unique email addresses.
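For what it's worth, the same result can be had in a single pipeline; here is a sketch that keeps the .Name property the later code expects:

```powershell
# Shorter equivalent: expand, deduplicate and sort in one pass
$UniqueRecipients = $TrackingLogResults |
    ForEach-Object { $_.Recipients } |
    Sort-Object -Unique |
    ForEach-Object { New-Object PSObject -Property @{ Name = $_ } }
```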

Actually remove offending messages

Please see this link if you are planning to export the messages to PST. What is left to do is to take a page from the MSExchangeTeam blog and run the get-mailbox | export-mailbox combo, only we are doing it on a reduced scale, only on the mailboxes that need it; that’s why I went through all the trouble of making that list!

$MailboxesList = $UniqueRecipients | foreach-object {
      $Filter = "PrimarySmtpAddress -eq '"+$_.Name+"'"
      get-mailbox -ignoredefaultscope -resultsize unlimited -Filter $Filter}

The code above handles this task for forests with child domains. I covered reasoning and use of –Ignoredefaultscope and –Filter in a previous post.

#get current admin account name
$Admin = [Security.Principal.WindowsIdentity]::GetCurrent().Name
#elevate the administrator's account to full access over all affected mailboxes
$MailboxesList | Add-MailboxPermission -AccessRights FullAccess -User $Admin
$MailboxesList | Export-Mailbox -ContentKeywords <enter part of message body> -Recipients <add recipients list> -TargetMailbox admin_ -TargetFolder "RecoveredEmails" -DeleteContent
  • The Add-MailboxPermission step grants the admin user full access over each mailbox. The account being granted that right is $Admin, the account under which the script is running (GetCurrent().Name returns it in DOMAIN\username form).
  • You also need admin rights on the “TargetMailbox”, and the “TargetFolder” should exist beforehand.
  • We export the offending message(s) using Export-Mailbox. Here it is important to be very careful and make the filtering as strict as possible: you cannot remove a message based on its MessageID, so a loose filter could end up removing many more messages than intended. Refer to the Export-Mailbox documentation for all available filtering switches.
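To tighten that filtering, you can stack several Export-Mailbox filter switches; here is a sketch with placeholder values (the dates, keywords and target mailbox are all examples):

```powershell
# Tighter filter: content keywords AND sender AND a narrow date window
$MailboxesList | Export-Mailbox -ContentKeywords '<part of message body>' `
    -SenderKeywords '<sender address>' `
    -StartDate '01/03/2010' -EndDate '02/03/2010' `
    -TargetMailbox <admin mailbox> -TargetFolder "RecoveredEmails" -DeleteContent
```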

After you run the last command, get ready for a really long wait as it goes through all the mailboxes. Once it is finished, remove your permissions from those mailboxes.

$MailboxesList | Remove-MailboxPermission -AccessRights FullAccess -User $Admin

Phew, this was a long post, but validating everything I explained here took a while. The post is also packed with bits and pieces that can be building blocks for other Exchange Shell scripts. I tried to show you how to take Exchange tracking log data, build a list of unique recipient addresses from a message you tracked in the logs, and use that list to remove the unwanted message with the Export-Mailbox cmdlet. If you have any feedback/corrections/omissions please feel free to leave a comment.

Happy Scripting!

Get Forwarding Email Address of Forwarded Mailboxes

In this second article, let’s start by talking a little about the “-Filter” parameter, available in most Exchange Shell cmdlets, which I’ve found myself abusing these past days. I have a supporting role in an Exchange 2007 migration project, and we were tasked with “cleaning up” what is left of the legacy Exchange servers in terms of mailboxes. The domain structure contains multiple forests, one of which has quite a few child domains. In “everything is a possibility” domain structures like this, you may be required to:

Task #1 Get a list of all legacy mailboxes that have forwarding addresses enabled or that DO NOT have a forwarding address at all.

and then

Task #2 Based on the list from Task #1, create a report with the forwarded email address and the forwarding address.

Task #1 is done via this one-liner:

$LegacyFWList = get-mailbox -ResultSize unlimited -RecipientTypeDetails LegacyMailbox -IgnoreDefaultScope -Filter {(ForwardingAddress -like '*')} | select-object ForwardingAddress,WindowsEmailAddress,RealFWAddress

Code Breakdown:

get-mailbox -ResultSize unlimited -RecipientTypeDetails LegacyMailbox -IgnoreDefaultScope -Filter {(ForwardingAddress -like '*')}
  • By default the cmdlet returns only the first 1000 results; -ResultSize unlimited lifts that limit (it also accepts a number as input).
  • -RecipientTypeDetails LegacyMailbox extracts only “LegacyMailbox” objects, meaning mailboxes on pre-Exchange 2007 servers.
  • -IgnoreDefaultScope performs a forest-wide search. Without it, only legacy mailboxes on Exchange servers in the forest root domain would have been returned.
  • Using it does have some limitations; check the help for what those are. The biggest limitation is that -Identity must now be a valid DN of the object.
  • I did not want to extract all DNs in the forest and pipe them to the cmdlet, so I looked for an alternative.

I discovered the -Filter parameter. It allows you to filter the search on more object properties, much better than -Identity, which is impractical here because of -IgnoreDefaultScope. A great thing about it is that it can be used without any limitations together with -IgnoreDefaultScope. Full details about how to use the -Filter parameter and which properties of AD objects are filterable can be found here and here. Back to our task; you can see what we did:

-Filter {(ForwardingAddress -like '*')}

We looked for any objects that contain any value in the “ForwardingAddress” property of the returned object.

But say you want the opposite of this: “get all legacy mailboxes that have no forwarding address”. It would stand to reason that this should work:

-Filter {(ForwardingAddress -notlike '*')}

You are out of luck; it returns the entire list of legacy recipients. I went through a few filters until I found these two, which do work:

-Filter {(ForwardingAddress -ne $null)}

-Filter {(ForwardingAddress -ne '*')}

I have to say it was a surprise when I stumbled upon this, as it doesn’t really make sense. TechNet articles say -like (and I assume also -notlike) accepts wildcards, while -eq does not, so you would assume -ne does not either. Well, -ne does take wildcards here, which I particularly do not mind in this case. I’m not sure if this is a bug, a feature, or just a flaw in my logic somehow and it’s working because it is supposed to.

Moving on…

select-object ForwardingAddress,WindowsEmailAddress,RealFWAddress
  • I can save you running a “| get-member” on what get-mailbox returned and tell you that only “ForwardingAddress” and “WindowsEmailAddress” are properties returned by get-mailbox.
  • “RealFWAddress” is something I made up. Why? Unless you already know, ForwardingAddress is not actually an address; it is a collection of properties of the recipient where emails are forwarded (DN, ObjectGUID, CN and others), not an actual email address.
  • RealFWAddress is a blank property attached to the $LegacyFWList objects, as a placeholder for when we script a way to get that email address… which is now! Read on.

Task #2 – Create a list of forwarded addresses and forwarding addresses.

Now we come to the second part of my post, the Get-Recipient cmdlet. Since there is no way for us to know what object the emails are forwarded to [it could be a DL, a contact, a mailbox, another legacy user], we have to make our search as broad as possible; if you try to run get-mailbox on a contact object you will obviously get nothing. We use foreach-object to go through the entire list. ForwardingAddress is actually the CN of the object; we pass its DN to a cmdlet that will return a primary email address, if any.

The following code accomplishes just that:

$LegacyFWList | foreach-object {
 $ForwardingDN = $_.ForwardingAddress.DistinguishedName
 $Filter = "DistinguishedName -like '$ForwardingDN'"
 $_.RealFWAddress = get-recipient -Filter $Filter -IgnoreDefaultScope | select-object PrimarySmtpAddress}
$LegacyFWList | export-csv "c:\Change\This\Path.csv"

Code Breakdown:

$ForwardingDN = $_.ForwardingAddress.DistinguishedName
$Filter = "DistinguishedName -like '$ForwardingDN'"
  • I used these 2 variables, $ForwardingDN and $Filter, because trying to write a one-liner resulted in a somehow “deformed” filter that get-recipient would not understand.
$_.RealFWAddress = get-recipient -Filter $Filter -IgnoreDefaultScope | select-object PrimarySmtpAddress
  • The RealFWAddress property I introduced above gets populated with the PrimarySmtpAddress value.
$LegacyFWList | export-csv "c:\Change\This\Path.csv"
  • In the end, all that’s left is to export the $LegacyFWList variable to CSV and process it however you wish, or store it as XML / pass it on to another part of a script.

I have to admit the last part of the code is less than ideal. Why? The export does not output a clean forwarding email address. I tried various ways to fix it, but in the end I concluded that it took me 30 minutes of trying to get a nicer export, and 15 seconds to fix it after importing the CSV into Excel.

UPDATE 1: 26.01.2010 – added proper way to filter people with no forwarding address
UPDATE 2: 21.02.2010 – added syntax highlighting and small rewrites