Discover Missing Subnets in Active Directory

Over the past few days I stumbled upon the “regular” Event ID 5807: “During the last xx hours there have been <<lots and lots>> of logons … from computers that are not defined in Active Directory Sites”. This is not such a big deal (not that it’s something you should ignore), but usually there are other things to worry about than some IPs connecting to your DCs without being included in an AD site. Most of the time there are “operational” reasons behind this (someone set up a new location in your company and didn’t think to include you in the email chains, so you could adjust your configuration). But this time I wanted to nip this in the bud, since I’m pretty sure no one else had bothered with it until now. Again, I didn’t reinvent the wheel, but I did manage to improve on some of the resources I found and come up with a more scalable and convenient solution, so you could say “I made the wheel get more traction than before” :).

The Problem

The Event Viewer event I’m talking about is described here. A short snippet:

“During the past %1 hours there have been %2 connections to this Domain Controller from client machines whose IP addresses don’t map to any of the existing sites in the enterprise. Those clients, therefore, have undefined sites and may connect to any Domain Controller including those that are in far distant locations from the clients”

The solution to this is to map the client IPs to subnets in AD. To do this, you need to build a report of all unmapped client IPs, on all Domain Controllers, from all domains in the forest. This information, as the event says, is stored mainly in the file “%systemroot%\debug\netlogon.log“. Entries in this file look like this:

05/03 08:37:06 Contoso: NO_CLIENT_SITE: Marry-PC 172.17.17.3
12/15 13:18:23 Contoso: NO_CLIENT_SITE: Bob-PC 172.172.2.240

What others have tried

From the way the file looks, you can see this is something we can convert to CSV using PowerShell, and then process in Excel or a database. This is exactly what this person here did. His script works in the current domain, and starts hitting a wall when handling too many DCs or big files, because the variables that store the data keep getting bigger and bigger. There is a workaround for this in the comments section, but I wanted the whole data set, to look around at my leisure, and I didn’t want to wait forever for the script to finish.

Also, the environment I’m working on is spread across 6 continents, increasing the chances my script would take forever, and I really do want to get home in time for dinner. Just for the record, at the time of writing there were 52 domain controllers, and the total combined size of the log files was over 180MB. Assuming each row takes about 70 bytes, that means over 2.5 million entries. I’m really hoping that after I fix this, these numbers will go down significantly.

I also wanted to add some more information to the report, separating the IP address (A.B.C.D) into octet strings, so I could more easily report on the data in Excel. Granted, there are some ways to split the IP in Excel, but hey, if Excel can do it, so can my PowerShell script.

My Solution

I took a different approach than Jean Louw did on his blog. Mine was this:

  1. Get all Global Catalogs in the forest, using the one-liner from my quick info article.
  2. Copy all netlogon files to a local network share. From here I could unleash PowerShell onto the “unsuspecting log files”.
  3. Go through each file and import it into a variable as CSV. On each variable, get the unique values and add them to a Reporting variable.
  4. Add some regex code to find the 1st, 1st-2nd and 1st-3rd octets in the IP string and add them to the report.
  5. Finally, filter the Reporting variable so the final report only contains unique IPs, then export it to CSV.

Then I put this all together in a script and added some basic error checking; the result you can download in this script, Report_DebugNetlogon. A rough sketch of the approach is below.
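For readers who just want the shape of it, here is a minimal sketch of steps 1-3 and 5. The share path is a placeholder, and splitting on whitespace is a simplification of the CSV handling in the real script:

# Minimal sketch, not the full script. Assumes a reachable central share
# and that every DC of interest is a Global Catalog (step 1).
$GCs   = [System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest().GlobalCatalogs |
         ForEach-Object { $_.Name }
$Share = '\\FileServer\NetlogonLogs'   # hypothetical collection share

# Step 2: pull each DC's netlogon.log to the central share
foreach ($GC in $GCs) {
    Copy-Item "\\$GC\admin$\debug\netlogon.log" "$Share\$GC-netlogon.log" -ErrorAction SilentlyContinue
}

# Steps 3 and 5: keep only NO_CLIENT_SITE lines, grab the trailing IP, dedupe, export
$Report = foreach ($File in Get-ChildItem "$Share\*-netlogon.log") {
    Get-Content $File.FullName |
        Where-Object { $_ -match 'NO_CLIENT_SITE' } |
        ForEach-Object { ($_ -split '\s+')[-1] }   # last field is the client IP
}
$Report | Sort-Object -Unique |
    Select-Object @{Name='IPAddress';Expression={$_}} |
    Export-Csv "$Share\UnmappedClients.csv" -NoTypeInformation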

Learning Points

Using regular expressions to find the network address for /8, /16 and /24 IPs is done with the code below. I used the code for detecting an IPv4 address from here; it is found in other places on the web, but I stuck with this one:

#Extract entire IPv4 address (A.B.C.D)
Function ExtractValidIPAddress($String){
    $IPregex='(?<Address>((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))'
    If ($String -Match $IPregex) {$Matches.Address}
}
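Running it against one of the raw log lines shown earlier pulls out just the dotted quad:

ExtractValidIPAddress '05/03 08:37:06 Contoso: NO_CLIENT_SITE: Marry-PC 172.17.17.3'
# returns 172.17.17.3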

For detecting the first / first 2 / first 3 octets of an IP address, you just adjust the {3} quantifier in the $IPregex variable: {2} for the first 3 octets, {1} for the first 2 octets, and {0} for the first octet. Or just shorten the regex as below:

#Extract 1st octet of IPv4 address (A)
Function Extract1IPOctet($String){
    $IPregex='(?<Address>((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)))'
    If ($String -Match $IPregex) {$Matches.Address}
}

#Extract 1st and 2nd octets of IPv4 address (A.B)
Function Extract2IPOctet($String){
    $IPregex='(?<Address>((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){1}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))'
    If ($String -Match $IPregex) {$Matches.Address}
}

#Extract 1st, 2nd and 3rd octets of IPv4 address (A.B.C)
Function Extract3IPOctet($String){
    $IPregex='(?<Address>((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){2}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?))'
    If ($String -Match $IPregex) {$Matches.Address}
}

This approach needs some careful consideration: I first used the ExtractValidIPAddress function, and only applied the ExtractxIPOctet functions to its output, since throwing them at the raw log line would give incorrect results (the first number in the line would be matched as the first octet, for example).
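To make the ordering concrete, here is what happens with and without that intermediate step, using the sample log line from earlier:

$Line = '05/03 08:37:06 Contoso: NO_CLIENT_SITE: Marry-PC 172.17.17.3'

Extract1IPOctet $Line                           # "05"  - matched the date, not the IP
Extract1IPOctet (ExtractValidIPAddress $Line)   # "172" - the actual first octet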

As far as speed is concerned, processing those 2.5 million entries took 32 minutes: 14 minutes spent copying all files to a central location, and 18 minutes going through all the files and finding unique IPs.

A final word of advice: for networks defined as A.0.C.D or A.B.0.D, there is a “bug” when opening the CSV straight in Excel: it considers the octet column (e.g. “172.0”) to be a number, and the .0 is omitted. To get around this, bring the data in via Excel’s data import instead, and specify the octet fields as Text fields.

Now your next step should be building a pivot table with all the network prefixes, sending it over to your friendly network admin, working together to find out what each subnet does, and then adding them to the Active Directory subnet list.

Active Directory Domain Controller Backups – Part 1

I decided to write down, for posterity and my own forgetfulness, the workflow I developed for backing up domain controllers running Windows 2008 R2. I didn’t really reinvent the wheel; I merely adapted and put together some disparate pieces of code I found on the Internet.

Backup Overview

I guess this is the time we should ask ourselves the 5 Ws:

  • Who is being backed up?
  • What to back up?
  • Why do we need the backups?
  • When will backups run?
  • Where will backups be stored?

Who?

Our backup sources must be at least 2 domain controllers per domain. Why 2? Well, because 2 is better than 1: in the remote case where one of the backups is not working properly you are in trouble, and having 2 backups at least diminishes this chance. I would also advise that the DCs you back up are, if possible, among the FSMO role holders in the domain, so that in case of a forest/domain recovery you avoid doing FSMO role seizing. Suppose you are managing a single-forest, multiple-child-domain infrastructure. The forest root domain DCs that make the best candidates for backup are:

  • PDC Emulator
  • RID Master
  • Schema Master

I’m not saying the Domain Naming Master and Infrastructure Master are not important, only that in a forest/domain recovery scenario you must make sure you have the roles above working properly, and not have to go through the trouble of seizing them from dead DCs. The Domain Naming Master is useful for setting up new domains (not something you do in a recovery), and the Infrastructure Master is not used much if all your DCs are Global Catalogs (a common situation nowadays, in my opinion). At the domain level I would pick the servers that hold the PDC Emulator and RID Master roles, for mostly the same reasons as above.

What?

  • System State (at a minimum), Critical Volumes recommended
  • List of objects (distinguishedNames) – useful for restoring data
  • GPOs (contents)
  • GPO-Links

Why?

Well, the “System State” is an obvious choice, since that is the entire operating system and, in the case of DCs, the Active Directory files (database, logs, etc). Keep in mind, though, that your restore options for a supported setup are limited, as explained here.
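The actual commands are for Part 2, but for orientation: on 2008 R2 this boils down to the Windows Server Backup command-line tool, something like the line below (the target share is a placeholder, and the Windows Server Backup feature must be installed):

# Back up the critical volumes, which include the system state, to the central share
wbadmin start backup -backupTarget:\\BackupSrv\DCBackups -allCritical -quiet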

The “List of Objects” (object name and DN) is needed because in some restore cases (e.g. restoring an object after accidental deletion) you must provide a DN for the restored object, and that DN is not included in the deleted object’s information in AD.
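Producing that list is straightforward with the AD module; a minimal sketch (the output path is a placeholder):

# Dump the name and distinguishedName of every object in the domain
Import-Module ActiveDirectory
Get-ADObject -Filter * |
    Select-Object Name, DistinguishedName |
    Export-Csv '\\BackupSrv\DCBackups\ObjectDNs.csv' -NoTypeInformation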

The “GPO objects” are backed up for convenience’s sake (they are included in the “system state” backup). In case we need to restore a GPO from backup, it is much easier to have it backed up separately than to restore a system state backup, mount it, search for the files, and so on. The GPO links are something special that is not backed up by the usual GPO backup tools. Also, special care must be taken so that GP-links pointing to GPOs outside of the domain where the GPO exists are backed up as well.
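A rough sketch of both pieces, using the GroupPolicy and ActiveDirectory modules (the paths are placeholders; the TechNet gallery script linked under Resources is the more complete treatment):

Import-Module GroupPolicy
Import-Module ActiveDirectory

# GPO contents: Backup-Gpo handles these natively
Backup-Gpo -All -Path '\\BackupSrv\DCBackups\GPO'

# GPO links: not covered by Backup-Gpo, so dump every gPLink attribute separately
Get-ADObject -LDAPFilter '(gPLink=*)' -Properties gPLink |
    Select-Object DistinguishedName, gPLink |
    Export-Csv '\\BackupSrv\DCBackups\GPO\gPLinks.csv' -NoTypeInformation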

When?

In my case I did the backup to a network share, which was included in the regular backup policy of the company. So the answer would be: at a time of low activity on the server, before the regular backup runs over your backup location.

Where?

The backup destination in my case was a network share in the same subnet and datacenter as my domain controllers. This obviously lowers backup duration and any load on the WAN links. I am fortunate enough to have at least one DC from most of the domains in my forest in a single datacenter, where I set up a network share to store the backups.

A word about security here. Take as much care in securing this network share and the OS that hosts it as you would if it were a very sensitive system. Remember, this is where all of your domain controller backups are stored. Anyone gaining access to this share would be able to mount your AD files, view the contents, steal password hashes from the backups; the attack possibilities are quite numerous. If you are setting up this share on a Windows machine, CHANGE the default security settings to only allow Domain Administrators and Backup Operators access to it.
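As an illustration only (the domain name, groups and paths are hypothetical, and you should adapt the ACLs to your own policy), locking the share down could look like this:

# Create the share granting access only to the two groups, then cut NTFS
# inheritance and re-grant explicitly
net share DCBackups=D:\DCBackups /GRANT:"CONTOSO\Domain Admins",FULL /GRANT:"CONTOSO\Backup Operators",CHANGE
icacls D:\DCBackups /inheritance:r /grant "CONTOSO\Domain Admins:(OI)(CI)F" "CONTOSO\Backup Operators:(OI)(CI)M"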

Tools and Requirements

To set up the workflow that I will describe, you need these resources/permissions:

1. Domain Admin over all domains/child domains

2. You should be running Windows 2008 R2 with PowerShell (including the AD modules) installed on your DCs

I will discuss specific requirements for the backup itself later on, when the time comes.

Resources

Some links and resources that helped me build this workflow are below:

A good deep-dive video explaining a lot of the concepts around Domain Controller backups and restore techniques, presented at TechEd US this year. I picked up from here what needs to be backed up, and some commands for doing the backup.

This TechNet gallery script for backing up GPO objects; the stuff there is pretty accurate.

This is it for an introductory post, giving an overview of how we will get the job done. Part 2 will cover the scripts/commands used to achieve this backup.

Get Basic Information on Active Directory Domain Controllers

Lately I found myself doing a lot of work around AD, since I’m responsible for migrating the forest to the 2008 R2 Functional Level. As you may already know, in order to raise the forest Functional Level you have to raise the Functional Level of all domains in it, and to do that, every DC in those domains must run Windows 2008 R2 or later. To get started you need a list of all systems in the AD infrastructure, and a list of those that need their OS replaced. If your infrastructure is like mine, you have lots of DCs, most of them set up before your time with the company, and lots of them lacking documentation. Many of them probably also run on antiquated hardware, some of which will not support Windows 2008 R2. The most stringent requirement, in my book, for installing Windows 2008 R2 is that the CPU must support 64-bit, since Windows 2008 R2 only comes in a 64-bit flavor.

When I first started inventorying our DCs, I made a list of the basic things that interested me for transitioning to 2008 R2 FL (Functional Level):

  • HostName, Operating System and Domain
  • Site and IPAddress
  • FSMO roles Installed
  • Hardware and Model
  • CPU x64 Readiness and Memory size

The first 3 above are low-hanging fruit; you can extract them using a modified one-liner from my “Quick Tip #1” article.
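If you don’t have that article handy, the gist is the .NET forest/domain classes, which expose all of these properties per DC (a sketch, assuming you run it with forest-wide read access):

# Enumerate every DC in the forest with its domain, site, IP, OS and FSMO roles
[System.DirectoryServices.ActiveDirectory.Forest]::GetCurrentForest().Domains |
    ForEach-Object { $_.DomainControllers } |
    Select-Object Name, Domain, SiteName, IPAddress, OSVersion, Roles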

Hardware, model and memory size are also not so difficult; you can query each server via WMI for these.
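Something along these lines, assuming RPC/WMI access to the remote machine ($dc holds the hostname):

# Win32_ComputerSystem carries the vendor, model and physical memory
$cs = Get-WmiObject -Class Win32_ComputerSystem -ComputerName $dc
$cs.Manufacturer                                    # hardware vendor
$cs.Model                                           # model
[math]::Round($cs.TotalPhysicalMemory / 1GB, 1)     # memory size in GB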

The most challenging part is finding out if the CPU supports 64-bit instructions. The first place you will probably think to look is the environment variables within Windows (type “echo %processor_architecture%” in a cmd prompt to see the output; anything not x86 is 64-bit). You are out of luck, because what that variable actually stores is the capability of the Operating System, and unless you are running a 64-bit OS on 64-bit hardware (in which case you don’t need this script in the first place) that is of no use. Then I thought: “Hey, there must be a way to find this out via PowerShell/WMI” … indeed you can find out some information about the CPU data width (wmic cpu get datawidth), however that data is inaccurate too: it also refers to the Operating System. You can cross-check your results with a tool from the overclocking world (CPU-Z); you will see it shows the CPU can do 64-bit instructions while WMI says it can’t (because the OS is 32-bit).

Finally my quest brought me to a tool written by this gentleman, called chkcpu32. It was created a long time ago, but I see it is still actively maintained; the last update was September 2012. This tool actually queries the CPU for this information rather than WMI. The latest version added XML support, a real treat for us PowerShell scripters: now we don’t have to do text parsing. Here’s a sample non-XML output from one of my systems:

C:\>chkcpu32 /v

CPU Identification utility v2.10                 (c) 1997-2012 Jan Steunebrink
──────────────────────────────────────────────────────────────────────────────
CPU Vendor and Model: Intel Core i7 Quad M i7-2600QM/2700QM series D2-step
Internal CPU speed  : 2195.0 MHz
System CPU count    : 1 Physical CPU(s), 4 Core(s) per CPU, 8 Thread(s)
CPU-ID Vendor string: GenuineIntel
CPU-ID Name string  : Intel(R) Core(TM) i7-2720QM CPU @ 2.20GHz
CPU-ID Signature    : 0206A7
CPU Features        : Floating-Point Unit on chip  : Yes
                      Time Stamp Counter           : Yes
                      Enhanced SpeedStep Technology: Yes
                      Hyper-Threading Technology   : Yes
                      Execute Disable protection   : Yes
                      64-bit support               : Yes
                      Virtualization Technology    : Yes
Instr set extensions: MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2
Size of L1 cache    : 4 x 64 KB
Integrated L2 cache : 4 x 256 KB
Integrated L3 cache : 6144 KB

I bundled all these information-gathering bits and pieces into this script, and below you can find short learning points from some of the key parts.

Learning Points

First of all, the script assumes that you are running under enterprise admin credentials and that all your DCs are GCs; if that’s not your setup, you will have to come up with another way to list all your domain controllers.

I find that nowadays it is more of a headache to not have all DCs as GCs than to just make sure they all are. By default, dcpromo in Windows 2008 R2 will make a new DC a GC and a DNS server.

My previous post, on how to get all domain controllers, lists the one-liner to get basic information about DCs (hostname, domain name, site name, IP, FSMO roles). The only real challenge here is how to handle the formatting of the “Roles” property; I used a switch statement to loop through all of the FSMO roles a DC might have.

foreach ($role in $($dc.roles)) {
    Switch ($role) {
        "PdcRole"            { $row.PdcRole = "TRUE" }
        "RidRole"            { $row.RidRole = "TRUE" }
        "InfrastructureRole" { $row.InfrastructureRole = "TRUE" }
        "NamingRole"         { $row.NamingRole = "TRUE" }
        "SchemaRole"         { $row.SchemaRole = "TRUE" }
    }
}

As far as getting the CPU 64-bit support, this is done with chkcpu32 using the lines below. You will also need psexec from the Sysinternals toolkit (at least v1.98 of psexec), and you should run it at least once beforehand, to get rid of the EULA-accept pop-up.

Set-Alias pshell-psexec "c:\windows\system32\psexec.exe"
& pshell-psexec \\$dc -n 25 -c -f -h CHKCPU32.exe /v /X > "$($dc)_x64status.log"

# Cast the captured output to XML before reading the _64bit node; note that
# the comparison must use -eq, as = would be an assignment
[xml]$CpuInfo = Get-Content "$($dc)_x64status.log"
If ($CpuInfo.chkcpu32._64bit -eq '1') {
    $row.CPUx64Ready = 'True'
}
Else {
    $row.CPUx64Ready = 'False'
}

The rest of the code in the script is just putting all of this together in a nicely formatted CSV file.

This is kind of all of it; nothing too difficult once you find the right tools and use them properly. Any comments or feedback are most welcome!