How to perform an offline audit of your Active Directory NTLM hashes

It’s read-only Friday, so I decided to perform an offline audit of our Active Directory passwords.

I found this great tool: https://gitlab.com/chelmzy/five-minute-password-audit which in turn is a fork of this tool: https://github.com/DGG-IT/Match-ADHashes

What I’m going to write here is mostly a repeat of those two Git repos, with a few tweaks and corrections.

To perform this procedure you will need to be able to log in to a Domain Controller. You’re also going to want a secure location to perform all of this work so the dumped list of usernames and hashes doesn’t escape your control.

The secure location should be a workstation or server running the same or a newer version of Windows than your Domain Controllers. For example, if you’re running AD on 2012 R2 you can’t complete this on a 2008 R2 box. Your secure workstation or server will also need to be running PowerShell 5.0 or newer.
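You can confirm the PowerShell version on the secure box before you start:

$PSVersionTable.PSVersion

Anything reporting a major version of 5 or higher will do.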

Step 1 – Export NTDS.dit and the SYSTEM hive

  1. Login to a domain controller
  2. Open a Command Prompt window
  3. Type “ntdsutil”
  4. Click ‘Yes’ if UAC prompts you
  5. Run the following commands (a one-liner alternative follows this list):
    activate instance ntds
    ifm
    
    # Replace <DOMAINNAME> with your domain's name
    create full c:\temp\<DOMAINNAME>-audit
    
    # Wait for command to complete
    quit
    quit
  6. Transfer “C:\Temp\<DOMAINNAME>-audit” to the secure location where you’ll work on it. I do not recommend performing the rest of these steps on your Domain Controllers.
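If you’d rather skip the interactive prompts, ntdsutil also accepts the whole sequence as a single command line:

# Replace <DOMAINNAME> with your domain's name
ntdsutil "activate instance ntds" ifm "create full c:\temp\<DOMAINNAME>-audit" quit quit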

Step 2 – Download the latest Have I Been Pwned Offline NTLM password list

  1. Go to https://haveibeenpwned.com/Passwords
  2. Scroll to the bottom and download the NTLM list that is “ordered by prevalence”
  3. Once downloaded, transfer the password list to the audit directory in your secure location and extract it (see the note below)
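The download comes as a 7-Zip archive, so you’ll need 7-Zip (or anything that reads .7z) on the secure box. A sketch, assuming the v5 ordered-by-count file used later in this article and a default 7-Zip install path:

# File name and 7-Zip path are examples; adjust to match your download
& 'C:\Program Files\7-Zip\7z.exe' x .\pwned-passwords-ntlm-ordered-by-count-v5.7z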

Step 3 – Convert the hashes in the NTDS.dit file to Hashcat formatting

  1. On your secure workstation/server, launch PowerShell as an administrator (right-click the PowerShell shortcut and choose ‘Run as administrator’)
  2. Install the DSInternals tools by running
    Install-Module -Name DSInternals -Force
  3. Go into the audit directory
    cd c:\temp\<DOMAINNAME>-audit
  4. Convert the hashes
    $key = Get-BootKey -SystemHivePath .\registry\SYSTEM
    
    # Change <DOMAINNAME> to your domain's name
    Get-ADDBAccount -All -DBPath '.\Active Directory\ntds.dit' -BootKey $key | Format-Custom -View HashcatNT | Out-File <DOMAINNAME>-hashes.txt -Encoding ASCII
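Once that completes, it’s worth sanity-checking the output before moving on; each line should look like username:hash:

Get-Content .\<DOMAINNAME>-hashes.txt -TotalCount 5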

Step 4 – Compare your hashes to HIBP

The code in the Git repos I linked at the beginning of the article is written as a function. For myself, I just wanted a script I could execute with the appropriate parameters instead of futzing around with importing the function.

I also tweaked the original script for formatting (I like a bit more white space personally), added CSV headers, removed the spaces after the commas, had the script append its execution time to the end of the CSV file, and allowed for relative filenames as parameters instead of requiring absolute paths.

Here is my version of the script:

<#
This is a slightly altered version of https://gitlab.com/chelmzy/five-minute-password-audit/blob/master/Match-ADHashes.ps1, which is itself a slightly altered version of https://github.com/DGG-IT/Match-ADHashes/ for no-nonsense output. All credit to them.
.NAME
    Match-ADHashes
.SYNOPSIS
    Matches AD NTLM Hashes against other list of hashes
.DESCRIPTION
    Builds a hashmap of AD NTLM hashes/usernames and iterates through a second list of hashes checking for the existence of each entry in the AD NTLM hashmap
        -Outputs results as object including username, hash, and frequency in database
        -Frequency is included in output to provide additional context on the password. A high frequency (> 5) may indicate the password is commonly used and not necessarily linked to a specific user's password re-use.
.PARAMETER ADNTHashes
    File Path to 'Hashcat' formatted .txt file (username:hash)
.PARAMETER HashDictionary
    File Path to 'Troy Hunt Pwned Passwords' formatted .txt file (HASH:frequencycount)
.PARAMETER Verbose
    Provide run-time of function in Verbose output
.EXAMPLE
    $results = Match-ADHashes -ADNTHashes C:\temp\adnthashes.txt -HashDictionary C:\temp\Hashlist.txt
.OUTPUTS
    Array of HashTables with properties "User", "Frequency", "Hash"
    User                            Frequency Hash                            
    ----                            --------- ----                            
    {TestUser2, TestUser3}          20129     H1H1H1H1H1H1H1H1H1H1H1H1H1H1H1H1
    {TestUser1}                     1         H2H2H2H2H2H2H2H2H2H2H2H2H2H2H2H2
.NOTES
    If you are seeing results for User truncated as {user1, user2, user3...} consider modifying the Preference variable $FormatEnumerationLimit (set to -1 for unlimited)
    
    =INSPIRATION / SOURCES / RELATED WORK
        -DSInternal Project https://www.dsinternals.com
        -Checkpot Project https://github.com/ryhanson/checkpot/
    =FUTURE WORK
        -Performance Testing, optimization
        -Other Languages (golang?)
.LINK
    https://github.com/DGG-IT/Match-ADHashes/
#>

param(
    [Parameter(Mandatory = $true)]
    [System.IO.FileInfo] $ADNTHashes,
    [Parameter(Mandatory = $true)]
    [System.IO.FileInfo] $HashDictionary
)

process {
    $stopwatch = [System.Diagnostics.Stopwatch]::StartNew()

    # Set the current location so .NET will be nice and accept relative paths
    [Environment]::CurrentDirectory = Get-Location

    # Declare and fill a new hashtable with the AD NT hashes. Hashes are upper-cased so matching is case-insensitive
    $htADNTHashes = @{}
    Import-Csv -Delimiter ":" -Path $ADNTHashes -Header "User","Hash" | % {$htADNTHashes[$_.Hash.toUpper()] += @($_.User)}
       
    # Create Filestream reader
    $fsHashDictionary = New-Object IO.Filestream $HashDictionary,'Open','Read','Read'
    $frHashDictionary = New-Object System.IO.StreamReader($fsHashDictionary)

    # Output CSV headers
    Write-Output "Username,Frequency,Hash"

    # Iterate through HashDictionary checking each hash against ADNTHashes
    while ($null -ne ($lineHashDictionary = $frHashDictionary.ReadLine())) {
        if ($htADNTHashes.ContainsKey($lineHashDictionary.Split(":")[0].ToUpper())) {
            $user = $htADNTHashes[$lineHashDictionary.Split(":")[0].ToUpper()]
            $frequency = $lineHashDictionary.Split(":")[1]
            $hash = $lineHashDictionary.Split(":")[0].ToUpper()
            Write-Output "$user,$frequency,$hash"
        }
    }

    $stopwatch.Stop()

    Write-Output "Function Match-ADHashes completed in $($stopwatch.Elapsed.TotalSeconds) Seconds"
}
    
end {
}


To execute it, copy/paste it into Notepad and save it as ‘myAudit.ps1’ or whatever file name you’d like.

Now perform your audit:

# Replace <DOMAINNAME> with your domain name
.\myAudit.ps1 -ADNTHashes <DOMAINNAME>-hashes.txt -HashDictionary <HIBP TEXT FILE> | Out-File <DOMAINNAME>-PasswordAudit.csv

# Example
.\myAudit.ps1 -ADNTHashes myDomain-hashes.txt -HashDictionary pwned-passwords-ntlm-ordered-by-count-v5.txt | Out-File myDomain-PasswordAudit.csv

The final result will be a CSV file you can dig through.
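Because the script appends its execution time as a final line, that footer shows up as a malformed row if you load the CSV back into PowerShell. Here’s one way to surface the worst offenders (the Where-Object simply drops that footer row):

Import-Csv .\myDomain-PasswordAudit.csv | Where-Object Hash | Sort-Object {[int]$_.Frequency} -Descending | Select-Object -First 20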

Step 5 – Clean it all up

The output may or may not surprise you, but whatever the outcome, when you’re done you want to get rid of the <DOMAINNAME>-hashes.txt file and the exported ntds.dit directory as soon as possible. If someone snags a copy of those you’ll likely be in some serious trouble.

Head on over to Sysinternals and grab SDelete:

.\sdelete.exe -p 7 -r -s <DIRECTORY OR FILE>
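For example, to securely wipe the whole audit directory from earlier (using the example domain name from above):

# 7 overwrite passes (-p 7), clear read-only attributes (-r), recurse subdirectories (-s)
.\sdelete.exe -p 7 -r -s C:\temp\myDomain-audit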


List/Audit all folder delegate permissions on an Exchange mailbox

We recently needed a way to see what delegate permissions a client had granted across the vastness that is their mailbox and its folder structure.

Digging around online, I found this script from John Hopkins, which got me 90% of the way there.

Their script was missing three things for my use case:

  • Delegate permissions on the root folder of the mailbox
  • Exclusion of the mailbox owner from the report
  • Slightly tidier formatting of the output

This script has been tested against Exchange 2016 only.

# Use displayname
$mailbox = "<DISPLAY NAME>"

$permissions = @()
$folders = Get-MailboxFolderStatistics $mailbox | % {$_.FolderPath} | % {$_.Replace("/","\")}

# Get the root folder of the mailbox
$folderKey = $mailbox + ":" + "\"
$permissions += Get-MailboxFolderPermission -Identity $folderKey -ErrorAction SilentlyContinue | Where-Object {$_.User -notlike "Default" -and $_.User -notlike "Anonymous" -and $_.AccessRights -notlike "None" -and $_.User -notlike $mailbox}

# Get the rest of the folders
ForEach ($folder in $folders) {
    $folderKey = $mailbox + ":" + $folder
    $permissions += Get-MailboxFolderPermission -Identity $folderKey -ErrorAction SilentlyContinue | Where-Object {$_.User -notlike "Default" -and $_.User -notlike "Anonymous" -and $_.AccessRights -notlike "None" -and $_.User -notlike $mailbox}
}

$permissions | Format-Table -AutoSize
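If you’d rather keep a record than eyeball a table, the same results export cleanly to CSV (the file name is just an example):

$permissions | Select-Object FolderName, User, AccessRights | Export-Csv .\mailbox-delegates.csv -NoTypeInformation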


Some Microsoft Storage Spaces Benchmarks

My backup server has an ASRock H370M-ITX/AC motherboard in it and, at the time of these benchmarks, 3x 6TB Seagate IronWolf SATA disks.

I run Veeam and used the SATA disks as my backup repository.

My original configuration was 2x 6TB IronWolfs in a RAID0 (using Microsoft Storage Spaces) with BitLocker enabled. This worked perfectly fine and I had no performance issues.

A project I was working on required me to add some redundancy to my backup storage, so I purchased the 3rd disk, re-configured the Microsoft Storage Space as a RAID5, and re-enabled BitLocker. Since then I’ve had nothing but performance issues. When two backup jobs ran at the same time the server became nearly unresponsive. The jobs still ran and completed, but it was very difficult to use the server, and jobs took longer with the RAID5 configuration than with the RAID0 configuration. Some performance difference makes sense, but this amount seemed abnormal.
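For reference, what I’m calling RAID5 here is a Parity layout in Storage Spaces terms (RAID0 is Simple). A minimal sketch of building such a pool in PowerShell, with hypothetical pool and disk names rather than my exact setup:

# Grab the poolable SATA disks
$disks = Get-PhysicalDisk -CanPool $true

# Create the pool, then a parity (RAID5-like) virtual disk on top of it
New-StoragePool -FriendlyName "BackupPool" -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "BackupPool" -FriendlyName "BackupVD" -ResiliencySettingName Parity -ProvisioningType Fixed -UseMaximumSize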

I ended up with a spare LSI MegaRAID 9270-8i I couldn’t sell, so I decided to throw it into my backup server and try running the above configuration with hardware RAID. Before I did that, though, I ran some benchmarks.
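If you want to run comparable numbers yourself, Microsoft’s DiskSpd is one option. The parameters below are only an illustrative starting point, not what produced my screenshots:

# 30-second, 64K sequential write test against a 10 GB test file (2 threads, queue depth 8)
.\diskspd.exe -c10G -d30 -w100 -b64K -si -t2 -o8 D:\test.dat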

As you can see, BitLocker has a huge negative impact on this configuration even though the server is running an Intel Core i3-8100, which has hardware acceleration for encryption (AES-NI) built in.

You can probably guess how this is all going to end now.

First a 3 disk RAID0:

Last but not least, a 3 disk RAID5:

It’s still a ~70% hit on sequential writes, but the server is completely usable and backup jobs run at the speeds I would expect over 1GbE.

Event ID 20292 from DHCP-Server

Checking over our DHCP server, we were seeing quite a few of these errors in the ‘Microsoft-Windows-DHCP Server Events/Admin’ event log:

Log Name:      DhcpAdminEvents
Source:        Microsoft-Windows-DHCP-Server
Date:          1/28/2019 10:23:49 PM
Event ID:      20292
Task Category: DHCP Failover
Level:         Error
Keywords:      
User:          NETWORK SERVICE
Computer:      dc2.mydomain.com
Description:
A BINDING-ACK message with transaction id: 943568 was received for IP address: 10.253.166.162 with reject reason:  (Reject Reason Unknown ) from partner server: dc1.mydomain.com for failover relationship: dc1.mydomain.com-dc2.mydomain.com.

Researching this error I came across this forum post: https://social.technet.microsoft.com/Forums/en-US/15d00412-3dfc-4520-a74e-1f32fe1329ef/windows-server-2012-dhcp-event-id-20291?forum=winserveripamdhcpdns

That led me to this KB article: https://support.microsoft.com/en-ca/help/2955135/event-id-20291-is-logged-in-the-system-log-when-a-client-computer-is-m

The hotfix that Microsoft mentions is from November 2014 and has been installed on our server for a very long time. We never noticed this error back in 2014 when the hotfix was installed, so we were not able to “first remove the failover relationship, install the update to both DHCP nodes and restart them, and then reestablish the failover relationship” per Microsoft’s article.

The article led me to believe you have to deconfigure failover on all subnets, destroy the failover relationship, re-create the failover relationship, and then re-configure failover on each subnet.

It turns out that, assuming you’ve installed the November 2014 hotfix, you can just right-click ‘Deconfigure failover’ and then right-click ‘Configure failover’ on the specific subnets having the issue, re-using the existing failover relationship, to resolve the error.
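If you’d rather script it, the DhcpServer PowerShell module can do the same per-scope dance. The relationship name and scope ID below are illustrative, based on the example event above:

# Pull the affected scope out of the existing failover relationship...
Remove-DhcpServerv4FailoverScope -ComputerName dc1.mydomain.com -Name "dc1.mydomain.com-dc2.mydomain.com" -ScopeId 10.253.166.0

# ...then add it back, re-using the same relationship
Add-DhcpServerv4FailoverScope -ComputerName dc1.mydomain.com -Name "dc1.mydomain.com-dc2.mydomain.com" -ScopeId 10.253.166.0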

Microsoft RAS VPN and VXLAN not quite working

I’m not overly knowledgeable about advanced networking but I figured I’d share this since I couldn’t find anything online about it at the time.

We run a Microsoft Remote Access Server (RAS) as our VPN server, primarily providing L2TP for users.

Due to a limitation in the Windows VPN client, our RAS server has two network interfaces: one directly on the internet with a public IP (VLAN1) and one internal with a private IP (VLAN2).

The private interface relays VPN users’ DHCP/DNS requests to our internal DHCP and DNS servers.

RAS handles the authentication instead of RADIUS and we have our internal routes published via RIP to the RAS server so they can be provided to VPN clients when they connect.

I believe this is a fairly common design.

On the network end, our original design involved spanning VLANs 1 and 2 all the way from our edge into our data center so the VM could pretty much sit directly on them. This worked fine.

As part of a major network redesign, we changed the VLAN-spanning design over to a VXLAN from our edge into our data center.

After making this change we ran into the strangest VPN issues. Users could connect, ping anything they wanted, do DNS lookups, and browse most HTTP websites, but HTTPS websites would partially load or fail to load, and network share (SMB) access would only partially work (you could get to the DFS root but not down to an actual file server).

After many hours of troubleshooting we determined our problem.

The MTU of most devices defaults to 1500 bytes. When we started tunneling the traffic through a VXLAN, the encapsulation added 52 bytes to the packet size, making the total 1552 bytes, which is just over what most network interfaces expect. This caused large packets (loading an HTTPS website, connecting to a share) to drop, while small packets (pings, some HTTP websites) worked fine.
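A quick way to confirm this kind of MTU problem from a VPN client is a don’t-fragment ping sweep (the host name is just an example):

# 1472 bytes of ICMP payload + 28 bytes of headers = a 1500-byte packet
ping -f -l 1472 fileserver.mydomain.com

# If that fails but a smaller payload succeeds, something in the path has a sub-1500 MTU
ping -f -l 1400 fileserver.mydomain.com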

I believe the final solution from our network team was to enable jumbo frames end to end along the VXLAN path so it could transmit the slightly larger encapsulated packets.

If you have any specific questions I can relay them to our Network Team and try to get you an answer. No promises :)