Script for detecting potentially vulnerable Log4j jars [CVE-2021-44228] on Windows Server

Update 2021-12-18 – This looks like a much more competent script for detecting this vulnerability and there is a python version for Linux:

Updated 2021-12-17 – Script is v1.4 and looks for .war files now too

Original post below

Inspired by the one-liner here:

gci 'C:\' -rec -force -include *.jar -ea 0 | foreach {select-string "JndiLookup.class" $_} | select -exp Path

I wrote a script that expands on the command, supports Windows Server 2008 onward, and is more automated.

This script is basically the one-liner with a bit of logic to get all the local fixed disks on a server and iterate through them looking for Log4j jar files:

    Checks the local system for the Log4Shell vulnerability [CVE-2021-44228]
    Gets a list of all volumes on the server and loops through them, searching each disk for Log4j files
    Uses the base search from the one-liner above

    Version History
        1.0 - Initial release
        1.1 - Changed ErrorAction to "Continue" instead of stopping the script
        1.2 - Went back to SilentlyContinue, so much noise
        1.3 - Borrowed some improvements from @cedric2bx
                Replaced attribute -Include with -Filter (prevents unauthorized access exceptions stopping the scan)
                Removed duplicate paths with the Get-Unique cmdlet
        1.4 - Added .war support thanks to @djblazkowicz
    Created by Eric Schewe 2021-12-13
    Modified by Cedric BARBOTIN 2021-12-14

# Get Windows Version string
$windowsVersion = (Get-WmiObject -class Win32_OperatingSystem).Caption

# Server 2008 (R2)
if ($windowsVersion -like "*2008*") {

    $disks = [System.IO.DriveInfo]::GetDrives() | Where-Object {$_.DriveType -eq "Fixed"}

}
# Everything else
else {

    $disks = Get-Volume | Where-Object {$_.DriveType -eq "Fixed"}

}

# I have no idea why .Count didn't just work (likely PowerShell 2.0 on 2008, where a lone object has no .Count property)
$diskCount = $disks | Measure-Object | Select-Object Count -ExpandProperty Count

Write-Host -ForegroundColor Green "$(Get-Date -Format "yyyy-MM-dd H:mm:ss") - Starting the search of $($diskCount) disks"

foreach ($disk in $disks) {

    # Based on the one-liner:
    # gci 'C:\' -rec -force -include *.jar -ea 0 | foreach {select-string "JndiLookup.class" $_} | select -exp Path

    # Server 2008 (R2)
    if ($windowsVersion -like "*2008*") {

        Write-Host -ForegroundColor Yellow "  $(Get-Date -Format "yyyy-MM-dd H:mm:ss") - Checking $($disk.Name): - $($disk.VolumeLabel)"
        Get-ChildItem "$($disk.Name)" -Recurse -Force -Include @("*.jar","*.war") -ErrorAction SilentlyContinue | ForEach-Object { Select-String "JndiLookup.class" $_ } | Select-Object -ExpandProperty Path | Get-Unique

    }
    # Everything else
    else {

        Write-Host -ForegroundColor Yellow "  $(Get-Date -Format "yyyy-MM-dd H:mm:ss") - Checking $($disk.DriveLetter): - $($disk.VolumeLabel)"
        Get-ChildItem "$($disk.DriveLetter):\" -Recurse -Force -Include @("*.jar","*.war") -ErrorAction SilentlyContinue | ForEach-Object { Select-String "JndiLookup.class" $_ } | Select-Object -ExpandProperty Path | Get-Unique

    }
}

Write-Host -ForegroundColor Green "$(Get-Date -Format "yyyy-MM-dd H:mm:ss") - Done checking all drives"
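For reference, the same detection idea translates naturally to other platforms. Here is a hedged Python sketch (my own, not the linked Linux script; the function name is hypothetical): since .jar/.war files are just zip archives, listing their entries for JndiLookup.class is a slightly more precise test than Select-String's raw string match.

```python
import os
import zipfile

def find_vulnerable_jars(root):
    """Walk a directory tree and flag .jar/.war archives that
    contain a JndiLookup.class entry (CVE-2021-44228 indicator)."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.lower().endswith((".jar", ".war")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with zipfile.ZipFile(path) as zf:
                    if any(e.endswith("JndiLookup.class") for e in zf.namelist()):
                        hits.append(path)
            except (zipfile.BadZipFile, OSError):
                # Unreadable or corrupt archive; skip it, mirroring
                # -ErrorAction SilentlyContinue in the PowerShell version
                continue
    return hits
```

Note that nested jars (a log4j-core jar packed inside a .war) would need recursive extraction, which neither this sketch nor the one-liner handles.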

Sample output with nothing found:

check_CVE-2021-44228.ps1 sample output

Sample output with something found:

check_CVE-2021-44228.ps1 sample output 2

Good luck everyone.

Windows Defender Advanced Threat Protection Service will not start after November 2021 updates

Update – 2021-12-15 – I can confirm that the December Windows Updates have fixed this issue for us.


After installing OS updates on all of our servers in November 2021, we ended up with three servers, all running Server 2019 Core and all Domain Controllers, where the Windows Defender Advanced Threat Protection Service would not start.

Without the Windows Defender Advanced Threat Protection Service running, these servers do not report to M365 ATP.

Manually trying to start the service results in an Error 1053:

Error 1053

and via PowerShell:

PS C:\Users\me> Start-Service sense
Start-Service : Service 'Windows Defender Advanced Threat Protection Service (sense)' cannot be started due to the
following error: Cannot start service sense on computer '.'.
At line:1 char:1
+ Start-Service sense
+ ~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : OpenError: (System.ServiceProcess.ServiceController:ServiceController) [Start-Service],
    + FullyQualifiedErrorId : CouldNotStartService,Microsoft.PowerShell.Commands.StartServiceCommand

Microsoft Support has confirmed with me that this is a known issue with the November 2021 updates and should be addressed in the December 2021 updates.

Hopefully this saves you a support ticket.

Is enabling SMB Signing on your NetApp a non-disruptive change?

We received the following alert from our ActiveIQ Unified Management Appliance (and a similar one in ActiveIQ / AutoSupport): Alert from Active IQ Unified Manager: Advisory ID: NTAP-20160412-0001

You can find more details here:

After reviewing it, fixing it seemed like a straightforward change, but I wanted to know: is enabling SMB signing on your NetApp a non-disruptive change?

Everything I’ve read says it has been supported since Windows 98, and if you’ve disabled SMBv1 (which you hopefully have) everyone should be using it anyway, since SMBv2 and newer sign by default. On top of that, Domain Controllers use signing by default for things like SysVol, and I assume DFS if you have that on your Domain Controllers. Windows also negotiates whether or not to use SMB signing based on client/server settings, and by default it prefers the more secure option of signing unless someone is man-in-the-middling you and downgrading your connection, or you’re using… Windows 95?

Since I couldn’t find any kind of answer to my question I figured I’d post something to hopefully help the next person wondering the same thing and faced with this security alert.

So, is enabling SMB signing on your NetApp a non-disruptive change? He asked again, out loud, like a crazy person.

Short answer: No.

Long answer: Nope but it’s probably not that bad.

I enabled SMB signing on our NetApp (OnTap 9.7P14) and about 95% of clients didn’t even notice but 5% did.

The 5% of clients that had a problem with SMB signing immediately lost access to all shares hosted on the NetApp and would get a “You do not have permissions to access this” error message.

For remote workers it was easy: disconnecting/reconnecting the VPN solved it. On-premise workers had to log off/on or reboot. Servers, though, had to be rebooted.

The kicker? Clients that had problems ranged from Windows 7 (I KNOW) to Windows 10. Servers that had problems? Server 2008 R2 (I KNOW) up to 2012 R2. Surprisingly, none of our 2016 or 2019 servers had a problem, but we have significantly fewer of those, so plan accordingly if you’re doing this.

Here is an example: we had two identical 2012 R2 servers; one worked post-change, one didn’t. We had to reboot the one with the issue and then everything was good again.

My advice if you are tasked with implementing this in your organization?

For desktops: ask your clients to log off when they leave for the day, and make the change in the evening.

For servers: had I been smarter, I could have enabled SMB signing on Patch Tuesday right before server reboots. That would have caused the least disruption and folded nicely into our existing maintenance window. If that isn’t an option for you, have a quick test plan to check whether each server can access a share and, if it can’t, reboot it.

There is potentially another option I was exploring but abandoned. You could build a GPO that makes SMB signing required and apply it to your Desktops/Servers ahead of time. After the GPO has propagated, in theory, you should be able to enable SMB signing on the NetApp and since all systems are already required to use it, there should be no disruption.

There you go. My lessons learned from this experience. Good luck. Hopefully this helps someone.

NetApp provides documentation here on how to enable SMB signing:

How to use CIRA Canadian Shield with a Pi-Hole and DoH

CIRA (the Canadian Internet Registration Authority) has recently launched a new DNS service called the “Canadian Shield”, which is basically a DNS service similar to OpenDNS or Cloudflare’s, for Canadians, by Canadians.

CIRA offers three levels of protection depending on how safe you want to be:

  • Private: DNS resolution service that keeps your DNS data private from third-parties.
  • Protected: Includes Private features and adds malware and phishing blocking.
  • Family: Includes Protected and Private features and blocks pornographic content.

We use the Enterprise version of this service at my place of work, and based on how we use it I’d say we’re using the equivalent of their “Protected” offering. We’ve had zero issues with the service, and it definitely feels like it adds an extra layer of protection for our users.

Alright, enough free advertising (I am not receiving compensation from CIRA for this post).

When this was first announced I was eager to try it at home. CIRA’s instructions cover either configuring DNS over HTTPS (DoH) on a per-browser basis (not ideal for me, since I have many devices on my network and don’t only use Firefox/Chrome) or pointing your outbound DNS at their servers using traditional, unencrypted DNS queries.

What I really want is to configure my Pi-hole, which is the DNS endpoint for everything on my network, to use the new CIRA service. This would capture ALL outbound DNS traffic and send it to CIRA, so I only have to configure things in one place.

My current setup is: Clients -> 1 of 2 Active Directory DNS Servers -> Pi-hole -> Cloudflare via DoH (cloudflared)

This will easily work on a more traditional deployment of: Clients -> Pi-hole -> Cloudflare via DoH (cloudflared)

My Pi-hole is a basic CentOS 8 VM with the Pi-hole software installed, along with cloudflared so I can take advantage of DoH for all of my outbound DNS traffic. This is the minimum you need to get this working: a functional Pi-hole that is already sending its outbound DNS queries to Cloudflare (or another DoH provider) via cloudflared.

The first thing I did was re-configure my cloudflared to simply try using the CIRA DoH:

# Edit the cloudflared configuration file
vim /etc/default/cloudflared

# Commandline args for cloudflared to use
CLOUDFLARED_OPTS=--port 5053 --upstream --upstream

# Changed the above to:
CLOUDFLARED_OPTS=--port 5053 --upstream

# Save and close the file

# Restart cloudflared
systemctl restart cloudflared

# Test DNS

# This failed


This ended up not working, and after I tried it I realized why. To be able to use a DoH endpoint addressed by hostname, you have to have functioning DNS in the first place. Cloudflare skirts this issue by using bare IP addresses for its service, which do not need DNS to be working to function.

Fortunately the solution was very easy:

# I did a nslookup on the specific CIRA service I wanted to use (Private) via traditional DNS



Non-authoritative answer:
Address: 2620:10a:80bb::10
Address: 2620:10a:80bc::10

# If you want to use Protected or Family instead of Private,
# do a nslookup on the matching hostname instead and use those IPs

# I then edited my /etc/hosts file on my Pi-hole
vim /etc/hosts

# and added the following (I don't use IPv6 at this time):

# CIRA Canadian Shield DNS

# Save and close the file

# Test DNS

Non-authoritative answer:


That’s it. I now have the CIRA Canadian Shield working on my Pi-hole using the cloudflared software.
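The /etc/hosts workaround boils down to: resolve the DoH endpoint’s hostname once over ordinary DNS, then pin that answer locally. A small Python sketch of that bootstrap step (the function name is mine; pass whichever CIRA hostname you chose):

```python
import socket

def hosts_pin_line(doh_hostname):
    """Resolve a DoH endpoint once over ordinary DNS and return an
    /etc/hosts line that pins it, breaking the chicken-and-egg problem
    of needing DNS to reach your DNS-over-HTTPS server."""
    ip = socket.gethostbyname(doh_hostname)
    return f"{ip}\t{doh_hostname}"
```

Appending the returned line to /etc/hosts reproduces the manual nslookup-and-edit steps above. The pinned IP can go stale if the provider ever renumbers, which is one reason stable anycast IPs like Cloudflare’s sidestep the problem entirely.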

Now for some DNS benchmarks. What’s faster, CIRA or Cloudflare?

I flushed the DNS cache on my Active Directory DNS servers and then restarted cloudflared on my Pi-hole before running these benchmarks. First, the existing Cloudflare setup:

  192.168.  0.  4 |  Min  |  Avg  |  Max  |Std.Dev|Reliab%|
  + Cached Name   | 0.000 | 0.000 | 0.000 | 0.000 | 100.0 |
  + Uncached Name | 0.009 | 0.052 | 0.196 | 0.043 | 100.0 |
  + DotCom Lookup | 0.009 | 0.013 | 0.027 | 0.004 | 100.0 |
                Local Network Nameserver

  192.168.  0.  5 |  Min  |  Avg  |  Max  |Std.Dev|Reliab%|
  + Cached Name   | 0.000 | 0.000 | 0.000 | 0.000 | 100.0 |
  + Uncached Name | 0.009 | 0.051 | 0.195 | 0.044 | 100.0 |
  + DotCom Lookup | 0.010 | 0.014 | 0.028 | 0.003 | 100.0 |
                Local Network Nameserver


and here are the three CIRA services:

CIRA - Private

  192.168.  0.  4 |  Min  |  Avg  |  Max  |Std.Dev|Reliab%|
  + Cached Name   | 0.000 | 0.000 | 0.000 | 0.000 | 100.0 |
  + Uncached Name | 0.019 | 0.062 | 0.236 | 0.048 | 100.0 |
  + DotCom Lookup | 0.022 | 0.046 | 0.086 | 0.020 | 100.0 |
                Local Network Nameserver

  192.168.  0.  5 |  Min  |  Avg  |  Max  |Std.Dev|Reliab%|
  + Cached Name   | 0.000 | 0.000 | 0.000 | 0.000 | 100.0 |
  + Uncached Name | 0.014 | 0.068 | 0.238 | 0.056 | 100.0 |
  + DotCom Lookup | 0.023 | 0.047 | 0.075 | 0.019 | 100.0 |
                Local Network Nameserver

CIRA - Protected

  192.168.  0.  4 |  Min  |  Avg  |  Max  |Std.Dev|Reliab%|
  + Cached Name   | 0.000 | 0.000 | 0.000 | 0.000 | 100.0 |
  + Uncached Name | 0.019 | 0.062 | 0.236 | 0.048 | 100.0 |
  + DotCom Lookup | 0.022 | 0.046 | 0.086 | 0.020 | 100.0 |
                Local Network Nameserver

  192.168.  0.  5 |  Min  |  Avg  |  Max  |Std.Dev|Reliab%|
  + Cached Name   | 0.000 | 0.000 | 0.000 | 0.000 | 100.0 |
  + Uncached Name | 0.014 | 0.068 | 0.238 | 0.056 | 100.0 |
  + DotCom Lookup | 0.023 | 0.047 | 0.075 | 0.019 | 100.0 |
                Local Network Nameserver

CIRA - Family

  192.168.  0.  4 |  Min  |  Avg  |  Max  |Std.Dev|Reliab%|
  + Cached Name   | 0.000 | 0.000 | 0.000 | 0.000 | 100.0 |
  + Uncached Name | 0.014 | 0.064 | 0.246 | 0.053 | 100.0 |
  + DotCom Lookup | 0.022 | 0.039 | 0.080 | 0.018 | 100.0 |
                Local Network Nameserver

  192.168.  0.  5 |  Min  |  Avg  |  Max  |Std.Dev|Reliab%|
  + Cached Name   | 0.000 | 0.000 | 0.000 | 0.000 | 100.0 |
  + Uncached Name | 0.014 | 0.073 | 0.248 | 0.062 | 100.0 |
  + DotCom Lookup | 0.024 | 0.049 | 0.082 | 0.020 | 100.0 |
                Local Network Nameserver


If I am interpreting this data correctly, CIRA is 0.010s-0.030s slower on average compared to Cloudflare. Hardly worth mentioning.
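That estimate can be checked by averaging the Avg columns from the tables above. A quick sketch (the helper function is mine; the numbers are transcribed from both local nameservers across all three CIRA tiers):

```python
def average_slowdown(baseline, candidate):
    """Per-metric difference of mean latencies (seconds) between a
    baseline resolver and a candidate resolver."""
    mean = lambda xs: sum(xs) / len(xs)
    return {m: round(mean(candidate[m]) - mean(baseline[m]), 3) for m in baseline}

# "Avg" values from the benchmark tables above
cloudflare = {"uncached": [0.052, 0.051], "dotcom": [0.013, 0.014]}
cira = {"uncached": [0.062, 0.068, 0.062, 0.068, 0.064, 0.073],
        "dotcom":   [0.046, 0.047, 0.046, 0.047, 0.039, 0.049]}
print(average_slowdown(cloudflare, cira))
# {'uncached': 0.015, 'dotcom': 0.032}
```

So uncached lookups are about 15 ms slower and dot-com lookups about 32 ms slower on average, in line with the 0.010s-0.030s eyeball estimate.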

I’m happily switching over to a Canadian-based DoH service (full disclosure: I’m Canadian). No offence, Cloudflare, you still get to hold my DNS until CIRA starts offering their DNS Anycast Service for home users (hint hint).

Oh, and just in case you’re curious, if you choose their ‘Family’ service and try to hit up Pornhub, you’re greeted with this:

How to perform an offline audit of your Active Directory NTLM hashes

It’s read-only Friday, so I decided to perform an offline audit of our Active Directory passwords.

I found this great tool: which in turn is a fork of this tool:

What I’m going to write here is mostly a repeat of these two Git repos with a few tweaks and corrections.

To perform this procedure you will need to be able to login to a Domain Controller. You’re also going to want a secure location to perform all of this work so the dumped list of usernames and hashes doesn’t escape your control.

The secure location should be a workstation or server running the same or a newer version of Windows than your Domain Controller. For example, if you’re running AD 2012 R2 you can’t complete this on a 2008 R2 box. Your secure workstation or server will need to be running PowerShell 5.0 or newer.

Step 1 – Export NTDS.dit and the SYSTEM hive

  1. Login to a domain controller
  2. Open a Command Prompt window
  3. Type “ntdsutil”
  4. Click ‘Yes’ if the UAC prompts you
  5. Run the following commands:
    activate instance ntds
    ifm
    # Replace <DOMAINNAME> with your domain's name
    create full c:\temp\<DOMAINNAME>-audit
    # Wait for the command to complete
  6. Transfer “C:\Temp\<DOMAINNAME>-audit” to the secure location where you’ll work on it. I do not recommend performing the rest of these steps on your Domain Controllers

Step 2 – Download the latest Have I Been Pwned Offline NTLM password list

  1. Go to
  2. Scroll to the bottom and download the “ordered by prevalence” NTLM link
  3. Once downloaded, transfer the password list to your secure location in the audit directory and extract it

Step 3 – Convert the hashes in the NTDS.dit file to Hashcat formatting

  1. On your secure workstation/server launch PowerShell as an administrator (right click, run as administrator on the PowerShell shortcut)
  2. Install the DSInternals tools by running
    Install-Module -Name DSInternals -Force
  3. Go into the audit directory
    cd c:\temp\<DOMAINNAME>-audit
  4. Convert the hashes
    $key = Get-BootKey -SystemHivePath .\registry\SYSTEM
    # Change <DOMAINNAME> to your domain's name
    Get-ADDBAccount -All -DBPath '.\Active Directory\ntds.dit' -BootKey $key | Format-Custom -View HashcatNT | Out-File <DOMAINNAME>-hashes.txt -Encoding ASCII

Step 4 – Compare your hashes to HIBP

The code in the Git repos I linked at the beginning of the article is written as functions. For myself, I just wanted a script I could execute with the appropriate parameters instead of futzing around with importing the function.

I also tweaked the original script for formatting (I like a bit more white space personally), added CSV headers, removed the spaces between commas, had the script append its execution time to the end of the CSV file and allowed for relative filenames as parameters instead of requiring absolute paths.

Here is my version of the script:

<#
.SYNOPSIS
    Matches AD NTLM hashes against another list of hashes
.DESCRIPTION
    This is a slightly altered version of the tools linked above, for no-nonsense output. All credit to them.
    Builds a hashmap of AD NTLM hashes/usernames and iterates through a second list of hashes checking for the existence of each entry in the AD NTLM hashmap
        -Outputs results as object including username, hash, and frequency in database
        -Frequency is included in output to provide additional context on the password. A high frequency (> 5) may indicate the password is commonly used and not necessarily linked to a specific user's password re-use.
.PARAMETER ADNTHashes
    File path to 'Hashcat' formatted .txt file (username:hash)
.PARAMETER HashDictionary
    File path to 'Troy Hunt Pwned Passwords' formatted .txt file (HASH:frequencycount)
.EXAMPLE
    $results = Match-ADHashes -ADNTHashes C:\temp\adnthashes.txt -HashDictionary C:\temp\Hashlist.txt
.OUTPUTS
    Array of HashTables with properties "User", "Frequency", "Hash"

    User                            Frequency Hash
    ----                            --------- ----
    {TestUser2, TestUser3}          20129     H1H1H1H1H1H1H1H1H1H1H1H1H1H1H1H1
    {TestUser1}                     1         H2H2H2H2H2H2H2H2H2H2H2H2H2H2H2H2
.NOTES
    If you are seeing results for User truncated as {user1, user2, user3...} consider modifying the preference variable $FormatEnumerationLimit (set to -1 for unlimited)
    Credits:
        -DSInternals Project
        -Checkpot Project
    TODO:
        -Performance testing, optimization
        -Other languages (golang?)
#>
param (
    [Parameter(Mandatory = $true)]
    [System.IO.FileInfo] $ADNTHashes,
    [Parameter(Mandatory = $true)]
    [System.IO.FileInfo] $HashDictionary
)
process {
    $stopwatch = [System.Diagnostics.Stopwatch]::StartNew()

    # Set the current location so .NET will be nice and accept relative paths
    [Environment]::CurrentDirectory = Get-Location

    # Declare and fill a new hashtable with the ADNTHashes. Hashes are upper-cased for consistent matching
    $htADNTHashes = @{}
    Import-Csv -Delimiter ":" -Path $ADNTHashes -Header "User","Hash" | ForEach-Object {$htADNTHashes[$_.Hash.ToUpper()] += @($_.User)}
    # Create Filestream reader
    $fsHashDictionary = New-Object IO.Filestream $HashDictionary,'Open','Read','Read'
    $frHashDictionary = New-Object System.IO.StreamReader($fsHashDictionary)

    # Output CSV headers
    Write-Output "Username,Frequency,Hash"

    # Iterate through HashDictionary checking each hash against ADNTHashes
    while ($null -ne ($lineHashDictionary = $frHashDictionary.ReadLine())) {
        if ($htADNTHashes.ContainsKey($lineHashDictionary.Split(":")[0].ToUpper())) {
            $user = $htADNTHashes[$lineHashDictionary.Split(":")[0].ToUpper()]
            $frequency = $lineHashDictionary.Split(":")[1]
            $hash = $lineHashDictionary.Split(":")[0].ToUpper()
            Write-Output "$user,$frequency,$hash"
        }
    }

    Write-Output "Function Match-ADHashes completed in $($stopwatch.Elapsed.TotalSeconds) Seconds"
}
end {
}

To execute it, copy/paste it into Notepad and save it as ‘myAudit.ps1’ or whatever file name you’d like.

Now perform your audit:

# Replace <DOMAINNAME> with your domain name
.\myAudit.ps1 -ADNTHashes <DOMAINNAME>-hashes.txt -HashDictionary <HIBP TEXT FILE> | Out-File <DOMAINNAME>-PasswordAudit.csv

# Example
.\myAudit.ps1 -ADNTHashes myDomain-hashes.txt -HashDictionary pwned-passwords-ntlm-ordered-by-count-v5.txt | Out-File myDomain-PasswordAudit.csv

The final result will be a CSV file you can dig through.
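Under the hood, the matching the script does is just a hash-join: load the (small) AD dump into a hashmap, then stream the (multi-gigabyte) HIBP list against it. A hedged Python sketch of the same logic, in the spirit of the original repo’s “other languages” TODO (the function name is mine; same username:hash and HASH:frequency file formats):

```python
from collections import defaultdict

def match_hashes(ad_hash_path, hibp_path):
    """Build {NTLM hash -> [users]} from the AD dump, then stream the
    (much larger) HIBP file and report every overlapping hash."""
    users_by_hash = defaultdict(list)
    with open(ad_hash_path) as fh:
        for line in fh:
            # rpartition tolerates usernames that themselves contain ':'
            user, _, ntlm = line.strip().rpartition(":")
            if ntlm:
                users_by_hash[ntlm.upper()].append(user)
    matches = []
    with open(hibp_path) as fh:
        for line in fh:
            ntlm, _, freq = line.strip().partition(":")
            if ntlm.upper() in users_by_hash:
                matches.append((users_by_hash[ntlm.upper()], int(freq), ntlm.upper()))
    return matches
```

Like the PowerShell version, memory use scales with the AD dump rather than the HIBP list, which is read one line at a time.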

Step 5 – Clean it all up

The output may or may not surprise you, but whatever the outcome, when you’re done you want to get rid of the <DOMAINNAME>-hashes.txt file and the NTDS.dit file as soon as possible. If someone snags a copy of those you’ll likely be in some serious trouble.

Head on over to SysInternals and grab SDelete

.\sdelete.exe -p 7 -r -s <DIRECTORY OR FILE>
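If you ever need the same idea on a platform without SDelete, the pass-based overwrite can be roughed out in a few lines of Python. This is my own sketch, not SDelete, and on SSDs or copy-on-write filesystems the old blocks may survive regardless, so prefer the real tool where it exists:

```python
import os
import secrets

def overwrite_and_delete(path, passes=7):
    """Overwrite a file with random bytes several times, flushing to disk
    after each pass, then remove it. Roughly what `sdelete -p 7` does for
    a single file; not a guarantee on SSD/COW storage."""
    size = os.path.getsize(path)
    with open(path, "r+b") as fh:
        for _ in range(passes):
            fh.seek(0)
            fh.write(secrets.token_bytes(size))
            fh.flush()
            os.fsync(fh.fileno())
    os.remove(path)
```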