ESXi patching fails on ESXi 7 Update 2a

I suspect this issue is related to VMware screwing up USB support in ESXi 7.0 Update 2a, and that these threads are related:

I found a quick workaround for my Homelab so I can get patched past Update 2a and hopefully not have to deal with this anymore:

Here is the error I got when trying to update:

[[email protected]:~] esxcli software profile update -p ESXi-7.0U2c-18426014-standard \
> -d
Failed to query file system stats: Errors:
Cannot open volume:
cause = Errors:
Cannot open volume:
Please refer to the log file for more details.

I ran the following: esxcli storage core adapter rescan -a

Once completed I was able to patch my ESXi to 7.0U2c 18426014
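For anyone else hitting this, the whole workaround is just two commands. A sketch of the sequence (the depot URL below is the standard public VMware one, shown as an example; point -d at whichever depot or offline bundle you normally patch from):

```shell
# Rescan all storage adapters; this got ESXi to see its volumes again
# and cleared the "Cannot open volume" error for me.
esxcli storage core adapter rescan -a

# Re-run the update. The depot URL is an example (VMware's public depot);
# substitute your own depot or offline bundle path if you use one.
esxcli software profile update \
  -p ESXi-7.0U2c-18426014-standard \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml
```

These commands only run on an ESXi host's shell, so treat this as a transcript sketch rather than something to copy blindly.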

Is enabling SMB Signing on your NetApp a non-disruptive change?

We received the following alert from our ActiveIQ Unified Manager Appliance (and a similar one in ActiveIQ / AutoSupport): Alert from Active IQ Unified Manager: Advisory ID: NTAP-20160412-0001

You can find more details here:

After reviewing it, the fix seemed like a straightforward change, but I wanted to know: is enabling SMB signing on your NetApp a non-disruptive change?

Everything I’ve read says it has been supported since Windows 98, and if you’ve disabled SMBv1 (which you hopefully have) everyone should be using it anyway, since SMBv2 and newer sign by default. On top of that, Domain Controllers use signing by default for things like SysVol and, I assume, DFS if you have that on your Domain Controllers. Windows also negotiates whether or not to use SMB signing based on client/server settings, and by default it prefers the more secure option of signing unless someone is man-in-the-middling you and downgrading your connection or you’re using… Windows 95?

Since I couldn’t find any kind of answer to my question I figured I’d post something to hopefully help the next person wondering the same thing and faced with this security alert.

So, is enabling SMB signing on your NetApp a non-disruptive change? He asked again, out loud, like a crazy person.

Short answer: No.

Long answer: Nope but it’s probably not that bad.

I enabled SMB signing on our NetApp (ONTAP 9.7P14) and about 95% of clients didn’t even notice, but 5% did.

The 5% of clients that had a problem with SMB signing immediately lost access to all shares hosted on the NetApp and would get a “You do not have permissions to access this” error message.

For remote workers it was easy: disconnect/reconnect the VPN and that solved it. On-premises workers had to log off/on or reboot. Servers, though, had to be rebooted.

The kicker? Clients that had problems ranged from Windows 7 (I KNOW) to Windows 10. Servers that had problems? Server 2008 R2 (I KNOW) up to 2012 R2. Surprisingly, none of our 2016 or 2019 servers had a problem, but we have significantly fewer of those, so plan accordingly if you’re doing this.

Here is an example: we had two identical 2012 R2 servers; one worked post-change, one didn’t. We rebooted the one with the issue and then everything was good again.

My advice if you are tasked with implementing this in your organization?

For desktops: ask your clients to log off when they leave for the day and make the change in the evening.

For servers: had I been smarter, I would have enabled SMB signing on Patch Tuesday right before server reboots. That would have caused the least disruption and folded nicely into our existing maintenance window. If that isn’t an option for you, have a quick test plan to check whether each server can access a share and, if it can’t, reboot it.

There is potentially another option I was exploring but abandoned. You could build a GPO that makes SMB signing required and apply it to your Desktops/Servers ahead of time. After the GPO has propagated, in theory, you should be able to enable SMB signing on the NetApp and since all systems are already required to use it, there should be no disruption.

There you go. My lessons learned from this experience. Good luck. Hopefully this helps someone.

NetApp provides documentation here on how to enable SMB signing:
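For reference, on recent ONTAP versions the change itself boils down to one command per SVM. A sketch (the vserver name "svm1" is a placeholder; double-check the syntax against the docs for your ONTAP release):

```shell
# From the ONTAP CLI: require SMB signing on the CIFS server
# ("svm1" is a placeholder for your SVM's name).
vserver cifs security modify -vserver svm1 -is-signing-required true

# Confirm the change took effect:
vserver cifs security show -vserver svm1 -fields is-signing-required
```

These run on the ONTAP cluster shell, not a regular Linux box, so this is a transcript sketch only.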

Silencing my Dell T340 – Part 3

At long last, part 3 of my journey to try and cool my T340 without having to listen to a hair dryer.

Here is part 1 and part 2 if you’re curious about what I’ve done so far.

I ended up getting a 3D printer sometime after I wrote part 2, and one of the projects I had in mind was designing and printing a shroud that I could attach fan(s) to and slide over the heatsink in my T340, creating a better seal for airflow and getting rid of the zap-strap solution from part 2.

I was hoping that having a proper shroud would increase cooling efficiency. Unfortunately, I don’t think it did much for my overall temperatures, BUT it did make the fan easily replaceable: it now just slides over the heatsink. More on that later (or just scroll to the bottom).

Here is what I came up with:

It’s hard to tell in the photos but there is a tiny lip at the bottom that snugly tucks over the base of the heatsink to prevent the whole shroud from just sliding off over time.

I used the rubber fan holders that Noctua includes with their fans and they fit very nicely in the holes. If you’re going to use a different fan, I can’t guarantee the screw holes will hold up to standard case fan screws. An M4 screw and nut should work just fine though.

When mounting the fan be very careful. I printed at 0.3mm layer height and found that if I yanked too hard when installing/removing the rubber stoppers the layers would peel apart. This might be solved by printing at 0.2mm.

Here it is installed:

I used a Noctua NF-A9 PWM (92mm*92mm*25mm). I originally planned to buy two and set them up in a push/pull configuration but Amazon sold out. Turns out this was lucky for me because it appears Dell’s engineers left a really sweet hunk of plastic sticking up from the motherboard, which prevents mounting a 25mm-thick fan to the back of the shroud:

I see Noctua sells 92mm*92mm*14mm fans that might fit in there. If someone wants to donate two I will totally update the shroud design with two fan mounts and post an update. Based on my reading I don’t think a push/pull setup will benefit overall temperatures much though since this heatsink is pretty small and has a simple design.

Ok, what you probably care about, was there a performance improvement in cooling over my original zap strap design? Possibly.

I say possibly because I stupidly didn’t blow the dust out of my server before starting all of this. I ended up blowing the dust out during some size checks, but before installing the shroud. Here are my recorded temperatures:

  1. Transcoding a Bluray, all CPU workload with the old cooling setup, average temperature of 80c
  2. I blew the dust out of the case. You can see I ended up dropping my average idle load temperatures by 5c
  3. Point where I installed the new shroud
  4. Transcoding a Bluray, all CPU workload with the shroud installed, there is a 15c drop compared to (1) at an average temperature of 65c. This is probably partially the shroud and partially blowing out all the dust

Another discrepancy between (1) and (4) is the fan itself. Originally I installed an NF-B9 redux-1600 PWM, which only runs at 1600RPM and pushes 64.3m³/h of air. The new fan is an NF-A9 PWM that runs at 2000RPM and pushes 78.9m³/h of air.

All that being said, I’m happy with ~65c at peak load and I can’t hear a thing. Idle temps seem to be roughly the same.

Now for what you’re probably here for, the STL file: Dell T340 Heatsink Shroud v1.6

You can also find it on Thingiverse.

I printed at 0.3mm. I’d recommend doing 0.2mm to hopefully make it a bit stronger so you don’t have to be as careful when installing the fan. 100% infill. You might also want to rotate the print so the fan screw holes are flat on the bed.

Alternatively, you can skip ALL of this and try CJ’s suggestion, recently posted on Part 1, which is a BIOS setting change.

Update 2022-08-12 – Here is the last 365 days of temperatures. The spike to 72c is likely the CPU under 100% load for a sustained amount of time. I think my Cookie Clicker VM was causing it.

Mac OS clients using Microsoft Remote Desktop are unable to connect via Remote Desktop Gateway Servers

Over the summer we built a Remote Desktop Gateway cluster to provide remote access to workstations for some of our clients.

Initial testing worked great for Mac OS, Windows and Linux users. For Mac OS we had clients download the official Microsoft RDP App from the App Store.

Right before go-live day we updated the RDP template we provide to clients, and that’s when things started going wrong, but only for Mac users… and only some Mac users.

Clients using Mac OS 10.15.x and Microsoft RDP 1.14.x were greeted with this error message:

Unable to connect

We couldn’t connect to the Remote PC. This might be due to an expired password. If this keeps happening, contact your network administrator for assistance.

Error code: 0x207

I originally came across this TechNet thread when researching the issue:

Turns out that didn’t apply to us. The registry entries it mentioned did not exist on our servers.

We found that rolling back the Microsoft RDP Client to 1.13.8 (the latest 1.13.x build) would solve the problem.

We also found that the latest Microsoft RDP Client, 1.14.0, worked fine on Mac OS 10.14.1 but the same was not true for Mac OS 10.15.6.

On a whim, one of our techs dug up a copy of the original RDP template we used for initial testing, when everything worked, and found that it still worked on Mac OS 10.15.6 with Microsoft RDP 1.14.0.

We cracked open the RDP file (it’s just text) to find what the difference was:

We had added the following line:


We had added it in an attempt to make it easier for clients to connect by auto-populating our domain name into the shortcut.
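Since .rdp files are plain text, a quick diff is all it takes to spot this kind of template drift. A minimal sketch (the file contents here are made up, and I'm assuming the domain was set via the standard domain:s: property, which is how .rdp files pre-fill a domain; your exact line may differ):

```shell
# Create two tiny example templates (contents hypothetical) and diff them.
cat > working.rdp <<'EOF'
full address:s:gateway.example.com
screen mode id:i:2
EOF

cat > broken.rdp <<'EOF'
full address:s:gateway.example.com
screen mode id:i:2
domain:s:EXAMPLE
EOF

# .rdp files are plain text, so diff surfaces the offending line directly
# (diff exits non-zero when the files differ, hence the || true).
diff working.rdp broken.rdp || true
```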

When we removed this line from our template the problem went away.

OctoPrint Firmware Updater plugin settings for Creality CR-10 V3

Just wanted to post my settings for this plugin to save others time. It took me a little while to find working settings by combing through multiple forums/comment sections.

  • Flash Method: avrdude (Atmel AVR Family)
  • AVR MCU: ATmega2560
  • Path to avrdude: <Your path, you can easily find this by typing “which avrdude”  when logged into your OctoPrint via SSH. If the command is not found run “sudo apt-get install avrdude” to install avrdude then re-run “which avrdude”>
  • AVR Programmer Type: wiring

I left everything else default and am able to load firmware without issue.
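For the curious, the plugin is essentially wrapping an avrdude call. A roughly equivalent manual invocation would look like this (the serial port and firmware filename are assumptions; yours may differ):

```shell
# Sketch of the avrdude command the plugin settings above translate to:
#   -p m2560          ATmega2560 MCU
#   -c wiring         "wiring" programmer type
#   -P /dev/ttyUSB0   serial port the printer shows up on (assumption)
#   -b 115200         typical baud rate for this board (assumption)
#   -D                skip the full chip erase
#   -U flash:w:...:i  write the firmware hex file (Intel hex) to flash
avrdude -p m2560 -c wiring -P /dev/ttyUSB0 -b 115200 -D \
  -U flash:w:firmware.hex:i
```

This needs the printer attached over USB, so it's only a reference for what the plugin is doing under the hood.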

Firmware Plugin Settings


I’ve also added some post-flash configuration.

These gcodes do the following after a flash:

M502; Factory reset your printer
M851 Z-2.630; Set Z Probe Offset (mine is -2.630mm, yours will likely be different)
M500; Save settings
M501; Load settings