Silencing my Dell T340

I recently upgraded my Homelab (the thing that hosts this very blog) from a custom-built server to a Dell T340. I have experience with Dell's tower line of servers from work, and I've found they're usually fairly quiet once they've finished their power-on self-tests.

Not the case with the new T340. I did some testing with my iPhone and the dB Meter & Spectrum Analyzer app, holding it about 1 foot away from the front of the case, and got an average reading of ~52-62 dB while the server wasn't working very hard; in fact, it was nearly idle. The constant ramping up and down of the fan is fairly annoying too, and fairly hairdryer-like.

The T340 is cooled by a single 120mm fan at the back of the case. There is a plastic shroud installed by Dell that basically creates an air channel from the back of the HDD backplane, through the heatsink on the CPU and out the back of the case:

After looking up the fan's model number I found out why it was so loud.

Model: Sunon PSD1212PMB1-A
Dell P/N: 9X5J5-A00
Airflow: 226.5 CFM
Speed: 6000 RPM
Noise: 65.5 dBA

Since this server sits in my office at home, I wanted to find a solution to this problem without removing the OEM CPU heatsink.

After taking a few measurements I determined that a 96mm fan should fit nicely on the OEM CPU heatsink and any old 120mm fan will work to replace the case fan. I ended up purchasing the following:

One other problem with the T340: there is only a single on-motherboard fan connector and it's a proprietary 5-pin connection, and there are only two SATA power connections available. So, in addition to the fans, I purchased:

With the above adapters I can connect the 120mm fan directly to the motherboard with PWM support, which should keep the iDRAC (BMC) happy, and adapt one of my SATA power connectors into two fan connectors in case I decide to install a 3rd fan (more on that later).

Replacing the 120mm fan is trivial: release the bracket holding the current fan, disconnect the power cable, remove the OEM fan, install your replacement fan, connect the 4-pin -> 5-pin adapter, slide the fan back in place and connect it to the motherboard.

Next up is the CPU fan. This one is a bit trickier since there are no screw mounts for it. I found this trick online:

You'll notice in the heatsink photo I ran 4 zap straps through the heatsink. In the end I removed the bottom two and went with an alternative solution. That small heatsink you see is just high enough that it pushed the fan I was going to put on the CPU heatsink up 2-3mm, which would have caused the top of the fan to rest above the top of the CPU heatsink. This would likely have caused problems re-installing the shroud and could have caused the fan blades to hit the heatsink, causing a failure or an annoying ticking noise.

Instead I did the following for the bottom two fan mounting holes:

Finally I took two additional zap straps, fed their locking heads onto the top two straps on the fan and tightened everything up:

After verifying everything was snug and the fan fired up, I snipped off the excess zap strap ends, re-installed the shroud over the CPU and powered everything back up.

My T340 is now nearly silent. I think the hard drives might be louder than the fans inside the case at this point.

Shortly after booting the system back up the iDRAC started complaining about the fan’s speed. The OEM fan is 6000RPM and the replacement fan is only 1700RPM. To address this I did the following:

  1. Login to the iDRAC
  2. Click ‘System’ and ‘Overview’
  3. Click ‘Cooling’
  4. To the right of ‘Fan Overview’ click ‘Configure Fans’
  5. Change the ‘Minimum Fan Speed in PWM (% of Max)’ to “50” and click ‘Apply’

Since making this change I haven’t received further fan speed warnings and I still can’t hear the fans. 

Unfortunately I was not doing any kind of temperature logging with the OEM fan installed, so I can't comment on whether this has made the CPU temperatures higher or lower. If I were to guess, I'd say the overall operating temperature has increased with the custom fans over the OEM fans.

I ran my server with the custom fans for about 24 hours before I got some logging configured.

Right after I configured temperature logging I shut down my server and installed one additional 140mm fan in the 5.25″ cage to pull fresh air into the case. I also removed the shroud, thinking that cooling would be better without it.

To get this 140mm fan to stay in place I used two small pieces of double sided duct tape.

Finally, here is the CPU temperature over the last 3 days:

Looks like my average temperature is 40°C with spikes up to 55-60°C. These numbers are within Intel's recommended maximum of 100°C, but I'm not thrilled with them.

Before I removed the shroud and installed the 140mm fan I did have some temperature readings:

The area to the left of the red line was with the shroud still installed. It appears that with the shroud installed the average temperature is still the same, but the spikes are less severe.

I’m out of town right now but when I return I am going to re-install the shroud and run my server for a few days to see if the above remains true. I will update this post accordingly.

Event ID 20292 from DHCP-Server

Checking over our DHCP server we were seeing quite a few of these errors appearing in the ‘Microsoft-Windows-DHCP Server Events/Admin’ event log:

Researching this error I came across this forum post: https://social.technet.microsoft.com/Forums/en-US/15d00412-3dfc-4520-a74e-1f32fe1329ef/windows-server-2012-dhcp-event-id-20291?forum=winserveripamdhcpdns

Which led me to this KB article: https://support.microsoft.com/en-ca/help/2955135/event-id-20291-is-logged-in-the-system-log-when-a-client-computer-is-m

The hotfix that Microsoft mentions is from November 2014 and has been installed on our server for a very long time. We never noticed this error back in 2014 when the hotfix was installed so we were not able to “first remove the failover relationship, install the update to both DHCP nodes and restart them, and then reestablish the failover relationship” per Microsoft’s article.

The article leads me to believe you have to deconfigure failover on all subnets, destroy the failover relationship, re-create the failover relationship and then re-configure failover on each subnet.

It turns out you can just right-click 'Deconfigure failover' and then right-click 'Configure failover' on the specific subnets having the issue and re-use the existing failover relationship to resolve this, assuming you've installed the November 2014 hotfix.
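
If you prefer PowerShell over the MMC, the equivalent of those two right-clicks should be removing the affected scope from the existing failover relationship and then adding it back. This is just a sketch; the relationship name and scope below are placeholders, so check yours with Get-DhcpServerv4Failover first:

    Get-DhcpServerv4Failover | Select-Object Name, ScopeId          # find the existing relationship name and its scopes
    Remove-DhcpServerv4FailoverScope -Name "DHCP1-DHCP2" -ScopeId 10.0.10.0   # 'Deconfigure failover' on that scope
    Add-DhcpServerv4FailoverScope -Name "DHCP1-DHCP2" -ScopeId 10.0.10.0      # 'Configure failover' again, re-using the relationship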

Microsoft RAS VPN and VXLAN not quite working

I’m not overly knowledgeable about advanced networking but I figured I’d share this since I couldn’t find anything online about it at the time.

We run a Microsoft Remote Access Server (RAS) as our VPN server. We primarily provide L2TP for users.

Due to a limitation in the Windows VPN client our RAS server has two network interfaces, one directly on the internet with a public IP (VLAN1) and one internally with a private IP (VLAN2).

The private IP relays VPN users' DHCP/DNS requests to our internal DHCP and DNS servers.

RAS handles the authentication instead of RADIUS and we have our internal routes published via RIP to the RAS server so they can be provided to VPN clients when they connect.

I believe this is a fairly common design.

On the network end, our original design involved spanning VLAN1 and 2 all the way from our edge into our data center so the VM could pretty much sit directly on them. This worked fine.

As part of a major network redesign, we changed the VLAN spanning design over to using a VXLAN from our edge into our data center.

After making this change we ran into the strangest VPN issues. Users could connect and ping anything they wanted, do DNS lookups and browse most HTTP websites. HTTPS websites would partially load or fail to load and network share (SMB) access would partially work (you could get to the DFS root but not down to an actual file server).

After many hours of troubleshooting we determined our problem.

The MTU of most devices defaults to 1500 bytes. When we started tunneling the traffic through a VXLAN, the tunneling added 52 bytes to the packet size, making the total packet size 1552 bytes, which is just over what most network cards are expecting. This caused large packets (loading an HTTPS website, connecting to a share) to drop, while small packets (pings, some HTTP websites) worked fine.
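
A quick way to confirm this kind of problem is to ping something on the far side of the tunnel from a PowerShell prompt on a VPN client, with the don't-fragment flag set and a payload that pads the packet out to a full 1500 bytes (the host name here is just a placeholder):

    ping fileserver.internal.example -f -l 1472   # 1472 bytes of payload + 28 bytes of ICMP/IP headers = a 1500-byte packet; this should fail
    ping fileserver.internal.example -f -l 1400   # a smaller packet should make it through fine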

I believe the final solution from our network team was to enable Jumbo packets from end to end of the VXLAN tunnel so it could transmit slightly larger than normal packets.

If you have any specific questions I can relay them to our Network Team and try to get you an answer. No promises :)

DHCP stops serving IPs when audit log is full

We run two DHCP servers in an HA configuration. The HA is configured to split the scopes in half: which half of the scope your IP falls in determines which DHCP server you get your IP from. We have DHCP audit logging enabled.

DHCP1 handles 0-127 and DHCP2 handles 128-254 (we mostly use /24’s right now).

We started getting reports of random devices on the network not being able to connect or log in to the domain. By the time a technician got to the PC to check it, the issue had magically resolved itself.

We dug into the DHCP servers and found the DHCP audit log on DHCP1 was full (36MB in size). The log on DHCP2 was not full (yet, only 34MB in size).

Stopping DHCP on DHCP1, renaming the audit log and then starting DHCP on DHCP1 again appeared to resolve the issue.

The thing that had us scratching our heads is that we'd had this problem before and had re-configured DHCP on these servers to allow the log files to grow to 250MB, yet things had stopped at 36MB.

We used this PowerShell to make the change a long while ago and restarted the DHCP service: https://docs.microsoft.com/en-us/powershell/module/dhcpserver/set-dhcpserverauditlog?view=win10-ps

Per the above link it states “-MaxMBFileSize Specifies the maximum size of the audit log, in megabytes (MB).”

It turns out this PowerShell command simply changes the registry value HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters\DhcpLogFilesMaxSize, which you can just set manually if you'd prefer.
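
For example, either of these should work (250MB is just an illustrative value; pick whatever you actually want, and restart the service afterwards for it to take effect):

    # Option 1: the DhcpServer cmdlet
    Set-DhcpServerAuditLog -MaxMBFileSize 250
    Restart-Service DhcpServer

    # Option 2: set the registry value directly
    Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\DHCPServer\Parameters" -Name DhcpLogFilesMaxSize -Value 250
    Restart-Service DhcpServer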

I have no idea how I found it, but after some digging I came across this article for Server 2008 (we're using 2012 R2): https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc726869(v=ws.10)

It states:


Dynamic Host Configuration Protocol (DHCP) servers include several logging features and server parameters that provide enhanced auditing capabilities. You can specify the following features:

  • The file path in which the DHCP server stores audit log files. DHCP audit logs are located by default at %windir%\System32\Dhcp.
  • A maximum size restriction (in megabytes) for the total amount of disk space available for all audit log files created and stored by the DHCP service.
  • An interval for disk checking that is used to determine how many times the DHCP server writes audit log events to the log file before checking for available disk space on the server.
  • A minimum size requirement (in megabytes) for server disk space that is used during disk checking to determine if sufficient space exists for the server to continue audit logging.


The relevant line is the second bullet: the maximum size restriction applies to the total disk space used by all audit log files combined. The article also specifically references the registry key the PowerShell command changes.

This leads me to believe the PowerShell documentation is incorrect and that "-MaxMBFileSize" specifies the maximum size of all audit logs added together, not a maximum size per individual audit log.

I checked the directory size of “%windir%\system32\dhcp” on both servers and they were very close to 250MB.

We’ve since made the following change:

I will update this article if this does not resolve the issue for us.


Update 2019-01-10: I can confirm this resolved the issue for us. The log file for the following day reached 54MB with no issue.

How to (almost) automatically backup your Steam library

Update 2018-11-12

It appears a recent Steam update broke the way I was originally launching Steam via the scheduled task. I’ve updated the post accordingly with a different method of accomplishing the same task.


Original Post

I recently started making an effort to make sure all of my digital purchases are backed up. Apps, eBooks, music and comics are fairly easy to deal with, plus I don't have much of that content compared to my Steam library.

My Steam library might not be huge compared to other people's, but it's the single largest collection of digital content I own and I have zero backups of it. My library totals around $6000 as of this writing, based on the ever-fluctuating price of content on Steam.

Steam itself offers a method for backing up your games. You launch Steam, install a game and then click 'Steam' in the top left and 'Backup and Restore Programs'. You're then presented with a wizard that guides you through the process of backing up your selected games. The flaws with this system are that it's manual, it would need to be re-run every time a game is updated if you care about having the latest patched version, and it requires you to install every single game in your library.

The solution I’ve come up with still requires you to have all of your games installed locally but it does address the rest of the problems.

I found this utility called “The Steam Backup Tool” (latest version mirrored locally just in case) which takes care of the majority of the automation part of this process.

Here is how I set everything up.

First I needed storage equal to double my Steam library: half for the library itself and half for the backups. In reality I'll need less, but this is a good starting point. I used Steam Gauge to figure out my library is 2.29TB. I have a 4TB USB hard drive lying around; that should just barely do the trick.

Second I needed something to run all of this off of. Fortunately for me I have a homelab and a dedicated system for backing up all of my personal computers and my homelab itself. The system is a small form factor PC with a Core i3-8100 and 16GB of RAM, running Windows Server 2016 Standard. You can accomplish the same thing with any old laptop or desktop; it just needs to be powerful enough to run Windows 7 (or newer) and Steam. You can even use your gaming rig, as long as you leave it on 24/7 and have all of your Steam games installed on it. I only install the games I'm playing or have yet to play on my gaming rig, which is why I'm using a separate system.

Now that I have everything I need I got started:

  1. I attached the USB drive to the system I am going to use to store all of my Steam games and perform the backups
  2. I formatted the USB drive with NTFS
  3. (Optional) If you want to make your storage go further and don't mind a performance impact you can enable compression. To enable compression right-click the drive, choose Properties, and check 'Compress this drive to save disk space' (see the sketch after this list). I recommend doing this before you start downloading your Steam library. Since I'm using Windows Server 2016 and not a regular version of Windows I chose to do something different. More on that here.
  4. I then downloaded and installed the Steam client on the USB drive and logged into it
  5. I then queued up every single game in my library to download/install and waited… for like three days
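
For reference, steps 2 and 3 can be done from PowerShell too. This is just a sketch that assumes the USB drive shows up as E:; the compact command sets the compression attribute on the root of the drive, which is roughly what the 'Compress this drive' checkbox does:

    Format-Volume -DriveLetter E -FileSystem NTFS -NewFileSystemLabel "SteamLibrary"   # step 2: format the USB drive with NTFS
    compact.exe /c E:\                                                                 # step 3 (optional): new files under E:\ get compressed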

While the download was happening I did some quick testing and confirmed that I can have Steam running on a second PC in my home downloading games and still use Steam on my gaming rig to play games. This is ideal because now I don't have to worry about scheduling game updates on the backup server for times when I most likely won't be playing games on my gaming rig.

Three days later my entire library was downloaded and I could get into the automation part. We need to accomplish a few things:

  1. Automatically launch Steam on a schedule so games will patch and then gracefully exit Steam upon completion
  2. Run The Steam Backup Tool against the Steam library so new games are backed up and games that were patched are re-backed up

To accomplish #1 I chose to use the Windows Task Scheduler.

Launching Steam is straightforward: just create a scheduled task in Windows that launches "<STEAM PATH>\Steam.exe" and be sure to configure the task to 'Run only when the user is logged on'. I set mine to run at 9:00am on Monday. This will take care of automatically starting Steam and letting updates download over the next few days.
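
If you'd rather script the task than click through the wizard, something like this sketch should produce the same Monday 9:00am task (the Steam path here is an assumption; use wherever you installed it):

    $action    = New-ScheduledTaskAction -Execute "E:\Steam\Steam.exe"
    $trigger   = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Monday -At 9am
    $principal = New-ScheduledTaskPrincipal -UserId "$env:USERDOMAIN\$env:USERNAME" -LogonType Interactive   # 'Run only when the user is logged on'
    Register-ScheduledTask -TaskName "Launch Steam" -Action $action -Trigger $trigger -Principal $principal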

After the task was created I disabled it (right click, disable). I want to run a full backup of everything first before enabling the automatic start/exit of Steam. The first full backup is going to take a long time and if Steam launches while it’s running bad things will likely happen.

To accomplish #2 I again chose to use the Windows Task Scheduler and some PowerShell.

I decided I am going to backup my Steam games to the same drive they are installed on. This is because my backup software (Veeam) can then copy those files to my actual backup hard drive giving me the advantage of compression, deduplication, proper incrementals and the ability to replicate the data later to the cloud all using the Veeam software. This ends up wasting a ton of storage but that’s fine for my use case. This 4TB hard drive I’m using has been doing nothing for a year. For the average person you’re probably going to want to choose a different location to save your backups.

The Steam Backup Tool has a command line (CLI) version of its executable. The flags are:

The backup command I settled on is:
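
    steamBackupCLI.exe -O E:\Steam-Backups -S E:\Steam -2 -C 5 -L -T 2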

I chose LZMA2 compression [-2] (because why not) and Ultra compression [-C 5] because I don't care how long this takes and want the least amount of disk space to be used. I want anything newly installed to get backed up and anything that's been patched to be re-backed up [-L], and I limited the backup to 2 threads [-T 2] because the server I am running this on only has 4 threads and I want to leave 2 for the OS and Veeam so they can continue doing their jobs while these backups run.

The next part was a bit tricky to figure out due to the design of SteamBackupCLI.exe. It turns out the application expects a command prompt or PowerShell console window to be created when it runs so it can resize the window and dump all of its output to the screen. Trying to run SteamBackupCLI in the background is a no-go, which makes it slightly annoying to get running in an automated fashion but not impossible. I've submitted a bug to the developer in hopes they will update SteamBackupCLI and offer a command line flag to dump all output to a log file instead of expecting a console window to be opened.

What this means for us is we have to remain logged in to the system that is going to run SteamBackupCLI. In my case I use Remote Desktop (RD) to get into my server, so all I need to do is log in to the server after a reboot and just click the 'x' to close RD without actually logging out. If you're using your regular desktop, all you need to do is make sure you log in as the user running the backup task after a reboot. You can likely "switch desktops" or lock the screen and everything will continue to function, but I have not tested this.

Here is the PowerShell script I wrote to kill Steam, wait 30 seconds and then start SteamBackupCLI minimized:
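
A minimal version of it looks like this (the Steam process name and the -Wait at the end are assumptions; the backup flags match the command above):

    Stop-Process -Name "Steam" -Force -ErrorAction SilentlyContinue   # kill Steam so it isn't patching games mid-backup
    Start-Sleep -Seconds 30                                           # give Steam time to fully exit
    Start-Process "C:\Program Files (x86)\Steam Backup Tool\v1_8_6\steamBackupCLI.exe" -ArgumentList "-O E:\Steam-Backups -S E:\Steam -2 -C 5 -L -T 2" -WindowStyle Minimized -Wait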

This script assumes you installed Steam Backup Tool in “C:\Program Files (x86)\Steam Backup Tool\v1_8_6\“. If you didn’t, update the 3rd line of this script accordingly.

I saved this script as “startSteamBackup.ps1” in “C:\Scripts” on my system. You can save it anywhere you want, just remember where.

I configured the backup to run on Wednesday at 9:00am which will give the backup 4 days to run before Steam will try to launch itself again on Monday. It is very important with this task that you select ‘Run only when the user is logged on’. If you choose the other option this task will crash on launch as mentioned earlier in this post.

The parameters in the last screenshot are:

Program/script: powershell.exe
Add arguments (optional): -ExecutionPolicy Bypass C:\Scripts\startSteamBackup.ps1
Start in (optional): C:\Scripts
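
The same task can also be created from PowerShell if you prefer (again just a sketch, using the schedule and paths from above):

    $action    = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-ExecutionPolicy Bypass C:\Scripts\startSteamBackup.ps1" -WorkingDirectory "C:\Scripts"
    $trigger   = New-ScheduledTaskTrigger -Weekly -DaysOfWeek Wednesday -At 9am
    $principal = New-ScheduledTaskPrincipal -UserId "$env:USERDOMAIN\$env:USERNAME" -LogonType Interactive   # 'Run only when the user is logged on'
    Register-ScheduledTask -TaskName "Steam Backup" -Action $action -Trigger $trigger -Principal $principal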

After the task was created I disabled it (right click, disable).

Before enabling the scheduled tasks I manually ran the backup command “steamBackupCLI.exe -O E:\Steam-Backups -S E:\Steam -2 -C 5 -L -T 2” once since this was going to take longer than the 4 days I’m allowing with the schedule. Once the initial backup was done I then enabled both tasks (right click, enable).



(Bonus) Using deduplication instead of compression

Since I'm running this on Windows Server 2016 I have the option to use deduplication instead of compression. Deduplication occurs on a schedule you set, and I believe it is less CPU intensive than compression; I do not know if it's more efficient, though. My results were fairly impressive, with roughly 27% of my Steam library being redundant data that could be deduplicated.
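
If you want to try it, a minimal sketch of enabling it looks like this (assuming the library/backup drive is E:):

    Install-WindowsFeature -Name FS-Data-Deduplication          # add the deduplication role service
    Enable-DedupVolume -Volume "E:" -UsageType Default          # enable dedup on the drive
    Start-DedupJob -Volume "E:" -Type Optimization              # run an optimization pass now instead of waiting for the schedule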



Update 2018-11-10

After one full run of backups