Schwarzgerät III
My 2020 rebuild, Schwarzgerät II, was a beast of a machine. Like the Hubble or the LHC, however, it needed more powah. The 2022 upgrade, Schwarzgerät III, delivers just that despite scrotumtightening supply chain madness. This rebuild focused on cooling, power, and aesthetics.
“But out here, down here among the people, the truer currencies come into being.”―Gravity's Rainbow (1973)
I hesitated to call this the third iteration of Schwarzgerät, as there was no real compute upgrade (I did go from 128GB of DDR4-2400 to 256GB of DDR4-3000, but that hardly counts). The CPU, motherboard, and GPU are unchanged from 2020's upgrade. Nonetheless, the complete rebuild of the cooling system (and attendant costs, both in parts and labor) and radically changed appearance seemed to justify it.
This is my first machine with internal lighting, and also my first to use custom-designed parts (both mechanical and electrical). I learned OpenSCAD and improved my 3D printing techniques during this build, and extended my knowledge of electronics, cooling, and fluids. I furthermore developed a better understanding of power distribution. In that regard―and also with regard to the final product―I consider the build a complete success.
I bought the Corsair iCUE Commander Core XT after having been informed that it had a Linux driver. Unfortunately, this driver only provides control of the fans. I extended it, along with OpenRGB, to fully support the device. Once perfected, these patches will of course make their way upstream. I must say that it's incredibly satisfying to use a computer for which you wrote code and designed parts. In the future, I'd like to try fabricating my own chassis, and perhaps even my own PSU.
I also enjoyed my first leak, or more properly my first four leaks. The first three were gross connection failures, resulting in incredible deluges covering most of my office floor. The last was a slow, insidious leak on top of my GPU waterblock. In the course of repairing this, I was told by some thirteen year old Romanian redditor to "use my brain". He ought consider himself lucky not to find my foot in his ass. None of that was very much fun.
Future directions
I'm not sure where else I can go with this machine. There doesn't appear to be much useful work I can do beyond what I've already done. Some thoughts:
- Case modding. I could add a window to the PSU door, and improve on the piping in the back. I'm very hesitant to go cutting apart the irreplaceable CaseLabs Magnum T10, though; it's not like I can go buy another one.
- Grow down. If I could find (or more likely fabricate) a pedestal for the machine, I could go extensively HAM with radiators, or add a second motherboard for a virtual-but-not-really machine. I don't really need either, though, and the latter would require a second (or at least significantly larger) PSU.
- Mobility. Work towards the Rolling War Machine by augmenting the existing accelerometer with sensing and movement capabilities. Kind of a big (and expensive) box (and small condo) to be tearing around on its own initiative.
- Hard tubing. Regarded as a superior look by many in the watercooling community, but I don't really think so—in a big case like this, I dig the more organic look of soft tubing. It's also infinitely less annoying to work with.
- Voice recognition. Since this workstation can control my ceiling fans (via SDR) and lights (via Hue), it would be nice to have some basic voice-based operation, but without sending anything outside the computer. This would mostly be a software project, except there exist cheap chips to do this easily. I could also tie this into my multifactor security story (i.e. don't unlock without my voice).
- LoRa. LoRa is a long-range, low-bandwidth radio protocol. I could bring an antenna out, and use the Arduino together with a LoRa chip.
- Battery for the CCFL. It would be nice to have some light when I'm working inside the machine. If I could provide selectable battery-based backup for these rods, that would be useful.
- PID control for fans/pumps. The Proportional-Integral-Derivative controller is a simple feedback mechanism that I suspect would work well with fans and pumps. I don't care how many RPM my fans are spinning at; what I care about is how warm my coolant and components are (and how loud they get). I'd like to set up target ΔTs (as a function of ambient temp) and a target noise ceiling, and use an inline sensor, an ambient sensor, and an acoustic sensor in combination to manage my loop's active components. A rough sketch of such a control loop follows this list.
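To make that concrete, here's a minimal sketch of the ΔT portion of such a loop in Python. The hwmon paths, gains, and update interval are pure placeholders (real node numbers vary per board, and nothing here is tuned); it only illustrates the structure.

```python
# Hypothetical PID loop: hold coolant ΔT over ambient at a target by adjusting PWM.
# The sysfs paths below are placeholders; hwmon numbering varies per machine.
import time

COOLANT = "/sys/class/hwmon/hwmon3/temp1_input"  # inline coolant sensor (millidegrees C)
AMBIENT = "/sys/class/hwmon/hwmon4/temp1_input"  # ambient sensor (millidegrees C)
PWM     = "/sys/class/hwmon/hwmon3/pwm1"         # fan/pump PWM node (0-255)

TARGET_DELTA = 8.0                # desired coolant-over-ambient ΔT, °C
KP, KI, KD = 12.0, 0.4, 2.0       # untuned gains
PERIOD = 2.0                      # seconds between updates

def read_temp(path):
    with open(path) as f:
        return int(f.read()) / 1000.0

def main():
    integral, prev_err = 0.0, 0.0
    while True:
        delta = read_temp(COOLANT) - read_temp(AMBIENT)
        err = delta - TARGET_DELTA                        # positive when coolant is too warm
        integral = max(-50.0, min(50.0, integral + err))  # anti-windup clamp
        derivative = err - prev_err
        prev_err = err
        out = KP * err + KI * integral + KD * derivative
        duty = max(60, min(255, int(out)))                # keep a safe minimum duty
        with open(PWM, "w") as f:
            f.write(str(duty))
        time.sleep(PERIOD)

if __name__ == "__main__":
    main()
```

The noise ceiling would then just be a second clamp on the output: cap the duty whenever the acoustic sensor reads above target.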
Bill of materials
We're approaching the $10,000 mark before correcting for inflation, with hard drives alone representing close to $5,000. Materials in this build were acquired over a period going back to 2011 (the LSI Fusion SAS card is, I'm pretty certain, the component longest in my possession). This most recent iteration represents less than $2,000 of components, most of that being $1,150 for the 256GB of RAM (I did manage to sell my old RAM for $200, but we can't deduct that unless we also count its original cost).
Chassis
As always, I ride into battle atop my beloved CaseLabs Magnum T10, seeking death and glory. CaseLabs went ignominiously out of business in August 2018, and spare parts a la carte are now effectively unavailable. Nonetheless, it remains a truly legendary artifact, perhaps the single greatest case ever built. This build makes more complete use of it than I ever have before.
- CaseLabs Magnum T10 chassis with 85mm ventilated top plus...
- 3x CaseLabs MAC-101 HDA+fan cages
- CaseLabs MAC-113 120mm fan mount
- StarTech HSB4SATSASBA 4-bay 3U HDD cage. Removed factory fan, replaced with Noctua.
- Icy Dock MB324SP-B 4-bay 1U SSD cage
- Self-designed and -printed case for Arduino MEGA 2560 (source)
- Self-designed and -printed case for RHElectronics Geiger counter (source)
- Self-designed and -printed covering case for EKWB Quantum Kinetic FLT 240 mounting kit (source)
- Self-designed and -printed cable shroud for bottom of Gigabyte Aorus Master TRX40
- Self-designed and -printed false floor for bottom of CaseLabs Magnum T10 PSU side
- Self-designed and -printed 4x140mm fan mount for roof (source)
- DEMCiflex magnetic dust filter pack for CaseLabs Magnum TH10
- 2x USB 3.0 motherboard header 90 degree adapters
- USB 2.0 B-type 90 degree adapter
- EVGA PowerLink
Cooling
An entirely custom water loop with redundant D5 pumps (either can drive the entire loop, though of course with less flow). I can partially drain and fill the loop without touching anything through the externally-mounted Kinetic FLT. Full draining and optimal filling proceed via the 5.25"-mounted Monsoon, sitting at the bottom of the case; this requires removing the USB bay installed above it.
There are fourteen 120mm fans, four 140mm fans, one 80mm fan (in the 4x3.5 bay), and one 40mm fan (in the 4x2.5 bay). There's also a 55mm chipset fan in the lower-right corner of the motherboard, and a 30mm fan under the IO shield (now uselessly) attempting to cool the VRMs. Most (eight) of the 120mm fans are mounted in push configuration to the four radiators, yielding a total of 1200mm of radiator (720 on the top, and 480 on the bottom).
- EKWB Quantum Kinetic FLT 240 D5 pump + reservoir with mounting brackets. Installed halfway up the case's back, outside. Pump is an EK Laing PWM D5.
- Monsoon MMRS Series II D5 pump housing + reservoir with 2x Silver Bullet biocide G1/4 plugs. Installed at the front bottom of the case, in the lowest two 5.25" bays.
- EK Laing Vario D5 pump installed into the Monsoon.
- Bitspower BP-MBWP-CT G1/4-10K temperature sensor. Installed in Quantum FLT's central front plug, running to motherboard's first external temp sensor.
- XSPC G1/4-10K temperature sensor. Installed in Monsoon's upper left plug, running to Corsair iCUE Commander Core XT's first external temp sensor.
- DiyHZ aluminum shell flowmeter and temperature sensor. LCD screen displays both values, and a 3-pin connector carries away flow information.
- EKWB Aorus Master TRX40 DRGB monoblock (nickel+plexi).
- EKWB EK-Quantum Vector RTX RE DRGB waterblock (nickel+plexi).
- Hardware Labs Black Ice Nemesis GTR360 16 FPI 54.7mm radiator, mounted to top PSU side.
- Hardware Labs Black Ice Nemesis GTS360 30 FPI 29.6mm radiator, mounted to top motherboard side.
- 2x Hardware Labs Black Ice Nemesis GTS240 XFLOW 16 FPI 29.6mm crossflow radiators, mounted to the bottom of each chamber.
- 4x Noctua NF-A14 chromax.black 140mm fans, connected to Corsair Commander Core, mounted in top
- Noctua NF-A8 chromax.black 80mm fan, replacing original fan in StarTech drive bay
- 2x Noctua chromax.black NF-F12 PWM fans on GTS360
- Noctua iPPC-2000 PWM fan on GTS360
- Noctua iPPC-2000 PWM fan mounted in front Flex-Bay
- 2x Noctua chromax.black NF-A12x25 PWM fans on mobo-side 240 XFLOW
- 2x Noctua redux NF-P12 PWM fans on PSU-side 240 XFLOW
- 2x EK Vardar PWM fans on GTR360
- Noctua NF-A15 fan on GTR360
- 3x Noctua NF-A15 fans on drive cages
- Fancasee 1-to-4 PWM splitter
- Silverstone 8-way PWM splitter, SATA power
Compute
What can I say about the 3970X that hasn't been said? One of the premier packages of our era, and probably the best high-end price-to-performance play since Intel's Sandy Bridge i7-2600K. It's damn good to have you back, AMD; my first decent machine, built back in 2001, was based around a much-cherished Athlon T-Bird.
- AMD Ryzen Threadripper 3970X dotriaconta-core (32 physical, 64 logical) Zen 2 7nm FinFET CPU. Base clock 3.7GHz, turbo 4.5GHz. Overclocked to 4.1GHz.
- Gigabyte Aorus Master TRX40 Revision 1.0. Removed factory cooling solution, replaced with EKWB monoblock.
- 8x Kingston Fury Renegade RGB DDR4-3000 32GB DIMMs for 256GB total RAM.
- EVGA GeForce RTX 2070 SUPER Black Gaming 8GB GDDR6 with NVIDIA TU104 GPU. Installed in topmost PCIe 4.0 16x slot, though this is only a 3.0 card.
- ELEGOO MEGA 2560 Revision 3, connected to NZXT internal USB hub, mounted to back of PSU chamber
- Intel X540-BT2 2-port 10Gbps 10GBASE-T PCIe card
As I detailed regarding Schwarzgerät II, the 3990X is an amazing achievement in chip design and fabrication, but I believe it to be severely starved for many tasks by its memory bandwidth: with its four memory channels populated, the Threadripper 3990X can hit about 90GB/s from fast DDR4, while its Epyc brother can pull down ~190GB/s through its eight channels. For my tasks, it's rare enough that I can drive all 32 of my cores; with the 3990X, I'd be paying twice as much to hit full utilization less often, and be unable to bring full bandwidth to bear when I did.
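To put rough numbers on that intuition (using the ~90GB/s and ~190GB/s figures above, and assuming every core wants bandwidth at once):

```python
# Back-of-the-envelope memory bandwidth per core, from the figures cited above.
configs = {
    "Threadripper 3970X (4 channels)": (90, 32),
    "Threadripper 3990X (4 channels)": (90, 64),
    "Epyc 7002 (8 channels)":          (190, 64),
}
for name, (gbps, cores) in configs.items():
    print(f"{name}: {gbps / cores:.2f} GB/s per core")
# ~2.8 GB/s/core on the 3970X, ~1.4 on the 3990X; the Epyc climbs back to ~3.0.
```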
I absolutely 🖤 my 3970X, though. Bitch screams.
Power
Power ended up being a tremendous pain in the ass.
- EVGA Supernova Titanium 850 T2
- Can provide 850W of 12V power, but only 100W of 5V (this will be important later; read on...)
- CableMod green/black braided cables for PSU
- BitFenix 3-way Molex expander
- 2x PerformancePCs PCIe-to-Molex converters
- BitFenix Molex-to-4xSATA converter
- 2x 4-way SATA expanders
- 2x Bankee 12V->5V/15A buck converters
Storage
I love the CableDeconn bunched SATA data cables; they're definitely the only way to fly, assuming a lack of SATA backplanes. We end up with 4x 3.5 drives in the bay, 10x 3.5 drives in CaseLabs cages in the PSU side, 3x M.2 devices in the motherboard PCIe 4.0 slots, and 2x M.2 devices in the ASUS Hyper M.2 card. This leaves room for 2 more M.2s in the card, and 4x 2.5 devices in the smaller bay. The bottom 2 slots in the bottom hard drive cage are blocked by the PSU-side radiator; indeed, I had to take a hacksaw to said cage to get it into the machine.
See my analysis of how to best make use of 14 drives. I went with a striped raidz2 for my 14 Exos drives (also known as a RAID60 in the Old English): two seven-drive raidz2 vdevs, yielding 180TB usable from 252TB total. I'm guaranteed data loss only if I lose five or more drives, and can lose data if I lose certain combinations of three or four drives (any combination where three of the lost drives are in the same raidz2 vdev), but no rebuild ever involves more than seven drives.
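If you'd rather not take the combinatorics on faith, here's a quick brute-force check (assuming two seven-drive raidz2 vdevs, as described above):

```python
# Brute-force the failure tolerance of two 7-drive raidz2 vdevs:
# the pool survives as long as neither vdev loses more than 2 drives.
from itertools import combinations

DRIVES = [(vdev, disk) for vdev in range(2) for disk in range(7)]  # 14 drives

def pool_survives(lost):
    return all(sum(1 for v, _ in lost if v == vdev) <= 2 for vdev in range(2))

print("usable capacity:", (7 - 2) * 2 * 18, "TB")
for k in range(1, 6):
    combos = list(combinations(DRIVES, k))
    fatal = sum(1 for c in combos if not pool_survives(c))
    print(f"lose {k} drives: {fatal}/{len(combos)} combinations destroy the pool")
# 1 or 2 lost: never fatal; 3 or 4 lost: sometimes; 5 lost: always.
```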
All filesystems are ZFS, and all storage enjoys some redundancy (save the 16GB Optane, which is just for persistent memory/DAX experiments).
- 14x Seagate Exos X18 18TB 7200 rpm SATA III drives in striped raidz2
- ASUS Hyper M.2 x16 card (4x M.2, PCIe 3.0 x16)
- LSI Fusion PCIe 2.0 x8 2x SAS
- Joylifeboard ASM1166 PCIe 3.0 x1 6x SATA III
- 2x Samsung 970 EVO Plus 2TB NVMe M.2 in raidz1
- 2x Western Digital Black SN750 1TB NVMe M.2 in raidz1
- Intel Optane 16GB M.2
- 2x CableDeconn SAS-to-4xSATA cables for use with LSI Fusion
- 2x CableDeconn 4x SATA cables for use with motherboard
- CableDeconn 6x SATA cable for use with ASM1166
Interfaces
- NZXT internal USB 2.0 hub, magnetically attached to underside of PSU, connected to motherboard USB 2.0 header
- RHElectronics Geiger counter, wired via 3 pins to 2560 MEGA, mounted to back of PSU chamber
- Corsair iCUE Commander Core XT RGB/fan controller, mounted in top, connected to NZXT internal USB hub. I extended OpenRGB and the Linux kernel to drive this device.
- Monsoon CCFL 12V inverter, mounted to top, powered via SATA power connector attached to 12V Molex attached to video power line
- GY-521 board for MPU 6050 accelerometer + gyro, wired via 8 pins to 2560 MEGA, mounted to back of PSU chamber
- DIY-FAB USB 3.2 front plate, connected to motherboard USB 3.2 header
- 2x USB 3.0 front plates, connected to motherboard USB 3.0 headers
Lighting
- 4x Corsair ARGB LED lines, connected in series to Corsair Commander Core, attached via adhesive around top.
- 2x 12V RGB LED lines, backlighting top radiators, attached to motherboard's top RGB header via 1-to-2 RGB splitter.
- 2x green PerformancePCs CCFL rods, attached to Monsoon inverter, mounted to back inner corner of each chamber.
- ARGB lines on EKWB Quantum Kinetic FLT and Aorus Master monoblock, attached to motherboard's top and bottom ARGB headers respectively.
- RGB tops on Renegade DIMMs are unmanaged, and self-synchronize via infrared.
Distributing power
I began to run into some serious power issues on this build, originating in the Exos X18 drives (of which, you might remember, there are 14). It will be worth your time to consult the Exos 18 manual. Remember, 12V is for the motor, and 5V is for the logic.
Power draws
Item | 5V watts | 12V watts | Source |
---|---|---|---|
EKWB PWM D5 (in Quantum FLT) | 0 | 23 | Molex |
EKWB Quantum Kinetic FLT 240 LEDs | ? | 0 | DRGB header |
EKWB Vario D5 (in Monsoon) | 0 | 23 | Molex |
AMD 3970X (stock clocks) | 0 | 280 | 2x EPS12V |
Aorus Master TRX40 | 0 | ? | ATX-24pin |
Corsair Commander Core XT (logic) | ? | 0 | SATA |
Corsair LED strips (x4) | ? (?) | ? (?) | Corsair (SATA) |
Silverstone fan controller | ? | ? | SATA |
Noctua NF-A14 (x4) | 0 | 1.56 (6.24) | Corsair (SATA) |
Noctua NF-P12 redux-1700 (x2) | 0 | 1.08 (2.16) | Fan header |
Noctua NF-F12 iPPC-2000 | 0 | 1.2 | Fan header |
Noctua NF-A12x25 (x2) | 0 | 1.68 (3.36) | Fan header |
Noctua NF-F12 (x2) | 0 | 0.6 (1.2) | Silverstone (SATA) |
Monsoon inverter | 0 | ? | SATA |
DiyHZ flowmeter | ? | ? | Molex |
Exos18 (x14) (spinning) | 4.6 (64.4) | 7.68 (107.52) | SATA |
Exos18 (x14) (spinup) | 5.05 (70.7) | 24.24 (339.36) | SATA |
Western Digital (2x) | 0 | 0 | 2.8A @ 3.3V (9.24W) |
Samsung 970 EVO (2x) | 0 | 0 | 1.8A @ 3.3V (6W) |
EVGA RTX 2070 Super | 0 | 215 | Mobo + PSU |
S5050 LEDs | 0 | ? | ARGB header |
NZXT USB2 hub | ? | 0 | Molex |
Arduino 2560 | 2.5 (5V * 500mA USB max) | 0 | NZXT (Molex) |
(Open question: everything the Silverstone powers is 12V -- can we move it to a pure 12V source, or does it have internal 5V logic?)
Power is natively required in the following form factors:
- Molex (12V only): 2 (pumps)
- Molex (5V only): 2 (flowmeter, USB hub)
- SATA (12V+5V): 14 (10 disks, 2 for StarTech bay, 1 for IcyDock bay, Corsair)
- SATA (12V only): 2 (inverter, fan controller)
- SATA (5V only): 2 (USB front bays)
- PCIe: 1 (GPU)
The drive problem
QUOTH the Exos 18 datasheet:
Mode | 5V Amps | 12V Amps |
---|---|---|
Standby | 0.23 | 0.01 |
Idle_A | 0.30 | 0.31 |
Idle_B | 0.25 | 0.19 |
Idle_C | 0.24 | 0.13 |
Sequential Write (64K/18Q) | 0.92 | 0.31 |
Random Read (64K/18Q) | 0.36 | 0.64 |
Spinup | 1.01 | 2.02 |
I chose Sequential Write and Random Read because those are the most intensive operations (barring Spinup) for the 5V and 12V loads, respectively. For the regular use cases, we have no problems: the maximum 5V usage ought be around 64.4W (14 * 0.92A * 5V), and the maximum 12V usage ought be around 107.52W (14 * 0.64A * 12V).
What about spinup, though? We're talking 70.7W of 5V and 339.36W of 12V! That's 24.24W of 12V per disk. A Molex connector can carry 132W of 12V power and 55W of 5V power (11 amps per pin, 1 pin per voltage level). A SATA power connector can carry 54W of 12V power and 22.5W of 5V power (1.5 amps per pin, 3 pins per voltage level). A SATA power connector can thus safely supply spinup current to only two of these Exos drives! With three SATA power connectors from my PSU, that only covers 6 drives, leaving 8 unaccounted for. With that said, the three connectors together ought be fine for normal use, following spinup.
If I also employed my one Molex, I could handle another two drives, but I need my Molex for a variety of other things. We must then handle this spinup case via another mechanism.
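Here's the connector arithmetic spelled out, using the per-drive figures from the datasheet table and the pin ratings above:

```python
# How many Exos drives can a single connector feed, at spinup vs. worst steady draw?
SPINUP_12V, SPINUP_5V = 2.02 * 12, 1.01 * 5   # 24.24W / 5.05W per drive
STEADY_12V, STEADY_5V = 0.64 * 12, 0.92 * 5   # 7.68W / 4.6W per drive (worst observed)

CONNECTORS = {
    "Molex": {"12V": 11 * 12,  "5V": 11 * 5},    # one 11A pin per rail
    "SATA":  {"12V": 4.5 * 12, "5V": 4.5 * 5},   # three 1.5A pins per rail
}

for name, cap in CONNECTORS.items():
    spinup = int(min(cap["12V"] // SPINUP_12V, cap["5V"] // SPINUP_5V))
    steady = int(min(cap["12V"] // STEADY_12V, cap["5V"] // STEADY_5V))
    print(f"{name}: {spinup} drives at simultaneous spinup, {steady} at worst steady draw")
# Molex: 5 at spinup, 11 steady. SATA: 2 at spinup, 4 steady.
```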
Let's first knock out anything that doesn't need 5V. Our motherboard, CPU, and GPU all take their own cables. The PCIe power cables can carry 75W (6-pin) or 150W (8-pin), almost all of it 12V. Four items in our build require only 12V: the two pumps, the Silverstone fan splitter, and the Monsoon CCFL inverter. Two of these four are in the back, and two in the front. We go ahead and use two of the PCIe cables, together with PerformancePCs PCIe-to-Molex adapters and BitFenix Molex splitters, to drive these four items. This takes care of all our fans (save the two on the back of the bays) -- those which weren't on the Silverstone are drawing power from the motherboard.
We have six remaining items requiring 5V power: the flowmeter, the Corsair, the internal USB hub, and the three front panel USB bays. Of these, three natively want Molex 4-pin, and the other three want SATA. Combined with our 14 drives, that's 20 power drains.
I'm initially solving this problem using PUIS (Power-Up In Standby), a feature of the ATA specification. Enabling this feature on a disk will prevent it from spinning up until it receives a particular command, which can be issued by the OS (so long as you don't need your system firmware to recognize the disk, which will be unreadable until this command is sent). I intended to solve this later with 12V relays controlled by the Arduino, but instead I think I'm going to use a 12V->5V buck converter, splice the 12V line of my remaining PCIe cables (see below), and make myself an underpowered 4-pin Molex from it. Each PCIe cable and its 66W of 12V ought be good for three drives, if I'm right about the whole feasibility of that plan.
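A rough feasibility check of that plan (my own arithmetic, ignoring conversion losses on the spliced 5V side, and assuming PUIS staggers spinups to one drive at a time):

```python
# One PCIe cable's 66W of 12V feeding three Exos drives: simultaneous vs. staggered spinup.
PCIE_12V_W   = 66.0
SPINUP_12V_W = 24.24   # one drive spinning up
STEADY_12V_W = 7.68    # one drive at worst steady draw
DRIVES       = 3

simultaneous = DRIVES * SPINUP_12V_W                        # all three at once
staggered    = SPINUP_12V_W + (DRIVES - 1) * STEADY_12V_W   # PUIS: one at a time
print(f"simultaneous spinup: {simultaneous:.1f}W of {PCIE_12V_W}W")   # ~72.7W: over budget
print(f"staggered spinup:    {staggered:.1f}W of {PCIE_12V_W}W")      # ~39.6W: comfortable
```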
Power sources
Here's what's available from the EVGA Supernova T2 850:
Cable | 3.3V watts | 5V watts | 12V watts | Notes |
---|---|---|---|---|
SATA x3 | 14.85 (sometimes) | 22.5 | 54 | Can bridge to (underpowered) Molex. Don't rely on the 3.3V line; it's sometimes missing from various components. Which also means we needn't provide it, should we synthesize SATA (though we'll be unable to use PWDIS). |
Perif (Molex) | 0 | 55 | 132 | Can bridge to SATA or PCIe. Wiring might not be safe for the full pin capacity. |
PCIe | 9 | 0 | 66 | Can bridge to (underpowered) 12V-only (2-pin) Molex. Can probably bridge (with buck converter) to underpowered Molex. |
As noted above, the first thing we do is take our 12V-only devices, and stick them on 2-pin Molex extended from the PCIe cables. We use two different cables, both due to these devices' locations, and to avoid an unnecessary single point of failure for the redundant pumps.
There are 4 of the PCIe cables, 1 peripheral cable, and 3 SATA cables. Of these last, two have 3 ports, and one has 4 ports. That's 10 native SATA ports. Conveniently, we have 10 drives in our cages, so we'll go ahead and just use all three cables there.
With all our SATA cables allocated, our only remaining source of 5V is the peripheral Molex cable. We still have 8 junctions that require 5V, so there's some real pressure on our physical 5V distribution. We'll use our 4-port peripheral cable, adding a 1-to-4 Molex expander on the end. That still leaves one sink unallocated; we'll have to use a 1-to-2 SATA splitter on one of the 3-port SATA cables.
Obviously, we'll want to serve a native SATA device with this split, but do we want to use a heavy SATA draw or a light one? Well, our Molex line is already responsible for 4 filled SATA drives through the StarTech bay...but then again, our Molex line can supply more than twice the 5V amperage of a SATA line. The SATA line is servicing three devices; at spinup, that's 15.15W (67% of 22.5). The Molex line is servicing four devices, for 20.2W (36.7% of 55). More importantly, the SATA line has headroom of 7.35W, while the Molex line has headroom of 34.8W. Even when those USB bays come on, even if they're doing the 1.5A of USB3's dedicated charging, they're still putting out 7.5W max (per device). It ends up making way more sense to keep the heavyweight Corsair on our Molex line, and instead kick as light a draw as we can to the SATA line.
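The same headroom arithmetic, as a script:

```python
# 5V headroom on the split SATA line vs. the peripheral Molex line, at drive spinup.
SATA_5V_CAP, MOLEX_5V_CAP = 22.5, 55.0
EXOS_SPINUP_5V = 5.05

sata_load  = 3 * EXOS_SPINUP_5V   # three drives on the split SATA cable
molex_load = 4 * EXOS_SPINUP_5V   # four StarTech-bay drives on the Molex cable
print(f"SATA:  {sata_load:.2f}W of {SATA_5V_CAP}W -> {SATA_5V_CAP - sata_load:.2f}W headroom")
print(f"Molex: {molex_load:.2f}W of {MOLEX_5V_CAP}W -> {MOLEX_5V_CAP - molex_load:.2f}W headroom")
# 7.35W vs 34.80W: heavier draws (the Corsair, USB charging) belong on the Molex line.
```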
This calculation might change if we ever populate the IcyDock 2.5" bays, but they're currently empty.