The Dell PowerEdge FX2 is a 2U rackmount blade chassis with shared infrastructure that lets servers and storage pool power, cooling, network switching, and chassis management. When it was announced last fall, there were two options: the FC630, a two-socket, half-width Xeon blade, and the FM120x4, an Intel Atom–based microserver. Dell quietly started shipping two additional modules this week. The FC430 is a two-socket, quarter-width Xeon blade, allowing up to eight servers in the FX2 for a total of 224 cores. The FD332 is a direct-attached disk module holding up to two RAID controllers and sixteen 2.5-inch drives, assignable to the compute nodes inside the FX2.
The compute options inside the FX2 are impressive. By populating the chassis with two FC630 compute modules and two FD332 disk modules, you effectively get two Dell R730 servers in half the data center real estate. Combining four FC430s with two FD332 modules lets you build a respectable four-node VSAN cluster in just 2U of space.
The differences between the FC630 and the FC430 come down to the space each consumes. The quarter-width form factor of the FC430 limits it to eight DIMM sockets, two onboard 10 Gbps NICs, and access to one of the FX2’s PCIe slots. The FC630 has twenty-four DIMM sockets, four 10 Gbps NICs, and access to two PCIe slots. However, with eight FC430s you can pack up to 224 Intel Xeon cores into 2U, versus only 144 with four FC630s. That works out to 515.2 GHz of aggregate CPU and 2 TB of RAM in 2U, or, at a 5:1 vCPU-to-core consolidation ratio, 1,120 single-vCPU virtual servers. In two rack units. Two!
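The density figures above can be reproduced with some back-of-the-envelope arithmetic. The per-socket core count (14) and clock speed (2.3 GHz) are assumptions inferred from the totals in the text, not specifications Dell published, as is the 32 GB DIMM size:

```python
# Density math for a fully loaded FX2 with eight FC430 blades.
# Assumed parts: 14-core 2.3 GHz Xeons and 32 GB DIMMs, chosen because
# they reproduce the 224-core / 515.2 GHz / 2 TB figures in the article.

BLADES = 8              # quarter-width FC430s per 2U FX2 chassis
SOCKETS_PER_BLADE = 2
CORES_PER_SOCKET = 14   # assumed
GHZ_PER_CORE = 2.3      # assumed
DIMMS_PER_BLADE = 8     # FC430 DIMM socket limit
GB_PER_DIMM = 32        # assumed
CONSOLIDATION = 5       # single-vCPU VMs per physical core (5:1)

cores = BLADES * SOCKETS_PER_BLADE * CORES_PER_SOCKET
total_ghz = cores * GHZ_PER_CORE
ram_tb = BLADES * DIMMS_PER_BLADE * GB_PER_DIMM / 1024
vms = cores * CONSOLIDATION

print(f"{cores} cores, {total_ghz:.1f} GHz, {ram_tb:.0f} TB RAM, {vms} VMs in 2U")
```

Running this prints `224 cores, 515.2 GHz, 2 TB RAM, 1120 VMs in 2U`, matching the totals quoted above.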
The FX2 chassis itself is more like the Dell PowerEdge VRTX than the venerable M1000e blade system. All three products have similar integrated KVM and Chassis Management Controllers (CMCs), and they can be configured with redundant CMCs. Like the VRTX, the FX2 uses PCIe as the internal fabric, and both have assignable PCIe slots. Each slot can be assigned to a single compute node, but the slots can’t be shared with SR-IOV. The PCIe RAID controllers in the FD332 can only be attached to one compute node at a time, unlike the VRTX, which can share its storage among the compute nodes.
Networking on the FX2 is handled either by I/O passthrough modules, which offer nothing in the way of consolidation or cable reduction, or by the FN IO Aggregator (IOA), which does cut cabling requirements. The chassis accepts up to two networking modules, and the FN IOAs can be configured as 10GBASE-T, 10 Gbps SFP+, or Fibre Channel—a nice touch. So is the InfiniBand option for the FC430, which will be popular with HPC and trading platforms. Combining the IOA modules with HBAs or NICs in the PCIe expansion slots makes for potent I/O.
The Dell website indicates plans for an FC830, a full-width, four-socket blade. However, most people looking for four-socket solutions will likely be better served by the M1000e, if only to better amortize the overhead and licensing of CMCs and staff time spent managing the infrastructure. You can put eight full-height blades in an M1000e but only two in an FX2.
So, who would buy one of these things?
People looking for incredible density who don’t want the hassle of managing bare-bones equipment. This is Dell’s answer to the Supermicro Twin product line, which doesn’t have as robust management, service, and support offerings. Organizations think they want white-box servers for the value, but then discover they’re on the hook for absolutely every aspect of product support, monitoring, and management, plus miscellaneous expenses. This seriously detracts from overall ROI. Dell’s FX2 is a nice compromise, especially for organizations that like Dell support and manageability. It’s also a nice option for Dell OEM Solutions customers looking to drive density (imagine a SolidFire array built of FX2s), or for current Supermicro customers who would rather focus on their own strengths and leave the intricacies of server design to the professionals.