

🚀 Elevate your server game with AMD EPYC power and lightning-fast connectivity!
The ASRock Rack ROMED8-2T/BCM is a robust ATX server motherboard designed for AMD EPYC 7003 and 7002 series processors, featuring 8 DDR4 DIMM slots, seven PCIe 4.0 x16 slots, dual OCuLink ports, and dual 10GbE Broadcom networking. Engineered for high-performance virtualization and enterprise workloads, it supports registered DDR4 memory and IPMI remote management, making it a versatile choice for professional-grade server builds.
| Specification | Detail |
| --- | --- |
| ASIN | B0BCXYTPDJ |
| Best Sellers Rank | #243 in Computer Motherboards |
| Brand | ASRock Rack |
| Customer Reviews | 3.4 out of 5 stars (12) |
| Date First Available | September 3, 2022 |
| Item Dimensions (L x W x H) | 12 x 9.6 x 1 inches |
| Item Weight | 3.57 pounds |
| Item model number | ROMED8-2T/BCM |
| Manufacturer | ASRock Rack |
| Memory Speed | 2133 MHz |
| Number of USB 2.0 Ports | 2 |
| Product Dimensions | 12 x 9.6 x 1 inches |
| RAM | DDR4 |
| Series | ROMED8-2T/BCM |
M**H
Great Server Board - Works well with Proxmox and XCP-NG, Linux and Windows Server 2022
This is a great board for what it offers in its form factor. I have been running two of these units for over eight (8) months as my home lab's two Proxmox nodes and have not had a single issue with either of them. Plenty of PCIe slots and, for the most part, good spacing on them. The M.2 placement is not the best, but in a case with proper airflow it will not be a problem. The two integrated SAS connectors instead of SATA ports are a plus, as I can connect them right to my drive backplane, and the two OCuLink ports are a nice bonus for U.x/NVMe drives. The BMC is your typical ASPEED management and gets the job done with no complaints.

I have tested XCP-NG, Debian 12, Manjaro, and Ubuntu 24.04 on these boards and found no issues. Windows Server 2022 also worked fine. (NOTE: this is a board designed for system integrators, and Windows Server drivers will not be available from ASRock Rack; they will need to be sourced elsewhere.) Specs in both nodes: AMD EPYC 7443P (24 cores/48 threads), 512GB of 3200 RDIMMs, 8x 1.92TB Crucial 5400 Pro SSDs, 6x 8TB WD Red Pro drives, and 1x NVIDIA A2000, in a 45HomeLab case with Noctua case fans for whisper-quiet operation. Each system at idle bounces around 296-309W.

Also, for the reviewer who was complaining about no support for Windows 10: this is a server board and was in no way ever designed to support any Windows desktop/workstation OS. Please do your research before purchasing an item like this to make sure it will work for what you want to use it for. ASRock Rack makes workstation-class EPYC boards that are designed to support Windows desktop-class OSes.
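For anyone wanting to spot-check an idle-power figure like the one above, here is a minimal sketch (not the reviewer's method) that samples the BMC's DCMI power reading over IPMI. It assumes a Linux host with ipmitool installed and a BMC configuration that actually exposes DCMI power readings, which may not hold on every build of this board.

```python
# Sample the BMC's DCMI power reading a few times and report min/max/average.
# Assumes: ipmitool on PATH, running on the host itself (no remote -H/-U/-P),
# and a BMC that answers "dcmi power reading".
import re
import subprocess
import time

def read_power_watts():
    out = subprocess.run(
        ["ipmitool", "dcmi", "power", "reading"],
        capture_output=True, text=True, check=True,
    ).stdout
    m = re.search(r"Instantaneous power reading:\s*(\d+)\s*Watts", out)
    return int(m.group(1)) if m else None

samples = []
for _ in range(10):
    watts = read_power_watts()
    if watts is not None:
        samples.append(watts)
    time.sleep(5)  # one reading every 5 seconds

if samples:
    print(f"min {min(samples)} W  max {max(samples)} W  "
          f"avg {sum(samples) / len(samples):.0f} W")
```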
W**J
$650 MoBo with inoperative PCIe slot 2... yeah... I tried the jumpers.
I bought this MoBo to build a 12x NVLinked-GPU AI computing / rendering server at home, given a modest budget... well, *modest*, cough... First of all, while it might not be a super important detail, the MoBo you'll get is the ROMED8-BCM, not the ROMED8-2T/BCM as the title reads. They are NOT the same: the one sold here doesn't have the USB-C plug or the USB-C header built into the board, nor the Intel X550 dual 10GbE Ethernet. It has a dual Broadcom 10GbE controller, which seemed fine, but it's something to point out for any potential buyers. There IS a difference between the 2T and the BCM.

Now for the details as to why this is 1 star. The first thing you need to build a 12-GPU server is PCIe lanes. The AMD EPYC 7532 that was mated to this MoBo has 128, more than plenty for the task at hand. Then you need a bunch of risers and bifurcation support... check and check. I've tested a variety of risers on my other workstation/server boards before committing to this build, so I had a pretty good idea which risers worked and which ones didn't. Most of these inexpensive LINKUP risers (20-25mm) appeared to work, and since this was going to be a 2080 Ti NVLinked setup (six groups of two NVLinked 2080 Tis, which are fairly cheap), PCIe 4.0 speeds weren't really needed. Then you need five PCIe x16 to x8/x8 bifurcator risers/splitters; the ones I got were 40 dollars each, with 2-slot spacing between the PCIe slots, so you can fit two 2080 Tis on one of those bifurcators. PSU: a couple of EVGA 1600W SuperNOVA Platinums for a total of 3200W... but I only planned on running 10 GPUs at once, or 2500W max, plenty.

Everything was poised to work... so I put the board in an open case, installed the EPYC 7532, put the first pair of 2080 Tis in the first two slots (PCIE1 and PCIE2), and powered it up. POST goes fine... I install the NVIDIA driver, and it only sees one 2080 Ti... so I power the server down and check the jumpers: all 1-2 and 1-2, so PCIE2 should behave like any x16 PCIe slot according to the manual. Power up again, same result, no second 2080 Ti detected. So I placed the second GPU on PCIE3, powered up, and now both cards show up. Since I am using 2-slot NVLinks, I put the two 2080 Tis on PCIE3 and PCIE5... and fire it up... both cards show up again. So, power down, add the NVLink bridge, power up... NVLink not detected. Tried a few things... nada. At this point I am not happy.

So I decided to figure out what is going on with slot PCIE2. I inserted a few different GPUs, tried a Titan X, a Titan Xp, and an RTX 3060... none work on PCIE2, won't even show the screen on POST. So I try bifurcating PCIE2 to x8/x8... nada... same thing, won't even show the screen... it boots into the OS, just no screen... WTH? So let's try the riser cables and bifurcators, maybe I can live with just 10 GPUs... I placed two GPUs on a x8/x8 bifurcator... the computer boots, but it immediately freezes as soon as it starts to use the GPUs... remove the riser, place the cards directly on the MoBo, and it doesn't hang. I tried Above 4G Decoding on/off, SR-IOV on/off, all jumper positions... nada. I tried the same risers/bifurcators on an ASUS Z10PE-D8 WS with bifurcation enabled on the CPU2 slots 5 and 7: the risers work, NVLink is detected... so at this point I am done with this board. Piece of trash.

Not sure if it's AMD or ASRock, but given that I have other Broadwell-EP based ASRock dual-CPU boards that have been running for years without a hitch, I suspect it's AMD's typical quality from the early 2000s... they make the CPU faster, but they cut every other possible corner there is to be cut just to sell cheaper than Intel... with noisy PCIe lanes that hang your machine? Check... just like the good old VIA AGP P.O.S. chipsets that also froze Win2k/XP back in 2001-ish...

FWIW, measured performance of the EPYC 7532 was exactly the same as a dual Broadwell Xeon E5-2699 v4 (not even the 2699A model). Sure, you lose 48 PCIe lanes, and it's a much older chip design that needs two CPUs to keep up, but at least you have a device that actually works, and works for years without a hitch. CPU-Z for an E5-2699 v4 is 427 single-thread and 14983 multi-thread (543/17800 on AVX2). The EPYC 7532 clocked 412 single / 16100 multi-thread (412/13900 on AVX2). Make whatever you want of those numbers... but this will be my last AMD for the foreseeable future... my previous AMD build was an Athlon back in 2002... that I quickly ditched due to piss-poor reliability, just like this turd... who knew? W.
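For readers chasing similar slot or NVLink detection problems, here is a minimal diagnostic sketch (not the reviewer's procedure) that lists which NVIDIA GPUs the OS actually enumerates and asks the driver for NVLink status. It assumes a Linux host with lspci, the NVIDIA driver, and nvidia-smi installed; the slot-to-bus mapping still has to be cross-checked against the board manual.

```python
# Compare the kernel's PCI view with the NVIDIA driver's view, then dump
# NVLink link state. Useful for telling "slot not wired up / not trained"
# apart from "driver didn't bind" or "NVLink bridge not detected".
import subprocess

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# All NVIDIA devices the kernel sees (vendor ID 10de), regardless of driver state.
print("PCI view:")
print(run(["lspci", "-d", "10de:"]))

# GPUs the driver has actually bound, with their PCI bus addresses.
print("Driver view:")
print(run(["nvidia-smi", "--query-gpu=index,name,pci.bus_id",
           "--format=csv,noheader"]))

# NVLink link state per GPU; a missing or undetected bridge shows inactive links.
print("NVLink status:")
print(run(["nvidia-smi", "nvlink", "--status"]))
```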
R**.
Quality
ASRock Rack, always great....
R**S
Decent board for my needs with some real annoyances during initial setup
I picked the ASRock Rack ROMED8-2T/BCM motherboard for a multi-GPU AI workstation build. It works fine for my purposes, but I have a few issues that keep this from getting a higher rating.

The single biggest issue is that after flashing the latest BIOS from the ASRock Rack website, the motherboard refuses to POST with both GPUs in. I was updating the BIOS to try to get added support for Resizable BAR on the PCIe slots. I updated to version 3.8 and then it refused to POST. I was able to get it to POST with a single GPU after that, but upon re-adding the second GPU it refused to POST again. I used IPMI to downgrade the BIOS to 3.5 from the ASRock Rack website and it POSTed without issue. I have tried upgrading to 3.8 several times since, with and without the 'keep BIOS settings' option and with various other configurations, and I always hit the same issue. Luckily I do not NEED version 3.8, as I do not NEED ReBAR support (I am not 100% sure 3.8 even adds ReBAR support, to be honest), but regardless, I wasn't thrilled with the inability to run the latest BIOS version in my dual-GPU configuration.

Another issue I found is the way the networking is configured: you have the option of bridging the IPMI network interface to the built-in NICs. This is a BIOS setting, and the BIOS and manual give very little information on what it means for the network devices. I set this to unbridged, expecting to have dedicated lines run to the IPMI as well as to each of the NICs. When running this way, the IPMI worked fine, but both onboard NICs seemed to have issues getting an IP from my DHCP server, and even with a static IP the connections would not stay up. In bridged mode it works fine for me. It's almost as if, even when unbridged, the IPMI shares a MAC with one of the NICs or something and so can't function on the same network? I don't know... there probably is a good explanation, but because the documentation is severely lacking here, I don't know. Luckily, in bridged mode NIC1 works fine for network connectivity and for the IPMI, and that's fine for me, but this was a frustrating issue and should have had a better description in the BIOS or better documentation in the manual.

Another thing I was not thrilled with was the positioning of the front panel connectors on the motherboard. They are bent 90 degrees, facing downwards if this motherboard is in a tower-style case, which adds extra length and clearance needs. I am using the DARKROCK Classico Max E-ATX tower and I had to bend the power button pins on the motherboard in order to plug the connector into it. Given that this is a full-size server motherboard, I think it's kind of wild to go with this pin configuration knowing it will likely become a clearance issue with the bottom of a tower-style case.

Final note, not a huge deal, but just be aware that driver availability is very poor for Windows 11 Pro (or Home) on this motherboard. Windows Server 2022 and the latest version of Ubuntu worked fine, but if you were wanting to run this as a workstation build with Windows 11 Pro, you're going to have some difficulties with network and other drivers.
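On the "does the IPMI share a MAC with a NIC?" suspicion above, here is a minimal sketch (my own, not from the reviewer) that reads the BMC's MAC over IPMI and compares it with the host's interface MACs. It assumes a Linux host with ipmitool installed and that LAN channel 1 is the channel the BMC uses; the channel number is an assumption, so check the manual.

```python
# Compare the BMC's reported MAC address with the host NIC MACs to see
# whether a shared/NC-SI style wiring could explain DHCP conflicts.
import re
import subprocess

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# BMC MAC from IPMI LAN channel 1 (channel number is an assumption).
bmc_out = run(["ipmitool", "lan", "print", "1"])
m = re.search(r"MAC Address\s*:\s*([0-9a-fA-F:]{17})", bmc_out)
bmc_mac = m.group(1).lower() if m else None
print("BMC MAC:", bmc_mac)

# Host NIC MACs from "ip -o link" (one interface per line).
host_macs = {}
for line in run(["ip", "-o", "link"]).splitlines():
    fields = line.split()
    if "link/ether" in fields:
        name = fields[1].rstrip(":")
        host_macs[name] = fields[fields.index("link/ether") + 1].lower()
print("Host NIC MACs:", host_macs)

if bmc_mac and bmc_mac in host_macs.values():
    print("BMC MAC matches a host NIC -- shared-port wiring is plausible.")
else:
    print("No MAC overlap visible from the host side.")
```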
S**L
7 PCIe x16 Gen 4 slots!
The Threadrippers can't do that; at least my 3960X can't, and I want it for an AI rig using 7 GPUs!
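As a quick sanity check on the lane math behind that enthusiasm, here is a tiny sketch with illustrative numbers: single-socket EPYC 7002/7003 exposes 128 PCIe 4.0 lanes, while the Threadripper 3960X figure below is an approximation of its usable CPU lanes.

```python
# Back-of-the-envelope PCIe lane budget for seven full x16 slots.
SLOTS = 7
LANES_PER_SLOT = 16

platform_lanes = {
    "EPYC 7002/7003 (ROMED8-2T/BCM)": 128,  # per CPU spec
    "Threadripper 3960X": 64,               # approximate usable CPU lanes
}

need = SLOTS * LANES_PER_SLOT
for platform, have in platform_lanes.items():
    verdict = "fits" if need <= have else "does not fit"
    print(f"{platform}: need {need} lanes, have {have} -> {verdict}")
```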