
Quoting Craig Sanders (cas@taz.net.au):
funnily enough, i have a similar reaction to most intel motherboards - their CPUs can be quite good, but the PCIe lanes available and the I/O are minimal compared to AMD AM2/3/3+ CPUs and motherboards.
Here's a question that stumps me just a bit: Why are so many x86_64 motherboard / CPU combinations limited to what seem like arbitrarily low ceilings on total RAM? I don't want to seem like a spoiled child in this. Even 8GB (most Atom) is nice. But why not more?

Back when x86_64 was new (and was AMD64, technically), my recollection was that we heard about the glorious new 16 exabyte (2^64) _theoretical_ linear address space that for reasons of practicality would be limited to 256 terabytes (2^48). Yet we've never seen that, right? Mind you, I'm not talking about machines shipping _with_ 256 terabytes of RAM, only ones that could address that amount if it were available in real-world hardware. But, instead, I see a lot of 8GB total-possible-RAM limitations, such as all Intel Atom gear other than the 'Avoton' Atom CPUs that support 64GB (2^36) RAM addressing. It's not that I expect to see a motherboard with 256 terabyte support at local retailers. It's that I'm surprised the real-world limit is so dramatically _lower_ than that.

Closer to the real world, AMD's 2013 'Jaguar' mobile SoC architecture, split into the Kabini (more powerful, low-power) and Temash (less powerful, ultra-low-power) product lines, can address up to 32GB. The following year's successor to Jaguar, 'Puma', is again segmented into higher-power Beema (15W TDP[1]) and lower-power Mullins (4.5W TDP) -- but I'm unable to find any Beema/Mullins real-world units with RAM capacity above 8GB. I'm confounded by that. What's going on?

I'm aware there's a small irony in my posting that question from Silicon Valley, where I can ride my bicycle past the Intel and AMD headquarters complexes by pedaling 25km down the road. But we're here and the question's been on my mind -- and I'm nobody's idea of a hardware engineer. (And it's not like I could knock on their doors and demand an answer, come to think of it.)

Why wouldn't there be Beema- or Mullins-compatible motherboards with at least as high a RAM capacity as their Kabini predecessors a year earlier, rather than capacity declining by a factor of four? Is it because AMD is conceding the market for anything bigger than a smartphone or low-end tablet to Intel, or alternatively that few OEMs will any longer pay even small change above the cost of a low-end ARM chip, outside of the colo server market? That would be sad.

I'm not even sure where the physical constraint lies, in this era of SoCs (Systems on Chips, what AMD calls APUs) that merge the former CPU, GPU, north bridge, and south bridge into a single chip. I'm guessing the number of address lines from the SoC is a constraint, and there may or may not still be logic outside the SoC to decode the memory address lines. And of course there are the sockets for conventional SDRAM or SODIMM sticks, which doubtless impose some constraints. But RAM is cheap, and is (in my own use cases) the single most cost-effective place to sink money into a system to extend functionality, improve performance, and prolong useful service life.

So, why are we seeing system-total address limits far below what the rosy projections for x86_64 promised last millennium? And where is my hoverboard, anyway? ;->

One last data point: Note the strengths and limits of this extraordinary top-end variant of the new CompuLab 'Fitlet' nano-PC:
http://www.fit-pc.com/web/products/specifications/?model%5B%5D=FITLET-GI-C67...

This uses the fastest and most impressive AMD 'Mullins' SoC, the quad-core AMD A10-Micro 6700T (4.5W TDP). You can't buy that particular Fitlet. It went out of stock shortly after it hit the market. But you can buy other variants with slightly slower (and cooler, even more power-thrifty) Mullins SoCs. And all Fitlets max out at 8GB RAM, one of their few notable design compromises. So, I'm wondering where that limit arises.
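For what it's worth, here's a quick back-of-the-envelope sketch spelling out the gap I'm grumbling about. The ceilings are just the powers of two mentioned above; the only host-specific part is the 'address sizes' line the Linux kernel prints in /proc/cpuinfo, which reports how many physical and virtual address bits a given CPU actually implements (so this last bit assumes a Linux box):

#!/usr/bin/env python3
# Back-of-the-envelope look at the x86_64 address-space gap discussed above.
# The /proc/cpuinfo parsing assumes a Linux host.

def human(nbytes):
    """Render a byte count in binary units (GiB/TiB/PiB/EiB)."""
    for unit in ('B', 'KiB', 'MiB', 'GiB', 'TiB', 'PiB', 'EiB'):
        if nbytes < 1024:
            return f'{nbytes:g} {unit}'
        nbytes /= 1024
    return f'{nbytes:g} ZiB'

# The theoretical and practical ceilings mentioned above.
for label, bits in (('full 64-bit space', 64),
                    ('x86_64 48-bit practical ceiling', 48),
                    ('Avoton-class 36-bit addressing', 36),
                    ('typical 8GB board limit (33 bits)', 33)):
    print(f'{label:38s} 2^{bits} = {human(2 ** bits)}')

# What this particular CPU claims to implement, per the kernel.
try:
    with open('/proc/cpuinfo') as f:
        for line in f:
            if line.startswith('address sizes'):
                print('This CPU reports:', line.split(':', 1)[1].strip())
                break
except OSError:
    pass  # non-Linux host, or /proc not mounted

The 2^33 row is just the ubiquitous 8GB ceiling expressed in the same terms, for comparison.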
like 4 sata ports on intel vs 8 or 10 on amd, 2 or 3 pcie slots (at x16,x1,x1 or something like that) compared to about 4 or 5 or more on amd (at x16,x16,x16,x4,x1 or x16,x8,x8,x1 or similar). and often only two DIMM sockets.
The motherboards with only two DIMM sockets tend to be the budget and/or SFF ones, such as the mini-ITX form-factor HTPC market -- thus driven by budget and physical space. Speaking for myself, having few sockets doesn't bother me much, as long as I can reach high total RAM by using dense sticks. It just means I have an incentive to use the densest supported sticks immediately, rather than have to yank one and put it in a drawer when I upgrade RAM a few years later. Basically, IMO, 'use dense RAM' is one of those basic lessons you learn through making dumb errors, like 'never fight a land war in Asia' and the only slightly less famous 'never go in against a Sicilian when death is on the line.'

In my area, people frequently give away five-year-old computing gear, and I'm always amused to note that, upon examination, you find it stuffed with 4GB SDRAM sticks. Why? Because the donor joyfully emptied his/her drawer full of 4GB sticks and extracted the 16GB ones before donating. (These are also the same guys who were trying to _sell_ hulking 17" tube-type monitors around 2005, after the world moved to LCD -- which merely meant they were trying to clear out a pile of obsolete monitors and testing the Greater Fool Theory.)
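To put a number on the dense-sticks point, a toy sketch -- the slot count, stick density, and upgrade path here are all assumed figures, purely for illustration:

# Toy illustration of the 'use the densest supported sticks' point above.
# Slot count, stick sizes, and the upgrade path are assumed figures.

SLOTS = 2            # e.g. a small mini-ITX board
DENSEST_GB = 16      # densest stick the board accepts (assumed)

print('Hard ceiling:', SLOTS * DENSEST_GB, 'GB')   # slots bound the end state

# Two ways to end up at that 32GB ceiling:
#   (a) populate both slots with 16GB sticks on day one
#   (b) start with 2 x 8GB, later pull both and buy 2 x 16GB
bought_a = 2 * 16
bought_b = 2 * 8 + 2 * 16
orphaned_b = 2 * 8   # the sticks that end up in a drawer, then in donated gear
print('GB purchased via (a):', bought_a)
print('GB purchased via (b):', bought_b, '-- of which', orphaned_b,
      'GB ends up in the drawer')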
to get the same sort of features (like ECC RAM on AM3+ m/bs) and I/O capability that AMD has on cheap consumer chips and boards, you have to spend a fortune on intel server cpus and motherboards.
Here's my perspective on ECC. I've used and worked on a very great deal of server gear with ECC RAM, and the advantages are obvious. But....

Let's assume you run Linux and are even moderately attentive to the operation of your machine. In my experience, you will become aware that you have a bad stick of RAM pretty quickly, because the signs will be unmistakeable, in the form of a pattern of segfaults and spontaneous reboots. Where I come from, if you see anything that even looks halfway like that sort of problem, you yank the machine for testing and stress-test the RAM with your choice of usual-suspect tools. And confirm or disconfirm your suspicion.

I wouldn't want to run a mission-critical database server without ECC, because one hour of corruption from single-bit RAM errors is an hour too much. But for, in particular, a home server running Linux, I no longer see ECC as in any way worth the very substantial extra cost for both RAM and motherboards.

In passing, I'll note that it's Linux itself that appears to make bad RAM quickly noticeable -- in my experience -- from system behaviour. I have known corporate WinNT (& successors) and Novell NetWare servers where, if you weren't running ECC, you would have no idea that data were being silently corrupted by passing through bad RAM.
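Tangentially, for anyone who does run ECC under Linux: the kernel's EDAC subsystem exposes corrected/uncorrected error counters under /sys, so you can actually watch for early warnings rather than waiting for segfaults. A minimal sketch, assuming a Linux host with an EDAC driver loaded for your memory controller:

#!/usr/bin/env python3
# Read the kernel's EDAC counters, where Linux reports corrected (ce) and
# uncorrected (ue) memory errors on ECC-capable hardware.  On non-ECC gear
# (or without the driver) the directory simply won't be there.

import glob
import os

controllers = sorted(glob.glob('/sys/devices/system/edac/mc/mc*'))
if not controllers:
    print('No EDAC memory controllers found (no ECC, or driver not loaded).')

for mc in controllers:
    counts = {}
    for name in ('ce_count', 'ue_count'):
        try:
            with open(os.path.join(mc, name)) as f:
                counts[name] = int(f.read().strip())
        except OSError:
            counts[name] = None
    print(f"{os.path.basename(mc)}: corrected={counts['ce_count']} "
          f"uncorrected={counts['ue_count']}")

A climbing ce_count is exactly the early warning non-ECC gear can't give you; any non-zero ue_count means it's time to yank the machine regardless.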
and it's not just X11. the arm distros are tiny compared to the full range of software in x86 debian, so you'll be spending a lot of time compiling if you need much of that. and then you'll find that the same sort of devs who think it's OK to write linux-only *nix software also think it's OK to write x86-only code. so you won't just be compiling, you'll be spending a lot of time fixing architecture-specific bugs and incompatibilities (which is, of course, part of the reason why the arm distros are tiny. if the source packages compiled OK, distro autobuilders would build them all)
Aha! Thanks for clarifying that. I actually hadn't figured out why the ARM variants of distros were relatively spare on package selection, but that explains it.
depends on how much risk you're willing to take (fairly low for non-ECC with properly memtested/burned-in DIMMs) and how much you're willing to spend up-front and in electricity bills to eliminate that risk.
Speaking of burning in: At the late VA Linux Systems, Inc., we developed in-house a tool called Cerberus Test Control System (CTCS) for burn-in of both newly manufactured systems and all systems RMAed back for any reason. It's still quite good. Open source. The 'Cerberus FAQ' entry on http://linuxmafia.com/kb/Hardware has more information.

My own way of finding bad RAM is iterative kernel compilation with 'make -j N' set high enough to fully saturate RAM just short of driving the system into swap. Details here:

http://linuxmafia.com/pipermail/conspire/2006-December/002662.html
http://linuxmafia.com/pipermail/conspire/2006-December/002668.html
http://linuxmafia.com/pipermail/conspire/2007-January/002743.html

That was an edge case, in that I had foolishly accepted discarded, suspect RAM from a company's data centre machines and optimistically assumed it might be good, which was a poor wager to make. It turned out that two of the system's four sticks were bad in ways that somehow partially offset each other, making it a bit of work to find the specific badness and isolate the cause. The punch line: All four sticks were ECC, running on a server-grade ECC-supporting Intel L440GX+ 'Lancewood' motherboard in a colo 2U chassis.
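If anyone wants to try that trick, here's a rough sketch of how I'd size N. The one-gigabyte-per-job figure is just an assumed working set to tune for your compiler and kernel config, not a measured constant:

#!/usr/bin/env python3
# Rough sizing of 'make -j N' for the kernel-compile RAM burn-in described
# above: big enough to keep all of RAM busy, small enough to stay out of
# swap.  The per-job memory estimate is a guess to tune, not a constant.

import multiprocessing

GB = 1024 ** 3
EST_BYTES_PER_JOB = 1 * GB      # assumed working set per compile job
HEADROOM_BYTES = 1 * GB         # leave a little room for the rest of the OS

def mem_total_bytes():
    """Read MemTotal from /proc/meminfo (Linux-specific)."""
    with open('/proc/meminfo') as f:
        for line in f:
            if line.startswith('MemTotal:'):
                return int(line.split()[1]) * 1024   # value is in kB
    raise RuntimeError('MemTotal not found')

ram = mem_total_bytes()
by_ram = max(1, (ram - HEADROOM_BYTES) // EST_BYTES_PER_JOB)
by_cpu = multiprocessing.cpu_count()

# For a burn-in you want RAM pressure, so take the RAM-derived figure
# even if it oversubscribes the CPUs a bit.
print(f'RAM: {ram / GB:.1f} GiB, CPUs: {by_cpu}')
print(f'Suggested: make -j {by_ram}')

The point of taking the RAM-derived figure rather than the CPU count is that, for burn-in, oversubscribing the cores a little is fine; it's memory pressure you're after.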
personally, i don't bother with ECC RAM at home - but that's only because it's harder to get.
I don't, because I'd rather spend the money on things I think are better value. But Views Differ.[tm]

[1] Thermal Design Power is the maximum rated heat emission that the associated cooling for a part might be called upon to handle. Thus, it is a maximum-case measure of the power the part could draw at peak loading. By implication, real-world usage will typically be much lower.
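For a sense of scale on those TDP numbers, the worst-case arithmetic -- treating TDP as sustained draw, which it won't be, and using a placeholder electricity rate:

# Worked arithmetic for the TDP footnote: what a given sustained draw costs
# per year.  Real draw is usually well below TDP, and the rate is a
# placeholder -- substitute your own.

RATE_PER_KWH = 0.20          # assumed price, in your local currency
HOURS_PER_YEAR = 24 * 365

for label, watts in (('Mullins-class SoC (4.5W TDP)', 4.5),
                     ('Beema-class SoC (15W TDP)', 15.0)):
    kwh = watts * HOURS_PER_YEAR / 1000
    print(f'{label}: {kwh:.0f} kWh/year, ~{kwh * RATE_PER_KWH:.2f}/year')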