RAM modules are cheaper than ever before, so why aren’t we running our entire operating system off super speedy RAM banks?
Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.
The Question
SuperUser reader pkr298 wants to know why we’re not running RAM-based, instead of disk-based, machines. He writes:
Of course, current operating systems may not support this at all, but is there any reason RAM isn’t used this way?

On the surface his inquiry makes sense, but clearly we’re not awash in RAM-based computer builds; what’s the back story?
The Answer
SuperUser contributor Hennes offers some insight into why we still use disk-based systems:
- Common desktop (DDR3) RAM is cheap, but not quite that cheap, especially if you want to buy relatively large DIMMs.
- RAM loses its contents when powered off, so you would need to reload it at boot time. Say you use an SSD-sized RAM disk of 100 GB; that means roughly a two-minute delay while 100 GB is copied from the disk.
- RAM uses more power (say 2–3 watts per DIMM, about the same as an idle SSD).
- To use that much RAM, your motherboard will need a lot of DIMM sockets and the traces to them. Usually this is limited to six or fewer. (More board space means more cost, and thus higher prices.)
- Lastly, you will also need RAM to run your programs in, so on top of that you need the normal working amount (e.g. 18 GiB, and enough to store the data you expect to use).
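To put the reload time in perspective, here is a back-of-the-envelope sketch in Python. The 100 GB figure comes from the answer above; the drive throughputs are assumed ballpark numbers rather than measurements, so treat the output as rough orders of magnitude.

```python
# Rough estimate of how long refilling a 100 GB RAM disk takes at boot.
# The drive speeds below are assumed ballpark figures, not benchmarks.

ramdisk_bytes = 100 * 10**9  # the 100 GB RAM disk from the example above

# Assumed sequential read speeds of the backing drive, in bytes per second.
drives = {
    "7200 RPM HDD (~150 MB/s)": 150 * 10**6,
    "SATA SSD (~500 MB/s)": 500 * 10**6,
    "NVMe SSD (~3 GB/s)": 3 * 10**9,
}

for name, bytes_per_second in drives.items():
    minutes = ramdisk_bytes / bytes_per_second / 60
    print(f"{name}: about {minutes:.1f} minutes to reload 100 GB")
```

At SATA SSD speeds that works out to roughly three minutes, in the same ballpark as the “about two minutes” above; a spinning disk pushes the wait past ten minutes.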
Having said that: yes, RAM disks do exist, even as PCI boards with DIMM sockets and as appliances for very high IOPS. (They were mostly used in corporate databases before SSDs became an option.) These things are not cheap, though, and a couple of low-end RAM disk cards did make it into production.
Note that there are far more ways of doing this than just creating a RAM disk in ordinary working memory (the simplest approach, illustrated in the sketch after the list below).
You can:
- Use a dedicated physical drive for it with volatile (dynamic) memory, either as an appliance or with a SAS, SATA, or PCI[e] interface.
- Do the same with battery-backed storage (no need to copy the initial data into it, since it keeps its contents as long as the backup power holds).
- Use static RAM rather than DRAM (simpler, more expensive).
- Use flash or other permanent storage to keep all the data (warning: flash usually has a limited number of write cycles). If you use flash as the only storage, then you have just moved to SSDs. If you store everything in dynamic RAM and save to a flash backup on power down, then you are back to appliances.
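As a concrete illustration of the simplest option, a RAM disk carved out of ordinary working memory, the sketch below compares write throughput to a tmpfs mount against a directory on a normal disk. It assumes a Linux system where /dev/shm is RAM-backed tmpfs (a common default, but not guaranteed on every setup), and the 256 MiB test size is an arbitrary choice.

```python
# Minimal sketch (Linux assumption): compare write throughput to a RAM-backed
# tmpfs mount (/dev/shm) against a directory on the regular disk.

import os
import time

def write_throughput(path: str, size_mib: int = 256) -> float:
    """Write size_mib of data to a file under `path` and return MiB/s."""
    chunk = os.urandom(1024 * 1024)  # 1 MiB of random data
    target = os.path.join(path, "ramdisk_test.bin")
    start = time.perf_counter()
    with open(target, "wb") as f:
        for _ in range(size_mib):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force the disk-backed case to actually hit the disk
    elapsed = time.perf_counter() - start
    os.remove(target)
    return size_mib / elapsed

if __name__ == "__main__":
    for label, path in [("tmpfs (RAM-backed)", "/dev/shm"), ("regular disk", ".")]:
        print(f"{label}: {write_throughput(path):.0f} MiB/s")
```

On a typical machine the tmpfs number comes out several times higher, which is exactly the appeal of RAM disks, and also why the volatility and reload caveats above matter.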
I am sure there is far more to describe, from the Amiga’s RAD: (a reset-surviving RAM disk) to IOPS, wear leveling, and G-d knows what else. However, I will cut this short and only list one more item:
DDR3 (current DRAM) prices versus SSD prices:
DDR3: €10 per GiB, or €10,000 per TiB
SSDs: Significantly less (about 1/4th to 1/10th of that).
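For scale, a quick cost calculation from those quoted prices. The €10 per GiB DDR3 figure is from the answer; the SSD range is simply derived from the quoted “1/4th to 1/10th” ratio rather than being an independent price.

```python
# Back-of-the-envelope cost of 1 TiB of storage at the prices quoted above.
# The SSD range is derived from the "1/4th to 1/10th" ratio, not a price quote.

capacity_gib = 1024              # 1 TiB
ddr3_eur_per_gib = 10.0          # quoted DDR3 price

ssd_low = ddr3_eur_per_gib / 10  # 1/10th of the DDR3 price per GiB
ssd_high = ddr3_eur_per_gib / 4  # 1/4th of the DDR3 price per GiB

print(f"DDR3 for 1 TiB: ~€{capacity_gib * ddr3_eur_per_gib:,.0f}")
print(f"SSD for 1 TiB:  ~€{capacity_gib * ssd_low:,.0f} to €{capacity_gib * ssd_high:,.0f}")
```

That roughly €10,000 per TiB for DRAM versus €1,000–2,500 for SSDs is the gap that keeps all-RAM storage a niche choice.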
If you want to read more about RAM disks, check out RAM Disks Explained: What They Are and Why You Probably Shouldn’t Use One.

Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.