Originally Posted by transhour
If I had more RAM, maybe 16 gigs, I would definitely invest the time and effort it takes to get a ramdisk set up properly and have it sync back to the hard drive on changes.
It could be done on 4 gigs. Out of my 4 gigs I use 1 gig for a /tmp partition, and I've seen a bit of an advantage from it, especially when compiling. I might need to rework it and give it 2 gigs for bigger projects, dunno yet. They aren't hard to set up.
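For reference, the /tmp setup transhour describes is just a tmpfs entry in /etc/fstab. A minimal sketch (the 1G size matches his example; exact options vary by distro):

Code:
    # /etc/fstab: keep /tmp in RAM, capped at 1G
    tmpfs   /tmp   tmpfs   defaults,size=1G,mode=1777   0   0

After adding the line, "mount /tmp" (or a reboot) activates it, and everything written to /tmp lives in RAM from then on.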
That might not work out as well as you would think, especially with compiling. You don't just need the distro, you need the development files too, and those add a lot of space; I downloaded over 500MB of dev files alone. My / is about 10G total, and that's only the applications I have and need for a working desktop. If you think that's a lot: LibreOffice is 300MB, compiled KDE is 800MB (a low estimate, more like 1G), and the KDE source is 1.5G. You might keep the source on the drive, that's fine, but you still have to push KDE itself into RAM. My guess is the average distro install runs ~6-8G, and that's not a custom/minimal install. I know it can be done, but once you add a full blown desktop the size really starts to climb.
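If you want real numbers for your own box instead of my estimates, du will tell you what a ramdisk root would actually have to hold. A quick sketch (the paths are just the usual suspects; adjust for your layout):

Code:
    # how big is the stuff a ramdisk root would need to hold?
    du -sh /usr /opt /var /etc 2>/dev/null
    df -h /        # total used on the root filesystem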
I could use a 4G minimal install, but then just running a browser would push RAM use above 5G. That might not seem like a crazy amount, but add compiling (this is more specific to transhour) and you're looking at 7-8G minimum in use. That's a hell of a lot of RAM, and yes, compiling will use it. I've actually hard locked my desktop compiling KDE; I was using too much RAM and the thing locked up. So for this type of setup you'd need at least 8G to make it feasible. When it hard locked I was only using 800MB out of 2G for everything else, and 1.2G (possibly more) went just to the compile. I had to shut down and restart the make process without my browser running, and even then make would eat about 1.5G of RAM at certain peaks.
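If you have to compile on a RAM-starved box, you can at least watch the headroom and throttle the job count. A sketch (the -j value is a guess; tune it to your cores and RAM):

Code:
    # watch memory headroom while the build runs
    watch -n1 free -m
    # fewer parallel jobs means a lower peak RAM during the compile
    make -j2       # instead of make -j$(nproc)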
If you wanted a full blown distro in a ramdisk I'd estimate 6G on the minimum side; my best guess is at least 8G. Anything less and you're dealing with compression, and then you're not only eating RAM but eating CPU cycles like mad: the higher the compression (less RAM used), the higher the CPU load. You're going to end up with a dual core as a minimum requirement, probably a quad if you want any IO intense programs (compiling), and that's still going to suck hard with compression.
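To make the trade-off concrete: a compressed ramdisk on Linux usually means zram (the old compcache), and every block written to it costs CPU to compress. A rough sketch, assuming your kernel ships the zram module (sysfs details can differ between kernel versions):

Code:
    modprobe zram num_devices=1
    echo $((2*1024*1024*1024)) > /sys/block/zram0/disksize   # 2G of uncompressed capacity
    mkfs.ext2 /dev/zram0
    mkdir -p /mnt/compressed-ram
    mount /dev/zram0 /mnt/compressed-ram

Every read and write on /mnt/compressed-ram now goes through the compressor, and that is exactly the CPU tax I'm talking about.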
Basically, if you want to do a lot of IO intense stuff you can't use compression, and if you use compression you can't do IO intense stuff. You can't just decide to switch, either; if you do, you'll sit there waiting for everything to compress/decompress before you can make changes (using a crap ton of RAM in the process). At that point you might as well use temp storage, i.e. a hard disk. But then you're just doing what we do now: store the IO intense files/data in RAM or cache and use a physical drive for everything else.
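That middle ground looks something like this: the sources stay on the physical drive and only the scratch space lives in RAM. A sketch with made-up paths (KDE 4 builds with cmake, so an out-of-tree build works here):

Code:
    # build scratch space in RAM, sources stay on disk
    mkdir -p /mnt/build
    mount -t tmpfs -o size=2G tmpfs /mnt/build
    cd /mnt/build
    cmake ~/src/kdelibs && make -j2   # object files and temp junk live in RAM
    # unmount and the scratch evaporates; the source tree is untouched

You get the fast IO where it matters (object files, temp files) without committing your whole install to RAM.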
It's feasible for a LiveCD because you're not meant to be doing everyday tasks on it, and you're certainly not meant to compile. But once you get into compiling and anything advanced, you're going to start eating RAM the way you eat HDD space. For instance, it takes 1.5G just to store the KDE source. If you plan on using Slackware, how much source do you plan to store? Are you going to keep all of it on the physical drive? Then why use the ramdisk at all? It makes no sense; you'll have to copy all the files to the ramdisk or decompress everything during boot anyway, and decompressing 4G of data is going to be a pain. And why copy 4G of data into a ramdisk in the first place? It's a lot of redundant, wasted time for what, "snappier"? You can get that same performance if you tweak your system right; I just don't see the point.
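For the record, the boot/shutdown shuffle transhour mentioned would look something like this. A minimal sketch, assuming a tmpfs root mirror at made-up paths:

Code:
    # at boot: stage the system into RAM (this is the slow part)
    mount -t tmpfs -o size=8G tmpfs /mnt/ramroot
    rsync -a /mnt/disk-copy/ /mnt/ramroot/
    # at shutdown (or on a timer): push changes back before they evaporate
    rsync -a --delete /mnt/ramroot/ /mnt/disk-copy/

Miss that second rsync even once (power loss, or one of the hard locks I described) and everything since the last sync is gone.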
I know this is already a lot, but then you get into the compile process itself, which is RAM intensive on its own. Once you're using RAM for the compile while also using it for everyday file IO (browser temp files, internet video/audio streams), the IO of browsing the "drives", and the temp files and IO of the terminal or the DE, you'll eventually hit the point where, no matter how much RAM is actually in use, you're putting enormous stress on the overall input/output of the bus. Your computer is going to become completely unresponsive during that time. This isn't the normal lag you might get from just compiling; it's much worse, because it's not just the compile relying on RAM, EVERYTHING relies on it. You're going to saturate the data bus, too much data flow will cause lockups, and that just compounds the issue. That's why we went to SSDs: we can push them faster than standard drives while keeping the other resources (RAM/CPU) free. You can't expect to run IO intensive applications on a ramdisk at all; it's just going to cause problems.
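You don't have to take my word for the pressure, either; vmstat during a heavy compile will show it. A sketch:

Code:
    # one sample per second: watch free memory shrink and swap in/out (si/so),
    # block IO (bi/bo) and IO-wait (wa) climb as the build ramps up
    vmstat 1

On an all-in-RAM setup the free column is the one that falls off a cliff, and once it does, everything stalls together.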
[edit2] On top of the problems above, you also have to account for a filesystem. Just like when you format a drive, the filesystem itself takes some of that space, and compressed ramdisk vs uncompressed changes the math again. It's one thing to create a minimal system and run it; it's another to create a completely functional system with drivers that install properly, custom kernels, blah blah.
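To put a number on that filesystem overhead, format a plain ramdisk and compare what df reports against the raw size. A sketch, assuming your kernel provides the brd ramdisk module (parameter names vary by kernel):

Code:
    modprobe brd rd_nr=1 rd_size=1048576   # one ramdisk, size in KB (~1G)
    mkfs.ext2 -m0 /dev/ram0                # -m0: no reserved blocks
    mkdir -p /mnt/ramtest
    mount /dev/ram0 /mnt/ramtest
    df -h /mnt/ramtest                     # available < 1G; the difference is metadata

tmpfs dodges some of this because it has no on-"disk" format, but a real filesystem image in RAM pays the same tax a hard drive does.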