Ramdisk Linux - Page 2

post #11 of 24
Thread Starter 
Actually it will most likely be manual. I see non-persistent storage as both a benefit and a drawback: many times I'll make changes to test something new but end up undoing them later, so I'll want to write changes back selectively. All of my documents are accessed from a network share, so I won't have an issue with losing work.

I won't copy everything. I'm thinking I need /bin /etc /home /lib* /sbin /usr /var. Any others?
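For reference, the copy step would look roughly like this (a sketch only; /mnt/ram and the 8G size are placeholders, not my actual setup):
Code:
# mount a tmpfs to hold the system copy (size is a placeholder)
mount -t tmpfs -o size=8G tmpfs /mnt/ram
# copy the chosen trees, preserving permissions, owners, and symlinks
for d in /bin /etc /home /lib* /sbin /usr /var; do
    cp -a "$d" /mnt/ram/
done
# recreate the virtual/mount-point dirs that don't get copied
mkdir -p /mnt/ram/proc /mnt/ram/sys /mnt/ram/dev /mnt/ram/tmp /mnt/ram/root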
post #12 of 24
Depends on your distro of choice, but /opt. I would drop /home and keep it on a physical partition (programs don't run from there, just some configuration files and your personal files). /var might do better on a physical partition too, but with reiserfs instead of one of the ext filesystems. Also, don't forget /tmp.
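The /tmp part is just one fstab line, something like (size is whatever you can spare):
Code:
# /etc/fstab -- keep /tmp in RAM, capped at 1G, world-writable with sticky bit
tmpfs   /tmp   tmpfs   size=1G,mode=1777   0 0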
post #13 of 24
Quote:
Originally Posted by transhour
If I had more RAM, maybe 16 gigs, I would definitely invest the time and effort it would take to get a ramdisk set up properly and have it sync back to the hard drive on changes.

It could be done on 4 gigs. I use 1 gig of my 4 for a /tmp partition, and I've seen a bit of advantage from it, especially when compiling. I might need to rework it and give it 2 gigs for bigger projects, dunno yet. They aren't hard to set up.
That might not work out as well as you would think, especially with compiling. You don't just need the distro, you need the development files, which add a lot more space; I downloaded over 500MB of dev files alone. My / totals about 10G, and that's all the applications I have and require for a working desktop. If you think that's a lot: LibreOffice is 300MB, KDE compiled is 800MB (a low estimate, more like 1G), and its source is 1.5G. You might keep the source on the drive, that's fine, but you still have to push KDE into RAM. I'm guessing the average install for any distro is around ~6-8G, and that's not a custom/minimal install. I know it can be done, but once you add a full-blown desktop the size starts to climb.

I could use a 4G minimal install, but then just running a browser would push RAM use above 5G. That might not seem like a crazy amount, but add compiling (this is more specific to transhour) and you're going to use at least 7-8G minimum. That's a hell of a lot of RAM, and yes, compiling will use it. I've actually hard-locked my desktop compiling KDE; I was using too much RAM and the thing locked up. So for this type of setup you would need at least 8G to make it feasible. When it hard-locked I was only using 800MB out of 2G, and 1.2G was used just for compiling (possibly more). I had to shut down and restart the make process without running my browser, and the make would still eat up about 1.5G of RAM at certain peaks.

If you wanted a full-blown distro put into a ramdisk, I'd estimate 6G on the minimum side; my best guess is at least 8G. Anything less and you're dealing with compression, and then you're not only going to eat RAM but eat CPU cycles like mad. The higher the compression (less RAM), the higher the CPU load. You're going to end up with a dual core as a minimum requirement, probably a quad if you want any IO-intense programs (compiling), and that's going to suck hard with compression.

Basically, if you want to do a lot of IO-intense stuff you can't use compression, and if you use compression, you can't do IO-intense stuff. You can't decide to switch, and if you do you'll end up waiting for everything to compress/decompress before you make changes (using a crap ton of RAM). At that point you might as well use temp storage, a hard disk. Then you're just doing what we do now: store IO-intense files/data in RAM or cache and use a physical drive for everything else.

It's feasible for a LiveCD because you're not meant to be doing everyday tasks, and you're not meant to compile. But once you get into compiling and anything advanced, you're going to start eating up RAM the way you do HDD space. For instance, it takes 1.5G just to store the source code for KDE. If you plan on using Slackware, how much source do you plan to store? Are you going to store all that on the physical drive? But then why use the RAM disk? It makes no sense; you'll have to copy all the files to the ramdisk or decompress it all during boot anyway. Decompressing 4G of data is going to be a pain. Not only that, why copy 4G of data to a ramdisk? It just seems like a lot of redundant and wasted time for what, "snappier"? You can get the same performance if you tweak your system right anyway. I just don't see the point.
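If you want to check these numbers against your own install, du gives a quick answer:
Code:
# total size you'd have to hold in RAM (-x stays on one filesystem,
# so /proc, /sys, /dev, and other mounts are skipped automatically)
du -shx /
# per-tree breakdown, to see what could stay on disk
du -sh /bin /etc /lib* /sbin /usr /var 2>/dev/null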

[edit] I know this is already a lot, but then consider the compile process itself, which is already RAM-intensive. When you start using RAM to compile, while also using it for the everyday file IO of browsing (temp files, data streams: internet video/audio), plus the IO of browsing the "drives", or just the terminal/DE using temp files and IO functions, you eventually hit the problem that, regardless of how much RAM is in use, you're putting a lot of stress on the overall input/output of the bus. Basically your computer is going to become completely unresponsive during that time. This isn't like the normal lag you might get just from compiling; it's going to be much worse, because not only are you relying on the RAM for the compile process, EVERYTHING relies on it. You're going to saturate the data bus, and too much data flow will cause lockups, which just compounds the issue. That's why we went SSD: we can push them faster than standard drives while keeping the other resources (RAM/CPU) free. You can't expect to do IO-intensive applications on a ramdisk at all; it's just going to cause problems.

[edit2] Not only will you run into the problems above, but you have to account for the filesystem itself. Just like when you format a drive, the filesystem will take some of that space (compressed ramdisk vs. uncompressed, etc.). It's one thing to create a minimal system and run it; it's another to create a completely functional system with drivers that install properly, custom kernels, and so on.
Edited by mushroomboy - 2/15/11 at 1:55pm
post #14 of 24
I simply have no idea what you are ranting about.

I have a 1 gig ramdisk for my /tmp. I'm not compiling KDE, GNOME, or LibreOffice from it; if I'm compiling something that large, I push it back to the hard drive, because I know I'm going to need all the RAM I can get.

But compiling little programs from it works out just fine with 3 gigs of RAM available.

I think you are taking the extreme end of this debate, and not for a good cause. If you have 16 gigs of RAM and use 8 of it for a ramdisk, that is more than enough for a base install of, say, Ubuntu with only GNOME as its primary desktop. Now if you go stupid crazy and start installing every piece of random garbage, and put /var in the ramdisk, then yeah, things are going to get cluttered up really fast, since /var is used for apt's cache.
post #15 of 24
Thread Starter 
Yeah... considering the size of some sources, I probably do need to keep at least parts of /var on disk. I run Gentoo, so I need to be able to compile.
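(For what it's worth, the usual Gentoo middle ground is to leave /var on disk and mount just Portage's build directory as tmpfs, which gets most of the compile speedup. A sketch; the 4G size is a guess, and a few big packages need more:)
Code:
# /etc/fstab -- compile in RAM without moving all of /var there
tmpfs   /var/tmp/portage   tmpfs   size=4G,uid=portage,gid=portage,mode=775   0 0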
post #16 of 24
Quote:
Originally Posted by transhour
I simply have no idea what you are ranting about.

I have a 1 gig ramdisk for my /tmp. I'm not compiling KDE, GNOME, or LibreOffice from it; if I'm compiling something that large, I push it back to the hard drive, because I know I'm going to need all the RAM I can get.

But compiling little programs from it works out just fine with 3 gigs of RAM available.

I think you are taking the extreme end of this debate, and not for a good cause. If you have 16 gigs of RAM and use 8 of it for a ramdisk, that is more than enough for a base install of, say, Ubuntu with only GNOME as its primary desktop. Now if you go stupid crazy and start installing every piece of random garbage, and put /var in the ramdisk, then yeah, things are going to get cluttered up really fast, since /var is used for apt's cache.
Right, but even if you have that much RAM, it doesn't mean the IO bus can handle moving all that data. Simply put, you can't run IO-intensive applications off the disk and still expect the RAM to run anything remotely decent on a ramdisk. That's not extreme; that's the basics of building a distro on a ramdisk. You have to think of the entire picture, not just one side. Having the bandwidth to move X data doesn't mean you can move X data both ways; reads and writes share the bus, so you get roughly half of X each way, not a 1:1 ratio.

And the compression thing isn't actually that crazy. Remember that all SquashFS systems are compressed: how much compression would you need to make a ~6G system reasonable? Is the compression worth it, and how much will it cost you in CPU/RAM?
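You can measure that trade-off directly by packing the same tree with different compressors (a sketch; paths are made up, and xz needs a reasonably recent squashfs-tools):
Code:
# same input, different compressors: compare image size vs. (de)compress cost
mksquashfs /mnt/system system-gzip.sfs -comp gzip
mksquashfs /mnt/system system-xz.sfs   -comp xz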

Honestly, you have to put all of that into the equation to make a RAM-based distro; without it you're just throwing stuff at it and hoping it runs smooth. It's exactly what developers deal with when programming games for consoles: a limited amount of IO/space, yet graphics, physics, audio, etc. all have to go over the same bus. That's why the PS3 went with Blu-ray; they could drop image compression and put less strain on the SPEs. Everything plays a part. If you remove something, you have to make sure the other parts can absorb it, with enough leeway that you don't hit lockups or slowdown.

[edit] Not to mention, in a squashfs setup you can't write back into the squashfs. That's why persistent files were created (cw-something? I don't remember what they're named), but you have to have a separate entity to save your data (new programs, drivers, new kernels, etc.). Will you run that with compression? If so, how? Look at how persistent USB drives work: we currently don't have a very good method of making a persistent drive without squashfs plus an outside file. You would have to write your own management system/scripts to do that.
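The standard glue between the read-only image and the persistent file is a union mount (aufs/unionfs back then; on current kernels it's overlayfs). A rough sketch, with made-up paths, where upperdir and workdir must live on the same writable filesystem:
Code:
# read-only system image as the lower layer
mount -t squashfs /images/system.sfs /mnt/ro
# writable persistent storage as the upper layer
mount -t overlay overlay \
    -o lowerdir=/mnt/ro,upperdir=/persist/upper,workdir=/persist/work \
    /mnt/root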

[edit2] What I'm saying is, it's not as simple as just "copying to RAM". Once you have the ramdisk, you have to mount it in a way the kernel understands while keeping your original files. Will you switch path schemes? How will you manage the system links like /usr/lib and /usr/bin? Will you boot two kernels? It gets much more complicated than "copy to RAM", and it does so very fast. Putting /tmp in RAM isn't hard; you can mount /tmp directly. But for /usr you'd have to stage a separate copy and then re-mount it from RAM in a way the system understands as mounting over /usr. I just don't see a feasible way to make a hybrid system without a lot of work. And the work for what? You'll have to wait for everything to be copied/loaded, unless you boot the system and then switch it out (which will still take a while).
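The mount-over-/usr trick itself isn't magic, just fragile; a sketch of the idea (placeholder paths, and safest done from an initramfs before anything holds files open under /usr):
Code:
# stage a copy of /usr in RAM...
mount -t tmpfs -o size=4G tmpfs /mnt/usr-ram
cp -a /usr/. /mnt/usr-ram/
# ...then shadow the on-disk /usr; existing paths keep working
mount --bind /mnt/usr-ram /usr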

[edit3] I solved the mount scheme, lol, I'm dumb. You could use an initrd to copy all the files to /<destination> and then boot the system. But then you have to wait for everything to copy, and that could take a while (quite a while), every single boot. And you won't be able to save new files unless you write a script to copy them from RAM back to the hard drive, which could be done at shutdown. That's still kind of sloppy: if the machine gets shut off or loses power before that runs, you end up with a non-functional system. Essentially you're still copying the entire thing to RAM.
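The write-back half would be a shutdown script along these lines (device and paths are hypothetical; anything that dies before it runs is lost):
Code:
#!/bin/sh
# run late in the shutdown sequence: flush ramdisk changes back to disk
mount /dev/sda2 /mnt/disk                  # the real root partition (hypothetical)
rsync -aHAX --delete /mnt/ram/ /mnt/disk/  # only changed files get written
sync
umount /mnt/disk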
Edited by mushroomboy - 2/15/11 at 2:29pm
post #17 of 24
Thread Starter 
I'm not running squashfs here, by the way. This is a standard HDD install of Gentoo (actually iSCSI, but the result is the same) whose initrd script I modified to copy everything to a local tmpfs and boot from the tmpfs.
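(The core of that initrd change is only a few lines; roughly, with placeholder mount points and size:)
Code:
# in the initramfs, after the real root is mounted at /mnt/root
mount -t tmpfs -o size=8G tmpfs /mnt/ram
cp -a /mnt/root/. /mnt/ram/
umount /mnt/root
# make the tmpfs the new root and start init from it
exec switch_root /mnt/ram /sbin/init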
post #18 of 24
Quote:
Originally Posted by evermooingcow
I'm not running squashfs here, by the way. This is a standard HDD install of Gentoo (actually iSCSI, but the result is the same) whose initrd script I modified to copy everything to a local tmpfs and boot from the tmpfs.
Right, but what type of drive are you running? A standard 3Gb/s SATA drive would take forever to copy/read my entire system; I'm not going to wait 5 minutes, possibly more, for every reboot. Then, at shutdown, I'd have to have a script check everything and write any changes to the disk. I mean, it's a great solution if you never add anything new, but by then you might as well just run a LiveCD from the hard drive. If you're using a real SCSI drive, I don't see much performance gain from this.

[edit] Are you copying EVERYTHING? The entire root /, or just certain things?

I'm dumb, I didn't read the OP completely, lol. I'm writing a sociology paper at the same time, so my focus is elsewhere. But everything I've said is what you'd have to think about for a persistent drive. If you're doing a non-persistent drive you could pull this off, but it's going to take a LOT of RAM.
Edited by mushroomboy - 2/15/11 at 2:36pm
post #19 of 24
rsync wouldn't take that much time; it only commits the changes. It would probably still add a few minutes to the shutdown process, though.
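A dry run gives a feel for how much a given session would actually add to shutdown (paths hypothetical):
Code:
# -n = dry run: list and total the changes without writing anything
rsync -aHAXn --delete --stats /mnt/ram/ /mnt/disk/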
post #20 of 24
Thread Starter 
Currently I just have a quick and dirty setup that copies everything, to confirm it actually works. I plan to start refining it when I get home today.

I don't know exactly what speeds I'm getting when transferring the files. I can saturate a gigabit link transferring a large file, but I don't know about many small files. I may need to try another FS. Maybe copying an archive and expanding it in RAM would be faster.
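Something like this is the idea (paths are placeholders): one big sequential read instead of thousands of small-file round trips:
Code:
mount -t tmpfs -o size=8G tmpfs /mnt/ram
# -x extract, -p preserve permissions, -C unpack straight into the tmpfs
tar -xpf /images/system.tar -C /mnt/ram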

How painful this setup will be depends directly on how often you reboot. Yes, the boot is currently at ~5 min, and rsyncing before shutdown will take additional time if you have changes you want to keep.

Part of the reason I tried all of this was in preparation for my file server upgrade. I'm going to be booting off a USB stick or CF card, and to limit writes I want to set it up like a LiveCD. I now have a setup in mind that I can adapt from this experiment.