Overclock.net › Forums › Software, Programming and Coding › Operating Systems › Linux, Unix › echo "The `uname` Club" (NEW POLL)

echo "The `uname` Club" (NEW POLL) - Page 94

Poll Results: How long have you been using your current, main installation?

 
  • 24% (50)
    less than a month
  • 23% (47)
    less than six months
  • 14% (30)
    less than a year
  • 24% (49)
    less than three years
  • 13% (27)
    three years+
203 Total Votes  
post #931 of 4043
I run ZFS on my home server. I'm using the zfsonlinux PPA (a project out of LLNL; ZFS can't be added to the kernel due to licensing issues, so it instead takes a third-party approach as a kernel module) with an Ubuntu minimal install. It's quite stable and usable on Linux (except for root filesystems, I believe). Anyway, I just use it as a NAS, so I don't care about that. I love it. Currently running two 1TB drives mirrored, but in the past I tried a 3-drive RAIDZ which benchmarked really well. There's also deduplication and compression available.
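For anyone curious what that looks like in practice, a two-disk mirror like the one above only takes a couple of commands. These are illustrative admin commands, not something to paste blindly: the pool name and device paths are made up, and they need root plus the ZFS module loaded.

```shell
# Create a mirrored pool named "tank" from two placeholder disks.
zpool create tank mirror /dev/sdb /dev/sdc

# Compression and deduplication are properties you flip on per pool/dataset:
zfs set compression=on tank
zfs set dedup=on tank

# Verify the pool layout and health.
zpool status tank
```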
2017 Build
(10 items)
 
   
CPUMotherboardGraphicsRAM
Ryzen 7 1700X ASRock X370 Killer SLI/ac PowerColor R9 280 3GB 2x Corsair Vengeance LPX 32GB DDR4-3200 (4x16GB) 
Hard DriveHard DriveCoolingMonitor
Sandisk Ultra II 960GB SSD Mushkin Reactor 960GB MLC SSD Corsair H110i 34" LG 34UC88-B 3440x1440 
PowerCase
EVGA SuperNOVA G2 750W Phanteks Enthoo Evolv ATX TG 
CPUCPUCPUCPU
AMD Opteron 8431 AMD Opteron 8431 AMD Opteron 8431 AMD Opteron 8431 
MotherboardRAMHard DriveOptical Drive
Supermicro H8QME-2+ 32GB DDR2-667 ECC Registered (16x2GB) 2x Samsung F3 1TB 2x Toshiba 5TB 
CoolingOSMonitorPower
4x Hyper TX-3 Debian Wheezy Headless Corsair CX750M 
  hide details  
Reply
post #932 of 4043
Wow...that FS sounds way too complicated for my needs.
     
CPUGraphicsRAMHard Drive
Intel Core m3-6Y30 Intel HD515 8GB 1866DDR3L Micron M600 MTFDDAV256MBF M.2, 256 GB 
CoolingOSOSMonitor
Fanless Win10 Home x64 Kubuntu 16.04 (requires Linux kernel 4.5/4.6) 13.3 inch 16:9, 1920x1080 pixel, AU Optronics A... 
CPUMotherboardGraphicsRAM
AthlonIIX4 640 3.62GHz (250x14.5) 2.5GHz NB Asus M4A785TD-M EVO MSI GTX275 (Stock 666) 8GBs of GSkill 1600 
RAMHard DriveHard DriveHard Drive
4GBs of Adata 1333 Kingston HyperX 3k 120GB WD Caviar Black 500GB Hitachi Deskstar 1TB 
Optical DriveCoolingOSOS
LG 8X BDR (WHL08S20) Cooler Master Hyper 212+ Kubuntu x64 Windows 7 x64 
OSMonitorPowerCase
Bodhi Linux x64 Acer G215H (1920x1080) Seasonic 520 HAF912 
CPUMotherboardGraphicsRAM
N450 1.8GHz AC and 1.66GHz batt ASUS proprietary for 1001P GMA3150 (can play bluray now!?) 1GB DDR2 
Hard DriveOptical DriveOSOS
160GB LGLHDLBDRE32X Bodhi Linux Fedora LXDE 
OSOSMonitorKeyboard
Kubuntu SLAX 1280x600 + Dell 15inch Excellent! 
PowerCase
6 cells=6-12hrs and a charger 1001P MU17 Black 
  hide details  
Reply
     
post #933 of 4043
Thread Starter 
Quote:
Originally Posted by Rookie1337 View Post

Wow...that FS sounds way too complicated for my needs.

actually, it's very simple to admin - it's just completely different than managing filesystems like xfs or ext4.
post #934 of 4043
Quote:
Originally Posted by stolid View Post

I run ZFS on my home server. I'm using the zfsonlinux PPA (a project out of LLNL; ZFS can't be added to the kernel due to licensing issues, so it instead takes a third-party approach as a kernel module) with an Ubuntu minimal install. It's quite stable and usable on Linux (except for root filesystems, I believe). Anyway, I just use it as a NAS, so I don't care about that. I love it. Currently running two 1TB drives mirrored, but in the past I tried a 3-drive RAIDZ which benchmarked really well. There's also deduplication and compression available.
Why not just run FreeBSD? It's every bit as good as Linux but with stable ZFS support.
Quote:
Originally Posted by Rookie1337 View Post

Wow...that FS sounds way too complicated for my needs.
I've been running ZFS for years, and one of the attractions is how simple it is to wield such power. It really is a fantastic piece of technology.
Quote:
Originally Posted by jrl1357 View Post

basically, it's the first true self-healing, easy-snapshot filesystem, and it does a complete retake on how data is stored. zpools can span multiple disks and partitions even without RAID; a variation of this, raidz, is a software RAID built right into the filesystem. btrfs, developed by Sun's then-competitor and now owner, Oracle, tries to address these same issues (the lack of snapshotting, data integrity, etc.) in a more conventional way, but is as of yet unstable and not fully self-healing. unlike zfs, though, it can be used with linux.
To be honest, one of the early criticisms against BtrFS was that its implementation was more unconventional than ZFS's (BtrFS didn't have a clear separation between hardware, filesystem, and user-space code, whereas ZFS does).

Things may have changed though
post #935 of 4043
Thread Starter 
Quote:
Originally Posted by Plan9 View Post

Quote:
Originally Posted by stolid View Post

I run ZFS on my home server. I'm using the zfsonlinux PPA (a project out of LLNL; ZFS can't be added to the kernel due to licensing issues, so it instead takes a third-party approach as a kernel module) with an Ubuntu minimal install. It's quite stable and usable on Linux (except for root filesystems, I believe). Anyway, I just use it as a NAS, so I don't care about that. I love it. Currently running two 1TB drives mirrored, but in the past I tried a 3-drive RAIDZ which benchmarked really well. There's also deduplication and compression available.
Why not just run FreeBSD? It's every bit as good as Linux but with stable ZFS support.
Quote:
Originally Posted by Rookie1337 View Post

Wow...that FS sounds way too complicated for my needs.
I've been running ZFS for years, and one of the attractions is how simple it is to wield such power. It really is a fantastic piece of technology.
Quote:
Originally Posted by jrl1357 View Post

basically, it's the first true self-healing, easy-snapshot filesystem, and it does a complete retake on how data is stored. zpools can span multiple disks and partitions even without RAID; a variation of this, raidz, is a software RAID built right into the filesystem. btrfs, developed by Sun's then-competitor and now owner, Oracle, tries to address these same issues (the lack of snapshotting, data integrity, etc.) in a more conventional way, but is as of yet unstable and not fully self-healing. unlike zfs, though, it can be used with linux.
To be honest, one of the early criticisms against BtrFS was that its implementation was more unconventional than ZFS's (BtrFS didn't have a clear separation between hardware, filesystem, and user-space code, whereas ZFS does).

Things may have changed though

I haven't gone very deep into it, so you may be right, but on the face of it btrfs manages partitions/RAID/etc. the same way as everything else, whereas zfs does it in a completely different way than other FSs.

EDIT--

I'm talking about the end user perspective, not really the code.
Edited by jrl1357 - 1/8/13 at 8:30am
post #936 of 4043
Quote:
Originally Posted by jrl1357 View Post

I haven't gone very deep into it, so you may be right, but on the face of it btrfs manages partitions/RAID/etc. the same way as everything else, whereas zfs does it in a completely different way than other FSs.
EDIT--
I'm talking about the end user perspective, not really the code.

Well yeah, I know what BtrFS is. But that doesn't explain why it's considered more conventional than ZFS. Most of the time, the biggest complaint against ZFS is that it's not GPL (which is just a dumb complaint, in my opinion).
post #937 of 4043
Quote:
Originally Posted by Plan9 View Post

Well yeah, I know what BtrFS is. But that doesn't explain why it's considered more conventional than ZFS. Most of the time, the biggest complaint against ZFS is that it's not GPL (which is just a dumb complaint, in my opinion).

So, to clear up a few things: ZFS is an open-source filesystem. People complain that it isn't licensed under the GPL because the GPL is compatible with the Linux kernel, meaning that code licensed under it can be included in the kernel. ZFS, meanwhile, is licensed under Sun Microsystems' CDDL, which for various reasons is incompatible with the GPL, so code licensed under the CDDL cannot be included in the Linux kernel.

The intention of BTRFS is to be a fully modern and scalable filesystem licensed under the GPL for Linux. The goal is to have features comparable to those found in ZFS, although at this time ZFS is still ahead as it's been around a lot longer. That said, if you don't want to learn FreeBSD but want a ZFS-esque filesystem BTRFS is probably a good choice. I've listed the features they have in common below:

  • Both ZFS and BTRFS integrate the volume manager at the filesystem level, eliminating the need for a separate volume manager such as LVM and making it trivial to grow and shrink storage pools on demand
  • Both ZFS and BTRFS support adding multiple devices to a single pool, giving you a storage space that is roughly the combined total of all the devices in the pool
  • Both filesystems are based on the copy-on-write model, meaning data is never overwritten in place, just cleaned up later. New data is always written to an empty or unused sector, which helps ensure data integrity because in the event of a system crash the old data is still there.
  • Both filesystems support cloning and snapshots, so changes can be rolled back if need be
  • Both filesystems can do software RAID natively, though the RAID levels are different. ZFS has raidz, raidz2, and raidz3, which are RAID5-style layouts with 1, 2, or 3 disks of parity respectively, and can also do striping and mirroring. BTRFS currently can only do RAID0, RAID1, and RAID10.
  • Both filesystems checksum the data written and read to ensure data integrity, though in BTRFS's case I don't believe you can configure which checksum algorithm is used, whereas with ZFS you can.
  • Both filesystems support transparent compression

Those are the main features the two filesystems have in common. That being said, in all honesty it's worth learning FreeBSD to use ZFS. I have a ZFS server at home and I love it. ZFS is much more stable than BTRFS and has many more features. ZFS can do deduplication on datasets (the ZFS pool term for mountpoints) and is also self-healing (if corrupted data is found in a raidz configuration, the data is automatically repaired from the parity information), which BTRFS isn't. Also, within a single pool, ZFS can have a large number of datasets, all with their own settings completely independent from the pool they are in, whereas I don't believe BTRFS can do that as of yet, although I have a lot more experience with ZFS than BTRFS. For those interested, here is a link to a podcast that provides a very good overview of ZFS and its features: http://www.jupiterbroadcasting.com/13052/ultimate-zfs-overview-techsnap-28
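To give a feel for how lightweight the snapshot/clone features above are, here's a hypothetical ZFS session (the pool and dataset names are invented, and the commands need root on a box with an actual pool):

```shell
# Create a dataset with its own independent properties.
zfs create -o compression=on tank/documents

# Snapshot it, then roll the dataset back to that point in time.
zfs snapshot tank/documents@before-upgrade
zfs rollback tank/documents@before-upgrade

# A clone is a writable filesystem branched off a snapshot.
zfs clone tank/documents@before-upgrade tank/documents-test

# List what exists now, including snapshots.
zfs list -t all -r tank
```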
post #938 of 4043
Oh, one additional thing: if you're going to set up a server and use ZFS, whether it be FreeNAS or FreeBSD itself, you need at least 8 GB of RAM if you're going to do anything useful. I know 1GB is the listed minimum for FreeBSD, but most ZFS performance issues can be resolved by adding more RAM. This is because the ARC (Adaptive Replacement Cache) is a major component of ZFS and it resides in RAM, making reads and writes much faster since they come from memory where possible. If you're running ZFS and having performance issues, watch the following presentation given at EuroBSDcon 2012 for information on tuning ZFS: http://www.youtube.com/watch?v=PIpI7Ub6yjo
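On FreeBSD you can see how big the ARC actually is straight from sysctl. The OID names below are the usual ones, but they can differ between releases, so treat this as a sketch:

```shell
# Current ARC size and the configured ceiling, in bytes.
sysctl kstat.zfs.misc.arcstats.size
sysctl vfs.zfs.arc_max

# Hit/miss counters hint at how effective the cache is being.
sysctl kstat.zfs.misc.arcstats.hits
sysctl kstat.zfs.misc.arcstats.misses
```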
post #939 of 4043
I've run ZFS on NexentaCP, OpenSolaris and FreeBSD, and FreeBSD is easily the nicest of the three OSs (though that's going to come as no surprise to many on here, as I'm quite a big FreeBSD fan. I also ran FreeBSD servers in the past - long before ZFS - so I jumped at the chance to switch back once ZFS hit FreeBSD's RELEASE tree biggrin.gif).
Quote:
Originally Posted by Nixalot View Post

Oh, one additional thing: if you're going to set up a server and use ZFS, whether it be FreeNAS or FreeBSD itself, you need at least 8 GB of RAM if you're going to do anything useful.
Actually, that's not true. FreeNAS's documentation heavily rounds things up. If you go to the original ZFS best practice guide, the specs are a lot more modest. Running ZFS on vanilla FreeBSD should only need a minimum of ~3GB of RAM (and that's still rounding up), 2GB of which is required for prefetching. So 4GB would be more than enough to run a basic FreeBSD file server.

In my current NAS I 'only' have 8GB of RAM, more than half of which is reserved for virtual machines. Then you could probably drop another 256MB of RAM for the core OS (I only have lighttpd, samba (I don't like ZFS's SMB implementation) and NFS running, so it's pretty much a bare-bones system). So let's say there's only 3GB free for ZFS. Now, I only have one storage pool, which is 6x 1TB HDDs (two 3-HDD RAIDZ1s), so I don't have to worry about the 1GB additional footprint as per the best practice guides. If FreeNAS's guide were accurate, then prefetching (with its reported 4GB footprint) would auto-disable. However, all my tests on FreeBSD have shown that prefetching only needs 2GB of RAM.

So I have prefetching running and compression enabled (not that it does much good, as the vast majority of data these days is already stored in compressed formats: videos, images, music, even Office documents). Yet I still have the best part of a gig of unused system memory - a direct contradiction of FreeNAS's documentation.

I will concede that I'm not running encryption (AFAIK that hasn't even been open-sourced by Oracle yet) nor deduplication (which adds a massive overhead, and I'm not convinced there's a worthwhile payoff outside of data centers). However, even without these, there's a huge mismatch between FreeNAS's minimum specs and real-world figures.

What's more, I used to run this setup on 4GB of RAM (albeit with VMs taking up 2.5GB and only one RAIDZ1 at that point). Prefetching had to be disabled (which is part of the reason I paid for more RAM - the other being that one of my existing sticks of RAM was starting to burn out). Yet even with prefetching disabled, performance wasn't actually that bad (it still exceeded my network capacity, which was, at the time, only 100baseT; it has since been upgraded to GbE).

By far the biggest requirement for ZFS is a 64-bit CPU. The filesystem drivers were written to take advantage of 64-bit addressing (there's a document detailing all this somewhere on the interwebs), so running ZFS in 32-bit mode adds quite a bit of overhead. It's one of the surprisingly few examples of enterprise software that doesn't perform well in 32-bit installs (I know there's this argument that 64bit == better, but quite a number of enterprise daemons can still perform marginally better in 32bit, though that trend won't last much longer, e.g. some PostgreSQL functions are now optimised for 64bit).

I did write a more detailed breakdown of the memory usage on my NAS and how that contradicted FreeNAS's specifications - but I think that may have been on another forum and I can't find it, so you'll have to excuse the estimations used here smile.gif
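For reference, the prefetch and ARC behaviour discussed above is controlled by loader tunables on FreeBSD. A minimal /boot/loader.conf fragment might look like the following - the values are purely illustrative, and per the usual advice, kernel-level ZFS tuning should be done with care:

```shell
# /boot/loader.conf - illustrative ZFS tunables (applied at boot)
vfs.zfs.prefetch_disable="1"   # turn off prefetching on low-RAM boxes
vfs.zfs.arc_max="3G"           # cap the ARC so VMs keep their share of RAM
```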
Quote:
Originally Posted by Nixalot View Post

I know 1GB is the listed minimum for FreeBSD but most ZFS performance issues can be resolved by adding more RAM.
That's the case for pretty much all software tongue.gif
Quote:
Originally Posted by Nixalot View Post

If you're running ZFS and having performance issues, watch the following presentation given at EuroBSDcon 2012 for information on tuning ZFS http://www.youtube.com/watch?v=PIpI7Ub6yjo
Thanks for the link, I'll give that a watch over the weekend, however a disclaimer needs to be made: you have to be careful tuning ZFS as it can lead to an unstable environment. It's definitely better to stick with the defaults for production use, or only change the basic ZFS options (rather than kernel level tuning that many guides offer).
Edited by Plan9 - 1/9/13 at 1:17am
post #940 of 4043
Quote:
Thanks for the link, I'll give that a watch over the weekend, however a disclaimer needs to be made: you have to be careful tuning ZFS as it can lead to an unstable environment. It's definitely better to stick with the defaults for production use, or only change the basic ZFS options (rather than kernel level tuning that many guides offer).
I agree, some "internet wisdom" will have you do things like turn off ZFS checksums which I would never do. The video is by one of the maintainers of ZFS for FreeBSD and he mentions that it is all very workload dependent.
Quote:
Actually that's not true. FreeNAS's documentation heavily rounds things up. If you go to the original ZFS best practice guide, the specs are a lot more modest. Running ZFS on vanilla FreeBSD should only need a minimum of ~3GB of RAM for ZFS (and that's still rounding up): 2GB of which is required for prefetching. So 4GB would be more than enough to run a basic FreeBSD file server.

I've read the guide as well, but I find that recommendation questionable. Depending on the workload and the settings you have enabled on the dataset, ZFS will easily use that much RAM, especially if you turn on deduplication, since the deduplication table ZFS uses to check whether data already exists before it is written is stored in RAM. Plus, I think that guide was written for ZFS on Solaris; FreeBSD's implementation does have some differences. For instance, the guide recommends you use ZFS only with whole disks, since you lose the device cache if you use it on just a partition, but on FreeBSD it works perfectly fine to have ZFS on only certain partitions, because in FreeBSD the ZFS vdev cache is turned off by default (thanks in part to FreeNAS boxes with lots of hard drives but low amounts of RAM).
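That vdev cache default is easy to check on a FreeBSD box via sysctl (the OID name is from memory; a size of 0 means the cache is disabled):

```shell
# Per-vdev read-ahead cache size in bytes; FreeBSD ships with this at 0.
sysctl vfs.zfs.vdev.cache.size
```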
Quote:
In my current NAS I 'only' have 8GB of RAM, more than half of which is reserved for virtual machines. Then you could probably drop another 256MB of RAM for the core OS (I only have lighttpd, samba (I don't like ZFS's SMB implementation) and NFS running, so it's pretty much a bare-bones system). So let's say there's only 3GB free for ZFS. Now, I only have one storage pool, which is 6x 1TB HDDs (two 3-HDD RAIDZ1s), so I don't have to worry about the 1GB additional footprint as per the best practice guides. If FreeNAS's guide were accurate, then prefetching (with its reported 4GB footprint) would auto-disable. However, all my tests on FreeBSD have shown that prefetching only needs 2GB of RAM.

Again, this all depends on your specific requirements for your NAS. My ZFS server has 16GB of RAM and I've let it autotune (the default setting), so up to 15GB of memory is available for the ARC and at least 1.8GB will always be reserved for it. The amount in use at any given time goes up or down based on other system requirements, but most of the time there's around 12GB allocated to the ARC. That works really well when streaming a 2GB file over a GbE network to a Boxee Box, because the file is read into RAM and served from there, making the stream very reliable. When I was using ext4 on an Ubuntu install on the same hardware to do the same, I would occasionally get pauses while the video buffered. My server is fairly busy though: I have Samba running, I do a lot of transfers via rsync from my other machines to back up data to the server, and I also stream a lot to my Boxee Box.

Given what you've said about your setup, I think the bottom line is that people should monitor ZFS on their systems and adjust settings and add RAM as needed based on the data collected, since so much of this depends on the specific requirements of the server and application. The sysutils/zfs-stats port has two utilities that are great for watching the various ZFS counters kept by sysctl. zfs-stats gives you information from the time the system booted up, but the really great one is zfs-mon, which gives you real-time information about how ZFS is performing at that point in time and by default will run and monitor until you kill it with ctrl+c. While it's running, it'll show each stat for the last 10 seconds, the last 60 seconds, and the total time you've been running zfs-mon. It's very handy.
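For anyone who wants to try those tools, the rough workflow on FreeBSD looks like this (install via packages or build sysutils/zfs-stats from ports; the flags are from memory, so check the man pages):

```shell
# Install the package (or build sysutils/zfs-stats from the ports tree).
pkg install zfs-stats

# One-shot summary of ARC/L2ARC counters since boot:
zfs-stats -a

# Live 10s / 60s / total view of cache efficiency; stop with Ctrl+C:
zfs-mon -a
```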