Overclock.net › Forums › Software, Programming and Coding › Operating Systems › Linux, Unix › echo "The `uname` Club" (NEW POLL)

echo "The `uname` Club" (NEW POLL) - Page 95

Poll Results: How long have you been using your current, main installation?

 
  • 24% (50)
    less than a month
  • 23% (47)
    less than six months
  • 14% (30)
    less than a year
  • 24% (49)
    less than three years
  • 13% (27)
    three years+
203 Total Votes  
post #941 of 4043
Why isn't Btrfs "self-healing"? Are there any drawbacks to it? Or can it still be added?
post #942 of 4043
Quote:
Originally Posted by xeekei View Post

Why isn't Btrfs "self-healing"? Are there any drawbacks to it? Or can it still be added?

BTRFS is still very much in development, so self-healing could definitely be added. I don't think it has been yet, but I don't see a reason why, in a BTRFS RAID1 setup for instance, it couldn't grab the damaged data from the mirror drive; as far as I know it doesn't do that at this time. BTRFS is stable enough for home use, but I don't think it's truly production ready yet. I've done some testing with it, but not much, since if I need a filesystem with those features I use FreeBSD and ZFS. There aren't any drawbacks per se to BTRFS; it's the default filesystem for at least one distribution that I know of, Fedora, but you just have to keep in mind that it's still under heavy development, although it is stable enough for home use as I said. Take a look at the "Getting Started" page of the BTRFS wiki at https://btrfs.wiki.kernel.org/index.php/Getting_started, which lists the distributions and what kernel versions they ship. Since BTRFS is under heavy development right now, the kernel version you use matters a lot, as there are always a lot of bug fixes and such between kernel releases.

I haven't played with BTRFS in a while and this thread has made me want to so I think I'm going to spin up an Arch VM and play with it.
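For anyone else wanting to poke at the mirror-repair idea in a VM, here's a minimal sketch. The device names /dev/vdb and /dev/vdc are hypothetical spare virtual disks; this needs root and a reasonably recent btrfs-progs:

```shell
# Mirror both data (-d) and metadata (-m) across two spare disks.
mkfs.btrfs -d raid1 -m raid1 /dev/vdb /dev/vdc
mkdir -p /mnt/btrfs
mount /dev/vdb /mnt/btrfs

# A scrub reads every block and verifies checksums; on RAID1 it can
# rewrite a bad copy from the good mirror. -B waits until it finishes.
btrfs scrub start -B /mnt/btrfs

# Per-device counters of read/write/corruption errors seen so far.
btrfs device stats /mnt/btrfs
```

This is an admin sketch against real block devices, so treat it as a recipe to adapt rather than something to paste blindly.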
post #943 of 4043
Quote:
Originally Posted by Nixalot View Post

I've read the guide as well, but I find that recommendation questionable. Depending on the workload and the settings you have enabled on a dataset, ZFS will easily use that much RAM, especially if you turn on deduplication, since the deduplication table ZFS consults to determine whether data already exists before it is written is stored in RAM. Plus, I think that guide was written for ZFS on Solaris, and FreeBSD's implementation does have some differences. For instance, the guide recommends you use ZFS only with whole disks, since you lose the device cache if you use it on just a partition, but on FreeBSD it works perfectly fine to have ZFS on individual partitions, because in FreeBSD the ZFS vdev cache is turned off by default (thanks in part to FreeNAS boxes with lots of hard drives but low amounts of RAM).
I appreciate that, but like I said, deduping is pointless for home users, and the figures I've given are real-world usage figures from FreeBSD systems I've built. Unless FreeNAS has deduping enabled by default (in which case I'd recommend turning it off), their figures are massively exaggerated.
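For anyone wanting to check their own box, here's a quick sketch (the pool name `tank` is a placeholder):

```shell
# Show whether deduplication is enabled on the pool and each dataset.
zfs get -r dedup tank

# Turn it off for new writes (blocks already deduped stay deduped
# until they are rewritten).
zfs set dedup=off tank

# Detailed dedup table (DDT) statistics, useful for judging the RAM cost.
zdb -DD tank
```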

Quote:
Originally Posted by Nixalot View Post

Again this is all depending on your specific requirements for your NAS, my ZFS server has 16GB of RAM and I've let it autotune (the default setting) so up to 15GB of memory is available for the ARC, and at least 1.8 GB of RAM will always be available for it. The amount in use at any given time will go up or down based on other system requirements but most of the time there's around 12 GB of space allocated to ARC, which works really well if you're trying to stream a 2GB file over a GbE network to a boxee box because the file is read into RAM and sent from there making the stream very reliable.
I regularly stream 20GB files over GbE on my box. And you don't need to read the whole file into RAM before sending; you only need a large enough buffer to compensate for read/write anomalies and dropped packets. Typically that shouldn't be more than a few megs. 15GB is massively over-spec'ing the system (not even enterprise-level SANs or RAID controllers have that much onboard cache).
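The "few megs" claim is easy to sanity-check with back-of-envelope arithmetic. The 110 MB/s figure is an assumed realistic GbE payload rate and 8 MiB an example buffer size, not anything from the thread:

```shell
# How long does an 8 MiB buffer last at ~110 MB/s GbE payload rate?
awk 'BEGIN {
    buf  = 8 * 1048576   # buffer size in bytes (8 MiB, example)
    rate = 110 * 1e6     # assumed realistic GbE throughput, bytes/s
    printf "%.0f ms of cover\n", buf / rate * 1000
}'
```

That works out to roughly 76 ms of cover, so even a single-digit-MiB buffer rides out sub-100ms read or network hiccups.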
Quote:
Originally Posted by Nixalot View Post

When I was using ext4 on an ubuntu install on the same hardware to do the same I would occasionally get pauses while the video buffered.
Then I think you had your server configured wrong. I do this stuff for a living; we have two ext4 file servers in production use (as well as a number of SANs for the webfarm), and they each have only 2GB RAM.

My pre-ZFS NAS was running Arch Linux and ext3, with 768MB and before that was FreeBSD running on 512MB.

You don't need a lot of RAM to stream large data; you just need your access times to be consistent (disk fragmentation causes issues here), which is part of the reason why I opted for smaller disks but more of them (the other being cheaper redundancy).

IIRC the Linux kernel even gives priority to disk IO and network throughput over other network services - though I might be wrong there.

the other trick is to have scheduled, automated processes (eg my scrubs happen overnight on a weeknight, when I'm least likely to stay up late. This means that the chance of ZFS scrubs competing with real-time services is next to nil).
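That kind of schedule is a single crontab line; a sketch, with the pool name `tank` and the timing as placeholders:

```shell
# /etc/crontab fragment: scrub every Tuesday at 03:00, when the box is
# least likely to be serving anything in real time.
# minute hour mday month weekday user  command
0        3    *    *     2       root  /sbin/zpool scrub tank
```

On FreeBSD the same line works from root's crontab (drop the user field there), and `zpool status tank` afterwards shows when the last scrub completed.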
Quote:
Originally Posted by Nixalot View Post

My server is fairly busy though, I have samba running and do a lot of transfers via rsync from my other machines to back up data to the server and also stream a lot to my boxee box.
Like I said, I have samba. Obviously I use rsync for things as well (also, rsync isn't very demanding on the system). But most importantly, I have 5 virtual machines; you don't get much heavier load than running virtual machines. And those VMs do everything from web serving through to real-time audio and video transcoding. So we're not only talking about 5 VMs, but that a few of them are dedicated to real-time services. All of that runs smoothly on 8GB.
Quote:
Originally Posted by Nixalot View Post

Given what you've said about your setup, I think the bottom line is that people should monitor ZFS on their systems and adjust settings and add RAM as needed based on the data collected, since so much of this is dependent on the specific requirements of the server and application. The sysutils/zfs-stats port has 2 utilities that are great for watching the various ZFS counters as kept by sysctl. zfs-stats will give you information from the time the system booted up, but the really great one is zfs-mon which gives you real-time information about how ZFS is performing on your system at that point in time and by default will run and monitor until you kill it with ctrl+c or something. While it's running it'll give you each stat for the last 10 seconds, last 60 seconds, and for the total amount of time you've been running zfs-mon. It's very handy.
I just used the included native UNIX tools (iostat et al). But you're right, there's no harm in upgrading if your system demands it. However, most people don't need to (and those who do likely have other issues causing the bottlenecks, which more memory hid; those bottlenecks could have been fixed with a little bit of analysis).
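For reference, the FreeBSD port mentioned above installs both tools; a quick usage sketch:

```shell
# Install the ZFS counters front-end on FreeBSD.
pkg install zfs-stats

# Cumulative ARC / L2ARC / vdev statistics since boot.
zfs-stats -a

# Live view: hit ratios for the last 10s, last 60s, and the whole run;
# keeps monitoring until interrupted with Ctrl+C.
zfs-mon -a
```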

The only show-stopper I've encountered, as I've pointed out, was transcoding 1080p into another 1080p codec inside a VM in real time (I can get transcoding down to 720i working in real time, but alas my CPU (a rather modest AMD tri-core, so pretty meh by today's standards) and VM hypervisor just aren't up to the task). However, the only time I want to transcode is when streaming online, so I almost never want to transcode into 1080p (in all other instances, I just mount the remote file system).
Edited by Plan9 - 1/9/13 at 4:45am
post #944 of 4043
I'll concede that I probably don't need 16GB of RAM to do what I'm doing, and I could probably lower the ARC hard limit and be fine, but I don't see a reason to. Plus it gives room to grow should I decide to do something more IO intensive. My eventual plan is to share out the space over iSCSI to VMware ESXi or Proxmox and use it for storing the VM disks. Haven't got that far yet though.
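For anyone who does want to lower the hard limit on FreeBSD, it's a one-line loader tunable; a sketch, with the 8G cap as an example value only:

```shell
# /boot/loader.conf fragment: cap the ZFS ARC at 8 GiB (applied at boot).
vfs.zfs.arc_max="8G"
```

The configured limit can be checked live with `sysctl vfs.zfs.arc_max`, and the ARC's current size with `sysctl kstat.zfs.misc.arcstats.size`.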
post #945 of 4043
Thread Starter 
Quote:
Originally Posted by Nixalot View Post

Quote:
Originally Posted by xeekei View Post

Why isn't Btrfs "self-healing"? Are there any drawbacks to it? Or can it still be added?

BTRFS is still very much in development so it could definitely be added, I don't think it has been yet but I don't see a reason why, in for instance a BTRFS RAID1 setup it couldn't grab the damaged data from the mirror drive, but I don't think at this time it does that. BTRFS is stable enough for home use but I don't think it's truly production ready yet, I've done some testing with it but not much since if I need a filesystem with those features I use FreeBSD and ZFS. There aren't any drawbacks per se to BTRFS, it's the default filesystem for at least one distribution that I know of, Fedora, but you just have to keep in mind that it's still under heavy development, although it is stable enough for home use as I said. Take a look at the BTRFS wiki located at https://btrfs.wiki.kernel.org/index.php/Getting_started. That page is the "Getting Started" section and lists the distributions and what kernel versions they have. Since BTRFS is under heavy development right now the kernel version you use matters a lot as there are always a lot of bug fixes and such between kernel releases.

I haven't played with BTRFS in a while and this thread has made me want to so I think I'm going to spin up an Arch VM and play with it.

just want to nitpick here.

officially only the B is capitalized. however, many do capitalize the whole name, but even then it's BTrFS with the r as lower case, because it's 'B-Tree File System'.

as for how it's going, it's improved a lot in the last year of kernel releases (3.2+) over when trying to use it on 3.0 or 2.6.3x, so if you haven't tried it recently I suggest you do. it's still missing many features that oracle wants to add before calling it 'stable', but if you're fine without those it's quite stable. SLES is even recommending it to enterprise users as the root filesystem, although they haven't done that for data (xfs still being recommended there).

also, I read BTrFS is semi-self-healing or something like that, with the goal being to become fully self-healing.
post #946 of 4043
Looks like some of Plan9's nitpicking is rubbing off on poor lil jrl.
post #947 of 4043
Quote:
Originally Posted by Plan9 View Post

Why not just run FreeBSD? It's every bit as good as Linux but with stable ZFS support.
Good question. I had in fact considered that (and in the past used NAS4Free/FreeNAS), but the NAS machine is also my 24/7 folder. Getting F@H running on BSD didn't look like it would be easy or as performant. It's the second rig in my sig.

Edit: @jrl, any plans to update the member list one of these days? smile.gif
Edited by stolid - 1/9/13 at 1:18pm
2017 Build
(10 items)

CPU: Ryzen 7 1700X
Motherboard: ASRock X370 Killer SLI/ac
Graphics: PowerColor R9 280 3GB
RAM: 2x Corsair Vengeance LPX 32GB DDR4-3200 (4x16GB)
Hard Drive: Sandisk Ultra II 960GB SSD
Hard Drive: Mushkin Reactor 960GB MLC SSD
Cooling: Corsair H110i
Monitor: 34" LG 34UC88-B 3440x1440
Power: EVGA SuperNOVA G2 750W
Case: Phanteks Enthoo Evolv ATX TG

CPU: 4x AMD Opteron 8431
Motherboard: Supermicro H8QME-2+
RAM: 32GB DDR2-667 ECC Registered (16x2GB)
Hard Drive: 2x Samsung F3 1TB
Optical Drive: 2x Toshiba 5TB
Cooling: 4x Hyper TX-3
OS: Debian Wheezy
Monitor: Headless
Power: Corsair CX750M
post #948 of 4043
Can't wait for Btrfs to become complete then. smile.gif I use Ext4 for my system disk, and XFS for my storage disks at the moment.
post #949 of 4043
Quote:
Originally Posted by stolid View Post

Good question. I had in fact considered that (and in the past used NAS4Free/FreeNAS), but the NAS machine is also my 24/7 folder. Getting F@H running on BSD didn't look like it would be easy or as performant. It's the second rig in my sig.
Ahh that would make a lot of sense then thumb.gif
Quote:
Originally Posted by Nixalot View Post

I'll concede that I probably don't need 16gb of ram to do what I'm doing, and I could probably lower the ARC hard limit and be fine, but I don't see a reason to. Plus it gives room to grow should I decide to do something more IO intensive. My eventual plan is to share out the space over iSCSI to VMware esxi or proxmox and use it for storing the vm disks. Haven't got that far yet though.
fair enough then smile.gif
Quote:
Originally Posted by jrl1357 View Post

just want to nitpick here.
officially only the B is capitalized. however, many do capitalize the whole name, but even then it's BTrFS with the r as lower case, because it's 'B-Tree File System'.
Actually it's BtrFS. Both t and r are lower case. tongue.gif
And since we're chatting about naming conventions, there's a lot of debate about how it's pronounced (yeah, some people really are that dull!). The developers are adamant that it should be pronounced "Better FS", but many instead call it "Butter FS" laugher.gif
Quote:
Originally Posted by Shrak View Post

Looks like some of Plan9's nitpicking is rubbing off on poor lil jrl.
laugher.gif
post #950 of 4043
Thread Starter 
Quote:
Originally Posted by Plan9 View Post

Quote:
Originally Posted by stolid View Post

Good question. I had in fact considered that (and in the past used NAS4Free/FreeNAS), but the NAS machine is also my 24/7 folder. Getting F@H running on BSD didn't look like it would be easy or as performant. It's the second rig in my sig.
Ahh that would make a lot of sense then thumb.gif
Quote:
Originally Posted by Nixalot View Post

I'll concede that I probably don't need 16gb of ram to do what I'm doing, and I could probably lower the ARC hard limit and be fine, but I don't see a reason to. Plus it gives room to grow should I decide to do something more IO intensive. My eventual plan is to share out the space over iSCSI to VMware esxi or proxmox and use it for storing the vm disks. Haven't got that far yet though.
fair enough then smile.gif
Quote:
Originally Posted by jrl1357 View Post

just want to nitpick here.
officially only the B is capitalized. however, many do capitalize the whole name, but even then it's BTrFS with the r as lower case, because it's 'B-Tree File System'.
Actually it's BtrFS. Both t and r are lower case. tongue.gif
And since we're chatting about naming conventions, there's a lot of debate about how it's pronounced (yeah, some people really are that dull!). The developers are adamant that it should be pronounced "Better FS", but many instead call it "Butter FS" laugher.gif
Quote:
Originally Posted by Shrak View Post

Looks like some of Plan9's nitpicking is rubbing off on poor lil jrl.
laugher.gif

nitpicker hmmsmiley02.gif

just kidding smile.gif

@xeekei oh good, someone who uses xfs to ask questions to, since I know absolutely nothing about it. what are the pros/cons?