
1 - 20 of 37 Posts

juryan · Registered · 58 Posts · Discussion Starter · #1
Hey folks! I'm looking to set up a home server with some of the parts from an older PC, but I'm not sure how I should configure it. I have a TechNet subscription, so I have access to Windows Server, but I'm looking at other options too. I also have a budget for any components I might need.

Intended use:

- File server to replace the small 2-bay Synology NAS that I've been using with my HTPC.
- Testing and developing web applications using VMs.
- pfSense router

The hardware that I have available to use:

- Intel Core i7-980X @ 4.4-4.6GHz
- EVGA Classified X58 motherboard
- 48GB (6 x 8GB) DDR3 RAM
- Intel Gigabit NIC
- 2 x 240GB Corsair Neutron GTX SSD
- 2 x 640GB Western Digital Black HDD
- 2 x 1TB Western Digital Black HDD
- 4 x 3TB Western Digital Red HDD
- Silverstone 1500w PSU
- Corsair 800D case
 

Plan9 · Premium Member · 8,040 Posts
You could do all that on something 1/6th of the spec. In fact, I do all of that (and more) on a system 1/6th of the spec.

edit:
Also, why the mismatch of drives:
- 2 x 240GB Corsair Neutron GTX SSD
- 2 x 640GB Western Digital Black HDD
- 2 x 1TB Western Digital Black HDD
- 4 x 3TB Western Digital Red HDD

The SSDs I can relate to (though personally I wouldn't bother), but I'm a little confused by the choice of your HDDs.
 

juryan · Registered · 58 Posts · Discussion Starter · #3
Quote:
Originally Posted by Plan9 View Post

You could do all that on something 1/6th of the spec. In fact I do do all that (and more) on a system 1/6th of the spec.
Right, I could definitely do that, except that I have all of these components sitting unused at the moment from an old build. I can either sell them to buy something more appropriate or use what I currently have.
 

Plan9 · Premium Member · 8,040 Posts
Quote:
Originally Posted by juryan View Post

Right, I could definitely do that, except that I have all of these components sitting unused at the moment from an old build. I can either sell them to buy something more appropriate or use what I currently have.
ahhh, that makes more sense then.

I'm surprised and impressed you have 48GB of RAM lying about though
 

juryan · Registered · 58 Posts · Discussion Starter · #5
Quote:
Originally Posted by Plan9 View Post

ahhh, that makes more sense then.

I'm surprised and impressed you have 48GB of RAM laying about though
It was 64GB of RAM that I got at a fantastic price. I'm using 16GB of it in my brother's PC. Originally I was going to put it in an X79 build, but those plans changed.
 

juryan · Registered · 58 Posts · Discussion Starter · #6
Quote:
Originally Posted by Plan9 View Post

edit:
Also, why the mismatch of drives:
- 2 x 240GB Corsair Neutron GTX SSD
- 2 x 640GB Western Digital Black HDD
- 2 x 1TB Western Digital Black HDD
- 4 x 3TB Western Digital Red HDD

The SSDs I can relate to (though personally I wouldn't bother), but I'm a little confused by the choice of your HDDs.
Those are the drives I have available. I will most likely just use the SSDs in my own system if they won't provide any real benefit here.

The 640GB drives were used in a NAS that I'm no longer using, and the 1TB drives were backup drives in my older PC. The 3TB drives are new, and I was thinking of running them in RAID 5 for 9TB of backup and media storage. The 1TB drives I would run in RAID 0 and use for the OS or VMs. I might just leave the 640GB drives out since they are older.

I'm open to suggestions.
 

tycoonbob · Registered · 2,179 Posts
Quote:
Originally Posted by juryan View Post

Those are the drives I have available. I will most likely just use the SSDs in my own system if they won't provide any real benefit here.

The 640GB drives were used in a NAS that I'm no longer using, and the 1TB drives were backup drives in my older PC. The 3TB drives are new, and I was thinking of running them in RAID 5 for 9TB of backup and media storage. The 1TB drives I would run in RAID 0 and use for the OS or VMs. I might just leave the 640GB drives out since they are older.

I'm open to suggestions.
4 x 3TB in RAID 5 for storage
2 x 640GB in RAID 1 for OS
2 x 240GB SSD in JBOD for VMs
2 x 1TB in JBOD for backups (VMs and critical data)
 

Plan9 · Premium Member · 8,040 Posts
Quote:
Originally Posted by juryan View Post

Those are the drives I have available. I will most likely just use the SSDs in my own system if they won't provide any real benefit here.

The 640GB drives were used in a NAS that I'm no longer using, and the 1TB drives were backup drives in my older PC. The 3TB drives are new, and I was thinking of running them in RAID 5 for 9TB of backup and media storage. The 1TB drives I would run in RAID 0 and use for the OS or VMs. I might just leave the 640GB drives out since they are older.

I'm open to suggestions.
If you run FreeNAS instead of Windows and you want redundancy then you can have all of those disks in one ZFS storage pool.
2 x 640GB Western Digital Black HDD <-- mirrored
2 x 1TB Western Digital Black HDD <-- mirrored
4 x 3TB Western Digital Red HDD <-- raidz

then use the SSDs mirrored for the OS - if you wish.
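For reference, the pool layout described above could be created in one command from a shell (a sketch only: the pool name `tank` and the `ada0`-`ada7` device names are placeholders, and FreeNAS would normally build this through its GUI):

```shell
# One ZFS pool built from three vdevs, matching the layout above.
# Device names ada0-ada7 are placeholders -- substitute your own.
zpool create tank \
    mirror ada0 ada1 \
    mirror ada2 ada3 \
    raidz ada4 ada5 ada6 ada7
zpool status tank    # verify: two mirror vdevs plus one raidz vdev
```

Worth knowing before committing: ZFS stripes writes across all three vdevs, so losing any whole vdev (e.g. both disks of one mirror) loses the entire pool.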

I think there is Windows software that does similar things, but I believe it's quite expensive (tycoonbob will no doubt correct me here).
 

tycoonbob · Registered · 2,179 Posts
Quote:
Originally Posted by Plan9 View Post

If you run FreeNAS instead of Windows and you want redundancy then you can have all of those disks in one ZFS storage pool.
2 x 640GB Western Digital Black HDD <-- mirrored
2 x 1TB Western Digital Black HDD <-- mirrored
4 x 3TB Western Digital Red HDD <-- raidz

then use the SSDs mirrored for the OS - if you wish.

I think there is Windows software that does similar things, but I believe it's quite expensive (tycoonbob will no doubt correct me here).
Yes, there are Windows alternatives that can do this, such as the built-in Storage Spaces (Windows 8 and Server 2012) as well as FlexRAID and SnapRAID. I personally have only tried Storage Spaces and wasn't impressed. I'm a big fan of hardware RAID (nothing wrong with ZFS), so that's where my recommendation comes from. Storage Spaces is built in, so there are no additional costs if you choose to go with that OS, but FlexRAID will cost anywhere from $30 to $60, with SnapRAID being free (open source).

Regardless of whether you run your storage in software or hardware RAID, if you are running VMs you should, without a doubt, run them from the SSDs. A 7200RPM drive has around 80 IOPS, whereas your SSDs are going to have over 40,000 IOPS (probably more, up to 90,000). No need to RAID your SSDs for VM storage either: RAID 1 wastes space, and RAID 0 is not worth the performance increase for VM storage (or the risk). Either get a third and run RAID 5, get two more and run RAID 10, or just don't RAID them and back them up nightly with some sort of software.
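To put rough numbers on that argument, here's a back-of-envelope IOPS budget (the ~80 and ~40,000 figures are from the post above; the 30 IOPS per-VM demand is an assumed, illustrative value):

```shell
# Back-of-envelope: how many busy VMs before the disk is the bottleneck.
hdd_iops=80       # typical 7200RPM drive (figure from the post)
ssd_iops=40000    # conservative SSD figure (from the post)
per_vm=30         # assumed average per-VM demand -- illustrative only

echo "7200RPM HDD: room for roughly $(( hdd_iops / per_vm )) busy VMs"
echo "Single SSD:  room for roughly $(( ssd_iops / per_vm )) busy VMs"
```

Under these assumptions a single spindle saturates after a couple of active VMs, while one SSD has headroom for far more than anyone would run at home.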
 

Plan9 · Premium Member · 8,040 Posts
Quote:
Originally Posted by tycoonbob View Post

Yes, there are Windows alternatives that can do this, such as the built-in Storage Spaces (Windows 8 and Server 2012) as well as FlexRAID and SnapRAID. I personally have only tried Storage Spaces and wasn't impressed. I'm a big fan of hardware RAID (nothing wrong with ZFS), so that's where my recommendation comes from. Storage Spaces is built in, so there are no additional costs if you choose to go with that OS, but FlexRAID will cost anywhere from $30 to $60, with SnapRAID being free (open source).
FlexRAID and SnapRAID are very different beasts to ZFS though. Not that I'm trying to say they're lesser products for it, just different.
Quote:
Originally Posted by tycoonbob View Post

Regardless of running your storage in software or hardware RAID, if you are running VMs, you should without a doubt, run them from the SSDs. A 7200RPM drive has around 80 IOPS, where your SSDs are going to have over 40,000 IOPS (probably more, up to 90,000). No need to RAID your SSDs for VM storage either, as RAID 1 is wasting space, and RAID 0 is not worth the performance increase for VM storage (or worth the risk). Either get a third and run RAID 5, or get two more and run RAID 10, or just don't RAID them and back them up nightly with some sort of software.
RAID1 is only a waste of space if you don't care about disk failure.


I nearly did suggest running his VMs on SSD, but then thought better of it, as most of the image will sit in RAM anyway. To be quite honest, I don't really see the point of using these SSDs in this server at all, except perhaps as a cache disk for any RAIDs. I'm a little surprised at myself for even suggesting running the OS off an SSD.
 

juryan · Registered · 58 Posts · Discussion Starter · #11
OK, I think I'm going to use a Crucial M4 for the VMs and keep the Corsair drives for another build.

4 x 3TB in RAID 5 for storage
2 x 640GB in RAID 1 for OS
1 x 128GB SSD for VMs
2 x 1TB in JBOD for backups

Which version of windows server should I install?
 

tycoonbob · Registered · 2,179 Posts
Quote:
Originally Posted by Plan9 View Post

FlexRAID and SnapRAID are very different beasts to ZFS though. Not that I'm trying to say they're lesser products for it, just different.
RAID1 is only a waste of space if you don't care about disk failure.


I nearly did suggest running his VMs on SSD but then thought better of it as most of the image will sit in RAM anyway. To be quite honest, I don't really see the point of using these SSDs in this server at all - except perhaps as a cache disk for any RAIDs. I'm a little surprised at myself for even suggesting running the OS off SSD.
FlexRAID and SnapRAID are different, but they are drive-pooling alternatives for Windows. RAID 1 for a home server is nice, but if you have good backups it's not necessary, especially if the drive is JUST VM storage. RAID 1 is great, but why not have double the VHD/VMDK space and do nightly backups instead? So what if you have a day of VM downtime on your home server?

However, I don't understand what you mean by most of the image sitting in RAM. The VMs will use the RAM, but the virtual hard disk (VHD, VMDK, etc.) will not be sitting in RAM, as each one of these will be 20GB or more if they run Windows. A single SSD will outperform five 7200RPM drives in RAID 5 by a long shot, and if you have an SSD available for a hypervisor, that's what you should be using (applying this logic to home labs/servers, not enterprise). Disk I/O is very important for VMs, especially the more you run.

FWIW, all of my home servers run a 60GB SSD for the OS. Blame me, flame me, whatever, but that's what I wanted, and that's what I did.

@OP, I think you will be very happy running your VMs from an SSD. As for your host OS, if you are going with a Windows environment, run Server 2012 Standard if possible. Hyper-V 3.0 is much improved over the previous version in Server 2008 R2 and is now on par with VMware. Hyper-V does some things better, VMware does other things better, but either will serve you well.
 

Plan9 · Premium Member · 8,040 Posts
Quote:
Originally Posted by tycoonbob View Post

FlexRAID and SnapRAID are different, but they are drive pooling alternatives for Windows.
Yes, I know this. But I was talking about more than just drive pooling.
Quote:
Originally Posted by tycoonbob View Post

RAID 1 for a home server is nice, but if you have good backups, it's not necessary.
That depends on how much you hate downtime.

Quote:
Originally Posted by tycoonbob View Post

Especially if the drive is JUST VM storage. RAID 1 is great, but why not have double the VHD/VMDK space and do nightly backups?
Nightly backups to where? You'd have to store them on another hard disk or SSD, as most home internet connections aren't fast enough to upload that much to cloud storage, optical media isn't large enough, and tape backups are expensive. So you might as well just RAID those disks and have real-time "backups" instead of nightly jobs.
Quote:
Originally Posted by tycoonbob View Post

However, I don't understand by what you mean that most of the image will sit on RAM. The VMs will use the RAM, but the virtual hard disk (VHD, VMDK, etc) will not be sitting on RAM as each one of these will be 20GB or more, if they run Windows.
The virtual HDD will be cached to some extent by the host OS, but even without that, your typical development web server shouldn't access the HDD much, barring small (only a few KB) text files. So it doesn't really matter if that virtual HDD is on slow mechanical drives. It's only the boot times that are going to be affected, and who really cares about that on a server?

Also, all your advice is based on the assumption that he's running Windows - which he's already highlighted at least one of his two VMs is not going to be (and there's a good chance his web server isn't going to be Windows either, given the platform that drives the majority of web servers).
Quote:
Originally Posted by tycoonbob View Post

A single SSD will outperform five 7200RPM drives in RAID 5 by a long shot,
That's so bloody obvious it's not even worth mentioning.

Quote:
Originally Posted by tycoonbob View Post

and if you have an SSD available for a hypervisor, that's what you should be using (applying this logic to home lab/servers, not enterprise). Disk I/O is very important for VMs, especially with the more you run.
You're somewhat overstating things there. Yeah, if he has SSDs free then he can benefit from them; I never denied that. But I'm saying if he wanted to use them for other projects (like he suggested) then it's no great loss. In that instance he'd be better off having the VMs on a RAID and using a smaller SSD as a cache drive.

In fact, if he's really feeling clever, he could put that RAM to better use as a RAMdisk for the VMs and RAID that RAMdisk with one mechanical drive for persistence. That way he'd have the performance and save himself an SSD.
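On a Linux host, that RAMdisk-mirrored-to-disk idea could be sketched like this (device names and sizes are assumptions, and note the RAM half is volatile: the md mirror has to be re-created and resynced from the HDD after every reboot):

```shell
# RAMdisk for speed, mirrored to a mechanical drive for persistence (sketch).
modprobe brd rd_nr=1 rd_size=8388608      # 8GB RAM block device at /dev/ram0 (size in KiB)
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/ram0 --write-mostly /dev/sdb1  # serve reads from RAM; HDD mirrors writes
mkfs.ext4 /dev/md0
mount /dev/md0 /srv/vm-images             # put the VM disk images here
```

`--write-mostly` tells md to prefer the RAM device for reads, so the mechanical drive only has to keep up with writes.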
Quote:
Originally Posted by tycoonbob View Post

FWIW, all of my home servers run a 60GB SSD for the OS. Blame me, flame me, whatever, but that's what I wanted, and that's what I did.
Sorry, but that's just a complete waste. Do you have even the slightest idea how servers work? Aside from booting, that SSD is basically just sitting idle. That is, unless you're willing to concede that Windows is a terrible server OS that constantly and unnecessarily thrashes the OS drive?
 

tycoonbob · Registered · 2,179 Posts
Quote:
Originally Posted by Plan9 View Post

Yes, I know this. But I was talking about more than just drive pooling.
That depends on how much you hate downtime.

Nightly backups to where? You'd have to store them on another hard disk / SSD as most home internet connections wouldn't be fast enough to upload that to cloud storage, optical media isn't large enough and tape backups are expensive. So you might as well just RAID those disks and have real time "backups" instead of nightly jobs.
The virtual HDD will be cached to some extent by the host OS, but even without that, your typical development webserver shouldn't access the HDD much baring small (only a few KB) text files. So it doesn't really matter if that virtual HDD is on slow mechanical drives. It's only the boot times that are going to be affected and who really cares about that on a server?

Also, all your advice is based on the assumption that he's running Windows - which he's already highlighted that at least 1 of his 2 VMs are not going to be (and there's a good chance his web server isn't going to be Windows either - given the platform that drives the majority of web servers)
That's so bloody obvious it's not even worth mentioning.

You're somewhat over stating things there. Yeah if he has SSDs free then he can benefit from it, I never denied that. But I'm saying if he wanted to use them for other projects (like he suggested) then it's no great loss. In that instance he'd be better off having the VMs on a RAID and using a smaller SSD as a cache drive.

In fact, if he's really feeling clever, he could put that RAM to better use as a RAMdisk for the VMs and RAIDing that RAMdisk with one mechanical drive for a persistence. That way he'd have performance and save himself an SSD.
Sorry, but that's just a complete waste. Do you have even the slightest idea how servers work? Aside from booting, that SSD is basically just sitting idle. That is, unless you're willing to concede that Windows is a terrible server OS that constantly and unnecessarily thrashes the OS drive?
Yes, I know how servers work, so why bother asking such a degrading question? That's the reason I'm starting to not like this place: everyone thinks they know their stuff and no one else does. I am a Microsoft consultant, which is why my experience/advice is always based around Windows.

I'd rather use a 60GB SSD for my server OS drive instead of wasting a 500GB drive that could be used for real storage. I have 4 physical servers, all with 60GB SSDs as the OS drive. Three of them run Server 2012 Datacenter with the Hyper-V role installed, for my cluster. The 4th is my NUS (Network Unified Storage) box, where the SSD is used by my archiving solution. I use AltDrive to back up to the cloud, and before anything is sent it is compressed and encrypted; that is another use for the SSD. It does not sit idle.

Fine, if he is only running 2-3 VMs, then a single 7200RPM drive will be just fine for his VMs, but if he has SSDs sitting around unused that he wants to utilize, he might as well use one for his VMs. I have over 30 VMs running at home (mostly Windows, but I do have 3 *nix VMs -- my Murmur host, my Usenet index, and my Minecraft server), so the performance difference between a single 7200RPM drive, a RAID 5 of four 7200RPM drives, and a single SSD really shows when it's needed. If he needs any of the SSDs for other tasks, by all means use them for that, but if not, use one for the VMs. A single 7200RPM drive with over 6 VMs will start to choke on I/O.

I still don't see how running the VMs from a RAMdisk would work unless you had way more RAM than he has. Sure, if he had a *nix VM with a 5GB VHD, that could live in RAM, but that's just not practical in my opinion. I'd rather have that VM sitting on an SSD and have plenty of RAM available for the VM itself. Honestly, I don't care what OS is running in the VM; what matters is the hypervisor he is using. From the sounds of it, it will be Hyper-V on either Server 2012 or 2008 R2, and the same facts apply regardless of whether his VMs are *nix or Windows.

And as far as backups go, back up to a second box (i.e. another server, or your computer). If your VMs live on a 128GB SSD, then you have no more than 128GB worth of backups to manage, so that shouldn't be a problem. Use backup software or write a script that shuts down the VMs, runs an incremental backup, then powers the VMs back on, and run that nightly. Hell, with Server 2012 and Hyper-V 3, you would just have to shut down the VMs and make a copy of the VHD and the config XML. You can import a VM into Hyper-V 3 without exporting it first (you can't do that in Server 2008 R2 with Hyper-V 2).
 

Plan9 · Premium Member · 8,040 Posts
Quote:
Originally Posted by tycoonbob View Post

Fine, if he is only running 2-3 VMs, then a single 7200RPM drive will be just fine for his VMs, but if he has SSDs sitting around not being used that he wants to utilize, he might as well use one for his VMs.
That we agreed on. It was when you were talking as if he /must/ use an SSD for VMs, and how you use SSDs for the OS drive on your servers, that I took issue (I still don't see the point of that or even agree with your reasons, but at least you are using them for more than just the OS drive).
Quote:
Originally Posted by tycoonbob View Post

I have over 30 VMs running at home (mostly Windows, but I do have 3 *nix VMs -- My Murmur host, my Usenet Index, and my Minecraft server), so performance between a single 7200 RPM drive vs a RAID 5 with 4 7200 RPM drives vs a single SSD really makes a difference when it's needed.
But he's not talking about running 30 VMs.

And I'm sure that figure of yours is inflated. Nobody needs 30+ active VMs for a home server.
Quote:
Originally Posted by tycoonbob View Post

If he needed any of the SSDs for any other tasks, by all means use them for that, but if not, use them for your VMs. A single 7200RPM drive with over 6 VMs will start to choke on I/O.
Again, we're not talking about that many VMs. It's all in his spec. And he did say he was tempted to use the SSDs for other tasks, which is why I started this debate.

Quote:
Originally Posted by tycoonbob View Post

I still don't see how running the VMs from a RAMdisk would work, unless you had way more RAM than what he has. Sure, if he had a *nix VM with a 5GB VHD, that could live on RAM, but that's just not practical in my opinion.
Why not? A *nix VM doesn't need to be any bigger (it is, after all, just the OS).

With 48GB of RAM, that could be:
* 4GB ZFS
* 1GB host OS
* 4x8GB VHD
* 4x2GB per VM assigned RAM
...and you'll still have memory to spare
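Against the 48GB listed in the OP's spec, that budget totals out like this:

```shell
# Totalling the RAM budget above (all figures in GB).
zfs=4; host_os=1
vhd_ramdisks=$(( 4 * 8 ))   # four 8GB virtual disks held in RAM
vm_ram=$(( 4 * 2 ))         # four VMs with 2GB assigned each
used=$(( zfs + host_os + vhd_ramdisks + vm_ram ))
echo "${used}GB allocated, $(( 48 - used ))GB spare"
```

which leaves a few gigabytes free for the host's page cache.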

My home server is running on a fraction of that spec (I have VMs with only 128MB RAM) and still runs fast.
Quote:
Originally Posted by tycoonbob View Post

And as far as backups go, back up to a second box (i.e. another server, or your computer). If your VMs live on a 128GB SSD, then you have no more than 128GB worth of backups to manage, so that shouldn't be a problem.
Aside from the extra electricity cost of powering two servers 24/7 instead of one. That's the main reason I consolidated a number of dedicated systems into VMs.
Quote:
Originally Posted by tycoonbob View Post

Use backup software or write a script that will shutdown the VMs, run an incremental backup, then power the VMs back on and run that nightly. Hell, with Server 2012 and Hyper-V 3, you would just have to shutdown the VMs and make a copy of the VHD and the config XML. You can import a VM into Hyper-V 3 without exporting it first (can't do that in Server 2008 R2 and Hyper-V 2).
With ZFS you just have to export a snapshot. Simples.
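That ZFS route might look like the following (a sketch; the `tank/vms` and `backup` dataset names are assumptions):

```shell
# No-downtime nightly VM backup with ZFS snapshots (sketch).
today=$(date +%F)
zfs snapshot tank/vms@nightly-$today                    # instant, consistent point-in-time copy
zfs send tank/vms@nightly-$today | zfs recv backup/vms  # replicate to another pool
```

After the first full send, an incremental `zfs send -i` keeps each subsequent night's transfer small.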
But I do love the Windows approach there: nightly reboots. lol

[edit]

That last part was borderline trolling. Sorry about that. Though with my weird sleeping patterns, nightly reboots are a definite no-no. Thankfully all of my administration can be done on a live system.
 

tycoonbob · Registered · 2,179 Posts
You're a minimalist, I get it. If your host goes down, everything is down. That's not a good solution for me since I have an Active Directory domain with my home PCs on it. If my only VM host went down, then my DC VM would go down, and nothing would work.

The nightly backup with a reboot was if he wanted a simple free solution. If these are dev VMs as you so stated earlier, then what's the problem with that? I use Hyper-V Replica for my HA, and no reboots are required. It's all real-time replication.

And for the record, I literally do have over 30 VMs running at home. No one needs that many active VMs in a home environment, but the fact is that I want to run 30+ VMs, so I do. I have a full System Center lab along with web hosts, media servers, a Usenet index, an Exchange server, a SharePoint server, a Lync server, 2 DCs, and several others. I have 2 identical Hyper-V hosts with 8-core FX-8120s and 32GB of RAM, along with a Dell C1100 with dual quad-core Xeons with HT and 48GB of RAM.

The OP didn't mention that he was worried about power savings, so I don't find that argument valid. Also, it was never said that he was going to be running Linux for web development. I run Apache under Windows using WAMP, because that's what I want to use. It works just fine.

There is no single right solution for everyone, and I have provided my solution to the OP. I can tell you want the last word, so have at it.
 

Plan9 · Premium Member · 8,040 Posts
Quote:
Originally Posted by tycoonbob View Post

You're a minimalist, I get it. If your host goes down, everything is down.
How the hell did you come to that conclusion when I'm the one who keeps bringing up the bloody subject of redundancy and downtime?

Quote:
Originally Posted by tycoonbob View Post

That's not a good solution for me since I have an Active Directory domain with my home PCs on it. If my only VM host went down, then my DC VM would go down, and nothing would work.
I have zero down time. Not even nightly reboots. But keep trolling.

Quote:
Originally Posted by tycoonbob View Post

The nightly backup with a reboot was if he wanted a simple free solution. If these are dev VMs as you so stated earlier, then what's the problem with that? I use Hyper-V Replica for my HA, and no reboots are required. It's all real-time replication.
I was talking about my own setup, like you keep doing yourself. Nothing more.
Quote:
Originally Posted by tycoonbob View Post

And for the record, I literally do have over 30 VMs running at home. No needs any active VMs in a home environment, but the fact that I want to run 30+ VMs, I do. I have a full System Center lab along with web hosts, media servers, Usenet Index, Exchange server, Sharepoint server, Lync server, 2 DCs, and several others. I have 2 identical Hyper-V hosts with 8-core FX-8120s and 32GB of RAM, along with a Dell C1100 with dual quad-core Xeons with HT, and 48GB of RAM.
With a home server that absurd, I hardly think you're in a position to criticise my way of working.

Quote:
Originally Posted by tycoonbob View Post

The OP didn't mention that he was worried about power savings, so I don't find that argument valid.
Saving electricity is always valid, but yeah, let's just screw the planet. Typical American attitude.

Quote:
Originally Posted by tycoonbob View Post

Also, it was never said that he was going to be running Linux for web development. I run Apache under Windows using WAMP, because that's what I want to use. It works just fine.
You're a Windows fanboy though, so I'm not surprised you'd do that. The reality is that most of the world's web servers run on Linux, not Windows. And if he wants to develop in a language that targets Apache (PHP, mod_perl, etc.), then he's better off doing so on Linux than on some crappy Windows kludge of a web server. Just as, if he wants to use .NET, he's better off with IIS on Windows than some crappy Linux kludge of a web server.
Quote:
Originally Posted by tycoonbob View Post

There is no single right solution for everyone, and I have provided my solution to the OP.
Finally we agree on something. This is why I get so fed up with your contributions on these forums. It's always "Windows this, Windows that. Screw FreeNAS, Linux, etc., Windows is the best thing ever, blah blah blah." You never consider that other solutions might be better suited, because you're too wrapped up in your Windows fanboyism. The reason I never normally argue with you about this is simply that there are multiple ways to build a home server. But quite frankly, I'm just fed up with your constant condescending attitude and the constant arguments you have with anyone who supports open source software.
Quote:
Originally Posted by Oedipus View Post

Open source hipsters are the worst.
For the record, I only come across this way because tycoonbob is so far the other way that it takes another extreme to balance things out. In all honesty I would rather the OP go with Windows if that's what he wanted (I even suggested there is Windows software out there that can go some way to emulating the open source features I recommended). But seeing as he's on a budget and using a mismatch of hardware, I just wanted it known that there are open source products available that suit his needs, and that tycoonbob's way isn't the only way (despite the certainty of his posts and the condescending remarks he makes about anything outside of Windows).
 

tycoonbob · Registered · 2,179 Posts
I am not a Windows fanboy. I work in a Windows world, so that's where most of my knowledge is, and I have never, ever said that Windows is the best thing ever or that Linux, FreeNAS, or anything open source is crap. I even things out by offering a Windows alternative to what most people around here suggest, and by giving a full solution. If you take my comments as condescending, I apologize, because they are never meant to be that way. I think Linux is great, and I love openSUSE and SLES and use them when I can, but working in a Windows world I can't use Linux all the time.
 

Plan9 · Premium Member · 8,040 Posts
Quote:
Originally Posted by tycoonbob View Post

I am not a Windows fanboy. I work in a Windows world, so that's where most of my knowledge is, and I have never, ever said that Windows is the best thing ever or that Linux, FreeNAS, or anything open source is crap. I even things out by offering a Windows alternative to what most people around here suggest, and by giving a full solution. If you take my comments as condescending, I apologize, because they are never meant to be that way. I think Linux is great, and I love openSUSE and SLES and use them when I can, but working in a Windows world I can't use Linux all the time.
Fair enough. Sorry for the tone of my posts. I know it's not an excuse, but I've been sleeping really badly this week, so I'm more irritable than usual.
 