
Project "Linux FileServer RAID1 & Backup" - Economic Semi Professional RackMounted System - Page 2

post #11 of 36
Thread Starter 
Status "Linux FileServer w. Backup v.1.0.5" aka "Lancaster" - 05.12.14 12:58am:
Finished installing and testing lm-sensors and hddtemp, and added them to the ToDo. Let's go on with rsync and cron.


Edited by DanHansenDK - 12/7/14 at 9:12am
post #12 of 36
Thread Starter 
Status "Linux FileServer w. Backup v.1.0.1" aka "Lancaster" - 07.12.14 06:07pm:
Hello friends. I'm currently studying rsync. I got some information about rsync a long time ago from my overseas pal Srijan from India, but I've since noticed that rsync is a very capable tool which can do a lot of things. If any of you know this tool, please let me know your ideas on how to "copy"/synchronize data from one disk to another. Synchronization from one system to another through SSH will come later on.
post #13 of 36
I'd argue that rsync can't really do a lot of things; it just does one thing (copy files) but does it very well.

Essentially all you really need to get going with rsync is: rsync -av source destination

You can get more elaborate if you need to (e.g. delete destination files if they're not found at the source, show a progress percentage for larger files, etc.), but those are all just additional flags and easy to look up from the command line: man rsync

For copying across SSH, you just prefix the destination's hostname/IP: rsync -av hostname:source destination or rsync -av source hostname:destination, but I think I covered that earlier in this thread anyway.
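
For instance, a quick sketch combining a couple of those optional flags (the paths and hostname here are only placeholders):
Code:
# mirror a directory, deleting files at the destination that no longer exist at the source,
# and showing per-file progress
rsync -av --delete --progress /srv/data/ /mnt/backup/

# the same idea pulled across SSH from a remote host
rsync -av --delete --progress fileserver:/srv/data/ /mnt/backup/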
post #14 of 36
Thread Starter 
Hi, Plan9,


Thanks... This is what I'm going to solve right now. I just added a new section to the ToDo at the top, so this is the next step. I thought I was ready to go at it a couple of days ago, but I noticed that it would be better to fix the Samba server and the shares first.
I've got a question for you. What about file information when using that command? I'm thinking about file permissions, date of creation etc. Do you know which flags I need to keep that information? I think it's pretty important not to risk losing file information. I just read something about this issue over the weekend and thought you might know about it.


Status "Linux FileServer w. Backup v.1.0.1" aka "Lancaster" - 08.12.14 01:42pm:
Edited by DanHansenDK - 12/8/14 at 4:43am
post #15 of 36
Quote:
Originally Posted by DanHansenDK View Post

Hi, Plan9,

What about file information when using that command? I'm thinking about file permissions, date of creation etc. Do you know which flags I need to keep that information? I think it's pretty important not to risk losing file information.

The -a flag has you covered:
Code:
$ man rsync | grep "archive mode"; man rsync | grep preserve
        -a, --archive               archive mode; equals -rlptgoD (no -H,-A,-X)
        -H, --hard-links            preserve hard links
        -p, --perms                 preserve permissions
        -E, --executability         preserve executability
        -A, --acls                  preserve ACLs (implies -p)
        -X, --xattrs                preserve extended attributes
        -o, --owner                 preserve owner (super-user only)
        -g, --group                 preserve group
            --devices               preserve device files (super-user only)
            --specials              preserve special files
        -t, --times                 preserve modification times
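
So a local copy that keeps permissions, owners and timestamps could look something like this (adding -H, -A and -X only if you also care about hard links, ACLs and extended attributes, since -a doesn't include those; the paths are placeholders):
Code:
rsync -aHAX /srv/data/ /mnt/backup/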
post #16 of 36
Thread Starter 
Hi Plan9,


Thanks for the list!! Very useful info! I've been reading about crontab and the ways to run rsync from cron. There's more than one way to do this, apparently. I would like to use the "right" way, if you know what I mean. It's been a little while since I discussed this with brighter heads than myself, so I have to go through it again. But I think I know what the common way to do it is, without installing additional "tools", and that's to use cron.

One question. If I modify cron.d to run a "backup-to-disk" script in, say, /home/user/scripts, which uses the rsync command that we choose, will it then run every night at 10pm if the script looks like this:

Script sample:
#!/bin/bash
rsync -av /md1/data /sdc1/backupdisk1/
rsync -av /md1/data /sdc1/backupdisk2/

Cron sample:
cron.d
0 22 * * * /home/user/scripts/rsync_daily_data_backupdisks.sh

What about the -z and -h flag?
rsync -avzh /md1/data /sdc1/backups/

"-z" is compressing, OK, that's not interesting in this case because I wan't the data on the 2 internal backupdisks to be accessible/viewable, but "-h", what exactly does it do?? (-h : human-readable, output numbers in a human-readable format)


Please disregard the stuff written beneath! I've found a better place rolleyes.gif
https://help.ubuntu.com/community/CronHowto

I'm asking because even though I've been reading about this before, and again for the last couple of days, I still don't get how cron really works. E.g., how often does the system "check" cron @daily and that sort of thing? So I'm reading a lot of samples to figure it out. cron.hourly, cron.daily, cron.weekly & cron.monthly were the ones I believed had to be used! So these are what they say they are. And by viewing the files in them, it looks like a script can be written directly!? It's just not what I read!?

And here it sounds like you have to start the cron daemon to use it. I'm pretty sure it's running by default on Ubuntu Server 14.04 since I'm already using it for mdadm!! It's the root cron routines I'm looking for, to make these backup routines run in the background. Hate new stuff
Code:
NAME

       cron - daemon to execute scheduled commands (Vixie Cron)

SYNOPSIS

       cron [-f] [-l] [-L loglevel]

DESCRIPTION

       cron  is  started automatically from /etc/init.d on entering multi-user
       runlevels.

OPTIONS

       -f      Stay in foreground mode, don't daemonize.

       -l      Enable LSB compliant names for /etc/cron.d files. This setting,
               however,   does   not   affect   the  parsing  of  files  under
               /etc/cron.hourly,    /etc/cron.daily,    /etc/cron.weekly    or
               /etc/cron.monthly.

       -L loglevel
               Tell  cron what to log about jobs (errors are logged regardless
               of this value) as the sum of the following values:

                   1      will log the start of all cron jobs

                   2      will log the end of all cron jobs

                   4      will log all failed jobs (exit status != 0)

                   8      will log the process number of all cron jobs

               The default is to log the start of all jobs (1).  Logging  will
               be  disabled  if  levels is set to zero (0). A value of fifteen
               (15) will select all options.
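
Before going any further, a quick way to confirm the daemon really is running already (assuming Ubuntu's standard service tooling):
Code:
# service cron status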



OK then! Here's how to set it using user level stuff:
This is not the way, so let's carry on with the reading wink.gif
Code:
To use cron for tasks meant to run only for your user profile, add entries to your own user's crontab file. Start the crontab editor from a terminal window:

crontab -e

# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h  dom mon dow   command




CRON - USER LEVEL:
Actually this is a pretty fine sample/explanation!!
To define the time you can provide concrete values for minute (m), hour (h), day of month (dom), month (mon), and day of week (dow) or use '*' in these fields (for 'any').

Rule
m h dom mon dow command

Sample
For example, you can run a backup of all your user accounts at 5 a.m every week with:

0 5 * * 1 tar -zcf /var/backups/home.tgz /home/


So what will be the best way to do this? To use:

/etc/cron.d
or use
crontab -e
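
As far as I can tell, the main syntactic difference between the two is that a file dropped into /etc/cron.d needs an extra user field, while a personal crontab (crontab -e) does not. A small sketch using the script path from above:
Code:
# /etc/cron.d/daily-backup  (system-wide; note the extra "root" user field)
0 22 * * * root /home/user/scripts/rsync_daily_data_backupdisks.sh

# the same job in a personal crontab (crontab -e) - no user field
0 22 * * * /home/user/scripts/rsync_daily_data_backupdisks.sh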



OK, I think this will be the right command to use in the shell backup script, to back up data from the fileserver to the 2 backup drives:
Code:
#!/bin/bash
# mirror the data to both internal backup disks; --delete removes files no longer present at the source
rsync --delete -avv /home/ /backupdisk1/; S1=$?
rsync --delete -avv /home/ /backupdisk2/; S2=$?
# log success only if both rsync runs exited cleanly
[ "$S1" -eq 0 ] && [ "$S2" -eq 0 ] && \
echo "06.00 - Daily Backup Successful: $(date)" >> /home/admin/logs/mybackup.log


And for the upcoming replication function I think this could be a reasonably good way to do it. The -P flag is interesting: it keeps partially transferred files and shows progress, so if a transfer is interrupted, rsync can pick up where it left off when it reconnects:
Code:
#!/bin/bash
rsync --delete -azvvP -e "ssh -p 22xx" /home/ remoteuser@remotehost.remotedomain:/home/path/folder  <---- NOT FINISHED!
echo "06.00 - Daily Backup Successful: $(date)" >> /home/admin/logs/mybackup.log




.
Edited by DanHansenDK - 12/8/14 at 6:37pm
post #17 of 36
Thread Starter 
Hello friends,


Sorry for the break in the work....
I've started at university again and there has just been a lot to do...


The FileServer is getting there!
* I'm currently implementing VPN so that you can transfer and update (sync) your files over the web.
* I'm currently working on a script which limits the number of archives created by rar. This way it will be possible to set the number of "versions" to keep before the oldest archives are deleted, to avoid too much data and too many backups. E.g. 30, 60 or 90 days is what I'm aiming for, but any number can be set, of course.
* I've succeeded in implementing ClamAV. Scripts running via cron, of course.
* Rsync & rar are working perfectly. I'll explain why I'm using both later on...
* I've built several scripts that work along with the functions set to run on the rig: a script watching the backup disks' free space, a script running ClamAV/freshclam etc. (antivirus because of the Samba/Windows files involved), and a script checking the temperature of the disks. I'm using traditional disk drives on this server because it's supposed to be a fileserver with a high level of security/data safety. Whenever, and if ever, a drive crashes, a traditional hard drive makes it somewhat easier to recover files. But that's a story for another project

I'll update the ToDo and post the newest version at page 1 right in the top wink.gif

Have a nice day...

See you soon wink.gif
post #18 of 36
Thread Starter 
The script limiting the number of files "backed up" by rar is still being tested. I've got a version running, but it's nowhere near satisfying yet. I want the script to be easy to modify, so I've been building it in 3 parts: a shell script, a text file to be mailed in case of problems, and a cron part. I'll get back to you regarding this.

Since the script watching CPU & GPU temperatures is running well on the "Headless Linux CLI Multiple GPU Boinc Server", we'll continue with the hdd temperatures. I'll be calling this script "WatchdogHddTemp.sh". For the other scripts we were using lm-sensors; here we need to install hddtemp to make this work as well.

Code:
# apt-get install hddtemp
Code:
# df
Filesystem      1K-blocks    Used  Available Use% Mounted on
/dev/md1       1914911908 5772524 1811844444   1% /    <--------------- THE 2 RAID'ED DISKS - RAID1 (SET AS MULTIDISK1)
none                    4       0          4   0% /sys/fs/cgroup
udev              1853900       8    1853892   1% /dev
tmpfs              373004     976     372028   1% /run
none                 5120       0       5120   0% /run/lock
none              1865008       0    1865008   0% /run/shm
none               102400       0     102400   0% /run/user
/dev/sdb1       976759008 1462104  975296904   1% /media/backupdisk1
/dev/sdd1       961300008 1439628  911006048   1% /media/backupdisk2


Testing the installation:
Code:
# hddtemp /dev/sda
/dev/sda: WDC WD20EFRX-68EUZN0: 29°C
# hddtemp /dev/sdb
/dev/sdb: WDC WD10EZEX-08M2NA0: 28°C
# hddtemp /dev/sdc
/dev/sdc: WDC WD20EFRX-68EUZN0: 29°C
# hddtemp /dev/sdd
/dev/sdd: WDC WD10EZEX-08M2NA0: 28°C


To build the script we need to know the temperature numbers for the drives. Set AlertLevel as per your requirements, and refer to your hard disk's manual for its working temperature range. Here's a general temperature guideline:
Code:
Operating    0 to 60 degrees C
Nonoperating    -40 to 70 degrees C
Maximum operating temperature change    20 degrees C per hour
Maximum nonoperating temperature change 30 degrees C per hour
Maximum operating case temperature      69 degrees C

Regarding this script, we'll adapt the "WatchdogCpuTemp.sh" / "WatchdogGpuTemp.sh" scripts for this.
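
Just to show the direction, here's a minimal sketch of what WatchdogHddTemp.sh could end up looking like. The drive list, alert level and mail address are placeholders, and the mail alert assumes a working mail command (e.g. from mailutils):
Code:
#!/bin/bash
# WatchdogHddTemp.sh (sketch) - mail a warning if any drive runs too hot
ALERTLEVEL=45                        # degrees C, set as per your drives' specs
MAILTO="admin@example.com"           # placeholder address
for DRIVE in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    TEMP=$(hddtemp -n "$DRIVE")      # -n prints just the temperature value
    if [ "$TEMP" -ge "$ALERTLEVEL" ]; then
        echo "$(date): $DRIVE is at ${TEMP}C (limit: ${ALERTLEVEL}C)" \
            | mail -s "HDD temperature warning on $(hostname)" "$MAILTO"
    fi
done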

....ongoing work wink.gif
Edited by DanHansenDK - 11/5/15 at 5:22pm
post #19 of 36
Kudos on a way cool project especially for sharing the step-by-step at OCN. Proper job. One question - is it cost that caused you to choose software raid over cached hardware raid?
post #20 of 36
Thread Starter 
Hi Enorbet,

Quote:
Originally Posted by Enorbet View Post

Kudos on a way cool project especially for sharing the step-by-step at OCN. Proper job. One question - is it cost that caused you to choose software raid over cached hardware raid?

Cost, no... I've had a few hardware controllers doing stuff I don't like, worst of all not being consistent. Another reason is the possibility of setting up scripts, daemons etc. to alert you when an array isn't running as it's supposed to... Those are the primary reasons for me to choose software RAID, and also because I haven't had a single breakdown running the software RAID. I've been testing for 3-4 months now.

The basic idea of this system was a RAID'ed system which would keep running even if a disk (in the RAID) breaks down. Then you would still be able to run while that disk is down, buy a new one and rebuild the array. RAID1. But, that being said, I don't think we have sufficient security yet. We'll have to have rsync running to update files that are added, deleted or replaced on the Samba part, the fileserver part of the server. Doing that with the flag which can destroy/delete files (--delete) made me think: OK, we may have a little issue here regarding files that might get deleted unintentionally. Therefore I planned for a script using rar to make a "backup" as we know it. That script is running and working, but I'm currently writing and testing a script setting a maximum number of versions, e.g. 30 days of backups, and then deleting the oldest ones.


Sample of those rar files being made (as SFX archives - self-extracting, in 4GB volumes):
Code:
/backupdisk1/rar_daily-261015-110543.sfx
/backupdisk2/rar_daily-261015-110716.sfx
/backupdisk1/test_rar_daily-271015-005304.part1.sfx
/backupdisk1/test_rar_daily-271015-005304.part2.rar
/backupdisk1/test_rar_daily-271015-005304.part3.rar
/backupdisk1/test_rar_daily-271015-005304.part4.rar
/backupdisk1/rar_daily-271015-050001.sfx
/backupdisk2/rar_daily-271015-050140.sfx
/backupdisk1/rar_daily-281015-050001.sfx
/backupdisk2/rar_daily-281015-050126.sfx
/backupdisk1/rar_daily-291015-050001.sfx
/backupdisk2/rar_daily-291015-050125.sfx
/backupdisk1/rar_daily-301015-050001.sfx
/backupdisk2/rar_daily-301015-050124.sfx
/backupdisk1/rar_daily-311015-050001.sfx
/backupdisk2/rar_daily-311015-050125.sfx
/backupdisk1/rar_daily-011115-050002.sfx
/backupdisk2/rar_daily-011115-050136.sfx
/backupdisk1/rar_daily-021115-050001.sfx
/backupdisk2/rar_daily-021115-050126.sfx
/backupdisk1/rar_daily-031115-050001.sfx
/backupdisk2/rar_daily-031115-050126.sfx
/backupdisk1/rar_daily-041115-050001.sfx
/backupdisk2/rar_daily-041115-050126.sfx
/backupdisk1/rar_daily-051115-050001.sfx
/backupdisk2/rar_daily-051115-050126.sfx

Daily cron init logging:
Code:
CRON RAR 05.00 - Daily Rar Initiated: Mon Oct 26 11:05:43 CET 2015
CRON RAR 05.00 - Daily Rar Successful: Mon Oct 26 11:08:59 CET 2015
CRON RAR 05.00 - Daily Rar Initiated: Tue Oct 27 05:00:01 CET 2015
CRON RAR 05.00 - Daily Rar Successful: Tue Oct 27 05:03:20 CET 2015
CRON RAR 05.00 - Daily Rar Initiated: Wed Oct 28 05:00:01 CET 2015
CRON RAR 05.00 - Daily Rar Successful: Wed Oct 28 05:02:49 CET 2015
CRON RAR 05.00 - Daily Rar Initiated: Thu Oct 29 05:00:01 CET 2015
CRON RAR 05.00 - Daily Rar Successful: Thu Oct 29 05:02:56 CET 2015
CRON RAR 05.00 - Daily Rar Initiated: Fri Oct 30 05:00:01 CET 2015
CRON RAR 05.00 - Daily Rar Successful: Fri Oct 30 05:02:54 CET 2015
CRON RAR 05.00 - Daily Rar Initiated: Sat Oct 31 05:00:01 CET 2015
CRON RAR 05.00 - Daily Rar Successful: Sat Oct 31 05:02:52 CET 2015
CRON RAR 05.00 - Daily Rar Initiated: Sun Nov  1 05:00:01 CET 2015
CRON RAR 05.00 - Daily Rar Successful: Sun Nov  1 05:03:26 CET 2015
CRON RAR 05.00 - Daily Rar Initiated: Mon Nov  2 05:00:01 CET 2015
CRON RAR 05.00 - Daily Rar Successful: Mon Nov  2 05:02:58 CET 2015
CRON RAR 05.00 - Daily Rar Initiated: Tue Nov  3 05:00:01 CET 2015
CRON RAR 05.00 - Daily Rar Successful: Tue Nov  3 05:02:53 CET 2015
CRON RAR 05.00 - Daily Rar Initiated: Wed Nov  4 05:00:01 CET 2015
CRON RAR 05.00 - Daily Rar Successful: Wed Nov  4 05:02:55 CET 2015
CRON RAR 05.00 - Daily Rar Initiated: Thu Nov  5 05:00:01 CET 2015
CRON RAR 05.00 - Daily Rar Successful: Thu Nov  5 05:02:54 CET 2015

Sample of a script running - checking disk usage, hard drives only of course, with a mail alert when usage reaches x%. But that's not good enough... We need a better solution, as described above.
Code:
Diskspace is still sufficient "/dev/md1 (1%)" on *** as on Thu Oct 29 07:00:01 CET 2015
Diskspace is still sufficient "udev (1%)" on *** as on Thu Oct 29 07:00:01 CET 2015
Diskspace is still sufficient "tmpfs (1%)" on *** as on Thu Oct 29 07:00:01 CET 2015
Diskspace is still sufficient "/dev/sdb1 (1%)" on *** as on Thu Oct 29 07:00:01 CET 2015
Diskspace is still sufficient "/dev/sdd1 (1%)" on *** as on Thu Oct 29 07:00:01 CET 2015
Diskspace is still sufficient "/dev/md1 (1%)" on *** as on Fri Oct 30 07:00:01 CET 2015
Diskspace is still sufficient "udev (1%)" on *** as on Fri Oct 30 07:00:01 CET 2015
Diskspace is still sufficient "tmpfs (1%)" on *** as on Fri Oct 30 07:00:01 CET 2015
Diskspace is still sufficient "/dev/sdb1 (1%)" on *** as on Fri Oct 30 07:00:01 CET 2015

The script I'm building and testing is, as said, still in progress... But here's some of it. There's a pretty good chance that I'll finish it over the weekend. All other scripts and setups are running and working. I haven't updated the stuff I posted earlier on, but I'll (hopefully) do that over the weekend too. I'll try to make it anyway...

Testing... Trying...
Code:
ls -l | egrep -o "[0-9]{2}[0-9]{2}[0-9]{2}-[0-9]{2}[0-9]{2}[0-9]{2}" | sort | uniq | head -n -4  | xargs printf "backup-%s.rar\n" | xargs rm

find /path/to/files -mtime +15 -exec rm {} \;

find /path/to/dir -type f -mtime +15 -exec rm {} +
Code:
# ls -l /backupdisk1/
total 2732620
drwxr-xr-x 4 ***      ***            4096 Oct 25 04:31 ***
drwxr-xr-x 8 ****** ******       4096 Oct 25 05:07 ******
drwxr-xr-x 3 root     root           4096 Dec  8  2014 shares
-rwxr--r-- 1 root     root      400000000 Oct 27 00:53 test_rar_daily-271015-005304.part1.sfx
-rw-r--r-- 1 root     root      400000000 Oct 27 00:53 test_rar_daily-271015-005304.part2.rar
-rw-r--r-- 1 root     root      400000000 Oct 27 00:54 test_rar_daily-271015-005304.part3.rar
-rw-r--r-- 1 root     root      199122718 Oct 27 00:54 test_rar_daily-271015-005304.part4.rar
Code:
# ls -l /backupdisk1/ | grep rar_daily
-rwxr--r-- 1 root     root      400000000 Oct 27 00:53 test_rar_daily-271015-005304.part1.sfx
-rw-r--r-- 1 root     root      400000000 Oct 27 00:53 test_rar_daily-271015-005304.part2.rar
-rw-r--r-- 1 root     root      400000000 Oct 27 00:54 test_rar_daily-271015-005304.part3.rar
-rw-r--r-- 1 root     root      199122718 Oct 27 00:54 test_rar_daily-271015-005304.part4.rar
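
To tie those pieces together, here's a minimal sketch of the retention step I'm aiming for - keep only the last N days of rar archives on each backup disk. The 30-day limit and the filename pattern are placeholders while I keep testing:
Code:
#!/bin/bash
# prune_rar_archives.sh (sketch) - delete rar/sfx backup archives older than KEEPDAYS
KEEPDAYS=30                                   # days of archives to keep
for DISK in /backupdisk1 /backupdisk2; do
    find "$DISK" -maxdepth 1 -type f \
        \( -name 'rar_daily-*.sfx' -o -name 'rar_daily-*.rar' \) \
        -mtime +"$KEEPDAYS" -exec rm {} +
done
echo "Pruned archives older than $KEEPDAYS days: $(date)" >> /home/admin/logs/mybackup.log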


BTW, we are running ClamAV, of course. I haven't posted the scripts and the update routines yet, but it has been running for 14 days now. Tested with the EICAR test file and it works perfectly. It "isolates" infected files under a folder I named "infectedfiles" and warns you with an email telling you what to do... I'll try to get this up here along with the rest.


Thanks for the interest wink.gif
Have a nice one wink.gif

Dan
Edited by DanHansenDK - 11/5/15 at 5:14pm