[BBLISA] System Backup thoughts and questions...

Jurvis LaSalle jurvis at gmail.com
Thu Jan 8 17:01:20 EST 2009


Using rsync should be faster, too, since I imagine very few of those
114k files have actually changed.  There are a few wrapper scripts for
rsync that improve the experience specifically for backups.  This link
might send you down a helpful path:
http://www.google.com/search?q=rdiff-backup+vs+rsnapshot
Can't vouch for either one as I'm stuck using something dreadful for  
the time being.
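
For what it's worth, a minimal nightly mirror with plain rsync might
look something like this (the paths are placeholders; adjust for your
layout):

rsync -aH --delete /scsi/web/ /backup/scsi/web/

-a preserves permissions, ownership, and timestamps, -H preserves hard
links, and --delete prunes files from the backup that no longer exist
on the source.  Tools like rsnapshot build on the same idea, using
rsync's --link-dest option to keep several hard-linked snapshots
without duplicating unchanged files.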

JL

On Jan 8, 2009, at 4:54 PM, David Allan wrote:

> I think there are probably as many answers to this question as there  
> are members of this list, but I have found tar to be a simple and  
> effective solution for this sort of problem, although I can't say  
> I've tried it on anything approaching that number of files:
>
> tar cf - /source/directory | ( cd /backup/directory ; tar xvf - )
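>
> If you'd rather not end up with the full source path repeated under
> the destination (tar strips the leading slash but keeps the rest of
> the path), a variant along these lines should also work, though I
> haven't tested it at that scale either:
>
> tar cf - -C /source/directory . | ( cd /backup/directory ; tar xpf - )
>
> -C changes into the source directory before archiving, and -p on
> extraction preserves the original permissions.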
>
> Looking forward to the discussion thread,
> Dave
>
>
> On Thu, 8 Jan 2009, Richard 'Doc' Kinne wrote:
>
>> Hi Folks:
>>
>> I'm looking at backups - simple backups right now.
>>
>> We have a strategy where an old computer has a large external,
>> removable hard drive attached. Directories - large directories -
>> that we have on our other production servers are mounted on this  
>> small computer via NFS. A cron job then does a simple "cp" from the  
>> NFS-mounted production drive partitions to the large, external,
>> removable hard drive.
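>>
>> (Concretely, the cron job is of roughly this shape - the paths here
>> are made up for illustration, with /mnt/prod as the NFS mount point
>> and /backup as the external drive:
>>
>> 0 2 * * * cp -a /mnt/prod/scsi/web /backup/scsi/
>>
>> where "cp -a" copies recursively while preserving permissions and
>> timestamps.)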
>>
>> I thought it was an elegant solution, myself, except for one small,  
>> niggling detail.
>>
>> It doesn't work.
>>
>> The process doesn't copy all the files. Oh, we're not having a  
>> problem with file locks, no. When you do a "du -sh <directory>"  
>> comparison between the /scsi/web directory on the backup drive and  
>> the production /scsi/web directory the differences measure in the  
>> GB. For example, my production /scsi partition has 62GB on it. The
>> most recent backup has only 42GB!
>>
>> What our research found is that the cp command apparently has a  
>> limit of copying 250,000 inodes. I have image directories on the  
>> webserver that have 114,000 files, so this is the limit I think I'm
>> running into.
>>
>> While I'm looking at solutions like Bacula and Amanda, etc., I'm  
>> wondering if rsyncing the files might work.  Or will I run into the
>> same limitation?
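>>
>> (I suppose a dry run, something like
>>
>> rsync -an --stats /scsi/web/ /backup/scsi/web/
>>
>> with /backup standing in for wherever the external drive is mounted,
>> would at least report how many files rsync sees without copying
>> anything, and so show whether it stops at the same point.)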
>>
>> Any thoughts?
>> ---
>> Richard 'Doc' Kinne, [KQR]
>> American Association of Variable Star Observers
>> <rkinne @ aavso.org>



