[BBLISA] System Backup thoughts and questions...

Edward Ned Harvey bblisa3 at nedharvey.com
Thu Jan 8 21:34:00 EST 2009


FWIW,

I switched from tar to rsync because my backups, at a few hundred gigs, were too large for tar.  Also worth mentioning: that was Mac OS X 10.4 Tiger tar, not Linux whatever-version tar, so there's a possibility the limitation is OS-specific.
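
For illustration, the sort of invocation I mean looks roughly like this (the paths are just placeholders for the NFS-mounted source and the backup drive; adjust for your own layout):

  rsync -aHv --delete /production/scsi/web/ /backup/scsi/web/

-a preserves permissions, ownership, timestamps, and symlinks; -H keeps hard links intact; --delete removes files from the backup that no longer exist on the source, so the copy stays an exact mirror.  The trailing slash on the source means "copy the contents of this directory" rather than the directory itself.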



> -----Original Message-----
> From: bblisa-bounces at bblisa.org [mailto:bblisa-bounces at bblisa.org] On
> Behalf Of David Allan
> Sent: Thursday, January 08, 2009 4:54 PM
> To: Richard 'Doc' Kinne
> Cc: bblisa at bblisa.org
> Subject: Re: [BBLISA] System Backup thoughts and questions...
> 
> I think there are probably as many answers to this question as there are
> members of this list, but I have found tar to be a simple and effective
> solution for this sort of problem, although I can't say I've tried it on
> anything approaching that number of files:
> 
> tar cf - /source/directory | ( cd /backup/directory ; tar xvf - )
> 
> Looking forward to the discussion thread,
> Dave
> 
> 
> On Thu, 8 Jan 2009, Richard 'Doc' Kinne wrote:
> 
> > Hi Folks:
> >
> > I'm looking at backups - simple backups right now.
> >
> > We have a strategy where an old computer is mounted with a large external,
> > removable hard drive. Directories - large directories - that we have on our
> > other production servers are mounted on this small computer via NFS. A cron
> > job then does a simple "cp" from the NFS-mounted production drive partitions
> > to the large, external, removable hard drive.
> >
> > I thought it was an elegant solution, myself, except for one small,
> > niggling detail.
> >
> > It doesn't work.
> >
> > The process doesn't copy all the files. Oh, we're not having a problem
> > with file locks, no. When you do a "du -sh <directory>" comparison between
> > the /scsi/web directory on the backup drive and the production /scsi/web
> > directory, the differences measure in the GB. For example, my production
> > /scsi partition has 62GB on it. The most recently done backup has 42GB on it!
> >
> > What our research found is that the cp command apparently has a limit of
> > copying 250,000 inodes. I have image directories on the webserver that have
> > 114,000 files, so this is the limit I think I'm running into.
> >
> > While I'm looking at solutions like Bacula and Amanda, etc., I'm wondering
> > if RSYNCing the files may work.  Or will I run into the same limitation?
> >
> > Any thoughts?
> > ---
> > Richard 'Doc' Kinne, [KQR]
> > American Association of Variable Star Observers
> > <rkinne @ aavso.org>
> >
> >
> >
> 
> _______________________________________________
> bblisa mailing list
> bblisa at bblisa.org
> http://www.bblisa.org/mailman/listinfo/bblisa




