[BBLISA] Fwd: Re: Fileserver opinion

John Stoffel john at stoffel.org
Mon Aug 30 15:49:44 EDT 2010


>>>>> "Daniel" == Daniel Feenberg <feenberg at nber.org> writes:

Daniel> We have Netapp equipment, and it has done well for us, but the
Daniel> Netapp fan club on this list is seriously underestimating the
Daniel> relative cost of home-brew to Netapp equipment, and they
Daniel> should not be ridiculing the OP for considering a home-brew
Daniel> store.

I'm a fan of Netapp equipment and the OS, but even I see its
shortcomings and weaknesses.  We justify the cost because they're rock
solid and they save us a ton of money by NOT going down on us.  We
have engineers designing products for customers, and the cost of
downtime is really quite high.

We've also been using Netapps for years and years and are comfortable
with what they provided.  But we did look at Sun's ZFS 7000 series
boxes back in 2008 when we last did a refresh of Netapp hardware.  It
was *very* tempting to go the Sun route back then, but a couple of
things stopped us:

1. Lack of quotas and reporting on disk usage on a per-user basis, so
we could lay the blame at the proper engineer's or manager's feet when
we needed to force disk space cleanup.  (A rough sketch of that kind
of report follows this list.)

2. The hardware/software was quite new, and the migration from
existing Netapps to ZFS would not be as transparent or as easy as a
Netapp to Netapp transition.
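
For what it's worth, the per-user reporting in point 1 doesn't have to
come from the filer itself.  Here's a rough sketch, not our actual
script, of pulling the same numbers on any Unix box by walking the
tree and summing blocks per owner; the path and output format are made
up for illustration:

    #!/usr/bin/env python
    # Sketch: report disk usage per owner under a project tree, so you
    # know whose manager to chase when you need space back.
    import os, pwd
    from collections import defaultdict

    usage = defaultdict(int)
    for root, dirs, files in os.walk("/export/projects"):  # hypothetical path
        for name in files:
            try:
                st = os.lstat(os.path.join(root, name))
            except OSError:
                continue                      # vanished or unreadable; skip it
            usage[st.st_uid] += st.st_blocks * 512  # blocks actually allocated

    for uid, nbytes in sorted(usage.items(), key=lambda kv: -kv[1]):
        try:
            user = pwd.getpwuid(uid).pw_name
        except KeyError:
            user = str(uid)                   # uid with no passwd entry
        print("%-12s %8.1f GB" % (user, nbytes / 1e9))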


On the other hand, Netapp sucks big time because of their asinine
limits on aggregate size, which limit the size of volumes you can
have, etc.  OnTap 8 does raise the limits, but in the usual vendor way
of tiering the limits by the hardware you buy, which is stupid since
the hardware is mostly the same between models.  Grrr...

They also shaft you with licenses for every little thing, adding a
whole bunch of cost.  Not fun.

Daniel> A simple Linux or FreeBSD box with 12 2TB drives can be
Daniel> assembled for $3,000. I wouldn't put a RAID 5 on it (because
Daniel> Linux and FreeBSD don't do a good job of reconstruction after
Daniel> a drive failure - see
Daniel> http://www.nber.org/sys-admin/linux-nas-raid.html ) but a RAID
Daniel> 1 will provide reliable, but not fast storage. Suppose that
Daniel> the formatted storage capacity is a third the total drive
Daniel> capacity - that makes the cost about $375 per TB.
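
Daniel's per-TB figure follows directly from his assumptions; spelling
out the arithmetic (his numbers, not mine):

    # 12 drives x 2 TB = 24 TB raw; assume a third of that survives
    # mirroring and formatting, per Daniel's estimate.
    raw_tb    = 12 * 2        # 24 TB raw
    usable_tb = raw_tb / 3.0  # ~8 TB usable
    cost      = 3000.0        # whole box
    print(cost / usable_tb)   # -> 375.0 dollars per usable TB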

So what happens when two disks fail?  How do you get notified, and can
you rebuild your parity/redundancy without *any* downtime or even any
real notice by your users?

I only ask because this is a hard thing to get right, and it takes
time and effort to test properly.  90% of all scripting and coding
seems to be bounds and error checking, not to mention checkpointing
state and rolling back or committing changes when you know you can.
Computers are lousy at handling exceptions well, while humans are
great at it.  Humans are bad at repetitive tasks which have to be done
quickly and accurately all the time.
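
To make the notification question concrete: on Linux, mdadm --monitor
will mail you on a failure if you configure it, but you still have to
prove to yourself that the mail actually goes out and that someone
reads it.  A minimal sketch of the kind of independent check I mean
(the address and subject are made up; this is not a tested script):

    #!/usr/bin/env python
    # Grep /proc/mdstat for a degraded member ("_" in the [UU...] status
    # string) and mail root if anything looks wrong.
    import re, smtplib
    from email.mime.text import MIMEText

    ADMIN = "root@localhost"              # hypothetical recipient

    with open("/proc/mdstat") as f:
        mdstat = f.read()

    # Status strings look like [UU], [U_], [UUU_]; "_" marks a missing
    # or failed disk in that array.
    degraded = [s for s in re.findall(r"\[[U_]+\]", mdstat) if "_" in s]

    if degraded:
        msg = MIMEText("Degraded md status in /proc/mdstat:\n\n" + mdstat)
        msg["Subject"] = "RAID degraded: %s" % " ".join(degraded)
        msg["From"] = "root"
        msg["To"] = ADMIN
        s = smtplib.SMTP("localhost")
        s.sendmail("root", [ADMIN], msg.as_string())
        s.quit()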

Daniel> I have a recent quote from Netapp, including a discount of
Daniel> undetermined size (the list prices were not available) for
Daniel> three FAS2040 SATA-based systems with 12, 24 and 48 TB of
Daniel> disk. With RAID 4 the Netapp is a little more space efficient,
Daniel> and probably half the raw capacity is usable. The cost per TB
Daniel> ranges from $7,500 down to $3,500 and includes 3 years of
Daniel> maintenance, but only NFS software - no CIFS. That is about
Daniel> 10 to 20 times the price of home-brew. Of course the Netapp is
Daniel> much faster, but as "features" go, FreeBSD is quite
Daniel> competitive; indeed it can do many things the Netapp can't,
Daniel> such as run rsync, or can't do without paying extra, such as
Daniel> run CIFS.

Sure, Netapp is not cheap.  And if I were in a university or lab
environment, I'd jump all over ZFS and use it at the drop of a hat.
But there are differences in tolerance for downtime and the costs
associated with it.

Especially when you throw in leasing vs. capital expenditures.  In
non-profits, leasing seems to be non-existent, capital is hard but not
impossible to get, and people costs are lower.  So if you buy
something, you tend to keep it forever.

Daniel> The most discouraging aspect of the Netapp is what happens
Daniel> when the original 3 year service contract runs out. On our 4
Daniel> year old FAS3040 with about 3TB of storage, the maintenance
Daniel> is $7,000/year and is not on-site.  The high price is intended
Daniel> to prevent users from keeping old Netapps, and it largely
Daniel> succeeds at that objective. There is rarely any point in
Daniel> adding shelves to an existing Netapp head - the maintenance
Daniel> on the head will make the shelves totally uneconomic long
Daniel> before they are obsolete. We have not decided yet what we will
Daniel> do, but we will not renew maintenance!

Heh, for that cost, I'd probably just buy some disks for the Netapp,
let maintenance lapse, and live with it.  Since they are quite solid,
I think it's a gamble worth taking.

Daniel> It is important not to take the view that if A is better than
Daniel> B, then B is no good, or to make decisions based on rules of
Daniel> thumb appropriate for situations far different from your own,
Daniel> or to assert that allowing price to trump quality is
Daniel> unprofessional. All of those are errors of thought that tend
Daniel> to result in systems that are gold-plated, but of insufficient
Daniel> capacity. It is common in university settings to offer faculty
Daniel> and students totally inadequate storage quantity, on totally
Daniel> over-engineered storage systems. A typical research project
Daniel> can withstand a storage outage once a year (but not lost data)
Daniel> far more easily than it can withstand bumping against a tiny
Daniel> storage quota every day.

Like the poster originally said, they're limited to $10k in hardware
costs (though obviously they can spend more in people costs), so they
want to maximize the bang for the buck.

So, in conclusion, it all depends on your own needs and requirements
for what you buy.  Do I like Netapp?  Yes!  Would I move to another
vendor who offered a better deal and fit my needs?  Sure!

John


