[BBLISA] iSCSI - opinions / experiences?

Eddy Harvey bblisa2 at nedharvey.com
Thu Jun 15 10:40:30 EDT 2006


> > . . . in my personal experience, nfs has a tendency to hang up the 
> > system whenever there's a network outage or you reboot the 
> nfs server. 
> > You don't have that problem with SAN, because the network is more 
> > reliable and there is no nfs server.
> 
> I will match my NetApp NFS server against any SAN box and 
> show you as good if not better numbers for speed and 
> reliability. 

I agree, my NetApp is awesome too.  But the fault isn't in the server; it's
in the client.  If you have a Solaris or Linux box mounting NFS from the
NetApp, and then you reboot the NetApp (rare, but it happens), the
Solaris/Linux machine can end up with stale NFS file handles, and sometimes
that leaves it unable even to reboot.  On numerous occasions I've had to
pull the power cord on a Solaris/Linux box because it couldn't see the
NetApp.  Not even root could log in on the local console, and not even
pressing the power button to signal init 0 would help.
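One mitigation worth mentioning (my own suggestion, not something discussed
above) is mounting with soft/intr so NFS calls eventually return an error
instead of blocking forever when the server goes away.  A sketch of an
/etc/fstab entry -- the hostname, export path, and timeout values here are
illustrative:

	# soft: give up after timeo*retrans instead of retrying forever
	# intr: allow signals to interrupt hung NFS calls
	# caveat: soft mounts can silently lose writes during an outage
	netapp:/vol/home  /mnt/home  nfs  soft,intr,timeo=30,retrans=3  0 0

The trade-off is real: a hard mount hangs but never corrupts data, while a
soft mount keeps the client alive at the risk of failed writes.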

Also, I spent a fair amount of time measuring file I/O over NFS, and
generally found it 1/8 to 1/10 as fast as block-level network access.  I
dare you to measure and compare.  Even with some serious tweaking of
parameters, or switching to UDP instead of TCP, NFS is much slower than
block-level access.  In both cases I was running the I/O across gigabit
ethernet.

In one case I even used a crossover cable to eliminate the possibility of
the switch causing the delays.

Incidentally, this is the test I was using:
	time dd if=/dev/zero of=/some/file bs=1024k count=1000
	(tried the above over gigabit NFS, and again over the gigabit
pleiades SAN)
	time ( dd if=/dev/zero bs=1024k count=1000 | rsh somehost "cat >
somefile" )
	(dd writes to stdout by default, so the piped version needs no of=)
Results:
	NFS was 8-10 times slower than either rsh or the SAN.
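To turn the raw `time` output into a number you can compare directly, it's
easy to compute MB/s by hand.  A minimal sketch (the /tmp path and 100 MB
size are my own illustration, not the original test; assumes bash and GNU
date for nanosecond timestamps):

```shell
# Time a 100 MB sequential write and report throughput in MB/s.
start=$(date +%s%N)                                    # ns since epoch
dd if=/dev/zero of=/tmp/nfs_bench bs=1024k count=100 2>/dev/null
end=$(date +%s%N)
# 100 MB divided by elapsed seconds (awk handles the float math)
mbps=$(awk -v ns=$((end - start)) 'BEGIN { printf "%.1f", 100 / (ns / 1e9) }')
echo "wrote 100 MB at ${mbps} MB/s"
rm -f /tmp/nfs_bench                                   # clean up test file
```

Point of=/tmp/nfs_bench at the NFS mount and then at the SAN volume, and
the ratio of the two numbers is the comparison above.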


> As for hanging the system, well, NFS might or might not hang 
> but the system won't necessarily crash. When you SAN switch 
> or network dies, it's like ripping the system disk out from 
> under a running system -- you
> *will* crash.

I'll admit that a network crash on a SAN is worse than a network crash on a
NAS.  Plain and simple, you've got to avoid these problems by using decent
hardware - Cisco, Brocade, whatever.  As long as it's expensive it's
probably good.





More information about the bblisa mailing list