[BBLISA] Faster than 1G Ether ... ESX to ZFS

Edward Ned Harvey bblisa4 at nedharvey.com
Tue Nov 16 20:52:26 EST 2010


> From: Sean Lutner [mailto:slutner at rentul.net]
> 
> I run a somewhat sizable virtual environment with almost 50 ESX hosts and
> up to 50 guests per host. The environment has over 1,000 VMs and we don't
> have a single ESX host with more than 1GbE NICs. Our storage devices
> (Netapp) all have 10GbE out the back. We have never seen a bottleneck on
> the host side using 1GbE connections. We split the hosts into three
> networks; 1 for vkernel and storage (all NFS), 1 for guest networking and 1
> for the service console/vmotion. These are all bonded/teamed failover pairs.
> Separating your traffic like this is also a best practice and something I can
> highly recommend you do. Having a full 1GbE connection dedicated to all
> these things is almost certainly more than enough. What are you doing in
> your environment that you need 10GbE?

That doesn't make any sense.  I measured a single 10krpm SAS disk sustaining
1Gbit/sec.  Typically these are attached via 6Gbit sas/sata buses, in a raid
configuration, precisely to exceed the performance of a single disk.
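
To put rough numbers on it, here's a minimal back-of-the-envelope sketch.
The 150 MB/s figure is an assumption, just a ballpark in line with the
~1Gbit/sec I measured, not a spec sheet number:

    # Back-of-the-envelope: one 10krpm SAS disk vs. a single 1GbE link.
    # The 150 MB/s sequential rate is an assumed ballpark, not a spec.
    disk_mb_per_s = 150.0
    disk_gbit_per_s = disk_mb_per_s * 8 / 1000
    gige_gbit_per_s = 1.0                      # raw line rate of one 1GbE NIC

    print("one disk:  %.2f Gbit/s" % disk_gbit_per_s)   # ~1.2 Gbit/s
    print("1GbE link: %.2f Gbit/s" % gige_gbit_per_s)
    print("ratio:     %.2f" % (disk_gbit_per_s / gige_gbit_per_s))

So a single spindle already fills the pipe, before you even build a raid set.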

Maybe your configuration always has light IO.  Or you just suffer
performance degradation and never know it, because you've never seen
anything better.

Generally speaking, I am supporting engineers with engineering tools, in an
SGE cluster.  There is a central file server and a bunch of compute nodes.
The compute nodes all need locally attached disk, to avoid the bottleneck of
every system hammering on a centralized server.  If anybody accidentally
misconfigures their jobs and causes a bunch of machines to all hammer on the
central server, everybody notices the slowdown.  The central server uses 4x
1Gb bonded NICs.
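
To see why everybody notices, here's a rough sketch of the per-node share
(the node counts are just assumptions for illustration):

    # Rough per-node share when N compute nodes all hit the central server.
    # 4x 1Gb bonded tops out around 4 Gbit/s aggregate, and bonding hash
    # imbalance usually means you see less than that.
    bonded_gbit = 4.0
    for nodes in (4, 16, 64):          # assumed job sizes, for illustration
        per_node_mb = bonded_gbit / nodes * 1000 / 8
        print("%3d nodes: ~%.0f MB/s each" % (nodes, per_node_mb))

Anything beyond a handful of nodes and each one sees a small fraction of
what its local disk could do.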

Most of that is irrelevant right now, though; at present it's academic.  I
don't have a customer demand, I just want to know how for the sake of knowing
how.  Imagine you have a Windows fileserver, and a Linux server, and a
whatever.  You want them virtualized, and you're running ESXi.  Well, ESXi
does a terrible job of managing locally attached storage...  There are no
raid configuration or monitoring tools, you can't use Dell OpenManage or
MegaCLI, etc.  If you lose disks, or need to reconfigure disks or manage
hotspares for any reason, generally you have to shut down ESXi to do it.
However, ESXi is excellent at accessing NFS and iSCSI over the network.  And
ZFS is excellent at serving NFS and iSCSI.  And ZFS is excellent at backups
and managing raid, etc.  So a diskless (or minimal-disk) ESXi server forms an
excellent partnership with a ZFS server...  And it all comes down to what
performance limitations the network interconnect introduces.
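
If you want to put a number on that interconnect before trusting it, here's
a rough sketch of a write-throughput probe you could run from a client with
the ZFS box's NFS export mounted.  The mount point and transfer size are
assumptions, and server-side caching can flatter the result:

    # Crude NFS write-throughput probe: stream 1 GiB to the mount, report MB/s.
    # Only a sanity check of the network path, not a real benchmark.
    import os, time

    nfs_path = "/mnt/zfsbox/throughput.test"  # hypothetical NFS mount point
    chunk = b"\0" * (1 << 20)                 # 1 MiB per write
    total_mib = 1024

    start = time.time()
    with open(nfs_path, "wb") as f:
        for _ in range(total_mib):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())                  # push the data over the wire
    elapsed = time.time() - start

    print("%d MiB in %.1f s: %.0f MB/s (~%.2f Gbit/s)"
          % (total_mib, elapsed, total_mib / elapsed,
             total_mib / elapsed * 8 / 1000))

On a single 1GbE link that will top out somewhere around 100-115 MB/s, which
is exactly the ceiling I'm complaining about below.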

I just can't bear the idea of optimizing all my 6Gbit disks for performance,
only to funnel them all across a 1Gbit bottleneck.



