[BBLISA] Fileserver opinion

Sean Lutner slutner at rentul.net
Wed Aug 11 19:19:28 EDT 2010


Answer all the questions you've posed to the list before you even begin to define what the solution looks like. It may very well turn out that what you've described isn't sufficient for your needs, or that it's heavily overbuilt.

Start with requirements, then find things that meet those requirements, not the other way around.

That being said, some of the questions are general and can be discussed.

- No, don't put VMs on the same storage that will also provide general purpose file services. Bad idea. The mismatch of workloads will cause poor performance.

- You can look at 10GbE cards and switches, but if you can get away with it you could bond multiple NICs on the host for higher throughput. 
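
  Once a bond is up, it's worth sanity-checking that all the slaves actually joined. Here's a rough sketch of what I mean (Python, assuming a Linux host and a bond named bond0 -- adjust the name for your setup); it reads /proc/net/bonding and reports the mode and slave link status:

    # Rough sketch, not a polished tool: report the bonding mode and per-slave
    # link status for a Linux bond.  The interface name "bond0" is an
    # assumption; change it to match your host.
    import os

    BOND = "/proc/net/bonding/bond0"   # assumed bond interface name

    def bond_summary(path=BOND):
        if not os.path.exists(path):
            return "bonding not configured (no %s)" % path
        mode, slaves, current = "unknown", [], None
        with open(path) as f:
            for line in f:
                if line.startswith("Bonding Mode:"):
                    mode = line.split(":", 1)[1].strip()
                elif line.startswith("Slave Interface:"):
                    current = line.split(":", 1)[1].strip()
                elif line.startswith("MII Status:") and current:
                    slaves.append("%s=%s" % (current, line.split(":", 1)[1].strip()))
                    current = None
        return "mode: %s; slaves: %s" % (mode, ", ".join(slaves) or "none")

    print(bond_summary())

  Keep in mind that bonding aggregates throughput across multiple clients; a single client TCP stream will still top out at one link's speed.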

- Current throughput can be roughly estimated with iostat run at short intervals over a long period of time, e.g. every 5 seconds for 12-24 hours, and then parsing the output to draw some loose conclusions.
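
  Something along these lines is what I have in mind (a rough Python sketch; it assumes the Linux sysstat iostat, a device named sda, and the older column order where rkB/s and wkB/s are the 6th and 7th fields -- verify against a manual iostat run before trusting it):

    # Rough sketch, not a finished tool: sample "iostat -dxk" every 5 seconds
    # and keep running read/write throughput numbers.  DEVICE and the column
    # positions for rkB/s and wkB/s are assumptions that vary between sysstat
    # versions -- check them against a manual iostat run first.
    import subprocess

    DEVICE = "sda"      # assumed device of interest
    INTERVAL = 5        # seconds between samples
    SAMPLES = 720       # 720 x 5s = 1 hour; raise this for a 12-24 hour run

    def summarize():
        out = subprocess.check_output(
            ["iostat", "-dxk", str(INTERVAL), str(SAMPLES + 1)],
            universal_newlines=True)
        reads, writes = [], []
        for line in out.splitlines():
            fields = line.split()
            if fields and fields[0] == DEVICE:
                reads.append(float(fields[5]))   # rkB/s (assumed column)
                writes.append(float(fields[6]))  # wkB/s (assumed column)
        # The first report is the average since boot, not a live sample.
        reads, writes = reads[1:], writes[1:]
        if reads:
            print("avg read  KB/s %.1f  (peak %.1f)" % (sum(reads) / len(reads), max(reads)))
            print("avg write KB/s %.1f  (peak %.1f)" % (sum(writes) / len(writes), max(writes)))
            print("read:write ratio %.2f" % (sum(reads) / max(sum(writes), 1e-9)))

    summarize()

  The read:write ratio from a run like this is also a reasonable starting point for the controller cache balance mentioned below.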

- In my experience, with the workload you've described you will not see any appreciable performance difference from using SSDs.

As others have mentioned, you definitely want battery-backed write cache on the controller cards, and make sure you set the read/write cache balance appropriately for the workloads.

On Aug 11, 2010, at 1:55 PM, Ian Stokes-Rees wrote:

> 
> Diligent readers will recall the thread a few weeks ago on slow disk
> performance with a PATA XRaid system from Apple (HFS, RAID5).  Having
> evaluated the situation, we're looking to get a new file server that
> combines some fast disk with some bulk storage.  We have a busy web
> server that is mostly occupied with serving static content (read only
> access), some dynamic content (Django portal with mod_python/httpd), and
> then scientific compute users who do lots of writes (including a 100
> core cluster).
> 
> We have about a $10k budget (ideally $8k).  The current plan looks
> roughly like this:
> 
> AMD quad socket MB
> 1x12-core AMD CPU
> 8 GB RAM
> 2x160 GB 7200 RPM SATA drives for system software
> 11x300 GB 15000 RPM SAS2 fast storage (RAID10 + 1 hot swap, 1.5 TB volume)
> 5x2 TB 7200 RPM SATA drives (RAID10 + 1 hot swap, 4 TB volume)
> 
> A 3U chassis will be filled, and the 4U chassis will have some empty bays.
> 
> We can also upgrade processors and RAM as funds become available and the
> need arises.
> 
> This will support a compute cluster (~100 cores), 10-20 users (typically
> 3-4 active), and a busy web server.
> 
> Besides the obvious question of whether this setup is sensible/cost
> efficient (mixing two kinds of storage, etc.), the main unknowns we have
> are:
> 
> 1. Should we consider running a VM on this same server and host e.g. the
> web server on a VM which accesses files through the virtualization
> layer, rather than a physical network interconnect.
> 
> 2. What combination of network filesystem and local filesystem makes
> sense? (currently NFS + ext4 is on the cards)
> 
> 3. Should we consider alternatives to GigE for interconnect.
> 
> 4. How can we estimate our IOPs and throughput requirements?
> 
> 5. Perspectives on SLC SSDs vs. SAS2 w/ 15k drives, since we could
> probably transfer the 11x300 GB SAS2 drive budget to a collection of
> SSDs and live with the reduced storage if that was expected to have a
> big performance benefit.
> 
> Thanks in advance for any opinions on this.
> 
> Ian
> 


