[BBLISA] physicalization

Dean Anderson dean at av8.com
Thu Jul 1 00:02:26 EDT 2010


I missed the original message thread due to some problems at AV8 last
weekend.  (We moved to a larger datacenter in the Schrafft building and
killed some equipment in the move. Oops.)  But my full-time work on HPC
is probably relevant...

I saw some interesting info on HPC regarding FPGA accelerators at SIFMA
in NYC last week. I can't think of the company right now; 'acellerize'
or something like that.  I also attended an Amazon event on their cloud
architecture. Nothing really new, and Steve Reilly (Amazon Security
wizard) incorrectly stated that there is a flaw in AES-256. There isn't:
what was discovered is a related-key attack against the full 14-round
cipher. That was unexpected, but it is not a practical flaw; AES-256 is
secure.  Yeah, that was re-assuring.

So... I take it you are thinking about building a grid with Atoms...

What technology to use depends greatly, if not entirely, on your problem
space. On that count, nothing has really changed in 20 years, since I
was at KSR and KSR was building supercomputers in competition with Cray
and Thinking Machines.  Today, KSR's shared-memory MPP approach is
probably best represented by ScaleMP (who also just hit ~1000 processors
in a single system; about the same number as KSR). The NUMA approach of
Thinking Machines is probably best represented today by grid computing
(e.g. DataSynapse, Platform Computing, Condor), while the vector-processor
approach of IBM and Cray 20 years ago is probably now represented by GPU
and FPGA solutions.

- Shared-memory codes with lots of threads work best on shared-memory
multiprocessors.

- Codes that can be broken into small compute loads of a few seconds to
a few minutes are best implemented on a grid (see the sketch after this
list).

- Things that have a really tight, closely coupled loop over a lot of
data tend to work well on GPUs and FPGAs.
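
To make the grid bullet concrete, here is a toy Python sketch of the
pattern: the job is chopped into many independent tasks of a few seconds
to a few minutes each, and a scheduler farms them out. I'm using the
standard-library multiprocessing pool as a stand-in for a real grid
scheduler such as Condor or DataSynapse; the task function and sizes are
made up for illustration, not any product's API:

    import multiprocessing as mp

    def run_one_task(task_id):
        """One independent work unit: no shared state with other tasks,
        small enough to finish in seconds to minutes."""
        total = sum(i * i for i in range(task_id * 10000))
        return task_id, total

    if __name__ == "__main__":
        tasks = range(1, 1001)       # 1000 independent work units
        with mp.Pool() as pool:      # a grid scheduler plays this role at scale
            results = dict(pool.imap_unordered(run_one_task, tasks))
        print(len(results), "tasks completed")

The point is the shape of the work, not the library: if your job
decomposes like this, a grid fits; if the iterations are tightly coupled,
it doesn't.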

I think the question of 'How should you get there, physical or virtual?'
is really a question of pricing and actual implementation. Virtualization
gives you some things that are hard to get with physical systems (such as
easy migration between physical implementations), but the nature of the
problem space doesn't change just because the technology is virtualized.

The problem with virtual (e.g. Amazon EC2) for industries like financial
services is that there are no physical cages to show the lawyers as
evidence that our systems are physically secure and separate.  Lawyers
like evidence, not assurances.  Otherwise, I think it's great stuff. For
some people in some applications, it's really a game changer.

		--Dean

On Mon, 28 Jun 2010, Edward Ned Harvey wrote:

> > From: Ian Stokes-Rees [mailto:ijstokes at hkl.hms.harvard.edu]
> 
> > 
> 
> > Keep us updated on what you discover on this front.  People have been
> > trying this kind of thing in different forms for quite awhile.
> 
>  
> 
> Well, my findings so far are:  The purchase cost per core is approx the
> same, to buy the zillions of atoms versus buying a smaller number of xeon
> etc of equal total compute power, but the density and power consumption of
> the atoms is significantly lower (an order of magnitude) thus yielding a
> lower total cost of ownership.  Figures range from 25% to 50% lower TCO.
> 
>  
> 
>  
> 
> > From a different perspective, (and also *very* dependent on your
> > having a good grasp of the workload characteristics), if this is a high
> > value and long term workload you *may* be able to benefit from GPUs,
> > which effectively are hundreds of slower compute cores accessible from
> > the same system image, but will require a small computational kernel
> > which you can port to a GPU environment.
> 
>  
> 
> One of our guys is currently exploring the possibility of porting the
> present jobs to GPU.  It may work out, but it's fundamentally more difficult
> to program for a GPU, so it's less versatile for adaptation to changes of
> your algorithm, or new requirements.  Thanks for the suggestion...
> 
> 

-- 
Av8 Internet   Prepared to pay a premium for better service?
www.av8.net         faster, more reliable, better service
617 256 5494





