We've always been avid readers of Slashdot ("News for nerds, stuff that matters"), so it's not surprising to us that this project found its way there. Discussions on Slashdot are renowned for generating a lot of heat, but they can also shed a lot of light on interesting topics like this one.
This is not the first time Slashdot has covered Raspberry Pi Clusters, but this project is undoubtedly the largest and one undertaken for specific, research-driven reasons. The initial discussion was sparked by Rich Brueckner's interview with Bruce Tulloch at insideHPC and Anandtech's review of the project, but threads on Slashdot explored some related ideas, like using virtual or shared memory machines.
The Los Alamos National Laboratory (LANL) objective that motivated this project is driven by the need to develop solutions to problems that only arise in high performance computing at massive scale. It's not about doing science or high performance computing on Raspberry Pi, as Gary Grider explained at the recent news conference at the Supercomputing Conference in Denver.
This project is about emulating very large clusters with highly distributed architectures at build and operational costs that don't break the budget (even for large labs like LANL). There was considerable debate about why one would use Raspberry Pi for this, but one contributor put it succinctly: "The idea is not to have a super computer but to emulate one. Writing code for stuff like this is hard and running it on the real deal is expensive. This way they can emulate a 750 core system at a fraction of the cost."
Others asked questions like "So, what point am I missing? The Xeon Phi 7290 is $4k and has 72 cores, you can get 10 of those and get way more speed, shared memory benefit etc..." To which another answered, "The shared memory is a detractor not a benefit if you're trying to have something which emulates an expensive distributed architecture. The point isn't to get lots of speed, it's to get a bunch of cores distributed over a local network in order to get a cheap test bed emulation of a much larger machine." And another pointed out, "10 CPUs with 72 cores each is 720 cores. 750 SOCs with 4 cores each is 3,000 cores (and RAM and motherboards included). The point is to have a massive number of cores in a large number of machines, to simulate a large number of machines, at the budget point."
One contributor put it very nicely: "a single bus can move a lot of people, but if you're modeling highway traffic, you want to use many independent cars."
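To make the distinction concrete, here is a minimal sketch of the kind of message-passing code this sort of test bed exists to exercise. It is not LANL's actual software, just an illustrative example using the widely used mpi4py bindings: every rank owns its own memory and results are combined only by passing messages over the network, so the same script runs unchanged whether the ranks live on a handful of test machines or on hundreds of Raspberry Pi nodes.

```python
# Minimal MPI sketch: each rank (one process per core, spread across many
# nodes) computes a partial sum and results are combined with explicit
# messages, not shared memory.
# Run with, for example:  mpirun -np 3000 python partial_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id within the whole cluster
size = comm.Get_size()   # total number of processes across all nodes

# Each rank works on its own slice of the problem (a toy range sum here).
local = sum(range(rank, 1_000_000, size))

# Data moves over the network via message passing; no rank can read another
# rank's memory, which is exactly the distributed property being emulated.
total = comm.reduce(local, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} ranks computed total {total}")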
In any case, at the end of the day, this is a research project. The idea is to try to solve questions of scale for which we don't yet have answers. Of course, we also happen to think Raspberry Pi Clusters make sense in many other applications, such as computing education, Industrial IoT, and small scale cluster and cloud applications. Many of the visitors to the UNM booth at the conference expressed interest in these ideas as well.
For more discussion about this project, head on over to Slashdot, or if you're interested in learning more about Raspberry Pi, there are many other threads there covering this amazing little computer.
Just be prepared for some strident opinions on all sides of any topic up for discussion there :)