Full-stack Philosophies

Jeffrey Needham's Blog


HPC versus HDFS: Scientific versus Social

Posted on 7:32 pm October 30, 2014 by Jeffrey Needham

There have been rumblings from the HPC community indicating a general suspicion of, and disdain for, Big Data technology, which would lead one to believe that whatever Google, Facebook and Twitter do with their supercomputers is not important enough to warrant seriousness—that social supercomputing is simply not worthy. A little of this emotion seems to be background radiation from the old artsie vs. engineer conflict. Not that Big Data is always going to be used for social computing, but Facebook wouldn’t exist without Hadoop. This issue is a social one, not scientific, so there’s that.

As an artist who has tweaked kernels for a living, I’ve never quite seen the point of this conflict. Great engineering is an art form that requires big swaths of imagination and science along with constant political ingenuity, so I would like to take a look at the considerable mythology around HPC and the extraordinary mythology around Big Data and technologies like Java—a language real men would never use—so there’s that too.

Wisconsin Dairy Farmers

One of my occupations during the last few decades has been making software run faster on hardware. I spent some time at the granddaddy of all HPC companies—Control Data Corporation—which was founded in the 1950s by a couple of Wisconsin farmer engineers, a pedigree if there ever was one. Seymour Cray was a maverick obsessed with building the fastest computer possible, and the CDC 6600 is considered the first RISC CPU in addition to being an imaginative landmark in architecture, performance and industrial design. To this day, processor doctrine is built on RISC cores, even underneath the very CISC-like x64 Instruction Set Architecture (ISA).

In the 1980s, Hennessy and Patterson looked at the VAX’s ISA and found that the new crop of compilers being developed couldn’t emit code for these complex instruction sets; the compilers only needed a small subset of the instructions. For years, CPU designers had been asked to add instructions that would benefit specific aspects of a programming language’s performance. This complicated the instruction set and is where the C in CISC comes from. The demand for complexity came from somewhat irregular language compilers for FORTRAN and COBOL. In the absence of a regular grammar and a regular way to emit code, it was often easier to add specialized hardware instructions. C, PASCAL and ALGOL compilers were built on parsers and code generators that were more regular. This compiler technology enabled sophisticated instruction optimizers, which took over the job of code performance from the CPU designer. This new crop of Green Dragon compiler developers also soon realized there was a link between the language and the ISA, and that a CISC ISA was not helping them optimize the prose of C programmers.
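To make the small-subset observation concrete, here is a toy sketch (in Python, not a real compiler) of how a regular code generator lowers an expression into just four kinds of simple register-to-register operations. The mini “ISA”, the lower/emit names and the naive register allocation are invented for illustration and do not correspond to any production compiler.

    # Toy lowering of an arithmetic expression tree into LOAD/ADD/MUL/STORE ops.
    # The point: a regular grammar plus a regular code generator needs only a
    # handful of simple instructions, which is the observation that drove RISC.
    def lower(expr):
        ops = []

        def emit(node, dest):
            kind = node[0]
            if kind == "var":
                ops.append(("LOAD", dest, node[1]))  # load a variable into a register
            else:  # ("add" | "mul", left, right)
                emit(node[1], dest)                  # left subtree into register `dest`
                emit(node[2], dest + 1)              # right subtree into the next register
                ops.append(("ADD" if kind == "add" else "MUL", dest, dest + 1))

        emit(expr, 0)
        ops.append(("STORE", 0, "result"))           # write the final register back
        return ops

    # result = (a + b) * c
    for op in lower(("mul", ("add", ("var", "a"), ("var", "b")), ("var", "c"))):
        print(op)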

The RISC revolution was driven by these early C compilers and draws a straight line back to design elements of the CDC 6600. That machine had simple instructions that could be implemented in logic that ran faster than if complex instructions had been implemented. Cray went simply for speed, and to that end had to simplify the hardware engineering. The landmark 1980s study found that C compilers used only a tiny fraction of the instructions found in something like a VAX or the big mainframes. This lesson still informs the Big Data SQL debate, because SQL limits many forms of discovery: you generally need to know what you are looking for before the schema can be rigged. The rise of simple key:value/ISAM databases is an attempt to get around the language, scale and codepath bottlenecks that exist in transactionally-focused RDBMS kernels like Oracle and DB2.
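Here is a minimal sketch of the key:value access pattern described above: no schema rigged in advance, just put and get on whatever key you already hold. The TinyKeyValueStore class and its method names are invented for illustration and are not the API of any particular store.

    # An in-memory stand-in for a key:value / ISAM-style store.
    class TinyKeyValueStore:
        def __init__(self):
            self._data = {}

        def put(self, key, value):
            # No schema, no types, no joins: the key is the access path.
            self._data[key] = value

        def get(self, key, default=None):
            return self._data.get(key, default)

    store = TinyKeyValueStore()
    store.put("user:42:last_login", "2014-10-30T19:32:00Z")
    print(store.get("user:42:last_login"))

Contrast that with SQL, where the question largely has to be anticipated in the schema before it can be asked.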

The CDC 7600—which would do Jackson Pollock proud with its backplane of colorful twisted pairs—was an evolution of the 6600, and Cray employed more tricks to cheat the time devil. The dirty laundry about CPU speed (the megahertz myth) is that fast memory makes for fast CPUs, not just core clock speed, and the 6600 CPUs were fast enough to shift the bottleneck onto memory, where we have been ever since. The CDC 7600 had more functional units to execute more operands at once, improved register file performance so functional units would not be standing around with their fingers up their noses waiting for operands, and, of course, memory that was banked and ranked to reduce the number of out-of-joint noses.

Cray’s next attempt to cheat He-Who-Must-Not-Be-Named was the CDC 8600. A basic tenet of performance is that you either do more things per unit of time or you do more steps in parallel. Even in Hadoop clusters built from 8,000 Raspberry Pis, time remains the dark lord. The only drawback to the CDC 8600 was that it was science fiction. CDC wasn’t interested, so Cray Research was spun out, the sign in front of his version of Hogwarts in Chippewa Falls, Wisconsin was repainted, and what would have been the CDC 8600 evolved into the Cray-1, which introduced the notion of vector registers: a mathematical mosh pit in which 64 multiplies could happen in a single unit of time.
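For anyone who never touched a vector machine, here is a minimal sketch, in Python rather than Cray assembly, of the strip-mining a vectorizing compiler performs: carve a long multiply loop into strips sized to a vector register. The 64-element width is borrowed from the Cray-1 register length, and the pure-Python inner loop stands in for what the hardware does with a single vector instruction.

    VECTOR_LENGTH = 64  # elements per vector register

    def vector_multiply(a, b):
        # Multiply two equal-length sequences strip by strip; on vector hardware
        # each strip would be one vector load, one vector multiply, one store.
        assert len(a) == len(b)
        result = []
        for start in range(0, len(a), VECTOR_LENGTH):
            strip_a = a[start:start + VECTOR_LENGTH]
            strip_b = b[start:start + VECTOR_LENGTH]
            result.extend(x * y for x, y in zip(strip_a, strip_b))  # the "vector multiply"
        return result

    print(vector_multiply(list(range(1000)), [2] * 1000)[:5])  # [0, 2, 4, 6, 8]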

The Cray X-MP was another landmark in both form and function—there was a sense of style to a Cray—it was not just a box of electronics. You still find Cray supercomputers in big labs where cooling is distributed with stainless steel piping borrowed from the dairy industry. Of course it was. CDC and Cray invented HPC, which only now seems oddly named. In the face of Facebook’s 7,000-node commercial supercomputer, High Performance Computing is really scientific supercomputing.

My first job out of college was at CDC when they were already—in the words of a friend who used this term to describe what happened to SGI in the 1990s—in controlled flight into terrain. This is an important lesson: HPC has always been a tough business. The lucky accident of the VAX (lucky for DEC, anyway) hastened CDC’s demise. Of course, Sun and SPARC hastened DEC’s demise, and, as Oracle is finding out, Intel is hastening SPARC’s. Oracle wishes SPARC were the future, but the future is now Apache Spark, which, ironically, is written in Scala and runs entirely on another part of Sun’s legendary influence on computing, the Java Virtual Machine.

Traditional high performance commercial computing has meant relational computing, and it does look a little like Big Data workloads, but most enterprise warehouses can’t get out from underneath their legacy rat’s nest of OLTP schema doctrine, which prevents their SQL optimizer from ever getting off the runway. New database technologies like columnar and key:value (aka ISAM, if you remember that term) are designs focused on three concepts that work well together: simplicity, performance and scalability. A relational schema can be built for high performance, but it is difficult for both good and bad reasons. Most relational schemas conform to a transactional relational doctrine, which treats every last piece of data like it is precious payroll data, rather than an analytic relational doctrine, which does not. Using technology that thinks all data is precious comes with significant performance and scale trade-offs. Since most EDWs are leached from transaction systems that have been around for decades, doing high performance analytics on a transactional RDBMS eventually leads to downloading a copy of Mongo.
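A minimal sketch of why the two doctrines pull in different directions: a row store keeps each record together, which suits point lookups, while a column store keeps each attribute together, which suits scans and aggregates. The tables and field names below are invented for illustration and are not any particular product’s format.

    # Row store: each record kept whole, ideal for OLTP-style lookups.
    rows = [
        {"order_id": 1, "customer": "acme", "amount": 120.0},
        {"order_id": 2, "customer": "zenith", "amount": 75.5},
        {"order_id": 3, "customer": "acme", "amount": 310.0},
    ]

    # Column store: each attribute kept together, ideal for analytic scans.
    columns = {
        "order_id": [1, 2, 3],
        "customer": ["acme", "zenith", "acme"],
        "amount": [120.0, 75.5, 310.0],
    }

    # Transactional question: everything about order 2 (the row store wins).
    order_2 = next(r for r in rows if r["order_id"] == 2)

    # Analytic question: total revenue (the column store touches one column).
    total = sum(columns["amount"])

    print(order_2, total)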

Really Cheap, Filo Cheap

When I arrived at Yahoo!, I had some familiarity with high performance commercial and scientific computing, but handling 250 million users in three datacenters cost-effectively, and making sure none of it failed, was a new form of supercomputing for me. Yahoo! was a seminal experience because it was not just about high performance, reliability and scalability; it was about extreme price/performance, since most of the services were free. That was a lot of chainsaws to juggle, but it is the single most important distinction between HPC and Big Data: web-scale supercomputing on a dime (or less, if you can manage it). Yahoo! could never afford specialty RDBMS technology from Teradata, Oracle or IBM, let alone exotically expensive HPC technologies.

A good unintended consequence of affordable-at-scale supercomputing is that it has produced some interesting non-scientific use cases (a great example being OPSEC). A less good consequence of off-the-shelf supercomputers is that there are now far more systemic software bottlenecks within a platform, and they’re much harder to triage or cost-effectively repair. Many criticisms of Big Data from the HPC community center on “awful” technology choices like TCP/IP, Hadoop, Java, and commodity switches and servers. However, the common characteristic of all of these choices is price/performance. For instance, RDMA verbs over InfiniBand have superior latency and lousy price/performance, and they require skilled design and implementation. Hadoop initially solved this trade-off by focusing on batch freight-train workloads where the Map/Reduce scheduling algorithm is a good fit. Hadoop 2 is a somewhat less purpose-built computing platform that supports a pluggable methodology of access and scheduling.
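Here is a bare-bones sketch of that Map/Reduce batch model squeezed onto one machine in Python: map emits key/value pairs, a shuffle groups them by key, and reduce folds each group. Real Hadoop does the same thing spread across thousands of servers with HDFS underneath; the function names here exist only for the sketch.

    from collections import defaultdict

    def map_phase(record):
        # Emit (word, 1) for every word in an input record.
        for word in record.split():
            yield word.lower(), 1

    def shuffle(pairs):
        # Group values by key, as the framework does between map and reduce.
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(key, values):
        return key, sum(values)

    records = ["HPC versus HDFS", "scientific versus social"]
    mapped = (pair for record in records for pair in map_phase(record))
    counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
    print(counts)  # {'hpc': 1, 'versus': 2, 'hdfs': 1, 'scientific': 1, 'social': 1}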

Because Hadoop clusters were first built on crap hardware, the filesystem (HDFS) and the scheduler had to be highly resilient. The design premise has always been that Hadoop must run reliably on unreliable crap, because that is what it was running on. A 40-node cluster with 20 GB/sec of throughput, a raw petabyte of storage, 5 TB of RAM and 240 cores runs about $240K. Now that is cheap supercomputing, Filo cheap.
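For the curious, the back-of-the-envelope split of that price tag, assuming 40 identical nodes (the per-node breakdown is my arithmetic, not a spec sheet):

    nodes = 40
    price_total = 240_000   # dollars
    bandwidth_gb_s = 20     # aggregate GB/sec
    raw_storage_tb = 1000   # one raw petabyte
    ram_tb = 5
    cores = 240

    print(f"per node: ${price_total / nodes:,.0f}, "
          f"{bandwidth_gb_s / nodes * 1000:.0f} MB/sec, "
          f"{raw_storage_tb / nodes:.0f} TB raw disk, "
          f"{ram_tb / nodes * 1024:.0f} GB RAM, "
          f"{cores / nodes:.0f} cores")
    # per node: $6,000, 500 MB/sec, 25 TB raw disk, 128 GB RAM, 6 cores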

Drag and Drop

Modern commercial IT has become a drag-and-drop industry, whereas high performance computing still requires advanced engineering skills. Building Big Data or HPC platforms is a high performance activity, much to the horror of drag-and-drop IT departments. Massive computing platforms that must deliver high rates of throughput and scale are never general purpose. Price, scale and computational self-reliance are critical platform goals of Hadoop. In HPC, cost is rarely a critical requirement; in Big Data, being affordable at scale is the first requirement. Not all Big Data users will actually end up with Huge Data, so they can be happy with Hadoop as just a (much) cheaper version of what they are attempting with legacy SQL technology. But like HPC, true Big Data will remain resistant to drag and drop.

I think the notion of organically merging the commercial and scientific supercomputing businesses has merit. It gives an industry that cares about performance a good future. HPC may now have access to funds, ideas and an enthusiastic generation of developers who are just as interested in many of the same problems and are willing to take a swing at Lord Voldemort.

