By Bill McColl
January 4, 2012 05:15 AM EST
In big data computing, and more generally in all commercial highly parallel software systems, speed matters more than just about anything else. The reason is straightforward, and has been known for decades.
Put very simply, when it comes to massively parallel software of the kind needed to handle big data, fast is both better AND cheaper. Faster means lower latency AND lower cost.
At first this may seem counterintuitive. A high-end sports car will be much faster than a standard family sedan, but the family sedan may be much cheaper. Cheaper to buy, and cheaper to run. But massively parallel software running on commodity hardware is a quite different type of product from a car. In general, the faster it goes, the cheaper it is to run.
Time Is Money
As has been noted many times in the history of computing, if you are a factor of 50x slower, then you will need 50x more nodes to run at the same speed (even assuming perfect parallelization), or your computation will take 50x more time. In either case, it also becomes much more likely that at least one of your nodes will crash during a computation. This is not to argue that automatic fault tolerance and recovery should be ignored in the pursuit of speed, but rather that these two factors need to be carefully balanced. Good design in massively parallel systems is about achieving maximum speed along with the ability, via checkpointing, to recover from a given expected level of hardware failure.
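To see why failures dominate at 50x, consider a rough back-of-the-envelope model. If we assume, purely for illustration, that each node fails independently with some small hypothetical probability per hour, then the chance that at least one node dies mid-job grows quickly with total node-hours:

```python
# A rough sketch of the failure arithmetic, assuming independent node
# failures with a hypothetical per-node probability p of failing in any
# given hour. All numbers here are illustrative, not measured values.

def p_at_least_one_failure(nodes: int, hours: float, p_hourly: float) -> float:
    """Probability that at least one node fails during the job."""
    p_node_survives = (1.0 - p_hourly) ** hours
    return 1.0 - p_node_survives ** nodes

# A fast system: 20 nodes for 1 hour.
fast = p_at_least_one_failure(nodes=20, hours=1, p_hourly=0.001)

# A 50x slower system: either 50x more nodes, or 50x more time.
wide = p_at_least_one_failure(nodes=20 * 50, hours=1, p_hourly=0.001)
slow = p_at_least_one_failure(nodes=20, hours=50, p_hourly=0.001)

print(f"fast: {fast:.1%}, 50x nodes: {wide:.1%}, 50x time: {slow:.1%}")
# fast: ~2%, while either 50x option is ~63%: a crash mid-job becomes
# the expected case, which is why checkpointing has to be balanced
# against raw speed.
```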
The key phrase here is "a given expected level of hardware failure". In certain types of peer-to-peer services that take advantage of idle PC capacity, it is necessary to assume that all machines are extremely unreliable and may go offline at any time. However, in a commercial big data cluster it may reasonably be assumed that almost all machines will be available almost all of the time. This means that a much more optimistic point in the design space can be chosen, one designed much more for speed than for pathological failure scenarios.
The MapReduce model is an example of a model where speed has been sacrificed in a major way in order to achieve scalability on very unreliable hardware. As we have noted, while this is acceptable in certain types of free peer-to-peer services, it is much less acceptable in commercial big data systems deployed at scale.
Google, the inventors of the model, were the first to recognize its throughput and latency problems. To get the realtime performance they required, they recently replaced MapReduce in the Google Instant search engine.
The MapReduce model of Apache Hadoop is slow. In fact, it's very slow compared to, for example, the kinds of MPI or BSP clusters that have been routinely used in supercomputing for more than 15 years. On exactly the same hardware, MapReduce can be several orders of magnitude slower than MPI or BSP. By using MPI rather than MapReduce, HadoopBI gives customers the best possible big data solution, not only in terms of performance - massive throughput and extremely low latency - but also in terms of economics. HadoopBI is not just the fastest Big Data BI solution, it is also the cheapest at scale.
It's Free, But Is It Fast Enough?
Another frequently misunderstood element of big data economics concerns so-called "free" software. It has been argued by some that, since big data software needs to be run on many nodes, it is really important to have software that is free. Again this is an extreme oversimplification that ignores the dominant cost issues in big data economics. At large scale, software costs will in general be much smaller than hardware or cloud costs. And commercial software vendors should ensure that they are, if they want to stay in business.
Consider the following small-scale example. A company needs to process big data continuously in order to maximize competitive advantage. For simplicity, we will assume that the cost of running a single server (in-house or cloud) for one hour is $1, and that the company has a choice between two big data software systems - system A costs $1,000 per server and system B is free, but system A is 8x faster. Choosing system A, the company needs 5 servers, running continuously, to achieve the required throughput. If it chooses system B instead, it will need 40 servers running continuously.
Simple arithmetic shows that within just six days, the initial cost of system A has been recovered, and from then on system A gives the company massive cost savings. Even if system A is only 2x or 3x faster and more efficient than system B, the initial cost will still be recovered in a matter of a few weeks.
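The break-even point falls out of a few lines of arithmetic. Here is a minimal sketch, using only the figures from the example above:

```python
# A minimal sketch of the break-even arithmetic above. The figures
# ($1 per server-hour, $1,000 per server for system A, 8x speedup)
# come straight from the example; nothing else is assumed.

SERVER_HOUR_COST = 1.0       # $ per server per hour (in-house or cloud)
LICENSE_PER_SERVER = 1000.0  # $ one-off cost of system A per server

servers_a, servers_b = 5, 40  # A is 8x faster, so needs 8x fewer servers

upfront_a = servers_a * LICENSE_PER_SERVER                  # $5,000
hourly_saving = (servers_b - servers_a) * SERVER_HOUR_COST  # $35/hour

breakeven_hours = upfront_a / hourly_saving
print(f"break-even after {breakeven_hours:.0f} hours "
      f"(~{breakeven_hours / 24:.1f} days)")
# break-even after 143 hours (~6.0 days); after that, system A saves
# $35 x 24 = $840 for every further day of continuous operation.
```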
The economic advantages of speed at scale are magnified even more in large-scale big data systems where, with volume licensing discounts, the payback time for super-fast software is even shorter.
The lesson of the above example is simple and very important. In parallel systems, speed at scale is king, as speed equates to efficiency, and efficiency equates to massive cost savings at scale. So, to be relevant for large scale production deployments, free parallel software has to be at least as fast and efficient as the best commercial software, otherwise the economics will be solidly against it. Some examples of free software, such as the Linux operating system, have achieved this goal. It remains to be seen whether this will also be the case with highly parallel big data software. In the meantime, it's important to remember that "free software is cheap, but fast software can be even cheaper".