Have We Reached Peak Hadoop?

For better or worse, folks in the Valley are quick to declare hypes, trends, and impending doom. With Strata around the corner, the expression making the rounds is “Peak Hadoop”: the idea that we may be witnessing the maximum market penetration of, and revenue generated from, Hadoop. The notion is fueled by recent economic data, but also by a general perception that the hype around Hadoop is cooling off.

Hadoop is running out of green field opportunities
Every database startup is eager to cultivate green field opportunities, i.e., net-new use cases with no incumbent system to displace. Not only is green field cool and exciting, it usually generates net-new revenue for the vendor. What’s not to love? Well, for every new technology there is only so much green field to go around. And if your green field is hot, others, including the traditional incumbents, will encroach on your opportunity sooner rather than later. This phenomenon is not new: every database startup of the past decade has faced the same dilemma. They all stalled out before reaching notable market penetration because the green field dried up, and taking market share from the traditional incumbents was not economically viable given the high cost of migrations (see the article on the hidden costs of migrations).

Hadoop appears in many ways to have reached that same stage: its green field is increasingly exhausted, and taking market share from incumbents like Teradata seems nearly impossible with the current technology. Hadoop’s adoption has stalled.

3 steps to start taking market share
If it is not to disappear into the darkness of irrelevance, I argue, Hadoop needs to take market share aggressively. Here’s how:

  1. Use Apache HAWQ: HAWQ is a full-featured query processor with true MPP capabilities atop Hadoop (see the first sketch after this list). Hortonworks is already blazing a trail with it, and given the prowess of the product, others will certainly follow suit.
  2. Add Datometry Hyper-Q to the mix: Hyper-Q acts as a hypervisor for database applications and lets you run Teradata applications natively on HAWQ: analytics, operational queries, ETL/ELT, just about everything. There is no need for painful migration, redesign, or reconfiguration; you can even keep your Teradata ODBC driver (see the second sketch after this list).
  3. Start off-loading workloads from Teradata: Lastly, identify the workloads that are suitable for off-loading from Teradata to Hadoop. That one is easy, too: simply point your applications at Datometry and, literally by the end of the day, you will know whether a workload is right for off-loading.
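
To make step 1 concrete: HAWQ is derived from PostgreSQL, so standard PostgreSQL client libraries can talk to it. Below is a minimal Python sketch of running an analytical query against a HAWQ cluster; the host, credentials, and the sales_fact table are hypothetical placeholders, not part of any product documentation.

    import psycopg2

    # Connect to the HAWQ master through its PostgreSQL-compatible
    # interface. Host, database, and credentials are hypothetical.
    conn = psycopg2.connect(
        host="hawq-master.example.com",
        port=5432,
        dbname="analytics",
        user="analyst",
        password="secret",
    )
    cur = conn.cursor()
    # The master plans this aggregation once and executes it in parallel
    # across the cluster's segments: the MPP execution mentioned above.
    cur.execute("""
        SELECT region, SUM(revenue) AS total_revenue
        FROM sales_fact
        GROUP BY region
        ORDER BY total_revenue DESC
    """)
    for region, total in cur.fetchall():
        print(region, total)
    cur.close()
    conn.close()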

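For steps 2 and 3, the key point is that the application side barely changes. The following sketch assumes a Teradata ODBC setup and a hypothetical Hyper-Q endpoint; only the host name in the connection string is re-pointed, while the driver and the Teradata-dialect SQL stay untouched.

    import pyodbc

    # Before: the application connects straight to Teradata. The driver
    # name must match the Teradata ODBC driver registered on the system.
    # conn = pyodbc.connect(
    #     "DRIVER={Teradata};DBCName=teradata.example.com;UID=app;PWD=secret")

    # After: same driver, same SQL; only the host now points at the
    # Hyper-Q endpoint (hostname is hypothetical), which runs the
    # workload on HAWQ behind the scenes.
    conn = pyodbc.connect(
        "DRIVER={Teradata};DBCName=hyperq.example.com;UID=app;PWD=secret")
    cur = conn.cursor()
    # Unmodified Teradata SQL, e.g. Teradata's TOP syntax:
    cur.execute(
        "SELECT TOP 10 customer_id, order_total "
        "FROM orders ORDER BY order_total DESC")
    for row in cur.fetchall():
        print(row)
    conn.close()

If the workload runs cleanly through the re-pointed connection, it is a candidate for off-loading; if not, leave it on Teradata and move on to the next one.
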
The resulting combination is a powerful antidote to the much-decried vendor lock-in in the data warehousing industry. So, have we reached “Peak Hadoop” yet? Not even close. Datometry opens up an entirely new market for it, and that’s just the beginning.

Datometry for HAWQ is currently in Beta and available through Datometry’s Early Adopters Program (EAP). To sign up and change the world, visit www.datometry.com or contact sales@datometry.com.