Have We Reached Peak Hadoop? Maybe Not

Rima Mutreja

Peak Hadoop

For better or worse, folks in the Valley are usually quick to declare hypes, trends, and impending doom. With Strata around the corner, the expression making the rounds is Peak Hadoop: the idea that we may be witnessing the maximum market penetration of, and revenue generated by, Hadoop. This notion is fueled by recent economic data, but also by a general perception that the hype around Hadoop is cooling off.

Hadoop Is Running out of Greenfield Opportunities

Every database startup is eager to cultivate greenfield opportunities, that is, net-new use cases. Not only is greenfield cool and exciting, it usually means net-new revenue from customers. What's not to love? Well, for every new technology there's only so much greenfield to go around. And if your greenfield is hot, guess what: others, including the traditional incumbents, will encroach on your opportunity sooner rather than later.

This phenomenon is not new. Every database startup of the past decade has faced this dilemma: they all stalled out before reaching any notable market penetration as greenfield dried up, and taking market share from the traditional incumbents was economically unviable because of the high cost of migrations (see our article on the hidden costs of migrations).

Hadoop in many ways appears to have reached that same stage: its greenfield is increasingly exhausted, and taking market share from incumbents such as Teradata seems nearly impossible with the current technology.

How Hadoop Can Aggressively Start Taking Market Share

I would argue that Hadoop needs to take market share aggressively so that it does not become irrelevant. Here are three suggested ways:

  1. Use Apache HAWQ: HAWQ is a full-featured query processor with true MPP capabilities atop Hadoop. Hortonworks is already blazing a trail with it, and given the prowess of the product, others will certainly follow suit.
  2. Add Datometry’s Adaptive Data Warehouse Virtualization platform–Datometry Hyper-Q–to the mix: Hyper-Q acts as a hypervisor for database applications and runs Teradata database applications natively on HAWQ; this includes analytics, operational queries, ETL/ELT, and just about everything else. There is no need for a long and expensive migration, redesign, reconfiguration, or rewriting of database applications.
  3. Start off-loading database workloads from Teradata: Lastly, identify the workloads that are suitable for off-loading from Teradata to Hadoop. That one’s easy too: simply point the applications to Hyper-Q and, literally by the end of the day, you know whether a workload is right for off-loading.
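Because Hyper-Q sits between the application and the database, "pointing the applications to Hyper-Q" amounts to a connection-level change rather than a rewrite. The sketch below illustrates the idea in Python; the host names, port, and connection-string helper are hypothetical, not part of any actual Datometry or Teradata API:

```python
# Illustrative sketch: off-loading a Teradata application by repointing its
# connection at a Hyper-Q endpoint. All names below are hypothetical examples.

def build_connection_string(host: str, port: int, user: str) -> str:
    """Assemble an ODBC-style connection string for a Teradata client."""
    return f"DRIVER=Teradata;DBCNAME={host};PORT={port};UID={user};"

# Before: the application connects directly to the Teradata system.
direct = build_connection_string("teradata-prod.example.com", 1025, "etl_user")

# After: only the endpoint changes; the application code, its SQL, and its
# ETL/ELT jobs remain exactly as they were.
offloaded = build_connection_string("hyperq-gateway.example.com", 1025, "etl_user")

print(direct)
print(offloaded)
```

The point of the sketch is that the two configurations differ only in the endpoint, which is why a workload can be trialed for off-loading within a day.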

The resulting combination is a powerful antidote to the much-decried vendor lock-in in the data warehousing industry. So, have we reached Peak Hadoop yet? Not even close. Datometry opens up an entirely new market for it, and that’s just the beginning.

Currently, Datometry for HAWQ is in Beta and available through Datometry’s Early Adopters Program (EAP). To sign up and change the world, visit www.datometry.com or contact sales@datometry.com.
