When they first hear of OpenDB, prospects think, “This is too good to be true.” A compact, simple-to-deploy software platform that breaks the vendor lock-in of some of the biggest names in databases seems improbable. And yet, we’ve repeatedly demonstrated its effectiveness by liberating iconic enterprises from the iron grip of legacy vendors.
OpenDB combines Datometry Hyper-Q with a destination database such as PostgreSQL. Hyper-Q is the virtualizer: it intercepts all communication from the client applications, translates it in real time, and redirects it to the new destination. Effectively, OpenDB makes existing applications written for a legacy database “speak” the language of the new destination. For example, OpenDB makes Oracle applications work instantly on PostgreSQL.
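To make the idea concrete, here is a minimal sketch in Python of what dialect translation might look like. The rewrite rules, the function name `translate`, and the regex-based approach are illustrative assumptions; Hyper-Q itself parses the full SQL grammar and the database wire protocol rather than pattern-matching text.

```python
import re

# A handful of illustrative Oracle-to-PostgreSQL rewrites. A real
# virtualizer works on a parsed representation, not on raw text.
ORACLE_TO_POSTGRES = [
    (re.compile(r"\bNVL\s*\(", re.IGNORECASE), "COALESCE("),
    (re.compile(r"\bSYSDATE\b", re.IGNORECASE), "CURRENT_TIMESTAMP"),
    (re.compile(r"\s+FROM\s+DUAL\b", re.IGNORECASE), ""),
]

def translate(statement: str) -> str:
    """Rewrite an Oracle-flavored statement into PostgreSQL syntax."""
    for pattern, replacement in ORACLE_TO_POSTGRES:
        statement = pattern.sub(replacement, statement)
    return statement

print(translate("SELECT NVL(region, 'n/a') FROM customers"))
# SELECT COALESCE(region, 'n/a') FROM customers
print(translate("SELECT SYSDATE FROM DUAL"))
# SELECT CURRENT_TIMESTAMP
```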
While nobody has ever questioned OpenDB’s vast benefits, critics sometimes doubt its feasibility. So, what’s the secret sauce? Why does the seemingly impossible work so well in practice? In this article, we look at three critical insights behind its success.
Most customers use only a tiny fraction of SQL
Not surprisingly, database workloads follow the 80-20 principle: 80% of all workloads use only 20% of a database’s vast functionality. This insight limits the investment needed for OpenDB considerably. Instead of rebuilding the entire surface of an Oracle database, one needs to support only that 20%.
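As a rough illustration of how one might surface that 20%, the sketch below tallies feature usage across a query log. The feature list, the substring matching, and the log format are all assumptions made for brevity; a real workload assessment parses the SQL properly.

```python
from collections import Counter

# Hypothetical feature markers; substring matching keeps the sketch short.
FEATURES = ["CONNECT BY", "MERGE INTO", "PIVOT", "ROWNUM", "OVER ("]

def feature_histogram(query_log: list[str]) -> Counter:
    """Count how often each feature marker appears in a workload."""
    counts: Counter = Counter()
    for query in query_log:
        upper = query.upper()
        for feature in FEATURES:
            if feature in upper:
                counts[feature] += 1
    return counts

# In practice, a histogram like this tends to be dominated by a small
# set of features, with a long tail that is rarely or never used.
```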
However, this does not mean workloads use only the “easy” 20%, the features that are straightforward to handle. Quite the contrary: the 20% may contain complex functionality. Still, building only part of the legacy system’s functional surface is a significant simplification.
But how can one determine the 20%? In a previous post, we described the feature request feedback mechanism, which helps identify relevant features: whenever a workload contains an unsupported feature, a miniature reproduction is generated, anonymized, and shared with the OpenDB developers.
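The sketch below illustrates only the anonymization step, under the assumption that identifiers and literals are what must be scrubbed before sharing. The function name, regexes, and keyword list are hypothetical; Datometry’s actual mechanism is more thorough.

```python
import re

# Keywords to preserve; a real implementation covers the full grammar.
SQL_KEYWORDS = {"SELECT", "FROM", "WHERE", "AND", "OR", "GROUP", "BY", "ORDER"}

def anonymize(statement: str) -> str:
    """Scrub literals and identifiers from a statement before sharing it."""
    names: dict[str, str] = {}

    def alias(match: re.Match) -> str:
        word = match.group(0)
        if word.upper() in SQL_KEYWORDS:
            return word
        if word not in names:
            names[word] = f"ident_{len(names) + 1}"
        return names[word]

    statement = re.sub(r"'[^']*'", "'***'", statement)    # string literals
    statement = re.sub(r"\b\d+\b", "0", statement)        # numeric literals
    return re.sub(r"\b[A-Za-z_]\w*\b", alias, statement)  # identifiers

print(anonymize("SELECT acct_id FROM payments WHERE region = 'EMEA' AND amount > 100"))
# SELECT ident_1 FROM ident_2 WHERE ident_3 = '***' AND ident_4 > 0
```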
Practitioners need results, not theories
Another challenge is the semantic differences between source and destination systems. How can OpenDB reconcile the discrepancies between two highly complex software systems?
First, the virtualizer in OpenDB can, in theory, emulate any behavior, be it transaction semantics, the unrolling and stepwise execution of stored procedures, or operational constructs like Global Temporary Tables. In practice, however, emulating highly complex features may be too costly to be worth the effort.
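Global Temporary Tables make a good example of such a trade-off. Oracle keeps the GTT definition permanently while the rows stay private to each session; PostgreSQL temporary tables are per-session altogether, definition included. The sketch below shows one plausible emulation strategy under those assumptions: remember the translated DDL and replay it lazily in each new session. It is an illustration, not Hyper-Q’s actual implementation.

```python
import re

GTT_DDL = re.compile(r"CREATE\s+GLOBAL\s+TEMPORARY\s+TABLE\s+(\w+)", re.IGNORECASE)

# Definitions the virtualizer has seen, replayed lazily per session.
gtt_definitions: dict[str, str] = {}

def rewrite_ddl(statement: str) -> str | None:
    """Translate Oracle GTT DDL and remember it for later sessions."""
    match = GTT_DDL.match(statement)
    if not match:
        return None
    translated = GTT_DDL.sub(r"CREATE TEMPORARY TABLE \1", statement)
    gtt_definitions[match.group(1).lower()] = translated
    return translated

def ensure_gtt(session_cursor, table: str) -> None:
    """Re-create a known GTT in a fresh PostgreSQL session before use."""
    if table.lower() in gtt_definitions:
        session_cursor.execute(gtt_definitions[table.lower()])

print(rewrite_ddl("CREATE GLOBAL TEMPORARY TABLE staging (id INT) ON COMMIT DELETE ROWS"))
# CREATE TEMPORARY TABLE staging (id INT) ON COMMIT DELETE ROWS
```

Even a sketch like this hints at the cost: state must be tracked per table and per session, which is why full emulation of every feature is not always worth it.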
Instead, consider an application’s life cycle: developers use a given database and decide, based on the observed behavior, whether the results meet their requirements. They care little about the theory behind a database. OpenDB offers a viable trade-off between identical semantics and runtime overhead, and checking whether the observed behavior meets the requirements is a simple and highly effective way to validate the migration.
OpenDB uniquely facilitates its own evaluation. Because applications can connect to the old or the new stack side by side, business users can validate the results immediately. That same principle comes in handy at other times, too. It enables impressively quick proof-of-concept implementations at the beginning of a project.
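A minimal sketch of such side-by-side validation follows. It assumes two DB-API 2.0 connections (the drivers and DSNs are placeholders) and normalizes row order by sorting, since SQL results are unordered without an ORDER BY; a production harness would also compare types, NULL handling, and rounding.

```python
# legacy_conn and opendb_conn are assumed DB-API connections, e.g. created
# via oracledb.connect(...) and psycopg2.connect(...) respectively.
def fetch_sorted(conn, query: str):
    """Run a query and return its rows in a deterministic order."""
    cur = conn.cursor()
    cur.execute(query)
    return sorted(cur.fetchall())

def validate(query: str, legacy_conn, opendb_conn) -> bool:
    """Run the same query on both stacks and compare the result sets."""
    legacy_rows = fetch_sorted(legacy_conn, query)
    opendb_rows = fetch_sorted(opendb_conn, query)
    if legacy_rows != opendb_rows:
        print(f"MISMATCH: {len(legacy_rows)} vs {len(opendb_rows)} rows")
        return False
    return True
```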
Theory meets economic reality
Lastly, 99% can be good enough. We built OpenDB knowing that every workload would contain a handful of queries needing manual adjustment. However, the massive savings enterprises realize by using OpenDB make these outliers a minor nuisance.
OpenDB reduces the cost of migrating from Oracle to PostgreSQL by about 75% and speeds up the process fourfold. For mid-sized enterprises, savings from the Datometry technology typically reach millions of dollars within a short time. Savings of this magnitude make adjusting a small number of queries a trivial expense.
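For a back-of-the-envelope feel, assume a conventional migration costs $8M and takes 24 months. These baseline figures are hypothetical; only the 75% reduction and the fourfold speedup come from the paragraph above.

```python
# Back-of-the-envelope illustration with assumed baseline figures.
traditional_cost = 8_000_000   # hypothetical conventional migration budget ($)
traditional_months = 24        # hypothetical conventional timeline

opendb_cost = traditional_cost * (1 - 0.75)   # about 75% cheaper
opendb_months = traditional_months / 4        # about 4x faster

print(f"Cost:     ${traditional_cost:,} -> ${opendb_cost:,.0f}")
print(f"Timeline: {traditional_months} months -> {opendb_months:.0f} months")
# Cost:     $8,000,000 -> $2,000,000
# Timeline: 24 months -> 6 months
```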
Better still, after these minor adjustments, the workload is portable beyond just the current version of PostgreSQL. Adopting future versions of PostgreSQL, usually a considerable effort, is then automatic. So is moving to any PostgreSQL-based specialty database if the workload outgrows the capabilities of the vanilla version.
OpenDB demystified: ready to deploy
As we’ve seen, OpenDB exploits some fundamental yet rather general insights about database workloads. With successful deployments at iconic Fortune 500 and Global 2000 enterprises, we have underlined the effectiveness of the concept globally: using our software, our customers have migrated some of the most sophisticated data warehouses from legacy appliances to modern cloud databases.
With OpenDB, we bring the same principle to market to take on operational OLTP workloads originally developed for Oracle. Enterprises can finally move beyond the vendor lock-in of their legacy database provider.
About Mike Waas, CEO of Datometry
Mike Waas founded Datometry with the vision of redefining enterprise data management. In the past, Mike held key engineering positions at Microsoft, Amazon, Greenplum, EMC, and Pivotal. He earned an M.S. in Computer Science from the University of Passau, Germany, and a Ph.D. in Computer Science from the University of Amsterdam, The Netherlands. Mike has co-authored over 35 peer-reviewed publications and has 20+ patents on data management to his name.