“This is too good to be true!” is a typical first reaction we get when we explain Adaptive Data Virtualization to prospects and partners. Another favorite: “Modernize a database stack without rewriting all applications? Can’t be done!”
We understand. Before Datometry came along, data and database migrations were among the most challenging problems in IT. So, naturally, people are incredulous (see the comments on previous blog posts).
Now, here’s a way of thinking about it that has helped many people wrap their heads around it. Take a close look at how modernization is done today. The single most expensive element is the manual rewriting of applications, one after another, to make them work with the new database. This primarily involves changes to query syntax, adjustments for different data types and formats, and so on. These changes are usually made by people who are not the original code owners, often third-party consultants who are at best vaguely familiar with the logic of the application. They just follow a simple set of instructions; that is, they manually execute an algorithm.

Here’s the thing: Datometry automates this algorithm, replacing a highly error-prone manual procedure with an efficient and scalable implementation – in, well, software.
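To make the idea concrete, here is a deliberately simplified sketch of what “manually executing an algorithm” looks like when captured in code. The dialect rules and type mappings below are invented for illustration; they are not Datometry’s actual implementation, which operates far more deeply than textual substitution.

```python
import re

# Purely illustrative: a toy cross-dialect query translator. The rules and
# type names below are hypothetical examples, not real Hyper-Q internals.

# Hypothetical syntax rewrites (source-dialect form -> target-dialect form).
SYNTAX_RULES = [
    (r"^SEL\b", "SELECT"),                   # shorthand SELECT keyword
    (r"\bADD_MONTHS\(([^,]+),\s*([^)]+)\)",  # proprietary date function
     r"\1 + INTERVAL '\2' MONTH"),
]

# Hypothetical data type mapping between the two dialects.
TYPE_MAP = {"BYTEINT": "SMALLINT", "VARBYTE": "BYTEA"}

def translate(query: str) -> str:
    """Rewrite a query from the source dialect into the target dialect."""
    for pattern, replacement in SYNTAX_RULES:
        query = re.sub(pattern, replacement, query, flags=re.IGNORECASE)
    for src_type, dst_type in TYPE_MAP.items():
        query = re.sub(rf"\b{src_type}\b", dst_type, query,
                       flags=re.IGNORECASE)
    return query

print(translate("SEL id, ADD_MONTHS(start_date, 3) FROM t"))
# -> SELECT id, start_date + INTERVAL '3' MONTH FROM t
```

The point is not the specific rules but the shape of the work: a consultant applying a checklist of substitutions by hand is doing exactly this, only slower and with more mistakes.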
At this year’s ACM SIGMOD conference in San Francisco, we’ll be pulling back the curtain on the underlying technology. Our paper “Datometry Hyper-Q: Bridging the Gap Between Real-Time and Historical Analytics” is both an architecture paper and a “war story” on how our technology was developed and adopted by customers in the financial services industry.
The use case is just the start: we have since added support for other mainstream systems as well. Stop by for our presentation on Thursday, June 30th, at 10:30am in Session 15 and join the discussion.