By Mike Waas

3 Reasons Why You Probably Can’t Afford to Rewrite Your Way Out of Teradata

When the economy slows, enterprises re-evaluate their most costly IT investments. Usually, enterprise data warehouses top the list. How can one defend the investment in a legacy system when there are so many cost-effective alternatives?

Outdated data warehouse systems are the first to get the axe. But paradoxically, IT organizations may be unable to afford to leave their costly legacy vendor. That’s right. The escape from a legacy data warehouse via conventional means comes at an exorbitant cost.

Cutting ties with your database vendor is a complicated task. Database migrations have abysmal failure rates, and their reputation precedes them. Disillusioned, some IT leaders may admit defeat and stay with their legacy vendor.

In this piece, we look at the three primary reasons that make conventional migrations such an expensive undertaking—and how to overcome them.

Reason #1: Application rewrites don’t work the way you think

At the heart of a conventional migration is the rewriting of applications. Rewrites are time-consuming engineering tasks, ranging from simple textual substitutions to complex re-engineering challenges.

The discrepancies between database systems come in two flavors:

those that can be resolved up front via so-called static rewrites, and

those that need information that is only available at run time.

Only the former can be addressed with rewrites. The latter require fundamental changes to the application.
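To make the distinction concrete, here is a minimal sketch in Python. The two rewrite rules (Teradata's SEL shorthand and its VOLATILE tables) are illustrative examples only, not a real translation catalog:

```python
import re

# Two illustrative Teradata-isms that a static, up-front rewrite can
# handle with plain text substitution (example rules only).
STATIC_RULES = [
    # Teradata accepts SEL as shorthand for SELECT.
    (re.compile(r"\bSEL\b", re.IGNORECASE), "SELECT"),
    # Teradata VOLATILE tables map roughly to temporary tables elsewhere.
    (re.compile(r"\bCREATE\s+VOLATILE\s+TABLE\b", re.IGNORECASE),
     "CREATE TEMPORARY TABLE"),
]

def static_rewrite(sql: str) -> str:
    """Apply purely textual substitutions that need no run-time context."""
    for pattern, replacement in STATIC_RULES:
        sql = pattern.sub(replacement, sql)
    return sql

print(static_rewrite("SEL * FROM sales;"))
# -> SELECT * FROM sales;

# A macro invocation, by contrast, has no static translation: the macro
# body lives inside Teradata, and its parameters arrive only at run time.
print(static_rewrite("EXEC monthly_report(:report_month);"))
# -> printed unchanged; nothing to substitute up front
```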

In a migration project, rewrites may cover 80% of the workload. Because they lend themselves to automation, they create the illusion of progress. After all, if most of the workload is rewritten within just a few quarters, how hard can it be? More on that in the next section.

The dynamic elements of the workload—stored procs, macros, global temporary tables, etc.—are where migrations derail. By their very nature, these elements may not have a static translation. To work around these issues, you must modify the application code. And so, what looked like a SQL translation issue at first has now become a software engineering project.

The complexity of the project escalates quickly. Even if a modification is benign, which of course it never is, the cycle of fixing, rebuilding, and testing is risky and slows the project down. In short, rewrites are never as simple or contained as one might think.

Reason #2: The 80/20 principle gets you — every time

Rewrites, with or without automation, can solve about 80% of the problem. Database migrations embody the 80/20 principle like no other IT project: the first 80% of progress takes only 20% of the effort. The remaining 20% is where 80% of the work goes.

Let that sink in: if you translate 80% of your Teradata workload in the first year, you’re in for another 4 years! And that’s assuming things go well. It also assumes budgets won’t dry up, that executives won’t withdraw their support, and that the strategic direction of the organization won’t shift over the years.
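In case the math seems opaque, here is the projection spelled out using nothing but the 80/20 numbers above; a back-of-the-envelope sketch, not a forecast for any particular project:

```python
# Back-of-the-envelope projection using the 80/20 numbers from the text.
workload_translated = 0.80  # share of the workload done in year one
effort_consumed     = 0.20  # share of the total effort that bought it
years_elapsed       = 1.0

total_years     = years_elapsed / effort_consumed  # 5.0 years end to end
remaining_years = total_years - years_elapsed      # 4.0 years still to go

print(f"{remaining_years:.0f} more years for the last "
      f"{1 - workload_translated:.0%} of the workload")
# -> 4 more years for the last 20% of the workload
```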

Once you understand the nature of the issue, the abysmal failure rate of database migrations (over 60%) is no longer surprising at all. Conventional, rewrite-based migration projects practically never finish on time and within the original budget.

However, there are countless examples of migrations that were abandoned after several years. And even those that don't implode often fail to decommission the legacy system.

Reason #3: Keeping Teradata alive drains your budget

Which brings us to the question of cost. While you’re doing all this work, you’re on the clock. The old system must continue to function while you’re rewriting and re-engineering; we call this the keep-alive cost of the legacy system.

You are already painfully aware of this cost. It's a primary reason why you wanted to leave your legacy vendor in the first place. The cost of an on-premises legacy system is enormous. The in-cloud version of the legacy system can be even more taxing.

Because most data warehousing workloads are deeply interdependent, there is also no meaningful way to limit the damage. There may be no opportunity to retire the legacy system in part. Instead, you'll need to keep it alive until the last workload is moved off.

Adding insult to injury, the keep-alive cost is particularly painful to watch once a project passes the 80% mark of completion. Remember, that milestone accounts for only the first 20% of the time and effort. You'll have to watch this enormous tax inflicted on the enterprise for years while the team keeps toiling away at the remaining 20%.

When considering the total cost of a migration, keep-alive typically accounts for about 30% of the total.
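To see where a figure like that can come from, consider a hypothetical five-year migration. All numbers below are made up for illustration; substitute your own:

```python
# Hypothetical figures, purely for illustration -- substitute your own.
annual_keep_alive = 2_000_000   # yearly legacy license, support, and ops
migration_years   = 5           # 80% in year one, the rest over four more
engineering_cost  = 25_000_000  # rewrite, re-engineering, and testing

keep_alive_total = annual_keep_alive * migration_years  # 10,000,000
total_cost       = keep_alive_total + engineering_cost  # 35,000,000

print(f"Keep-alive share of total: {keep_alive_total / total_cost:.0%}")
# -> Keep-alive share of total: 29%
```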

How to win this game

At Datometry, we believe nobody should be beholden to a legacy vendor, and certainly not because the cost of escaping is too high. So, we pioneered database virtualization. It lets IT leaders break free from the lock-in of their legacy providers.

The principle of what we call database virtualization is simple yet powerful. Sitting between application and database, Datometry translates all statements in real time. Dynamic translation doesn’t have the 80% limitation we discussed above. Instead, it gets you to 99.5+% out of the box.
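Conceptually, and only conceptually (this sketch is not Datometry's implementation), the idea looks like this: a shim sits on the application's connection and translates each statement at the moment it executes, when all run-time context is available:

```python
# A deliberately tiny conceptual sketch of run-time translation.
# NOT Datometry's implementation -- just the shape of the idea: because
# translation happens per statement at execution time, even SQL the
# application assembles dynamically is covered.
class VirtualizationShim:
    def __init__(self, target_connection, translate):
        self._conn = target_connection  # connection to the new warehouse
        self._translate = translate     # dialect-translation function

    def execute(self, statement, params=None):
        # The final statement text and its parameters exist here at run
        # time -- context a static, up-front rewrite never sees.
        return self._conn.execute(self._translate(statement), params)

class EchoConnection:  # stand-in for a real database driver
    def execute(self, statement, params=None):
        print("executing:", statement)

shim = VirtualizationShim(EchoConnection(),
                          lambda s: s.replace("SEL ", "SELECT "))
shim.execute("SEL * FROM sales;")  # -> executing: SELECT * FROM sales;
```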

With database virtualization, we have executed an impressive number of migrations off legacy data warehouses to modern cloud systems. We have moved workloads small and large. We have moved some of the lighthouse accounts of legacy vendors in a matter of months, not years: a feat previously considered impossible.

We empower our customers to move their workloads first and, if they so choose, modernize afterward. This way, they get the best of both worlds: they move fast and realize considerable savings right away.

Database virtualization is a powerful antidote to vendor lock-in and those dreaded, never-ending migration projects. To get started, contact us for an assessment of your workload and a comprehensive TCO estimate for the different approaches.

We look forward to demystifying the cost of database migrations for you!

About Mike Waas, CEO

Mike Waas founded Datometry with the vision of redefining enterprise data management. In the past, Mike held key engineering positions at Microsoft, Amazon, Greenplum, EMC, and Pivotal. He earned an M.S. in Computer Science from the University of Passau, Germany, and a Ph.D. in Computer Science from the University of Amsterdam, The Netherlands. Mike has co-authored over 35 peer-reviewed publications and has 20+ patents on data management to his name.