“My database migration project is way over budget and running late by years!” is not only frequently uttered but, according to Gartner’s Adam Ronthal, also triple-redundant. The analyst firm suggests that over 60% of all database migrations fail: a migration is kicked off, starts slipping, overruns the original budget by multiples, until finally management pulls the plug. Time is lost, money is wasted, and careers are wrecked.
Think 60% is already damning? Turns out, if the legacy system happens to be Teradata, the failure rate is significantly higher. While there is no hard data available, anecdotal evidence puts the likelihood of failure of a Teradata migration north of 95%. Successful projects in this category may be the stuff of legends, but in practice they remain extremely rare exceptions.
This is in stark contrast to the fact that practically every enterprise that uses Teradata today is exploring options to replatform away from it. Most of these replatform initiatives are triggered by an enterprise-wide mandate to move to the cloud and tear down data silos. Enterprises are looking to the cloud as a highly innovative environment for their data processing needs. A data warehouse appliance, physical or virtual, no longer fits into this picture.
The desire to leave Teradata, paired with the magnetism of cloud data warehouses like Azure SQL DW or Amazon Redshift, creates an almost irresistible pull. Migrating away from a data warehousing appliance typically finds broad support across the enterprise. So why is it, then, that Teradata migrations seem to fail more often than any other type of database migration? Here are the top 3 reasons why:
Accurately estimating the complexity of a database workload borders on the impossible. Yet, being able to quantify the difficulty involved in rewriting a workload is the foundation of any successful migration plan.
Teradata systems are used in demanding multi-purpose, multi-tenant scenarios. They process millions of statements a day. Query complexity ranges from the simple to the sophisticated, from the accidentally complex to the intentionally “clever”. On top of that, Teradata queries frequently use SQL that predates any standardization. These queries rely on the evaluation order of predicates, are case-sensitive in some situations, and frustratingly case-insensitive in others.
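To make the case-sensitivity quirk concrete, here is a small sketch (table and column names are invented for illustration): in Teradata session mode, character columns default to NOT CASESPECIFIC, so equality comparisons ignore case, while ANSI session mode and most target dialects compare case-sensitively.

```sql
-- Hypothetical table; in Teradata session mode, VARCHAR columns default
-- to NOT CASESPECIFIC, making equality comparisons case-insensitive.
CREATE TABLE customers (last_name VARCHAR(50));

-- In Teradata session mode this matches 'smith', 'Smith', and 'SMITH' alike;
-- in ANSI session mode (and in most target dialects) it matches only 'Smith'.
SELECT * FROM customers WHERE last_name = 'Smith';

-- A faithful port to a case-sensitive dialect needs an explicit fold:
SELECT * FROM customers WHERE UPPER(last_name) = 'SMITH';
```

The trap is that nothing in the query text itself reveals which behavior the original application depends on; only the column definition and the session mode decide it.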
It is important to note that the engineers who execute the migration are not to blame here. It really doesn’t matter how much experience a team brings to the table: they will always have to walk into a migration with very little knowledge of the workload at hand. They will discover landmines and roadblocks. Simple workarounds will snowball into major rewrites, and the complexity of the resulting rewritten statement often eclipses that of the original. All this makes for a challenging cocktail of uncertainty that ultimately translates into inaccurate estimates, impossible timelines, and, eventually, failure.
Teradata is one of the most powerful database systems available. Forty years of maturity may have all but quelled innovation, but it is certainly a solid database engine. Over those four decades, however, the language surface has expanded considerably, making Teradata also one of the most complex systems on the market. Few systems are equally rich in syntax, or in quirks, for that matter. Not surprisingly, this has grave implications for any migration project.
Take, for example, Netezza, the data warehouse company IBM acquired in 2010. Due to a strict end-of-support policy by the new owner, all Netezza instances are undergoing some kind of migration at the time of writing. Netezza’s feature surface is considerably smaller than Teradata’s. Migrating Netezza workloads to a major commercial database system is therefore a relatively simple task.
In contrast, moving Teradata applications to any other system comes with significant challenges. A complex language surface has to be expressed using the less powerful elements of the new target system. Substantial amounts of code need to be added to work around missing Teradata syntax and mismatched semantics. What was originally 30 lines of Teradata SQL may well turn into hundreds of lines in a different SQL dialect; the complexity of the project has just shot up by an order of magnitude. Tempting as it may sound, extrapolating from a much simpler migration to plan a Teradata project is dangerous and usually wrong.
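One common instance of this blow-up, sketched here with invented table and column names: Teradata’s QUALIFY clause filters on a window function in a single statement, while a target dialect without QUALIFY forces the same logic into a nested derived table.

```sql
-- Teradata: keep the latest order per customer in one statement.
SELECT customer_id, order_id, order_date
FROM orders
QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_date DESC) = 1;

-- Equivalent in a dialect without QUALIFY: the window function must move
-- into a derived table so its result can be filtered in an outer WHERE.
SELECT customer_id, order_id, order_date
FROM (
    SELECT customer_id, order_id, order_date,
           ROW_NUMBER() OVER (PARTITION BY customer_id
                              ORDER BY order_date DESC) AS rn
    FROM orders
) AS ranked
WHERE rn = 1;
```

One clause becomes an extra level of nesting here; multiply that across thousands of statements, stored procedures, and views, and the rewrite volume grows far faster than the original line count suggests.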
Migrating out of Teradata is often viewed as a rare opportunity to make changes to the data model and query statements. Every department has its laundry list of pent-up change requests, be it data hygiene and janitorial tasks or fundamental changes to how the business works.
What then happens during a migration is essentially a conflation of the actual migration with additional modernization tasks. It seems to make perfect sense at first: if the schema and potentially millions of SQL statements need to be rewritten anyway, why not make some overdue changes at the same time? Who knows when the next opportunity will present itself? Besides, postponing modernization means rewriting millions of lines of SQL yet again later, at frightening cost. This line of thinking, however, is fraught with risk.
Not surprisingly, what started as a project with a clear focus becomes an exercise in boiling the ocean. The scope of the project just keeps growing as it drags on. The cost a well-intended modernization imposes on the original migration quickly outweighs the benefits and brings the entire project to a screeching halt. In the end, neither party, the migration team nor the modernization team, will be successful.
In order for a replatforming approach to be successful it needs to meet 3 fundamental requirements:
Anything short of disciplined attention to these three is likely to fail. Falling short is not an absolute guarantee of failure, but it puts the probability of success in the low tens, or even single digits, of percentage points. These are odds most of us would find completely unacceptable in any other area of daily life.
Becoming cloud-native just got a whole lot simpler. Contact us today to stop the guesswork and get results.