Treehouse Software Products Ensure Minimal Downtime and Risk-Reducing Fallback Capabilities for Data Migration Cutover

by Wayne Lashley, Chief Business Development Officer for Treehouse Software and Joseph Brady, Marketing and Documentation Manager for Treehouse Software

Organizations that are modernizing from legacy applications or implementing new ones cannot afford downtime. Cutover to a new system has to be nearly instantaneous, and all practical measures to ensure continuous operations must be taken.

Fortunately, offerings from Treehouse Software are ideal for just such circumstances. In fact, Treehouse has been assisting customers with low-risk, minimum-downtime data migration cutovers for many years.

The key to success is the use of change data capture (CDC).

While there was a time when a large-scale data migration could be accomplished overnight or over a weekend, this has become much less feasible given today’s exploding data volumes. The logistics of such implementations are much easier to manage when the migration can be undertaken long in advance of the intended cutover date, and staged in “right-sized” chunks that conform to the organization’s capacity to process them.

Treehouse products such as tRelational/DPS, DPSync, tcVISION, ADAMAGIC and NatQuery provide the ability to efficiently migrate (i.e., ETL, or Extract-Transform-Load) all or selected source data. We refer to this as materialization or bulk load. In many cases, the process can be executed without direct access to the live source data, instead using a backup or image copy. Furthermore, many mainframe data sources can be migrated by processing on a low-TCO (total cost of ownership) LUW (Linux/Unix/Windows) platform.
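To make the staged, "right-sized" approach concrete, here is a minimal Python sketch of a chunked bulk load. The table, columns, and chunk size are purely hypothetical illustrations of the general technique; the Treehouse products themselves handle schema mapping and data transformation far more comprehensively.

```python
import sqlite3

CHUNK_SIZE = 10_000  # "right-sized" to match the organization's processing capacity

def materialize(source_rows, target_db="target.db"):
    """Bulk-load (materialize) source rows into the target in staged chunks.

    source_rows: any iterable of (id, name) tuples -- in practice drawn from
    a backup or image copy rather than the live source database.
    """
    conn = sqlite3.connect(target_db)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS customer (id INTEGER PRIMARY KEY, name TEXT)"
    )
    chunk = []
    for row in source_rows:
        chunk.append(row)
        if len(chunk) >= CHUNK_SIZE:
            conn.executemany("INSERT INTO customer VALUES (?, ?)", chunk)
            conn.commit()  # each chunk commits independently and is restartable
            chunk.clear()
    if chunk:  # flush the final partial chunk
        conn.executemany("INSERT INTO customer VALUES (?, ?)", chunk)
        conn.commit()
    conn.close()
```

Because each chunk commits on its own, a migration interrupted partway through can resume from the last completed chunk rather than starting over.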

After a given set of data has been migrated to the new platform (and the requisite validations completed), Treehouse products—tRelational/DPS, DPSync, tcVISION and NatCDC—can keep the target in sync with ongoing changes in the source through CDC techniques. Such techniques may involve batch-style harvesting of changes from database logs, real-time interfacing with the live database to capture changes, or efficient batch compare processing to derive changes. In each case, the changes identified in the source are transformed appropriately and applied to the target, ensuring ongoing synchronization.
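Of these techniques, batch compare is the easiest to illustrate. The following Python sketch, using hypothetical row shapes, derives insert, update, and delete events by diffing two keyed snapshots of the source; it shows the general idea behind compare-style CDC, not the products' actual implementation.

```python
def batch_compare(previous, current):
    """Derive change events by comparing two keyed snapshots of the source.

    previous, current: dicts mapping primary key -> row tuple.
    Returns (operation, key, row) events to transform and apply to the target.
    """
    changes = []
    for key, row in current.items():
        if key not in previous:
            changes.append(("INSERT", key, row))
        elif previous[key] != row:
            changes.append(("UPDATE", key, row))
    for key in previous.keys() - current.keys():
        changes.append(("DELETE", key, None))
    return changes

# One update, one insert, and one delete are detected between snapshots:
old = {1: ("Ana", "BR"), 2: ("Luis", "MX")}
new = {1: ("Ana", "AR"), 3: ("Marta", "CL")}
print(batch_compare(old, new))
```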

In this way, the data migration is conducted as a cumulative process. It can be commenced at any point and continued until complete; even if completion occurs well in advance of the intended cutover, CDC will keep the target database ready.

At the time of cutover, it is necessary only to quiesce the source database and apply the remaining transactions to the target; the new database is then ready to go, with effectively zero downtime.
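In outline, the cutover reduces to "quiesce, drain, switch." This toy Python sketch, using stand-in objects rather than any real Treehouse API, shows why downtime shrinks to the time needed to drain the final backlog of captured changes.

```python
class Replicator:
    """Toy stand-in for a CDC engine: holds captured-but-unapplied changes."""
    def __init__(self, backlog):
        self.backlog = list(backlog)
    def has_pending(self):
        return bool(self.backlog)
    def apply_next(self, target):
        target.append(self.backlog.pop(0))  # apply the oldest change first

def cutover(replicator, target):
    # 1. Quiesce the source: stop accepting new transactions on the old system.
    # 2. Drain the final backlog of captured changes into the target.
    while replicator.has_pending():
        replicator.apply_next(target)
    # 3. Source and target now agree; the new system becomes the system of record.
    return target

already_applied = [("INSERT", 1), ("INSERT", 2)]        # replicated before cutover
in_flight = Replicator([("UPDATE", 1), ("DELETE", 2)])  # the final backlog
print(cutover(in_flight, already_applied))
```

The longer CDC has been running, the smaller that final backlog is, which is what makes near-zero downtime achievable.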

But what if something goes wrong? Even the best-tested implementations may encounter a post-migration issue that necessitates reverting to the old system.

Once again, Treehouse CDC solutions are the answer.

At the point of cutover to the new system, the roles of source and target are reversed, and the CDC process can similarly be reversed. New changes generated in the new system database are captured and applied back to the old database. If it becomes necessary to revert, then once again the new database can be quiesced and final changes applied to the old database, which resumes its role as the system of record.
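Conceptually, the fallback arrangement is the same replication pipeline with the direction reversed. A toy sketch, with a hypothetical Db stand-in, makes the symmetry plain:

```python
class Db:
    """Toy database that records applied changes and re-emits them as captured."""
    def __init__(self, name):
        self.name, self.log = name, []
    def captured_changes(self):
        return list(self.log)
    def apply(self, change):
        self.log.append(change)

def replicate(reader, writer):
    """One replication pass: the direction is nothing more than argument order."""
    for change in reader.captured_changes():
        writer.apply(change)

old_db, new_db = Db("mainframe"), Db("new-platform")
old_db.log = ["chg-1", "chg-2"]
replicate(old_db, new_db)   # before cutover: the old system feeds the new one
old_db.log.clear()          # cutover: the old system is quiesced
new_db.log.append("chg-3")  # post-cutover activity occurs on the new system
replicate(new_db, old_db)   # roles reversed: the old database stays a warm fallback
print(old_db.log)           # ['chg-1', 'chg-2', 'chg-3']
```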

This might sound too good to be true, but in fact it is a process that has been proven in the field. Recently, Treehouse had a large customer, a multinational manufacturing company based in Latin America, that was moving from a mainframe ADABAS platform to ADABAS on Unix. This customer needed a bulletproof process to migrate hundreds of ADABAS source files from mainframe to Unix at sites in several countries. The customer could tolerate only the most minimal downtime, and required the ability to revert immediately to the mainframe if that was deemed necessary.

The systems integrator (SI) that the company had engaged for the project contacted Treehouse for a solution. After a successful POC (proof of concept), the company licensed products including ADAMAGIC, NatQuery and NatCDCSP to facilitate the migration. Locations were migrated one by one over a period of several months, in each case with a tested “fallback” methodology.

Yet this case involved a similar source and target—ADABAS (though readers familiar with ADABAS will know that there are significant and sometimes-problematic differences between the mainframe and LUW implementations). What if a customer needs to migrate from a legacy database like IMS/DB to a standard relational database like DB2?

At the risk of being repetitive: Treehouse has the answer.

tcVISION provides CDC facilities across a variety of legacy and modern data sources, including ADABAS, VSAM, IMS/DB, CA IDMS, CA Datacom, DB2 (mainframe and LUW), Oracle and SQL Server. CDC can even be accomplished for flat files and ODBC data sources using tcVISION’s batch compare technique.

So it’s entirely feasible to use tcVISION to stage a migration of IMS/DB to DB2 across an extended period of time, and keep DB2 in sync with new changes in IMS/DB until the point of cutover. Post-cutover changes in DB2 can be captured by tcVISION and applied back to IMS/DB in case fallback is needed.

Is the big-bang data migration dead? Perhaps not, even if it’s becoming more and more difficult. But with Treehouse Software, organizations need no longer endure the risk and stress of big-bang migrations.

If you are contemplating a data migration in your future, contact Treehouse Software today.

Free and Informative Treehouse Software White Papers

Download the free Treehouse Software white paper, “Legacy Data Migration: DIY Might Leave You DOA.” This informative white paper outlines how failure of the data migration process can cause failure of an entire application migration/renewal project. It also demonstrates that, given the maturity, wealth of functionality and relatively low cost of tools like tcVISION, as compared to the effort, complexity and risk entailed in a “Do-It-Yourself” solution, there is no reason why a legacy renewal project should run aground on data migration.