Treehouse Software Onsite Training Classes are Available

New installation of a Treehouse Software product? New employees? Need a refresher course? Want to explore untapped product features? Three-to-five-day training sessions are available for tRelational / DPS and other Treehouse products.


Treehouse Software training sessions are customized to meet your site’s unique environment and needs. As an example, the following three-day tRelational / DPS training class was conducted at a large university that is using tRelational / DPS to archive its legacy ADABAS data to Oracle in preparation for an upcoming mainframe system retirement.

Over the course of the three-day session, the Treehouse instructor covered:

  • The tRelational AUTOGEN feature (auto-generation of complete RDBMS schemata – tables, columns, primary keys, and foreign keys – based upon existing ADABAS file structures). AUTOGEN was particularly attractive as a time-efficient way to create the ADABAS-to-Oracle mappings.
  • The customer wanted to use tRelational / DPS to automate as much of the process as possible and to take advantage of tRelational’s batch functions. The workshops in this class are normally all online, but in this case the class worked through one online example after the File Implementation section of the training, jumped ahead to the batch section to show how batch processing works, and then returned to run additional File Implementations, Analysis, and Reporting in batch.
  • The class continued through the Modeling and Mapping section and ran more batch jobs with some additional ADABAS files.
  • The Admin and Configuring DPS Parameters sections were next on the agenda. Given the desire to automate as much as possible, the work focused on setting up a single job stream that could be generated programmatically: fed the name of the file to be implemented, it would perform the AUTOGEN in the same job. The tRelational AUTOGEN user exit was set up so the customer could detect and handle duplicate table names, which arise when multiple ADABAS files contain an identically named MU (such as “COMMENTS”); automatically add an ETR (external transformation routine) where needed (some fields were known to contain HEX values); and then update the DPSCOLLGTH (a sketch of the duplicate-name logic appears after this list).
  • The class finished the Materialization (initial loading of data) training section, set up the jobs to run Materialization, and installed DPSSPLIT (used to separate the combined control and data files into one control file and one data file per materialized table) on one of the customer’s servers. Most class attendees are not the ones responsible for running the Materialization and Propagation (ongoing synchronization of data) jobs, so the instructor does not usually set them up, but in this case it made sense. The class started by running the Materialization from a full database ADASAV taken over the weekend. The output of the Materialization job was FTPed to the customer’s server, and DPSSPLIT was run against it to create the individual loader files. GENDDL (which generates the control statements that can be passed to the RDBMS to define the tables and columns) was also run and its output transferred. After setting up the SQL job to process the DDL, Oracle SQL*Loader was run successfully to load the data.
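
The duplicate-table-name handling described above lends itself to a simple illustration. The following Python sketch is purely hypothetical (the actual tRelational AUTOGEN user exit is site-specific code, and the function and names here are invented for illustration); it shows one way the naming rule could work, qualifying an MU-derived table name with its parent ADABAS file name whenever the bare name would collide:

    # Hypothetical sketch only; not the actual tRelational user exit.
    # The naming rule: qualify an MU-derived table name with its parent
    # ADABAS file name whenever the bare name (e.g. "COMMENTS") collides.

    def unique_table_name(mu_field, adabas_file, used_names):
        """Return a collision-free table name for an MU field."""
        name = mu_field
        if name in used_names:                      # a second file has the same MU
            name = f"{adabas_file}_{mu_field}"
        used_names.add(name)
        return name

    used = set()
    print(unique_table_name("COMMENTS", "EMPLOYEES", used))   # COMMENTS
    print(unique_table_name("COMMENTS", "VEHICLES", used))    # VEHICLES_COMMENTS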

Finally, the class reviewed some additional tRelational / DPS features and, as a final exercise, set up a Propagation job and ran it against a current PLOG to confirm that everything worked satisfactorily.



To find out more about Treehouse training classes for your product, or to schedule onsite training, contact Treehouse Software today.

Download The New Treehouse Software White Paper on Fault-tolerant Data Sharing Between Applications and Databases


Hot Topic or Hot Potato? 

Our informative new white paper, written by Wayne Lashley, Chief Business Development Officer for Treehouse Software, delves into why a well-architected, comprehensive, robust, and scalable replication solution is the key to enabling legacy databases to exchange data reliably and effectively throughout the enterprise. This makes it possible for IT and business users to access corporate data regardless of where it resides. Furthermore, effective enterprise replication can be combined with other techniques to overcome technology constraints and maximize IT effectiveness.

To download this and other free white papers from Treehouse Software, visit our White Papers web page.

Treehouse Software Customer Case Studies Available Online


Read about real-world applications of Treehouse Software products on our Customer Case Studies web page.

Here, you’ll find out how tcACCESS and tcVISION (data integration and replication) and tRelational / DPS and DPSync (ADABAS-to-RDBMS data migration) have been implemented and are being used at some of the largest enterprise sites in the world.

To learn more about how to become another Treehouse Software customer success, contact us today!

L10n in Heterogeneous Data Replication

by Wayne Lashley, Chief Business Development Officer for Treehouse Software

Most software vendors whose product markets extend beyond their own home country are familiar with the concepts of “i18n” and “L10n”, which are numeronyms for “internationalization” and “localization” respectively. i18n is the process of making a software product capable of adaptation to different languages and cultures, while L10n is the specific adaptation process for a given local market.

These terms take on special significance in the context of data replication software products—such as Treehouse’s DPSync, which provides real-time replication of mainframe ADABAS data to relational database (RDBMS) targets like DB2, Microsoft SQL Server and Oracle on various platforms. The very purpose of these products is to take data from a source and apply appropriate L10n to make it usable at the target, which is generally dissimilar in various aspects of the technical environment.

Perhaps the simplest form of L10n, having nothing to do with language or locale, is to transform database-specific field/column datatypes. Alphanumeric (A) fields in ADABAS are often mapped to CHAR or VARCHAR datatypes in an RDBMS, which are conceptually quite similar. Packed (P) fields may be expressed in an RDBMS as NUMBER, INTEGER, NUMERIC, DECIMAL, etc., depending on the vendor implementation and desired usages.
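
As a rough illustration of this kind of datatype L10n, here is a small Python sketch that maps a few ADABAS field formats to plausible Oracle column types. The rules below are simplified assumptions for illustration (the function name and the digit arithmetic are ours, not tRelational’s actual mapping logic):

    # Simplified, hypothetical mapping of ADABAS field formats to Oracle
    # datatypes; real mapping rules are more nuanced than this.

    def oracle_type(fmt, length, decimals=0):
        """Suggest an Oracle column type for an ADABAS field definition."""
        if fmt == "A":                          # Alphanumeric
            return f"VARCHAR2({length})"
        if fmt == "P":                          # Packed decimal: 2 digits per
            return f"NUMBER({length * 2 - 1},{decimals})"  # byte, minus the sign
        if fmt == "B":                          # Binary: no direct equivalent;
            return f"RAW({length})"             # often needs a transformation
        raise ValueError(f"unhandled format {fmt}")

    print(oracle_type("A", 30))       # VARCHAR2(30)
    print(oracle_type("P", 4, 2))     # NUMBER(7,2)
    print(oracle_type("B", 8))        # RAW(8)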

When it comes to Binary (B) format, things get tricky.  An array of bits in an ADABAS field can’t usually be mapped directly to a binary representation in an RDBMS column, due to the differences in the way data are represented between the platforms.

Decades ago, when I was earning my stripes as a novice mainframe programmer, the rules seemed simple: 8 bits made up a byte, and characters were expressed in single bytes encoded in EBCDIC.

(True story: During a university Assembler class many years ago, one of my classmates was muttering to himself, and the professor queried him about the subject of the “conversation”. The student replied “Just practicing my EBCDIC, sir!”)

Later on, I learned about that ASCII column of the “CODE TRANSLATION TABLE” in my indispensable System/370 Reference Summary GX20-1850-3, and I realized there was a whole world of computers beyond mainframes.


But in fact things can be much more complex than simply EBCDIC and ASCII. L10n of data has to take into account the multitude of code pages and conventions that customers may use—and the customizations and exceptions to these.
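
A tiny Python example makes the point, using code pages that ship with Python’s codec library: cp037 is a common US EBCDIC code page and cp1256 is the Windows Arabic code page (the byte values chosen are arbitrary illustrations):

    # The same bytes mean different things under different code pages.
    raw = bytes([0xC1, 0xC2, 0xC3])          # three bytes off the mainframe

    print(raw.decode("cp037"))               # 'ABC' -- interpreted as EBCDIC
    print(raw.decode("latin-1"))             # 'ÁÂÃ' -- same bytes, ASCII-family page
    print(raw.decode("cp1256"))              # the Arabic page differs yet again

    # Re-encoding for the target is the second half of the round trip:
    print("ABC".encode("cp037").hex())       # 'c1c2c3'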

Our European Technical Representative, Hans-Peter Will, has had to become something of an expert in this area over the past few years as he has worked with various customers in the Middle East on DPSync implementations.

Take the case of the way the Arabic language is handled in the context of applications at one site. Arabic is normally read right-to-left. But depending on system configuration, Arabic characters in a given field may be stored either left-to-right or right-to-left. Certain characters are represented in one byte, others in two. The cursive appearance of certain characters must be altered if they appear in the middle of a word rather than on an end. And in certain of this customer’s applications, the same screen display may show both Arabic and English. Even on screens where all of the words are in Arabic, and displayed right-to-left, there may be embedded numbers (e.g., telephone numbers) that need to be displayed left-to-right.
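
To make the storage-order issue concrete, here is a deliberately toy Python sketch. It is an invented illustration, not how DPSync actually handles bidirectional text: assuming a field stored in visual (display) order for a right-to-left script, it restores logical order by reversing the text while flipping embedded digit runs back, since numbers display left-to-right:

    import re

    # Toy illustration only. Letters stand in for Arabic characters;
    # digit runs display left-to-right even inside right-to-left text,
    # so after reversing the stored value they must be flipped back.

    def visual_to_logical(s):
        reversed_all = s[::-1]                  # undo the RTL storage order
        return re.sub(r"\d+", lambda m: m.group(0)[::-1], reversed_all)

    stored = "123-ba"                           # one possible visual storage
    print(visual_to_logical(stored))            # ab-123 (logical order)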

Now take all these complexities and factor in different database management systems (ADABAS vs. Oracle) running on different platforms (mainframe vs. Unix), each of which has its own configuration settings that affect the way characters are stored and displayed. Add to that the question of endianness (big-endian vs. little-endian) of the processing architecture.
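
Endianness is easy to demonstrate. In this minimal Python sketch, the same four bytes decode to two different integers depending on byte order:

    import struct

    raw = bytes([0x00, 0x00, 0x01, 0x02])

    print(struct.unpack(">i", raw)[0])   # 258       (big-endian, mainframe-style)
    print(struct.unpack("<i", raw)[0])   # 33619968  (little-endian, x86-style)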

The first time that Hans-Peter visited the customer in question, Treehouse Software engineers had to figure out how to handle all these issues to ensure that ADABAS data would be replicated accurately and appropriately for use in Oracle-based applications. Fortunately, the combination of great product maturity (DPSync and its key underlying components tRelational/DPS having been battle-tested at countless sites over many years) and product extensibility (the ability to plug in complex custom transformations) enabled DPSync to be readily configured to accomplish the task at hand.

Having learned from that initial experience, Hans-Peter is now on familiar ground when assisting new Arabic-language sites implementing DPSync. Recently he was back in the Middle East visiting one of these new customers, and only hours after product installation he was able to confirm the accuracy of the SQL Server representation of data materialized (initially loaded via what is commonly called ETL, Extract-Transform-Load) from ADABAS using DPSync. The customer was also impressed with the speed of the process, both in terms of configuring the materialization (taking advantage of the tRelational schema auto-generation feature) and executing it (using an ADASAV backup as source, avoiding any workload on ADABAS). That customer is now in production with real-time ADABAS-to-SQL Server replication.

What’s your L10n challenge? Contact Treehouse and learn how DPSync and our other products are able to meet it.

Treehouse Software Products Ensure Minimal Downtime and Risk-Reducing Fallback Capabilities for Data Migration Cutover

by Wayne Lashley, Chief Business Development Officer for Treehouse Software and Joseph Brady, Marketing and Documentation Manager for Treehouse Software

Organizations that are modernizing from legacy applications or implementing new ones cannot afford downtime. Cutover to a new system has to be nearly instantaneous, and all practical measures to ensure continuous operations must be taken.

Fortunately, offerings from Treehouse Software are ideal for just such circumstances. In fact, Treehouse has been assisting customers with low-risk, minimum-downtime data migration cutovers for many years.

The key to success is the use of change data capture (CDC).

While there was a time when a large-scale data migration could be accomplished overnight or over a weekend, this has become much less feasible given today’s exploding data volumes. The logistics of such implementations are much easier to manage when the migration can be undertaken long in advance of the intended cutover date, and staged in “right-sized” chunks that conform to the organization’s capacity to process them.

Treehouse products such as tRelational/DPS, DPSync, tcVISION, ADAMAGIC and NatQuery provide the ability to efficiently migrate (i.e., ETL, or Extract-Transform-Load) all or selected source data. We refer to this as materialization or bulk load. In many cases, the process can be executed without direct access to the live source data, instead using a backup or image copy. Furthermore, many mainframe data sources can be migrated by processing on a low-TCO (total cost of ownership) LUW (Linux/Unix/Windows) platform.

After a given set of data has been migrated to the new platform (and the requisite validations completed), Treehouse products—tRelational/DPS, DPSync, tcVISION and NatCDC—can keep the target in sync with ongoing changes in the source through CDC techniques. Such techniques may involve batch-style harvesting of changes from database logs, real-time interfacing with the live database to capture changes, or efficient batch compare processing to derive changes. In each case, the changes identified in the source are transformed appropriately and applied to the target, ensuring ongoing synchronization.
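
Of the three techniques, batch compare is the simplest to sketch. The following Python fragment is a simplified illustration only (not the actual algorithm used by the Treehouse products): it diffs two keyed snapshots of a table and derives the insert, update, and delete operations needed to bring a target in line:

    # Simplified batch-compare CDC: diff two snapshots keyed by primary key.
    # This derives the changes only; the real products add transformation,
    # transport, and transactional apply on top.

    def derive_changes(old_snapshot, new_snapshot):
        """Each snapshot is a dict: primary key -> row (tuple of values)."""
        changes = []
        for key, row in new_snapshot.items():
            if key not in old_snapshot:
                changes.append(("INSERT", key, row))
            elif old_snapshot[key] != row:
                changes.append(("UPDATE", key, row))
        for key in old_snapshot.keys() - new_snapshot.keys():
            changes.append(("DELETE", key, None))
        return changes

    old = {1: ("Smith", 100), 2: ("Jones", 200)}
    new = {1: ("Smith", 150), 3: ("Brown", 300)}
    print(derive_changes(old, new))
    # [('UPDATE', 1, ('Smith', 150)), ('INSERT', 3, ('Brown', 300)),
    #  ('DELETE', 2, None)]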

In this way, the data migration is conducted as a cumulative process. It can be commenced at any point and continued until completed, and even if the completion is in advance of the intended cutover, CDC will keep the target database ready for such cutover.

At the time of cutover, it is necessary only to quiesce the source database and apply the remaining transactions to the target, and the new database is ready to go, effectively with zero downtime.

But what if something goes wrong? Even the best-tested implementations may encounter a post-migration issue that necessitates reverting to the old system.

Once again, Treehouse CDC solutions are the answer.

At the point of cutover to the new system, the roles of source and target are reversed, and the CDC process can similarly be reversed. New changes generated in the new system database are captured and applied back to the old database. If it becomes necessary to revert, then once again the new database can be quiesced and final changes applied to the old database, which resumes its role as the system of record.

This might sound too good to be true, but in fact it is a process that has been proven in the field. Recently, Treehouse had a large customer, a multinational manufacturing company based in Latin America, that was moving from a mainframe ADABAS platform to ADABAS on Unix. This customer needed a bulletproof process to migrate hundreds of ADABAS source files from mainframe to Unix at sites in multiple countries. The customer could tolerate only the most minimal downtime, and required the ability to immediately revert to the mainframe if that was deemed necessary.

The systems integrator (SI) that the company had engaged for the project contacted Treehouse for a solution. After a successful POC (proof of concept), the company licensed products including ADAMAGIC, NatQuery and NatCDCSP to facilitate the migration. Locations were migrated one by one over a period of several months, in each case with a tested “fallback” methodology.

Yet this case involved a similar source and target—ADABAS (though readers familiar with ADABAS will know that there are significant and sometimes-problematic differences between the mainframe and LUW implementations). What if a customer needs to migrate from a legacy database like IMS/DB to a standard relational database like DB2?

At the risk of being repetitive: Treehouse has the answer.

tcVISION provides CDC facilities across a variety of legacy and modern data sources, including ADABAS, VSAM, IMS/DB, CA IDMS, CA Datacom, DB2 (mainframe and LUW), Oracle and SQL Server. CDC can even be accomplished for flat files and ODBC data sources using tcVISION’s batch compare technique.

So it’s entirely feasible to use tcVISION to stage a migration of IMS/DB to DB2 across an extended period of time, and keep DB2 in sync with new changes in IMS/DB until the point of cutover. Post-cutover changes in DB2 can be captured by tcVISION and applied back to IMS/DB in case fallback is needed.

Is the big-bang data migration dead? Perhaps not, even if it’s becoming more and more difficult. But with Treehouse Software, organizations need no longer endure the risk and stress of big-bang migrations.

If you are contemplating a data migration in your future, contact Treehouse Software today.