Treehouse Software Customers are Looking Upwards to Mainframe-to-Cloud Data Replication

The search is on for a mature, easy-to-implement Extract, Transform, and Load (ETL) solution for migrating mission-critical data to the cloud.

[Diagram: Mainframe-to-Cloud]

Treehouse Software’s tcVISION supports a vast array of integration scenarios throughout the enterprise, providing easy and fast data migration for mainframe application modernization projects and enabling data replication between mainframe, Linux, Unix, and Windows platforms. This innovative technology offers comprehensive abilities to identify and capture changes occurring in mainframe and relational databases, then publish the required information to an impressive variety of targets, both on-premises and cloud-based.

Mainframe-to-Cloud Use Case Example…

BAWAG P.S.K. is one of the largest banks in Austria, with more than 1.6 million private and business customers, and is a well-known brand in the country. Their business strategy is oriented towards low risk and high efficiency.

BAWAG was looking to reduce the load on their IBM mainframe and, as a result, reduce costs. The project involved offloading data from their core database system to a less expensive system in real time, and providing read access from that system to the new infrastructure. The primary motivator for this data migration was the constantly increasing CPU costs on the mainframe caused by the growing transaction load of online banking, mobile banking, and the use of self-service devices.

BAWAG ultimately migrated their online banking application to the cloud using tcVISION. Real-time event handling, real-time analytics, and real-time fraud prevention are only a few of the use cases that the bank’s solution currently covers.

[Diagram: BAWAG implementation]

The bank decided to use tcVISION to migrate z/OS DB2 data into a Hadoop data lake (a storage repository that holds raw data in its native format). Twenty million transactions were processed within 15 minutes.

Cost Reductions Seen Immediately

BAWAG is now seeing a 35-40 percent reduction in MIPS consumption for online processing during business hours. After hours, the savings are smaller, because the mainframe workload is mainly batch processing. Currently, approximately 30 GB of changed data (uncompressed) is replicated from DB2 per day.

In addition to the primary usage scenario, BAWAG can also cover additional use cases. These include real-time event handling and stream processing, analytics based on real-time data, and high-performance reporting and analysis of structured and unstructured data. The system can be operated inexpensively on commodity hardware and has no practical scalability limitations. Compared to the savings, tcVISION’s replication costs (CPU consumption) are very low.

Additionally, BAWAG plans to extend the use of tcVISION in the future, including implementing real-time replication from Oracle into the data lake.


Find out more about tcVISION — Enterprise ETL and Real-Time Data Replication Through Change Data Capture

tcVISION supports a vast array of integration scenarios throughout the enterprise, providing easy and fast data migration for mainframe application modernization projects and enabling bi-directional data replication between mainframe, Linux, Unix, and Windows platforms. This innovative technology offers comprehensive abilities to identify and capture changes occurring in mainframe and relational databases, then publish the required information to an impressive variety of targets, both on-premises and cloud-based.

[Diagram: tcVISION connection overview]

tcVISION acquires data in bulk or via change data capture methods, including in real time, from virtually any IBM mainframe data source (Software AG Adabas, IBM DB2, IBM VSAM, IBM IMS/DB, CA IDMS, CA Datacom, even sequential files), and transforms and delivers it to virtually any target. In addition, the same product can extract and replicate data from a variety of non-mainframe sources, including Adabas LUW, Oracle Database, Microsoft SQL Server, IBM DB2 LUW and DB2 BLU, IBM Informix, and PostgreSQL.



Visit the Treehouse Software website for more information on tcVISION, or contact us to discuss your needs.

TREETIP: tcVISION Allows for Surprisingly Innovative Uses


Treehouse Professional Services consultants help Adabas / Natural customers in a variety of ways, including DBA services, performance tuning, change management implementation and training, and data replication planning and training. Our experience and long history of service to the Adabas / Natural community help us create innovative solutions for our customers’ challenges.

Recently, Treehouse Senior Technical Representative Chris Rudolph assisted a customer with a tricky data replication problem. The customer uses tcVISION to perform bi-directional replication between Adabas and an RDBMS during the phase-in of a new application. Unfortunately, the new application incorrectly updated certain columns in the RDBMS, and the bad values were then replicated to Adabas. The customer attempted to address the issue by running a series of ADASEL reports against the Adabas PLOG and manually checking for “bad” transactions, a very time-consuming process that pulled the Adabas DBA away from her normal duties.

Chris explained that tcVISION could expedite the process by replicating all transactions for the Adabas file to a journal table capturing the “before” and “after” values of the problematic columns. The developers working on the new application could then identify invalid values, correct the application, and patch the data themselves. This also allowed the Adabas DBA to return to her normal duties.

The journal table now includes columns to display the “before” and “after” values of the corrupted column, Adabas transaction time, end transaction time, operation and Adabas userid. The customer’s developers immediately recognized immense value from being able to query the journal table to find bad data, patch the data, prove that corruption is no longer taking place, and verify that all corrupted instances of the data have been patched. Journal tables have been added for all replicated Adabas files, and the developers now rely on the journal tables for all of their data patches.
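To make the technique concrete, here is a minimal sketch in Python with SQLite of the kind of query developers could run against such a before/after journal table. The table name journal_customer, its columns, and the “balance is no longer numeric” rule are hypothetical, invented purely for illustration; they are not the customer’s actual layout or tcVISION’s journal format.

    # Illustrative sketch only: the journal layout and the validity rule below are
    # hypothetical, not tcVISION's actual journal format.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE journal_customer (
            isn            INTEGER,   -- Adabas record identifier
            before_balance TEXT,      -- column value before the update
            after_balance  TEXT,      -- column value after the update
            tx_start       TEXT,      -- Adabas transaction time
            tx_end         TEXT,      -- end-transaction time
            operation      TEXT,      -- INSERT / UPDATE / DELETE
            adabas_userid  TEXT
        )
    """)

    # A "bad" update is one whose after-image violates a business rule,
    # e.g. a balance value that no longer starts with a digit.
    bad_rows = conn.execute("""
        SELECT isn, before_balance, after_balance, tx_start, adabas_userid
        FROM journal_customer
        WHERE operation = 'UPDATE'
          AND after_balance NOT GLOB '[0-9]*'
        ORDER BY tx_start
    """).fetchall()

    for row in bad_rows:
        print(row)   # candidates for the developers' data-patch process

Queries of this kind are also what let the developers later prove that no new corrupted values are appearing and that every corrupted instance has been patched.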



Mainframe CDC from Treehouse Software

The globalization of markets, increasing data volumes, and high demand for up-to-date information require new data transfer and exchange solutions for heterogeneous IT architectures, and as many customers have discovered, Treehouse Software has the right product (or combination of products) to meet any conceivable mainframe data migration, replication, or integration requirement. To meet many of these needs, Treehouse Software’s proven and mature tcVISION product moves as little data as possible, and as much as necessary. tcVISION is an innovative software solution that processes changed data in real time, in intervals, or event-based.

[Diagram: Change Data Capture]

The tcVISION solution focuses on change data capture (CDC) when transferring information between mainframe data sources and LUW databases and applications. Changes occurring in any mainframe application data are tracked and captured, and then published to a variety of RDBMS and other targets.

[Diagram: tcVISION overview]

tcVISION enables bidirectional replication for DB2, Oracle, and SQL Server running on Linux/Unix/Windows, and synchronizes each data source, first by doing a bulk load from source(s) to target and then by replicating only committed changes from source(s) to target. As a result, there can never be ambiguity as to whether a query against the target database involves uncommitted data.
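To illustrate what “only committed changes” means in practice, here is a small Python sketch of the general buffering idea: captured changes are held per transaction and forwarded to the target only when a commit is seen. The record format and function names are assumptions made for this example; they do not depict tcVISION’s internal implementation.

    # Sketch of committed-only change apply: buffer changes per transaction and
    # forward them only on COMMIT, so the target never sees in-flight work.
    from collections import defaultdict

    def apply_committed_only(change_stream, apply_to_target):
        pending = defaultdict(list)              # open transactions, keyed by transaction id
        for rec in change_stream:
            if rec["op"] in ("INSERT", "UPDATE", "DELETE"):
                pending[rec["txid"]].append(rec)
            elif rec["op"] == "COMMIT":
                for change in pending.pop(rec["txid"], []):
                    apply_to_target(change)      # only committed work reaches the target
            elif rec["op"] == "ROLLBACK":
                pending.pop(rec["txid"], None)   # uncommitted changes are discarded

    # Toy example: transaction 2 rolls back, so its INSERT never reaches the target.
    stream = [
        {"txid": 1, "op": "UPDATE", "table": "ACCOUNT", "key": 42, "after": {"BAL": 100}},
        {"txid": 2, "op": "INSERT", "table": "ACCOUNT", "key": 43, "after": {"BAL": 0}},
        {"txid": 2, "op": "ROLLBACK"},
        {"txid": 1, "op": "COMMIT"},
    ]
    apply_committed_only(stream, apply_to_target=print)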

Read some tcVISION customer success stories here.


Visit the Treehouse Software website for more information on tcVISION, or contact us to discuss your needs.

 

TREETIP: tcVISION’s Enhanced Web Statistics Functionality

by Joseph Brady, Manager of Marketing and Technical Documentation at Treehouse Software, Inc.


With version 6 of tcVISION (Treehouse Software’s product for transferring information between mainframe data sources and LUW databases and applications), enhanced web statistics functionality and a new web server have been introduced. The server can be accessed from any standard web browser (Internet Explorer, Firefox, Opera, Chrome, Safari, etc.).

This valuable feature enables users to view data from the tcVISION Manager Monitor, along with statistical and operational information from the tcVISION Manager network. The Manager Monitor displays, in a large, scalable window, diagrams showing “running” transfers and “not running” processes, as well as a diagram showing statistical data.

The following screen shot shows how the tcVISION web statistics can be displayed to the user…

[Screen shot: tcVISION web statistics display]

An easy-to-use menu allows complete control of, and access to, all processes within the tcVISION Manager Monitor…

[Screen shot: tcVISION web statistics menu]

  • “Scripts” shows running processes, completed processes, and not running processes.
  • “Server messages” shows the server messages.
  • “Manager network” shows all Managers of connected Manager Networks.
  • “Statistics” shows a selection of Managers and corresponding scripts from the monitoring database. A selection list of different diagram types is available, along with a button to save chosen diagrams and their selected settings.
  • “Profiles” shows the profiles of the monitoring database.
  • “Manager” shows the name, version, revision, and operating system of the observed Managers.


TREETIP: Did You Know About tRelational’s Schema Auto-Generation Feature?

by Joseph Brady, Manager of Marketing and Technical Documentation at Treehouse Software, Inc.

tRelational, the data analysis, modeling, and mapping component of Treehouse Software’s Adabas-to-RDBMS product set, provides three options for developing RDBMS data models and mapping Adabas fields to RDBMS columns:

Option 1: Auto-generation

Option 2: Importation of existing RDBMS schema elements

Option 3: Detailed definition and manipulation of schema and mapping elements using tRelational

The Auto-generation function can be an extremely useful productivity aid. By simply invoking this function and specifying an Adabas file structure, a fully functional corresponding RDBMS schema (tables, columns, primary keys, foreign key relationships, and constraints) and the appropriate mappings are created virtually instantaneously. The table and column names, datatypes, lengths, and mappings/transformations are all automatically tailored to the specific RDBMS product, platform, and version; the user need not be an expert in the RDBMS.

tRelational’s schema auto-generation simply requires specification of an Adabas file structure and associated options…

[Screen shot: tRelational auto-generation]

The auto-generated model can be immediately used to generate both RDBMS DDL and parameters for the Data Propagation System (DPS) component. Within minutes of identifying the desired Adabas file or files to tRelational, the physical RDBMS schema can be implemented on the target platform and DPS can begin materializing and propagating data to load into the tables.
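As a rough illustration of what auto-generation does conceptually, the Python sketch below turns a toy Adabas field list into normalized DDL, splitting an MU (multiple-value) field into a child table keyed by ISN. The field names, the type map, and the DDL style are assumptions for this example only; tRelational’s actual generation logic, naming rules, and supported options are far more extensive.

    # Hypothetical sketch of schema auto-generation: Adabas field formats/lengths are
    # mapped to RDBMS datatypes, and an MU field is split into a child table.
    ADABAS_FILE = {
        "name": "EMPLOYEES",
        "fields": [
            {"name": "PERSONNEL-ID", "format": "A", "length": 8,  "mu": False},
            {"name": "FIRST-NAME",   "format": "A", "length": 20, "mu": False},
            {"name": "SALARY",       "format": "P", "length": 9,  "mu": False},
            {"name": "LANG",         "format": "A", "length": 3,  "mu": True},
        ],
    }

    def rdbms_type(fmt, length):
        # Minimal type map for the example; a real product tailors this per RDBMS.
        return {"A": f"VARCHAR({length})", "P": f"NUMERIC({length})"}.get(fmt, "VARCHAR(255)")

    def autogen_ddl(adabas_file):
        base = adabas_file["name"]
        cols = ["ISN NUMERIC(10) NOT NULL PRIMARY KEY"]   # Adabas ISN as surrogate key
        statements = []
        for f in adabas_file["fields"]:
            col = f'{f["name"].replace("-", "_")} {rdbms_type(f["format"], f["length"])}'
            if f["mu"]:   # multiple-value field becomes a child table with a foreign key
                statements.append(
                    f"CREATE TABLE {base}_{f['name'].replace('-', '_')} (\n"
                    f"  ISN NUMERIC(10) NOT NULL REFERENCES {base}(ISN),\n"
                    f"  OCCURRENCE NUMERIC(3) NOT NULL,\n  {col}\n);"
                )
            else:
                cols.append(col)
        statements.insert(0, f"CREATE TABLE {base} (\n  " + ",\n  ".join(cols) + "\n);")
        return statements

    for stmt in autogen_ddl(ADABAS_FILE):
        print(stmt)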

It is important to note that these modeling options complement each other and can be used in combination to meet any requirements. Auto-generated schema elements can be completely customized “after the fact”, as can imported elements. Auto-generation can be used at the file level to generate complete tables, and at the field level to generate table contents, making it easy to manually define and map one or more columns within a table, or even to denormalize MU/PE structures into a set of discrete columns.


About Treehouse Software’s tRelational / DPS Product Set

[Diagram: tRelational / DPS]

tRelational / DPS is a robust product set that provides modeling and data transfer of legacy Adabas data into modern RDBMS-based platforms for Internet/Intranet/Business Intelligence applications. Treehouse Software designed these products to meet the demands of large, complex environments requiring product maturity, productivity, feature-richness, efficiency and high performance.

The tRelational component provides complete analysis, modeling and mapping of Adabas files and data elements to the target RDBMS tables and columns. DPS (Data Propagation System) performs Extract, Transformation, and Load (ETL) functions for the initial bulk RDBMS load and incremental Change Data Capture (CDC) batch processing to synchronize Adabas updates with the target RDBMS.

Visit the Treehouse Software website for more information on tRelational / DPS, or contact us to discuss your needs.

Treehouse Software Products and Professional Services for Adabas / Natural Help with ALLETE’s Mainframe Retirement

by Mike Kuechenberg, Senior Technical Representative at Treehouse Software, Inc.

Since 2010, Treehouse Software consultants have provided product training and Professional Services for the planned retirement of the mainframe environment at ALLETE.

Here’s a look at three parts of the project in which Treehouse was involved:

tRelational / DPS

In order to replicate Adabas data to Oracle, ALLETE licensed Treehouse’s tRelational / DPS product set. After some initial training by Treehouse Professional Services, ALLETE staff were able to model and map Adabas files to Oracle schemata using tRelational, and to deploy DPS jobs to materialize (ETL) and propagate (CDC) the Adabas data into Oracle. This continued until the final cutover to the new systems. Just before the mainframe was decommissioned, some additional Adabas files were materialized by Treehouse Professional Services for archival purposes.

DBA Consulting

Treehouse Professional Services took over the DBA responsibilities for ALLETE starting in September 2010. The responsibilities included Software AG product upgrades, daily checks on available database space and file extents, and handling requests from ALLETE staff such as file restores, field maintenance, and any other tasks requiring DBA involvement.

Systems Support

Additionally, Treehouse Professional Services assumed responsibility for ALLETE’s mainframe systems programming activities. Tasks included monthly IPLs of the system, applying product licenses, handling system error situations, defining new DASD, upgrading IBM and third-party products, providing periodic SOX reports, and handling other requests from ALLETE staff.

Conclusion of a Successful Engagement

As this five-year Treehouse Software / ALLETE partnership ends, we wish our friends all the best as they move forward with their new systems and strategies.

“During the time our companies have been working together, Treehouse personnel were always responsive and are experts in their fields. Thanks for the flexibility and great support over the years.”

Eric Peterson, Manager, IT Infrastructure, ITS Systems, ALLETE Inc.


About Treehouse Software Professional Services

Treehouse Professional Services offers a proven, cost-effective alternative to full-time in-house systems programmers, system administrators, and DBAs. Using our team of experts and monitoring techniques, Treehouse offers world-class, on-site and remote system support and DBA service at affordable prices. Treehouse Professional Services offers various levels of system monitoring and administration. Our team will monitor critical factors within your Software AG environment, apply preventive maintenance for your systems and databases, and keep your software up to date. We even provide z/OS systems programming services! We are strongly committed to our customers’ success, and we deliver the highest quality DBA and support solutions.

Visit the Treehouse Software website for more information on Treehouse Professional Services, or contact us to discuss your needs.



Treehouse Software will be Exhibiting at SHARE and CA World

If you are attending SHARE in Pittsburgh in August, or CA World in Las Vegas in November, be sure to stop by the Treehouse Software booth and say hello!

We’ll be featuring our comprehensive and flexible portfolio of solutions for integration, replication, and migration of data between mainframe sources and any target, application or platform using ETL, CDC, SQL, XML and SOA technologies.


SHARE Technology Exchange Expo
Visit us at Booth #522 | August 3-8
David L. Lawrence Convention Center
Pittsburgh, PA


 


CA World ’14
November 9–12, 2014
Mandalay Bay Resort & Casino
Las Vegas, Nevada


Visitors to our exhibits will learn how Treehouse Software is currently providing several large organizations with ETL and real-time, bi-directional data replication using tcVISION. tcVISION provides easy and fast bi-directional data replication between mainframe, Linux, Unix, and Windows platforms.

[Diagram: tcVISION architecture]


We will also showcase tcACCESS, which integrates mainframe data and applications with open systems and Windows.

[Diagram: tcACCESS]

Download The New Treehouse Software White Paper on Fault-tolerant Data Sharing Between Applications and Databases


Hot Topic or Hot Potato? 

Our informative new white paper, written by Wayne Lashley, Chief Business Development Officer for Treehouse Software, delves into why a well-architected, comprehensive, robust, and scalable replication solution is the key to enabling legacy databases to exchange data reliably and effectively throughout the enterprise. This makes it possible for IT and business users to access corporate data regardless of where it resides. Furthermore, effective enterprise replication can be combined with other techniques to overcome technology constraints and maximize IT effectiveness.

To download this and other free white papers from Treehouse Software, visit our White Papers web page.

L10n in Heterogeneous Data Replication

by Wayne Lashley, Chief Business Development Officer for Treehouse Software

Most software vendors whose product markets extend beyond their own home country are familiar with the concepts of “i18n” and “L10n”, which are numeronyms for “internationalization” and “localization” respectively. i18n is the process of making a software product capable of adaptation to different languages and cultures, while L10n is the specific adaptation process for a given local market.

These terms take on special significance in the context of data replication software products—such as Treehouse’s DPSync, which provides real-time replication of mainframe ADABAS data to relational database (RDBMS) targets like DB2, Microsoft SQL Server and Oracle on various platforms. The very purpose of these products is to take data from a source and apply appropriate L10n to make it usable at the target, which is generally dissimilar in various aspects of the technical environment.

Perhaps the simplest form of L10n, having nothing to do with language or locale, is to transform database-specific field/column datatypes. Alphanumeric (A) fields in ADABAS are often mapped to CHAR or VARCHAR datatypes in an RDBMS, which are conceptually quite similar. Packed (P) fields may be expressed in an RDBMS as NUMBER, INTEGER, NUMERIC, DECIMAL, etc., depending on the vendor implementation and desired usages.

When it comes to Binary (B) format, things get tricky. An array of bits in an ADABAS field can’t usually be mapped directly to a binary representation in an RDBMS column, due to the differences in the way data are represented between the platforms.
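A concrete, if simplified, example of such a transformation is decoding a packed (P) field: packed decimal stores two digits per byte, with the final half-byte carrying the sign, and the bytes must be converted before they can populate a NUMERIC or DECIMAL column. The Python sketch below is illustrative only and is not Treehouse code.

    # Decode an Adabas packed-decimal (P) field into a Python integer.
    # Packed decimal: two digits per byte; the last low nibble is the sign
    # (0xC or 0xF = positive, 0xD = negative).
    def unpack_packed_decimal(raw: bytes) -> int:
        digits = []
        sign = 1
        for i, byte in enumerate(raw):
            hi, lo = byte >> 4, byte & 0x0F
            if i < len(raw) - 1:
                digits.extend([hi, lo])
            else:
                digits.append(hi)                 # last byte: high nibble is a digit...
                sign = -1 if lo == 0x0D else 1    # ...low nibble is the sign
        value = 0
        for d in digits:
            value = value * 10 + d
        return sign * value

    print(unpack_packed_decimal(bytes([0x12, 0x34, 0x5C])))   # 12345
    print(unpack_packed_decimal(bytes([0x98, 0x7D])))         # -987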

Decades ago, when I was earning my stripes as a novice mainframe programmer, the rules seemed simple: 8 bits made up a byte, and characters were expressed in single bytes encoded in EBCDIC.

(True story: During a university Assembler class many years ago, one of my classmates was muttering to himself, and the professor queried him about the subject of the “conversation”. The student replied “Just practicing my EBCDIC, sir!”)

Later on, I learned about that ASCII column of the “CODE TRANSLATION TABLE” in my indispensable System/370 Reference Summary GX20-1850-3, and I realized there was a whole world of computers beyond mainframes.


But in fact things can be much more complex than simply EBCDIC and ASCII. L10n of data has to take into account the multitude of code pages and conventions that customers may use—and the customizations and exceptions to these.
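A tiny Python sketch illustrates the point: the same raw bytes decode to different characters depending on which EBCDIC code page the source system actually used, which is why replication has to be told (or must discover) the correct one. The choice of cp037 and cp500 here is simply an example of two common EBCDIC variants for which Python ships codecs.

    # The same bytes, interpreted under two different EBCDIC code pages.
    raw = bytes([0xC8, 0xC5, 0xD3, 0xD3, 0xD6, 0x4A, 0x5A])

    print(raw.decode("cp037"))   # HELLO¢!  (0x4A is the cent sign, 0x5A is '!')
    print(raw.decode("cp500"))   # HELLO[]  (the same bytes are brackets here)

    # And the reverse direction, preparing text for an EBCDIC target:
    print("HELLO".encode("cp037").hex(" "))   # c8 c5 d3 d3 d6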

Our European Technical Representative, Hans-Peter Will, has had to become somewhat of an expert in this over the past few years as he has worked with various customers in the Middle East on DPSync implementations.

Take the case of the way the Arabic language is handled in the context of applications at one site. Arabic is normally read right-to-left. But depending on system configuration, Arabic characters in a given field may be stored either left-to-right or right-to-left. Certain characters are represented in one byte, others in two. The cursive appearance of certain characters must be altered if they appear in the middle of a word rather than on an end. And in certain of this customer’s applications, the same screen display may show both Arabic and English. Even on screens where all of the words are in Arabic, and displayed right-to-left, there may be embedded numbers (e.g., telephone numbers) that need to be displayed left-to-right.

Now take all these complexities and factor in different database management systems (ADABAS vs. Oracle) running on different platforms (mainframe vs. Unix), each of which has its own configuration settings that affect the way characters are stored and displayed. Add to that the question of endianness (big-endian vs. little-endian) of the processing architecture.
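Endianness alone is enough to corrupt a value if it is ignored. The following minimal Python sketch just demonstrates the general phenomenon; it is not how any Treehouse product handles it internally.

    # The same four bytes read as a 32-bit integer under each byte order.
    import struct

    raw = bytes([0x00, 0x00, 0x01, 0x02])

    big    = struct.unpack(">I", raw)[0]   # big-endian, as a mainframe would store it
    little = struct.unpack("<I", raw)[0]   # little-endian, as x86 hardware would read it
    print(big, little)                     # 258 vs 33619968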

The first time that Hans-Peter visited the customer in question, Treehouse software engineers had to figure out how to handle all these issues to ensure that ADABAS data would be replicated accurately and appropriately for use in Oracle-based applications. Fortunately, the combination of great product maturity (DPSync and its key underlying components tRelational/DPS having been battle-tested at countless sites over many years) and product extensibility (the ability to plug in complex custom transformations) enabled DPSync to be readily configured to accomplish the task at hand.

Having learned from that initial experience, Hans-Peter is now on familiar ground when assisting new Arabic-language sites implementing DPSync. Recently he was back in the Middle East visiting one of these new customers, and only hours after product installation he was able to confirm the accuracy of the SQL Server representation of data materialized (initially loaded via what is commonly called ETL, Extract-Transform-Load) from ADABAS using DPSync. The customer was also impressed with the speed of the process, both in terms of configuring the materialization (taking advantage of the tRelational schema auto-generation feature) and executing it (using an ADASAV backup as source, avoiding any workload on ADABAS). That customer is now in production with real-time ADABAS-to-SQL Server replication.

What’s your L10n challenge? Contact Treehouse and learn how DPSync and our other products are able to meet it.

Treehouse Software Products Ensure Minimal Downtime and Risk-Reducing Fallback Capabilities for Data Migration Cutover

by Wayne Lashley, Chief Business Development Officer for Treehouse Software and Joseph Brady, Marketing and Documentation Manager for Treehouse Software

Organizations that are modernizing from legacy applications or implementing new ones cannot afford downtime. Cutover to a new system has to be nearly instantaneous, and all practical measures to ensure continuous operations must be taken.

Fortunately, offerings from Treehouse Software are ideal for just such circumstances. In fact, Treehouse has been assisting customers with low-risk, minimum-downtime data migration cutovers for many years.

The key to success is the use of change data capture (CDC).

While there was a time when a large-scale data migration could be accomplished overnight or over a weekend, this has become much less feasible given today’s exploding data volumes. The logistics of such implementations are much easier to manage when the migration can be undertaken long in advance of the intended cutover date, and staged in “right-sized” chunks that conform to the organization’s capacity to process them.

Treehouse products such as tRelational/DPS, DPSync, tcVISION, ADAMAGIC and NatQuery provide the ability to efficiently migrate (i.e., ETL, or Extract-Transform-Load) all or selected source data. We refer to this as materialization or bulk load. In many cases, the process can be executed without direct access to the live source data, instead using a backup or image copy. Furthermore, many mainframe data sources can be migrated by processing on a low-TCO (total cost of ownership) LUW (Linux/Unix/Windows) platform.

After a given set of data has been migrated to the new platform (and the requisite validations completed), Treehouse products—tRelational/DPS, DPSync, tcVISION and NatCDC—can keep the target in sync with ongoing changes in the source through CDC techniques. Such techniques may involve batch-style harvesting of changes from database logs, real-time interfacing with the live database to capture changes, or efficient batch compare processing to derive changes. In each case, the changes identified in the source are transformed appropriately and applied to the target, ensuring ongoing synchronization.
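Of those techniques, the batch compare approach is the easiest to picture: the change set is derived by comparing a prior snapshot of the source with the current one, keyed on each record’s primary key. The Python sketch below shows only that classification logic, with made-up data; real products do this far more efficiently and handle far more detail.

    # Derive inserts, updates, and deletes by comparing two keyed snapshots.
    def batch_compare(previous: dict, current: dict):
        changes = []
        for key, row in current.items():
            if key not in previous:
                changes.append(("INSERT", key, row))
            elif row != previous[key]:
                changes.append(("UPDATE", key, row))
        for key in previous:
            if key not in current:
                changes.append(("DELETE", key, None))
        return changes

    old = {1: {"NAME": "SMITH", "DEPT": "A01"}, 2: {"NAME": "JONES", "DEPT": "B02"}}
    new = {1: {"NAME": "SMITH", "DEPT": "C03"}, 3: {"NAME": "DOE",   "DEPT": "B02"}}
    for change in batch_compare(old, new):
        print(change)   # UPDATE for key 1, INSERT for key 3, DELETE for key 2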

In this way, the data migration is conducted as a cumulative process. It can be commenced at any point and continued until completed, and even if the completion is in advance of the intended cutover, CDC will keep the target database ready for such cutover.

At the time of cutover, it is necessary only to quiesce the source database and apply the remaining transactions to the target, and the new database is ready to go, effectively with zero downtime.

But what if something goes wrong? Even the best-tested implementations may encounter a post-migration issue that necessitates reverting to the old system.

Once again, Treehouse CDC solutions are the answer.

At the point of cutover to the new system, the roles of source and target are reversed, and the CDC process can similarly be reversed. New changes generated in the new system database are captured and applied back to the old database. If it becomes necessary to revert, then once again the new database can be quiesced and final changes applied to the old database, which resumes its role as the system of record.

This might sound too good to be true, but in fact it is a process that has been proven in the field. Recently, Treehouse had a large customer, a multinational manufacturing company based in Latin America, that was moving from a mainframe ADABAS platform to ADABAS on Unix. This customer needed a bulletproof process to migrate hundreds of ADABAS source files from mainframe to Unix at sites located in multiple different countries. The customer could tolerate only the most minimal downtime, and required the ability to immediately revert to the mainframe if it was deemed necessary.

The systems integrator (SI) that the company had engaged for the project contacted Treehouse for a solution. After a successful POC (proof of concept), the company licensed products including ADAMAGIC, NatQuery and NatCDCSP to facilitate the migration. Locations were migrated one by one over a period of several months, in each case with a tested “fallback” methodology.

Yet this case involved a similar source and target—ADABAS (though readers familiar with ADABAS will know that there are significant and sometimes-problematic differences between the mainframe and LUW implementations). What if a customer needs to migrate from a legacy database like IMS/DB to a standard relational database like DB2?

At the risk of being repetitive: Treehouse has the answer.

tcVISION provides CDC facilities across a variety of legacy and modern data sources, including ADABAS, VSAM, IMS/DB, CA IDMS, CA Datacom, DB2 (mainframe and LUW), Oracle and SQL Server. CDC can even be accomplished for flat files and ODBC data sources using tcVISION’s batch compare technique.

So it’s entirely feasible to use tcVISION to stage a migration of IMS/DB to DB2 across an extended period of time, and keep DB2 in sync with new changes in IMS/DB until the point of cutover. Post-cutover changes in DB2 can be captured by tcVISION and applied back to IMS/DB in case fallback is needed.

Is the big-bang data migration dead? Perhaps not, even if it’s becoming more and more difficult. But with Treehouse Software, organizations need no longer endure the risk and stress of big-bang migrations.

If you are contemplating a data migration in your future, contact Treehouse Software today.