Treehouse Software Customer Case Study: A State Government Agency’s Real-time Data Synchronization Between IBM Mainframe Adabas and AWS

by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software, Inc.

Mainframe_to_AWS_Graphic

Software AG’s Adabas is a mainframe database that is still heavily used by government sites in the U.S. and around the world. This blog focuses on a current Treehouse Software customer – a U.S. State Government Agency that runs Adabas on its mainframe system.

Business Issue

The Agency’s modernization team was looking for a Change Data Capture (CDC) technology solution that would enable them to synchronize their mainframe Adabas data to AWS, specifically to an Amazon RDS database. As with most Treehouse customers, the State’s mainframe contains vital data that must always be highly available, so rather than attempting a complete migration off the mainframe, the modernization team decided to implement a multi-year data replication plan. This allows the mainframe legacy teams to maintain existing critical applications while the modernization team develops new applications on AWS.

After researching various technologies, the Agency discovered tcVISION on the AWS Partner Network Blog and contacted Treehouse Software to discuss the project and see a demonstration of Mainframe-to-AWS data replication.

Addressing the Uniqueness of Adabas

Having specialized in tools and services complementary to Adabas/Natural applications since 1982, Treehouse Software has encountered and successfully addressed many unique scenarios within the Adabas environment. The Treehouse technical team documented three primary issues with Adabas/Natural that the Agency needed to consider when planning data replication to AWS:

  1. Adabas has no concept of “transaction isolation”, in that a program may read a record that another program has updated, in its updated state, even though the update has not been committed.  This means that programmatically reading a live Adabas database—one that is available to update users—will almost inevitably lead to erroneous extraction of data.  Record modifications (updates, inserts and deletes) that are extracted, and subsequently backed out, will be represented incorrectly—or not at all—in the target. Because of this, at Treehouse we say “the only safe data source is a static data source”—not the live database.
  2. Many legacy Adabas applications make use of “record typing”, i.e., multiple logical tables stored in a single Adabas file.  Often, each must be extracted to a separate table in the target RDBMS.  The classic example is the “code-lookup file”: most shops have a single file containing state codes, employee codes, product-type codes, etc.  Records belonging to a given “code table” may be distinguished by the presence of a value in a particular index (descriptor or superdescriptor in Adabas parlance), or by a range of specific values.  Thus, the extraction process must be able to dynamically assign data content from a given record to different target tables depending on the data content itself (see the sketch following this list).
  3. Adabas is most often used in conjunction with Software AG’s Natural 4GL, and “conveniently” provides for unique datatypes (“D” and “T”) that appear to be merely packed-decimal integers on the surface, but that represent date or date-time values when interpreted using Software AG’s proprietary Natural-oriented algorithm. The most appropriate way to migrate such datatypes is to recognize them and map them to the corresponding native RDBMS datatype (e.g., Oracle DATE) in conjunction with a transformation that decodes the Natural value and formats it to match the target datatype.
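To illustrate the second point, here is a minimal Python sketch of content-based routing. The field names, discriminator values, and table names are all hypothetical; the sketch shows the shape of the problem, not tcVISION’s implementation.

# Hypothetical sketch of record-type routing: one Adabas code-lookup file
# feeds several RDBMS tables, chosen per record from the record's own content.

TARGET_TABLE_RULES = [
    # (predicate over the decoded record, target RDBMS table)
    (lambda r: r["CODE_TYPE"] == "ST",  "STATE_CODES"),
    (lambda r: r["CODE_TYPE"] == "EMP", "EMPLOYEE_CODES"),
    (lambda r: r["CODE_TYPE"] == "PRD", "PRODUCT_TYPE_CODES"),
]

def route_record(record: dict) -> str:
    """Pick the target table for one record of the shared code-lookup file."""
    for predicate, table in TARGET_TABLE_RULES:
        if predicate(record):
            return table
    raise ValueError(f"no target table for record: {record!r}")

# Two records from the same Adabas file land in different target tables.
print(route_record({"CODE_TYPE": "ST",  "CODE": "PA", "TEXT": "Pennsylvania"}))
print(route_record({"CODE_TYPE": "EMP", "CODE": "M1", "TEXT": "Manager"}))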

The tcVISION Technology Solution…

Adabas_To_AWS

After technical discussions and a successful proof of concept (POC) that validated a set of use cases, all teams at the Agency determined that tcVISION’s real-time mainframe data replication capabilities were the perfect fit for meeting their goals.

tcVISION’s modeling and mapping facilities are used to view and capture logical Adabas structures, as documented in Software AG’s PREDICT data dictionary, as well as physical structures, as described in Adabas Field Definition Tables (FDTs).  Because PREDICT is a “passive” data dictionary (there is no requirement that the logical and physical representations agree), it was necessary to scrutinize both to ensure that the source structures were accurately modeled.

Furthermore, tcVISION generates appropriate mappings and transformations for converting Adabas datatypes and structures to corresponding target datatypes and structures, including automatic handling of the proprietary “D” and “T” source datatypes.

The teams examined the three ways that tcVISION can access Adabas data:

  1. ETL – read the active database nucleus
  2. ETL – read datasets containing unloaded Adabas files created by the ADAULD utility
  3. CDC – read the active and archived PLOG datasets

The Agency decided to access the data by reading the active and archived PLOG datasets (option 3). The schema, mappings, and transformations from the metadata import were tailored to the customer’s specific requirements.  It is also now possible to import an existing RDBMS schema and retrofit it, via drag-and-drop in tcVISION, to the source Adabas elements.

Additionally, the Agency’s teams are very pleased with tcVISION’s minimal use of mainframe resources. The product’s “staged processing” methodology accomplishes this: the only processing that occurs on the mainframe is the capture of changes from Adabas PLOGs. The bulk of the processing occurs on the AWS side, minimizing tcVISION’s footprint on the mainframe, as seen in this diagram:

tcVISION_Staged_Processing

The user defines the platform on which each processing stage runs, with the guiding principle of doing as little as possible on the mainframe: Stage 0 captures the data and sends it (in internal format) to the target, while Stages 1 through 3 are processed in AWS.

Customer Outcome

All requirements were met by tcVISION, which led to a successful project implementation.


Contact Treehouse Software for a tcVISION Demo Today…

No matter where you want your mainframe data to go – the Cloud, open systems, or any LUW target – tcVISION from Treehouse Software is your answer.

Just fill out the Treehouse Software tcVISION Demonstration Request Form and a Treehouse representative will contact you to set up a time for your online tcVISION demonstration.


Further reading:

Many more mainframe data migration and replication customer case studies can be read on the Treehouse Software Website.

Enterprise Mainframe Change Data Capture (CDC) to Apache Kafka with tcVISION and Confluent

by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software, Inc. and Ram Dhakne, Solutions Engineer at Confluent

___Mainframe_To_Kafka_Confluent

This blog focuses on using Treehouse Software’s tcVISION to replicate data in real time between mainframes and Confluent, allowing for new use cases and truly setting data in motion.

Why mainframe modernization? Benefits and use cases

Mainframe data stores often hold large amounts of complex and critical data in proprietary legacy formats, making this data difficult to extract and incompatible with modern databases, data types, and data tools.

Enterprises are looking to take advantage of the latest cloud services, such as analytics, artificial intelligence (AI) and machine learning, scalable storage, security, and high availability, or to move data to a variety of newer databases. Additionally, many customers want to modernize their applications on a cloud or open systems platform without disrupting the existing critical work on the legacy system.

How tcVISION syncs legacy data for the cloud

tcVISION is a data replication software product that performs real-time synchronization of mainframe data sources and cloud and open systems, allowing critical mainframe data to be consumed by a variety of leading cloud services.

tcVISION supports many mainframe data sources for both online and offline scenarios. Data can be replicated from IBM Db2 z/OS, Db2 z/VSE, VSAM, IMS/DB, CA IDMS, CA Datacom, or Software AG Adabas. tcVISION can replicate data to many targets, including Confluent Platform, Apache Kafka®, AWS, Google Cloud, Microsoft Azure, PostgreSQL, Snowflake, etc. To learn more, see the complete list of supported tcVISION sources and targets.

tcvision-mainframe-to-confluent-cloud-data-replication-1536x1042

tcVISION focuses on CDC (change data capture) when transferring information between mainframe data sources and cloud and open systems databases and applications. Through innovative technology, changes occurring in any mainframe application data are tracked and captured, and then published to a variety of cloud and open systems targets.

tcVISION stores metadata in a relational database, and the tcVISION manager components are administered through the tcVISION control board, a Windows GUI that can be installed on premises or in the cloud. This allows tcVISION users to create metadata, create and control replication scripts, and control database interactions. tcVISION’s architecture is designed to minimize mainframe resource utilization.

Using the tcVISION control board, even the most complex transformations can be specified, and the user-friendly interface facilitates the mapping of mainframe copybooks, REDEFINES, data dictionaries, data catalogs, codepages, data type mappings, and more. The repository editor allows users to control data transformations.

What is Confluent?

Confluent Cloud is a real-time data-in-motion platform that can be deployed in any public cloud, in any region of your choice. It comes with a 99.95% uptime SLA and fully managed components like ZooKeeper, Kafka brokers, 120+ Kafka connectors, Schema Registry, and ksqlDB, so you can leverage it on any cloud without having to worry about how it runs and scales.

Kafka Connect, Connect API, connectors, and tcVISION IBM Db2 connector

Kafka comes with three core APIs:

  • Producer/Consumer API
  • Connect API
  • Kafka Streams API

Kafka Connect is a tool for scalably and reliably streaming data between Kafka and other data systems. It makes it simple to quickly define connectors that move large data sets into and out of Kafka. Kafka Connect can ingest entire databases or collect metrics from all your application servers into Kafka topics, making the data available for stream processing with low latency. Kafka Connect uses the Connect API under the hood, and Confluent Cloud provides fully managed connector support.

Step-by-step guide on how to use tcVISION and Confluent

This example walks through the integration of tcVISION, replicating data from Db2 to Confluent Cloud.

Set up tcVISION access to Confluent

Create a Confluent account to obtain a Confluent user ID and password; the user ID is generally your email address. To sign on to Confluent, go to the Confluent Cloud login page and enter your user ID:

Confluent Cloud welcome page

Then, enter your password:

Enter your password

When you log in, you’ll be in a Confluent environment called “default”:

Confluent environment called “default”

A Confluent environment is a type of container that holds clusters, which in turn hold topics. If you are familiar with messaging systems, Confluent/Kafka will seem familiar. A cluster needs to be created to serve as a target for the data produced by tcVISION. The first attribute to select is the type of cluster; Confluent offers three types: Basic, Standard, and Dedicated. For the purposes of this demonstration, Basic is used. A Basic cluster does not incur charges simply for existing, but does for data transmission and data storage.

Select "Basic cluster" and begin configuration

Select Begin configuration.

Select a cloud provider

Here, a cloud provider can be chosen—AWS, Google Cloud, or Microsoft Azure. For this example, AWS is used. Select Continue, and the characteristics of the new cluster, which we’ve named “tcVISION_cluster_0”, are displayed:

Cluster characteristics

After entering your payment information (not shown), you can click on the cluster name to launch the cluster overview.

Cluster overview

To use Confluent with tcVISION, the user must provide tcVISION with information about the cluster they intend to use. Specifically, the user must supply the hostname and port of the Confluent AWS virtual machine, and the credentials needed to access the cluster.

Confluent refers to the hostname and port as a bootstrap server. There can be multiple bootstrap servers for the purpose of load balancing, but a single server is used for this demonstration.

To find bootstrap server information, click Cluster Settings on the left-hand side:

Cluster settings

The bootstrap server is listed under “Identification” and includes both the AWS hostname and the port.

Credentials in Confluent consist of an API Key and an API Secret. These are generated for the cluster and take the place of the Confluent user ID and password used to log in. To generate a key/secret pair, click API Access on the left:

API Keys page

Followed by Create Key:

Select API Key scope

For this example, “Global Access” is used, so click Next:

API Key and secret

Pay particular attention to the tip about saving the key and secret somewhere safe: once this panel is exited, there is no way to display the secret again. A descriptive string for this key/secret pair can be filled in. Select the key or secret text to copy it, or use the convenient icons at the end of each field. Once the key/secret pair has been safely stored, check the box confirming this, and click Save. You will return to the “API Keys” panel, where the key is now displayed:

API Key displayed

Set up Confluent and define the topic

The last task is to define a topic within the cluster. Confluent producers have the capability to define their own topics within a cluster, but this capability can be disabled by Confluent configuration, and it is disabled in the configuration used here.

Go back to the cluster Overview:

Cluster Overview

On the left sidebar, click Topics:

Topics

Then Create Topic:

Create a topic

Fill in the topic name (“CONFLUENT_CLOUD_TOPIC1”) and override the number of partitions from 6 to 1, since that is what the Confluent demo uses, then click Create with defaults:

Cloud topic

A topic is now available, which can be populated with Db2 data.
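As an optional aside before continuing, the connection details gathered above can be sanity-checked from any machine with Python and the confluent-kafka package installed. The bootstrap server, API key, and API secret below are placeholders for your own values; this is a minimal sketch, not part of the tcVISION setup:

# Minimal connectivity check with the confluent-kafka Python client.
from confluent_kafka import Producer

BOOTSTRAP  = "pkc-xxxxx.us-east-2.aws.confluent.cloud:9092"  # placeholder
API_KEY    = "YOUR_API_KEY"                                  # placeholder
API_SECRET = "YOUR_API_SECRET"                               # placeholder

producer = Producer({
    "bootstrap.servers": BOOTSTRAP,
    "security.protocol": "SASL_SSL",  # Confluent Cloud uses SASL_SSL
    "sasl.mechanisms": "PLAIN",       # the API key/secret act as user/password
    "sasl.username": API_KEY,
    "sasl.password": API_SECRET,
})

def report(err, msg):
    # Delivery callback: called once per message with success or failure.
    if err:
        print("delivery failed:", err)
    else:
        print("delivered to", msg.topic(), "partition", msg.partition())

producer.produce("CONFLUENT_CLOUD_TOPIC1", value=b'{"ping": true}', callback=report)
producer.flush(10)  # wait up to 10 seconds for delivery confirmation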

Set up tcVISION and run a bulk load of Db2 data

tcVISION’s control board is a Windows graphical user interface (GUI) that allows users to configure the replication stream between various database platforms, including the IBM mainframe and Confluent. Using the control board and built-in wizards, users can define the metadata and the mappings between the mainframe and target.

The following sequence of screens shows the steps required to create the tcVISION metadata and scripts for replicating mainframe Db2 z/OS data to Confluent.

Access the tcVISION control board:

tcVISION control board

Log on to Db2 z/OS:

Db2 z/OS

Create metadata that is specific to the input (Db2) and output (Kafka) and the replication definition. In this example, the Db2 table is mapped to the Confluent Cloud Kafka topic using JSON:

Import of structure definitions

The tcVISION metadata wizard asks for the information required for the replication of the mainframe database to Confluent Cloud. For Db2 z/OS, it asks for the mainframe Db2 subsystem:

Source type for structure definition import

Db2 subsystem

tcVISION presents the tables contained in the Db2 z/OS catalog on the mainframe. Select the schemas and associated tables for replication:

Select the schemas and associated tables for replication

Once the required tcVISION wizard screens are completed, the tool automatically defines the mappings between the source and target. tcVISION’s metadata import wizard creates a default mapping that handles data type conversion issues, such as EBCDIC-to-ASCII translation, endianness conversion, codepages, REDEFINES, data types, and more:

Default mapping

tcVISION data scripts are created through wizards. Data scripts control the replication of data from the source (Db2 z/OS) to the target (Confluent Cloud Kafka JSON). tcVISION bulk load scripts are a type of data script that performs the initial load of the Kafka topic. The following script shows data being read directly from the mainframe Db2 z/OS database; an alternative that reduces MIPS consumption is to read the data from a Db2 image copy.

Data script

Bulk load script running:

Bulk load script running

After execution of the bulk load script, replication statistics of the Db2 bulk load into the Confluent Cloud Kafka topic can be viewed:

Replication statistics of the Db2 bulk load

Now that the topic has been loaded with data from Db2, it can be displayed in Confluent. To do this, navigate to the topics panel again:

Notice that there are now statistics indicating that the tcVISION producer uploaded data to the topic. On the horizontal menu, switch from “Overview” to “Messages” to display the messages (data records) that the tcVISION bulk load placed in the topic. The display can be filtered in various ways, but for this example the default, “Jump to Offset,” is used, which means “start displaying sequentially from this offset.” Here, an offset of 0 (start at the beginning) is specified, since we just want to verify that the Db2 data uploaded by tcVISION was actually delivered:

Messages (data records) from tcVISION bulk load
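The same check can also be performed programmatically. This hedged sketch (placeholders as in the producer example earlier) reads the topic from offset 0 with the confluent-kafka client:

# Read the topic from offset 0, mirroring "Jump to Offset" = 0 in the UI.
from confluent_kafka import Consumer, TopicPartition

BOOTSTRAP  = "pkc-xxxxx.us-east-2.aws.confluent.cloud:9092"  # placeholder
API_KEY    = "YOUR_API_KEY"                                  # placeholder
API_SECRET = "YOUR_API_SECRET"                               # placeholder

consumer = Consumer({
    "bootstrap.servers": BOOTSTRAP,
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": API_KEY,
    "sasl.password": API_SECRET,
    "group.id": "tcvision-verify",  # hypothetical consumer group
})

# Pin partition 0 to offset 0 (the topic was created with one partition).
consumer.assign([TopicPartition("CONFLUENT_CLOUD_TOPIC1", 0, 0)])

while True:
    msg = consumer.poll(5.0)
    if msg is None:
        break  # no further messages within the timeout
    if msg.error():
        raise RuntimeError(msg.error())
    print(msg.offset(), msg.value().decode("utf-8"))  # JSON rows from Db2

consumer.close()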

Run a change script in tcVISION to show the changes in Confluent

To capture ongoing changes to Db2 in real time, a Db2 z/OS CDC replication script is created.

This script captures the changes on the Db2 z/OS side and applies them to the output target, a Confluent Cloud topic.

Replication script

Replication script

Target database Confluent Cloud topic

The CDC replication is initiated from the tcVISION control board, which shows a graphical representation of the replication:

Graphical representation of the replication

The CDC replication is now actively capturing and replicating data changes whenever they occur on the Db2 z/OS side. You can test it by making a change in the Db2 z/OS table:

 
********************************* Top of Data **********************************
---------+---------+---------+---------+---------+---------+---------+---------+
UPDATE SXE1.TVKFKATB                                                    00010004
SET DEPT = '696969'                                                     00040029
WHERE PERS_ID = 5;                                                      00050004
---------+---------+---------+---------+---------+---------+---------+---------+
DSNE615I NUMBER OF ROWS AFFECTED IS 1                                           
DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS 0                       
---------+---------+---------+---------+---------+---------+---------+---------+
--COMMIT;                                                               00060019
---------+---------+---------+---------+---------+---------+---------+---------+
DSNE617I COMMIT PERFORMED, SQLCODE IS 0                                         
DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS 0                       
---------+---------+---------+---------+---------+---------+---------+---------+
DSNE601I SQL STATEMENTS ASSUMED TO BE BETWEEN COLUMNS 1 AND 72                  
DSNE620I NUMBER OF SQL STATEMENTS PROCESSED IS 1                                
DSNE621I NUMBER OF INPUT RECORDS READ IS 4                                      
DSNE622I NUMBER OF OUTPUT RECORDS WRITTEN IS 17                                 
******************************** Bottom of Data ********************************

This change is processed and replicated by tcVISION. The tcVISION control board shows statistics confirming that one update was performed:

Display of extended statistics

Checking in Confluent, the Db2 z/OS change has successfully been propagated to the Confluent Cloud topic:

Db2 z/OS change successfully propagated to Confluent Cloud topic

tcVISION and Confluent are better together

tcVISION’s groundbreaking Db2 CDC connector and Confluent’s ability to serve as a multi-tenant data hub combine to create a very powerful solution for aggregating data from multiple sources and publishing it to various Kafka topics. Sourcing events from any kind of Db2 through a connector into Confluent sets data in motion for the entire organization. Simplicity and agility are key elements of the tcVISION and Confluent “better together” story.



Video: tcVISION Demonstration…

In this video, we show a tcVISION overview, followed by a demonstration of replication of mainframe data to AWS RDS for PostgreSQL:

Contact Treehouse Software for a tcVISION Demo Today!

No matter where you want your mainframe data to go – the Cloud, open systems, or any LUW target – tcVISION from Treehouse Software is your answer.

Just fill out the Treehouse Software tcVISION Demonstration Request Form and a Treehouse representative will contact you to set up a time for your online tcVISION demonstration.

Treehouse Software Customer Success: ETS uses tcVISION for Real-Time Synchronization Between their Mainframe IDMS Data and AWS RDS for PostgreSQL

by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software, Inc.

ETS_Graphic

This blog focuses on a current Treehouse Software customer – ETS. Headquartered in Princeton, New Jersey, ETS is a private, nonprofit organization with approximately 3,000 employees devoted to educational measurement and research. ETS develops and administers a broad range of educational products and services for government agencies, academic institutions, and corporations, including the TOEFL® and TOEIC® tests, the GRE® General and Subject Tests, and the Praxis® assessments. ETS’s belief in the life-changing power of learning is at the root of everything the organization does: it is behind the tools ETS develops to move learning forward, the research that inspires educational progress, and the commitment it makes to enable opportunity for learners everywhere.

Business Background

ETS products and services are available to institutions, businesses, organizations and governments in more than 180 countries around the world. The top industries served by ETS are K–12 Education, Higher Education, English-language Learning, Career Development, and Consulting Services.

Business Issue

Most of ETS’s high-volume critical application data is stored in IDMS databases on an IBM mainframe.  The technology is very old, so it is difficult to recruit and retain qualified technical personnel to maintain the applications.  ETS is moving to cloud-based computing, which will allow them to retire the mainframe environments and modernize the applications.  The data is used and shared across several applications.  ETS required a solution that would allow daily mainframe operations to continue uninterrupted while replicating data to their AWS cloud platform, where they could develop modern application features.  This solution enables ETS to keep up with demanding daily processing while they modernize and develop innovative cloud solutions to meet and exceed customer requirements.

The Technology Solution

ETS_Diagram

Treehouse Software and the ETS team developed a rigorous testing plan to implement tcVISION and performed a Proof of Concept to measure the effectiveness of the data replication, given the high volume of data changes on the source databases.  We collaborated on architecture requirements and installation steps.  There were many considerations, including monitoring, alarming, configuration options, high availability, measuring the impact on existing mainframe database performance, restart capability, and security.  Concurrently, a team of subject matter experts worked on data mappings and translation of database designs from the IDMS network databases to AWS PostgreSQL relational databases.  The goal was to replicate two very large IBM mainframe IDMS databases in real time to two cloud-based PostgreSQL databases.

Implementation was done in phases, starting with one non-production database being replicated to the cloud.  High-volume testing was performed on the source database to simulate peak processing, replicating millions of transactions to the target PostgreSQL databases.  Many technical challenges were encountered and resolved with outstanding technical assistance from the Treehouse Software support team.  Once in production, tcVISION delivered real-time data to the cloud platform with no interruptions to the customer’s daily processing. The customer was then able to develop modern application features and functions in the cloud, achieving independence from the legacy mainframe systems, and the new cloud-based capabilities enabled them to be more agile in meeting new requirements.



Video: tcVISION Demonstration…

In this video, we show a tcVISION overview, followed by a demonstration of replication of mainframe data to AWS RDS for PostgreSQL:

Contact Treehouse Software for a tcVISION Demo Today!

No matter where you want your mainframe data to go – the Cloud, open systems, or any LUW target – tcVISION from Treehouse Software is your answer.

Just fill out the Treehouse Software tcVISION Demonstration Request Form and a Treehouse representative will contact you to set up a time for your online tcVISION demonstration.

Treehouse Software Customer Case Study: A Large Airline’s Real-time Data Synchronization Between IBM Mainframe Adabas and Oracle RDBMS

by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software, Inc.

_0_Airline_Manitenance

This blog focuses on a current Treehouse Software customer – a major airline that is a long-time user of Software AG’s Adabas database on their mainframe system.

Business Background

This U.S. based airline is one of the largest domestic air carriers, and during peak travel seasons, they operate thousands of weekday departures within a network of hundreds of destinations in the United States and several other countries.

Business Issue

The airline’s IT modernization team was looking for a technology solution that enabled them to move their Adabas data off of the mainframe to more modern applications. However, their mainframe contains vital airline maintenance and parts data that must always be highly available, so rather than performing a complete migration from the mainframe, Treehouse technical representatives and the customer decided to utilize a data warehouse and real-time mainframe data replication. This architecture allows the mainframe legacy teams to maintain existing critical applications, while the modernization team develops applications on the newly created Oracle-based RDBMS. All teams at the airline determined that tcVISION real-time mainframe data replication was the perfect fit for meeting their goals.

The airline required that tcVISION support online and batch Adabas transactions and provide data replication between Adabas and their new Oracle RDBMS. Additionally, their large databases contain millions of rows (some tables exceeding 50 million) that must be supported, along with database change audit requirements (datetime and type of operation), transaction management, and notification of exception events. There must also be support for configuration management across development, QA, and production.
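As an illustration of the change-audit requirement above, the following hypothetical Python sketch shows how a replicated row can carry the datetime and type of the originating operation. The event shape and column names are assumptions for illustration, not tcVISION output:

# Hypothetical apply step: decorate each replicated row with audit columns
# recording the datetime and type of the originating mainframe operation.
from datetime import datetime, timezone

def to_audited_row(cdc_event: dict) -> dict:
    """Merge a CDC event's row image with change-audit columns."""
    return {
        **cdc_event["row"],               # the business columns
        "CHANGE_OP": cdc_event["op"],     # 'I'nsert, 'U'pdate, or 'D'elete
        "CHANGE_TS": cdc_event.get("ts")  # commit time from the source log,
            or datetime.now(timezone.utc).isoformat(),  # else apply time
    }

event = {"op": "U", "ts": "2021-06-01T12:34:56Z",
         "row": {"PART_ID": 1042, "QTY_ON_HAND": 7}}
print(to_audited_row(event))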

The tcVISION Technology Solution…

The following is a high-level view of the airline customer’s tcVISION data replication architecture:

___Airline_tcV_Overview

  • Adabas: Mainframe data source containing business critical information replicated to the RDBMS.
  • Oracle: Open Platform RDBMS chosen by customer as replication target for both data warehouse and modernization project. The tcVISION Manager also uses Oracle as a repository for the metadata (field mappings).
  • tcVISION Manager: Software component deployed on both source and target systems. It is responsible for provisioning resources for:
    • Processing scripts
    • Metadata import
    • Scheduling
  • tcSCRIPT: Software component deployed on both source and target systems. It works in conjunction with the tcVISION Manager to:
    • Perform the initial load of Adabas into Oracle
    • Perform ongoing near real-time CDC (Change Data Capture) and replication from Adabas to Oracle
    • tcSCRIPT processes data:
      • Directly from the DBMS (initial load) or from DBMS extracts
      • From the Adabas Protection Log (PLOG)
  • tcVISION Control Board: Software component deployed on a Windows machine that provides a graphical user interface serving as a single point of control for administering the tcVISION environment:
    • Metadata import, which creates metadata rules governing the relationship between mainframe and open platform data structures
    • Replication rule maintenance
    • DDL creation
    • Creation of ETL and replication processes
    • Start, stop, and schedule replication processes

Customer Outcomes

All requirements were met by tcVISION, which led to a successful project implementation.  Here is a look at the customer’s reported outcomes and benefits:

Business Outcome → Customer Benefit
  • Data warehouse is always in sync with Adabas using tcVISION data replication → Improved timeliness and reliability of reporting data
  • Reduced usage of mainframe MIPS → Reduced cost
  • All data required for the modernization project was successfully replicated to the target environment → Increased business efficiency and agility


Contact Treehouse Software for a Demo Today…

No matter where you want your mainframe data to go – the cloud, open systems, or any LUW target – tcVISION from Treehouse Software is your answer.

_0_Treehouse_tcV_Cloud_OpenSystems

Just fill out the Treehouse Software Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for your online tcVISION demonstration.


Further reading:

Many more mainframe data migration and replication customer case studies can be read on the Treehouse Software Website.

Treehouse Software’s Differentiator: Enterprise Mainframe Expertise Since 1982

by Joseph Brady, Director of Business Development and Cloud Alliance Lead at Treehouse Software, Inc. 

Treehoue_Mainframe_Experience

This blog explores Treehouse Software’s decades’ worth of experience in helping mainframe customers with innovative tools, services, and training.

When Treehouse Software began in 1982, the business focused on software that was complementary to the Software AG mainframe product line (Adabas database management system and Natural programming language) in the areas of security, control, auditing, performance enhancement, etc.

In more recent years, Treehouse Software has become a global leader in providing solutions for real-time and bi-directional data replication between a variety of mainframe and non-mainframe sources, including (Mainframe): Adabas, Db2, VSAM, IMS, CA Datacom, and CA IDMS; and (Non-mainframe): Amazon Web Services (AWS), Google Cloud, Microsoft Azure, PostgreSQL, Kafka, Oracle, Microsoft SQL Server, IBM Db2 LUW and Db2 BLU, IBM Informix, MongoDB, Hadoop, and many more. Here is our list of supported data sources and targets.

Decades Worth of Mainframe Knowledge…

When asked by prospective customers, “What are your primary differentiators?”, we can be tempted to first talk about superior product features and capabilities, but in addition to our exceptional products, it is Treehouse Software’s depth of knowledge and experience in the mainframe world that is the real game changer.

Most mainframe users face critical data management challenges due to the complexity and proprietary nature of deeply entrenched databases on the platform. Our extensive experience, deep knowledge, and wide-ranging capabilities in mainframe technologies make the company a valued partner for third-party solution providers and a trusted advisor to customers.

Treehouse Software’s visionary leadership in this market has included pioneering Adabas-to-RDBMS ETL and CDC with tRelational/DPS in the mid-1990s.  Today, Treehouse Software stands alone in its product maturity and capability, including the expanded capabilities of the tcVISION product, which enables migration and synchronization of virtually any mainframe or non-mainframe database or data source.

Despite the rapid pace of change in the IT landscape, Treehouse Software’s customer base can be assured that there remains a strong commitment to providing continued support and upgrades for the product suite.

Treehouse Software provides tools and expertise for the riskiest and most often overlooked parts of modernization and integration projects – data migration and integration.  Tapping into the vast experience of Treehouse’s technicians and using proven products and services eliminates reliance on end-customer programming staff to write and maintain data extracts and middleware. Treehouse Software’s know-how reduces cost and mitigates risk in legacy modernization initiatives, where data migration and integration complexity is often underestimated, yet critical to success.

Our Mainframe Experts are Our Best Assets

We are fortunate to have a staff with a wealth of knowledge and skills that span not only Mainframe, but Cloud, LUW, and Open Systems technologies. Whether a customer wants to move data from their mainframe platform to other on-premises open systems or LUW databases, or to the Cloud (e.g., AWS, Google Cloud, Azure, etc.), Treehouse Software has the technical expertise and support needed to ensure successful project completion.

Treehouse Software’s technicians have installed products and trained end-users at some of the largest mainframe sites around the world.  Mature, robust, and reliable, these products are also backed by our highly rated 24×7 technical support.

The Treehouse Team Approach

TechniciansConnectivity

Treehouse Software has proven its ability to partner and work effectively as part of a larger team to solve client problems.  AWS, Google, Microsoft, Deloitte, Accenture, and other large vendors have selected our technology, services, and training for their mainframe data migration and application modernization practices.



Contact Treehouse Software for a Product Demo Today…

Just fill out the Treehouse Software Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for your online product demonstration.

The New 2020 IBM Z Solutions Directory Features a Wide Selection of Mainframe Tools and Services, Including Treehouse Software’s Data Integration Products

by Joseph Brady, Director of Business Development / Cloud Alliance Lead at Treehouse Software, Inc. 

2020_IBM_Systems_Solutions_Directory

Enterprises’ investments in IBM Z mainframe technology are significant, and the IBM Z Solutions Directory showcases some of the best hardware, accessories, software products, and services available to help customers maintain and expand the platform.

Treehouse Software encourages mainframe customers to explore the IBM Z Solutions Directory, where they will find neatly organized listings that help them quickly find the products and services they need. We are very pleased to have our products included, side-by-side, with most of the top mainframe solutions in the world in this valuable guide. 



More About Treehouse Software’s Mature and Proven Mainframe Enterprise Transformation Products…

Treehouse_Products_Diagram_General_Cloud.jpg

Treehouse Software has been developing, marketing, selling, and supporting mainframe software since 1982, and we are committed to helping customers easily access some of the most advanced Cloud and open systems technologies in the world, while maintaining their valuable legacy environments. 

Treehouse Software’s visionary leadership in the mainframe market has included pioneering Adabas-to-RDBMS ETL and CDC with tRelational/DPS in the mid-1990s.  Today, with the tcVISION product, Treehouse Software is a global leader in providing real-time and bi-directional data replication between a variety of mainframe and non-mainframe sources, including (Mainframe): VSAM, IMS, Db2, CA Datacom, Adabas, and CA IDMS; and (Non-mainframe): Amazon Web Services (AWS), Microsoft Azure, Google Cloud, Oracle Cloud, PostgreSQL, Microsoft SQL Server, IBM Db2 LUW and Db2 BLU, IBM Informix, Kafka, MongoDB, MariaDB, Hadoop, SAP Hana, and many more.

Despite the rapid pace of change in the IT landscape, Treehouse Software’s customer base can be assured that there remains a strong commitment to providing continued support and upgrades for the product suite.

Contact Treehouse Software for a Product Demo Today…

Just fill out the Treehouse Software Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for your online product demonstration.

 

Treehouse Software’s Differentiator: We’ve Been Helping Enterprise Mainframe Sites Since 1982

by Joseph Brady, Director of Business Development / AWS and Cloud Alliance Lead at Treehouse Software, Inc. 

Treehoue_Mainframe_Experience

This blog explores Treehouse Software’s decades’ worth of experience in helping mainframe customers with innovative tools, services, and training.

When Treehouse Software began in 1982, the business focused on software that was complementary to the Software AG mainframe product line (Adabas database management system and Natural programming language) in the areas of security, control, auditing, performance enhancement, etc.

In more recent years, Treehouse Software has become a global leader in providing solutions for real-time and bi-directional data replication between a variety of mainframe and non-mainframe sources, including (Mainframe): VSAM, IMS, Db2, CA Datacom, Adabas, and CA IDMS; and (Non-mainframe): Amazon Web Services (AWS), PostgreSQL, Oracle, Microsoft Azure, Microsoft SQL Server, IBM Db2 LUW and Db2 BLU, IBM Informix, Kafka, MongoDB, Hadoop, SAP Hana, and many more. Here is our list of supported data sources and targets.

Decades Worth of Mainframe Knowledge…

When asked by prospective customers, “What are your primary differentiators?”, we can be tempted to first talk about superior product features and capabilities, but in addition to our exceptional products, it is Treehouse Software’s depth of knowledge and experience in the mainframe world that is the real game changer.

Most mainframe users face critical data management challenges due to the complexity and proprietary nature of deeply entrenched databases on the platform. Our extensive experience, deep knowledge, and wide-ranging capabilities in mainframe technologies make the company a valued partner for third-party solution providers and a trusted advisor to customers.

Treehouse Software’s visionary leadership in this market has included pioneering Adabas-to-RDBMS ETL and CDC with tRelational/DPS in the mid-1990s.  Today, Treehouse Software stands alone in its product maturity and capability, including the expanded capabilities of the tcVISION product, which enables migration and synchronization of virtually any mainframe or non-mainframe database or data source.

Despite the rapid pace of change in the IT landscape, Treehouse Software’s customer base can be assured that there remains a strong commitment to providing continued support and upgrades for the product suite.

Treehouse Software provides tools and expertise for the riskiest and most often overlooked parts of modernization and integration projects – data migration and integration.  Tapping into the vast experience of Treehouse’s technicians and using proven products and services eliminates reliance on end-customer programming staff to write and maintain data extracts and middleware. Treehouse Software’s know-how reduces cost and mitigates risk in legacy modernization initiatives, where data migration and integration complexity is often underestimated, yet critical to success.

Our Mainframe Experts are Our Best Assets

We are fortunate to have a staff with a wealth of knowledge and skills that span not only Mainframe, but Cloud, LUW, and Open Systems technologies. Whether a customer wants to move data from their mainframe platform to other on-premises open systems or LUW databases, or to the Cloud (e.g., AWS, Azure, etc.), Treehouse Software has the technical expertise and support needed to ensure successful project completion.

Treehouse Software’s technicians have installed products and trained end-users at some of the largest mainframe sites around the world.  Mature, robust, and reliable, these products are also backed by our highly rated 24×7 technical support.

The Treehouse Team Approach

TechniciansConnectivity

Treehouse Software has proven its ability to partner and work effectively as part of a larger team to solve client problems.  AWS, Microsoft, Oracle, Accenture, and other large vendors have selected our technology, services, and training for their mainframe data migration and application modernization practices.



Contact Treehouse Software for a Product Demo Today…

Just fill out the Treehouse Software Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for your online product demonstration.

Treehouse Software TREETIP: Data Replication for a Mainframe Database that has no Primary Key, using tcVISION

by Joseph Brady, Director of Business Development / AWS and Cloud Alliance Lead at Treehouse Software, Inc. and Chris Rudolph, Senior Technical Representative at Treehouse Software, Inc.

This blog takes a look at tcVISION’s support for a mainframe database that has no primary key at the source (a primary key is a column, or set of columns, that uniquely identifies one row of a table).

In situations where the source database does not contain any unique values, Treehouse technicians discuss with the customer how the application currently works and their expectations for how the data should be treated when planning replication/migration to a new target environment. Depending on the application, Journal Replication or Data Warehouse Replication may be a better fit than a “normal” RDBMS table definition.

tcVISION’s Key ID Management can be used to create a column to use as the key on the target table. Another option is to use a SQL lookup to query another table. Either way, target columns that contain unique values will need to be identified. Once these columns are identified, they can be marked with the key identifier using the tcVISION repository editor, or Key ID Management can be used to create a new unique column; those columns are then used within the wizard.

For example, this source table does not contain a primary key:

___tcV_NoKey01

Analysis of the target data shows that the columns FIRST_NAME, LAST_NAME, and MIDDLE_NAME together provide a unique value. These fields can be marked in the tcVISION repository as members of a key:

___tcV_NoKey02
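To make the idea concrete, here is a minimal Python sketch of both options: a composite key built from the three name columns, and a generated unique column. It is illustrative only, with hypothetical data, and is not how tcVISION’s Key ID Management is implemented.

# Illustrative only: deriving a key for a keyless source table.
import hashlib

def composite_key(row: dict) -> str:
    """Join the columns that together are unique on the target."""
    return "|".join((row["FIRST_NAME"], row["LAST_NAME"], row["MIDDLE_NAME"]))

def surrogate_key(row: dict) -> str:
    """Alternative: a new unique column, here a stable hash of the composite."""
    return hashlib.sha1(composite_key(row).encode("utf-8")).hexdigest()

row = {"FIRST_NAME": "ADA", "MIDDLE_NAME": "B", "LAST_NAME": "LOVELACE"}
print(composite_key(row))  # used to match updates/deletes on the target
print(surrogate_key(row))  # or stored as a generated unique key column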

As mentioned earlier, the tcVISION Key ID Management wizard can also be used to create a new unique target column:

___tcV_NoKey03

Specify the key name:

___tcV_NoKey04

Specify the key value creation:

___tcV_NoKey05

Various options are available: 

___tcV_NoKey06



Contact Treehouse Software for a Demo Today…

tcVISION_Overall_Diagram

No matter where you want your mainframe data to go – the cloud, open systems, or any LUW target – tcVISION from Treehouse Software is your answer.

Just fill out the Treehouse Software Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for your online tcVISION demonstration.


Further reading: tcVISION Mainframe data replication is featured on the AWS Partner Network Blog…

___tcVISION_AWS_HA_Architecture

AWS recently published a blog about tcVISION’s Mainframe data replication capabilities, including a technical overview, security, high availability, scalability, and a step-by-step example of the creation of tcVISION metadata and scripts for replicating mainframe Db2 z/OS data to Amazon Aurora. Read the blog here: AWS Partner Network (APN) Blog: Real-Time Mainframe Data Replication to AWS with tcVISION from Treehouse Software.

Treehouse Software Customer Success: Data Replication from Mainframe Adabas to PostgreSQL using tcVISION

BMF_Building

The Bundesministerium der Finanzen (BMF) is Germany’s Ministry of Finance, and it establishes sustainable fiscal policy that secures the financial capacity of the federal budget. From tax policy and the development of the federal budget to the regulation of national and international financial markets, the BMF creates strategies and concepts for these and other fundamental fiscal and economic questions, and implements them. The Federal Tax Administration is part of the BMF; it not only controls cross-border goods traffic, but also acts against illegal employment and other crimes. The tax administration also levies consumer taxes (e.g., energy tax, tobacco tax, and car tax). Financial relations between the federation, states, and municipalities are also coordinated by the BMF.

Department II (federal budget) is the part of the German government in charge of establishing the budget and financial planning of the federation. Throughout the year, it monitors execution of the budget so that it can intervene if necessary (e.g., with a budget freeze or a supplementary budget). After the fiscal year closes, the budget and balance sheet are presented. The budget is a supplement to the budget act and is legally binding.

The central service organization of the BMF is the Informationstechnikzentrum Bund – ITZBund (the federal information technology center).

BUSINESS BACKGROUND

Drawing up the budget is a yearly, highly time-consuming, and formalized business process. All departments are involved in nearly every sub-process, and budgeting and financial planning are supported by the application “Haushaltsaufstellung / Budgetgeneration”. The generated reports serve various audiences (e.g., the German Federal Government, the German Federal Parliament, the Federal Council of Germany, the finance department in the BMF, the employees in the departments, and the public).

Technically, the federal budget plan is based on an IBM mainframe running z/OS with Adabas and Natural.

The challenge was to provide an environment that enables employees in all departments to do their work quickly, easily, and efficiently. BMF users require editor-free, end-user-driven, real-time creation of ready-to-print products.
An informative description of the workflow is shown on the BMF website.

The federal budget is available for download, or one can navigate directly through the data using the online application.

BUSINESS ISSUE

Some time ago, the BMF decided to re-engineer the application for budget planning and port it to open source. To guarantee a seamless transition, the first step is propagation of data out of Adabas on z/OS to PostgreSQL, concluding with permanent synchronization.
The difficulty of this task lies in the complexity of setting up data definitions for the data structures in Natural, and in propagating the data from Adabas on z/OS to PostgreSQL.

TECHNOLOGY SOLUTION: tcVISION

tcVISION_Overall_Diagram

After analyzing the project, Treehouse Software proposed creating an extension to tcVISION’s change data capture (CDC) functionality so that tcVISION could enable the BMF to continue using its implemented data definitions, delivered in a format suitable for the RDBMS.

The extension was developed within a few days, and a two-day on premise test demonstrated the solution fit the requirements of BMF.

The BMF can now feed its data definitions from Natural LDAs into the tcVISION extension and, after the transformation, pass them on to the PostgreSQL load process. Another advantage of the tcVISION solution is that, when needed, other targets can be integrated for propagation of data from the mainframe (e.g., Kafka, which the BMF indicated is a future target environment).
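As a rough illustration of the kind of transformation involved, the sketch below maps a few simplified, Natural-style field definitions to PostgreSQL column types. The input lines and the type mapping are deliberately simplified assumptions; real LDAs (levels, redefinitions, arrays, PE/MU groups) are far richer, and this is not the extension Treehouse built:

# Sketch: translate simplified Natural-style data definitions into
# PostgreSQL DDL. Only the format-code mapping idea is shown.
import re

# Hypothetical, simplified mapping of Natural format codes.
PG_TYPES = {
    "A": lambda length: f"varchar({length})",  # alphanumeric
    "N": lambda length: f"numeric({length})",  # unpacked numeric
    "P": lambda length: f"numeric({length})",  # packed numeric
    "I": lambda length: "integer",
    "D": lambda length: "date",
    "T": lambda length: "timestamp",
}

def lda_to_ddl(table: str, lda_lines: list[str]) -> str:
    cols = []
    for line in lda_lines:
        # e.g. "01 #AMOUNT (P11)" -> name AMOUNT, format P, length 11
        m = re.match(r"\d+\s+#?([\w-]+)\s+\((\w)(\d*)\)", line.strip())
        name, fmt, length = m.group(1), m.group(2), m.group(3) or "0"
        cols.append(f'  "{name.replace("-", "_").lower()}" {PG_TYPES[fmt](int(length))}')
    return f"create table {table} (\n" + ",\n".join(cols) + "\n);"

print(lda_to_ddl("budget_item", ["01 #ITEM-TITLE (A60)",
                                 "01 #AMOUNT (P11)",
                                 "01 #VALID-FROM (D)"]))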

Additionally, bi-directional propagation can be added in budget planning when BMF is ready.

Data structures are held in LDAs because this provides higher flexibility in development and in adapting the data definitions to new requirements. Had the definitions been ported to PostgreSQL manually, even in part, the effort would have been much larger and more error-prone.

Subsequent changes to Adabas structures can now use tcVISION’s newly developed extension to easily regenerate and load the correct definitions to the RDBMS, and tcVISION completely covers the customer’s requirements for special usage of *PEs and *MUs.

After thorough preparation and extensive testing, the solution was released to selected users first, then made available to all users.

* PEs and MUs are special Adabas field constructs used in file definitions. PE = Periodic Group; MU = Multiple-Value Field.
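For readers unfamiliar with these constructs, the sketch below shows one common way to normalize an MU field into a child table when replicating to an RDBMS. The record layout and names are hypothetical, and this is one illustrative mapping, not necessarily the one used at the BMF:

# One record with an MU (multiple-value) field becomes one parent row
# plus one child row per occurrence; field names are hypothetical.
def flatten_mu(record: dict) -> tuple[dict, list[dict]]:
    parent = {"PERS_ID": record["PERS_ID"], "NAME": record["NAME"]}
    children = [
        {"PERS_ID": record["PERS_ID"], "SEQNO": i + 1, "PHONE": phone}
        for i, phone in enumerate(record["PHONES"])  # the MU occurrences
    ]
    return parent, children

parent, children = flatten_mu(
    {"PERS_ID": 5, "NAME": "MEIER", "PHONES": ["030-1", "030-2"]})
print(parent)    # -> row for the parent table
print(children)  # -> rows for the child table, keyed by PERS_ID + SEQNO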



Contact Treehouse Software for a Demo Today…

No matter where you want your mainframe data to go – the cloud, open systems, or any LUW target – tcVISION from Treehouse Software is your answer.

Just fill out the Treehouse Software Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for your online tcVISION demonstration.


Further reading: tcVISION Mainframe data replication is featured on the AWS Partner Network Blog…

___tcVISION_AWS_HA_Architecture

AWS recently published a blog about tcVISION’s Mainframe data replication capabilities, including a technical overview, security, high availability, scalability, and a step-by-step example of the creation of tcVISION metadata and scripts for replicating mainframe Db2 z/OS data to Amazon Aurora. Read the blog here: AWS Partner Network (APN) Blog: Real-Time Mainframe Data Replication to AWS with tcVISION from Treehouse Software.

A real-world example of how Treehouse Software is helping enterprises with their Mainframe-to-Hybrid Cloud / LUW / Open Systems data replication initiatives…

by Joseph Brady, Director of Business Development for Treehouse Software

Many medium-to-large companies are still investing in new mainframe systems, because they have relied on them for decades to handle mission-critical data transaction processing. For these companies, mainframes are deeply entrenched and have been rock-solid reliable over the years, allowing many users and applications to simultaneously access the same data without interfering with each other. Mainframes also perform large-scale and extremely fast transaction processing (thousands of transactions per second). Chances are, most transactions for which you use your debit or credit cards have a mainframe processing the data in the background.

There have been countless predictions of the demise of the mainframe over the years, but they have been proven wrong every time, and there’s no sign of extinction any time soon. Mainframes remain a vital part of IT infrastructures used by companies spanning a variety of industries around the world, including banking, insurance, healthcare, government, aviation, and retail.

Today, most companies that have their enterprise data stored on a mainframe system are discovering the advantages of newer cloud and open systems databases and services, including instantaneous global database deployments (e.g., Amazon Aurora), economies of scale, high availability, elasticity, security, testing at scale, disaster recovery, etc. Consequently, these companies are looking for an automated solution that allows real-time data replication between the two environments, and many are turning to Treehouse Software.


_0_Airline_Manitenance

Customer Scenario: A Look at a Large Airline’s Mainframe Data Replication Project

Treehouse Software has a major airline customer that is a long-time user of Software AG’s Adabas database on their mainframe system. They wanted to move their data off the mainframe to more modern applications. However, their mainframe contains vital airline maintenance and parts data that must always be highly available, so rather than performing a complete migration from the mainframe, the customer decided to utilize a data warehouse with a requirement of real-time data replication. This allows the mainframe legacy teams to maintain existing applications, while the modernization team develops applications on the newly created RDBMS. All teams determined that tcVISION real-time mainframe data replication was the perfect fit for meeting their goals.

How tcVISION Addressed the Customer’s Needs…

The high-level environment for the airline customer using tcVISION for data replication looks like this…

___Airline_tcV_Overview

  • Adabas: Mainframe data source containing business critical information replicated to the RDBMS.
  • Oracle: Open Platform RDBMS chosen by customer as replication target for both data warehouse and modernization project. The tcVISION Manager also uses Oracle as a repository for the metadata (field mappings).
  • tcVISION Manager: Software component deployed on both source and target systems. It is responsible for provisioning resources for:
    • Processing scripts
    • Metadata import
    • Scheduling
  • tcSCRIPT: Software component deployed on both source and target systems. It works in conjunction with the tcVISION Manager to:
    • Perform the initial load of Adabas into Oracle
    • Perform ongoing near real-time CDC (Change Data Capture) and replication from Adabas to Oracle
    • tcSCRIPT processes data:
      • Directly from the DBMS (initial load) or from DBMS extracts
      • From the Adabas Protection Log (PLOG)
  • tcVISION Control Board: Software component deployed on a Windows machine that provides a graphical user interface serving as a single point of control for administering the tcVISION environment:
    • Metadata import, which creates metadata rules governing the relationship between mainframe and open platform data structures
    • Replication rule maintenance
    • DDL creation
    • Creation of ETL and replication processes
    • Start, stop, and schedule replication processes

Customer’s Technical Requirement for tcVISION

The airline required that the replication solution support online and batch Adabas transactions and provide data replication between Adabas and Oracle. Additionally, their large databases contain millions of rows (e.g., more than 50 million) that must be supported, along with database change audit requirements (datetime and type of operation), transaction management, and notification of exception events.

There must also be support for configuration management between development, QA, and production.

Finally, the solution must address hardware and software failures on the mainframe, Linux, and database nodes.

Customer Outcomes

All requirements were met by tcVISION, which led to a successful project implementation.  Here is a look at the customer outcomes and benefits:

Business Outcome → Customer Benefit
  • Data warehouse is always in sync with Adabas using tcVISION data replication → Improved timeliness and reliability of reporting data
  • Reduced usage of mainframe MIPS → Reduced cost
  • All data required for the modernization project was successfully replicated to the target environment → Increased business efficiency and agility

Many more mainframe data migration and replication customer case studies can be read on the Treehouse Software Website.



Contact Treehouse Software for a Demo Today…

No matter where you want your mainframe data to go – the cloud, open systems, or any LUW target – tcVISION from Treehouse Software is your answer. 

_0_Treehouse_tcV_Cloud_OpenSystems

Just fill out the Treehouse Software Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for your online tcVISION demonstration.


Further reading: AWS Partner Network (APN) Blog: Real-Time Mainframe Data Replication to AWS with tcVISION from Treehouse Software