TDT: Much more than a mere “data connector” for Snowflake

by Joseph Brady, Director of Business Development at Treehouse Software, Inc. and Dan Vimont, Director of Innovation at Treehouse Software, Inc.

____0_TDT_Snowflake_Splash

Over the past few months, we have been rolling out information on the Treehouse Dataflow Toolkit (TDT), a state-of-the-art, fully automated offering for data transfer from Kafka pipes to Analytics/ML/AI frameworks. TDT is a set of proprietary microservices that assures highly-available, auto-scalable, and event-driven data transfers to your data science teams’ favorite analytics frameworks, such as Snowflake, Amazon Redshift, Amazon Athena/S3, Amazon S3 Express One Zone Buckets, as well as Amazon Aurora PostgreSQL, all the while adhering to AWS’s and Snowflake’s recommended best practices for massive data loading. Make no mistake: TDT is MUCH more than merely a “connector”.

In this blog, we will focus on how TDT handles data transfers to perhaps the most complex environment: Snowflake. Of all TDT’s functions and features, our Snowflake connectivity offers the biggest “value added” to customers, because Snowflake has quickly become a top choice for enterprises looking for a Cloud platform onto which they can mobilize data at near-unlimited scale and performance and bring advanced ML/AI capabilities to bear.

Snowflake overview video…

Connectivity using Snowflake’s best practices vs. traditional ODBC…

TDT’s innovative Lambda-based (microservices) approach enables faster data flow than any conceivable ODBC-based solution. ODBC remains the standard tool behind most “roll your own” approaches and “we have a connector for that” offerings.

To load massive quantities of data into a target, TDT uses Snowflake’s (hugely scalable) bulk load utilities, not ODBC. It is vital to note that Snowflake is NOT an OLTP database, so performing CDC transfers to it via ODBC (with update, insert, and delete transactions) goes directly against “best practices” advice from Snowflake and would almost assuredly result in unwieldy bottlenecks.

____0_TDT_Snowflake01
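To make this concrete, here is a minimal sketch of the Snowflake mechanism that TDT automates behind the scenes: bulk-loading staged S3 files with a single COPY INTO statement instead of row-by-row ODBC DML. The account, stage, and table names are hypothetical placeholders; this illustrates the underlying Snowflake best practice, not TDT’s internal code.

```python
# Minimal sketch (not TDT code): Snowflake bulk loading via COPY INTO.
# All identifiers below are hypothetical placeholders.
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account",       # hypothetical Snowflake account
    user="loader_user",
    password="...",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

# COPY INTO reads many staged files in parallel, which is what lets bulk
# loading scale, in contrast to single-threaded ODBC inserts.
conn.cursor().execute("""
    COPY INTO customer_delta
    FROM @s3_cdc_stage/customer/            -- external stage over the S3 bucket
    FILE_FORMAT = (TYPE = JSON)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
""")
conn.close()
```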

TDT loads data into Snowflake’s “delta tables”, which inherently retain the entire history of source data since the source-to-target synchronization began (perfect for time-based trend/predictive/prescriptive analytics). Again, TDT adheres to Snowflake’s best-practice recommendation of pulling data from S3 when bulk loading massive quantities of data…

____0_TDT_Snowflake02
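Because a delta table is append-only (every source insert, update, and delete lands as a new row with CDC metadata), the current state of the data is derived at query time. The following is a hedged sketch of one common windowing pattern for doing so; the table and column names are assumptions for illustration, not TDT’s actual schema.

```python
# Hypothetical illustration: deriving the current state of a source table
# from an append-only delta table. Column names are assumed, not TDT's.
CURRENT_STATE_SQL = """
    WITH ranked AS (
        SELECT t.*,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id       -- source primary key
                   ORDER BY cdc_timestamp DESC    -- newest change wins
               ) AS rn
        FROM customer_delta t
    )
    SELECT *
    FROM ranked
    WHERE rn = 1
      AND cdc_operation <> 'DELETE'               -- hide rows deleted at the source
"""
```

Because every historical version remains in the table, the same pattern with a time predicate (e.g., cdc_timestamp no later than some past point) reconstructs the data as of any moment, which is what makes delta tables well suited to trend and predictive analytics.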

Publishing both bulk-load and CDC data to a reliable and scalable framework like Kafka allows you to maintain a broad array of options to ultimately feed your legacy data to any number of JSON-friendly ETL tools, target data stores, and data analytics packages (some of which have not even been invented yet!). 
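As a simple illustration of that flexibility, any JSON-capable consumer can tap the same topics. Here is a minimal sketch using the open-source kafka-python client, with hypothetical topic and broker names:

```python
# Minimal sketch: any JSON-friendly tool can consume the same CDC topics.
# Topic, broker, and group names are hypothetical.
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "legacy.customer.cdc",                       # hypothetical CDC topic
    bootstrap_servers=["broker1:9092"],
    group_id="analytics-sandbox",                # independent consumer group
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",                # replay retained history
)

for record in consumer:
    change = record.value                        # one JSON change event
    print(change)
```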

The “build vs buy” question is put to rest…

The Snowflake-specific target DDL, metadata, and resources that TDT automatically generates for staging data in Snowflake are complex enough that the “buy” option is easy to justify in the “build vs buy” conversations customers have. A decision by an enterprise not to use TDT, but instead to build its own Kafka-to-Snowflake solution, could result in any or all of the following:

  • accumulation of technical debt
  • extensive/unpredictable time to production
  • ongoing resource planning to maintain home-grown technologies
  • potential vendor lock-in for maintenance of custom-made technologies designed and developed by consultants
  • managing a mix of manual and automated functions
  • tracking cobbled-together components created by multiple staff and consultants
  • limited agility for future customization and innovation
  • problems adhering to evolving best practices over time
  • higher costs for future growth/scaling
  • potential lack of proper security/ongoing security updates
  • your organization has now become an enterprise software development company, whether or not you intended it, and whether or not you realized it!

Simply put, TDT is a self-contained, turn-key solution that can eliminate months, or even years, of research and development time and costs. With TDT, high-speed, massive-scale data movement to Snowflake can be ramped up in minutes.

Download the TDT AWS Partner Solution Brief to share with your team…

DOWNLOAD…AWS_TDT_Product_Brief_Thumb01

Treehouse Dataflow Toolkit (TDT) is Copyright © 2024 Treehouse Software, Inc. All rights reserved.

____Treehouse_AWS_Badges 

Contact Treehouse Software for a Demo Today!

Contact Treehouse Software today for more information or to schedule a product demonstration.

So, You’ve Managed to Start Streaming Your Legacy Data into Kafka Pipelines… Now What?

by Joseph Brady, Director of Business Development at Treehouse Software, Inc. and Dan Vimont, Director of Innovation at Treehouse Software, Inc.

Treehouse_Dataflow_Toolkit_Splash

Treehouse Software is helping customers modernize their valuable enterprise data on Cloud and Hybrid Cloud environments without disrupting the existing critical work on their legacy systems. However, a new strategic imperative has been added to the modernization game: the requirement to utilize today’s advanced Analytics/AI/ML-friendly platforms, such as Amazon Redshift, Snowflake, Amazon Athena/S3, Amazon S3 Express One Zone Buckets, as well as Amazon Aurora PostgreSQL, where an ever-expanding array of AI/ML tools is available to generate vital insights from the customer’s data. Many of these customers are already using software tools provided by Treehouse or other vendors to replicate their data into various target data stores, but also, more crucially, into Kafka pipelines (e.g., Amazon MSK, Confluent, etc.). Kafka is now the top choice for high-speed streaming of massive volumes of mission-critical data, providing stable performance under extreme loads. This is especially valuable for enterprises that require up-to-the-second data delivery for use cases that include e-commerce, financial services, logistics, telecommunications, and government IT.

Traditionally, Treehouse customers utilized our data replication technologies to load legacy data into Kafka pipelines, and that was where our involvement generally ended…

____0_Traditional_Mainframe_To_Kafka

However, once Kafka is designated as a target in the customer’s architecture, we have increasingly become involved in two questions: “What now?” and “What is the best mechanism for us to rapidly transfer data from Kafka to advanced analytics platforms?” Our answer: Look no further than Treehouse Software!

Treehouse Software brings a state-of-the-art, fully automated offering for data transfer from Kafka pipes to Analytics/ML/AI frameworks: the Treehouse Dataflow Toolkit (TDT). TDT is a set of proprietary microservices that assures highly-available, auto-scalable, and event-driven data transfers to your data science teams’ favorite analytics frameworks, all the while adhering to AWS’s and Snowflake’s recommended best practices for massive data loading, thus assuring the shortest and surest loads. Additionally, TDT provides a frictionless and near-instant implementation, accelerating your path to deep data insights for optimizing business processes.

Why do AWS’s and Snowflake’s best practices recommend against using ODBC?

Your data science teams need large quantities of the very latest data in near-real-time, and ODBC doesn’t really do the job, offering only single-threaded, difficult-to-scale pipes. By contrast, TDT’s approach not only keeps things up to date faster than any conceivable ODBC-based solution, but the “delta tables” into which it loads data also inherently retain the entire history of source data since the source-to-target synchronization began (perfect for time-based trend/predictive/prescriptive analytics). To load massive quantities of data into a target, TDT uses the target vendors’ (massively scalable) bulk load utilities, not ODBC. It’s vital to note that Snowflake and Redshift are NOT OLTP databases, so performing CDC transfers to these targets via ODBC (with update, insert, and delete transactions) goes directly against “best practices” advice from the vendors and would almost assuredly result in unwieldy bottlenecks.
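For Redshift, the analogous best-practice mechanism is a bulk COPY from S3, which fans the load out across the cluster rather than funneling rows through a serial ODBC connection. Below is a hedged sketch using the AWS Redshift Data API via boto3; the cluster, bucket, and IAM role identifiers are hypothetical placeholders, not part of TDT.

```python
# Hedged sketch: the Redshift bulk-load pattern (COPY from S3) that the
# vendors recommend over row-by-row ODBC DML. All identifiers below are
# hypothetical placeholders.
import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

client.execute_statement(
    ClusterIdentifier="analytics-cluster",   # hypothetical cluster name
    Database="warehouse",
    DbUser="loader",
    Sql="""
        COPY customer_delta
        FROM 's3://cdc-staging-bucket/customer/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        FORMAT AS JSON 'auto';
    """,
)
```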

What if my data is not on a mainframe?

No worries. Treehouse Software’s messaging is primarily mainframe-centric, since that has been our area of expertise and bread-and-butter for over 40 years. However, data movement is data movement, and if your mainframe, or non-mainframe, data is being pumped to a Kafka pipeline, TDT will take it from there. When a data replication tool publishes both bulk-load and CDC data in JSON format to a reliable and scalable framework like Kafka, it sets the stage for TDT to feed legacy data to any number of JSON-friendly ETL tools, target data stores, and the latest (or yet to be invented) data analytics packages. TDT is the turn-key solution for the easiest and fastest implementation of Kafka data transfer…

Treehouse_Dataflow_Toolkit03

TDT allows you to quickly ramp up your data analytics game by providing a rapid flow of data fresh off your enterprise data systems.
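For illustration only, here is roughly what publishing a JSON change event to Kafka looks like from a replication tool’s side, again using the kafka-python client. The record shape below is an assumption chosen for readability, not TDT’s or any Treehouse product’s actual format.

```python
# Illustrative only: publishing a JSON CDC event to Kafka. The record
# structure is a hypothetical example, not an actual Treehouse format.
import json
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers=["broker1:9092"],          # hypothetical broker
    value_serializer=lambda obj: json.dumps(obj).encode("utf-8"),
)

producer.send("legacy.customer.cdc", {
    "operation": "UPDATE",                       # INSERT / UPDATE / DELETE
    "table": "CUSTOMER",
    "cdc_timestamp": "2024-05-01T12:00:00Z",
    "after": {"customer_id": 42, "status": "ACTIVE"},
})
producer.flush()                                 # ensure delivery before exit
```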

Download: TDT AWS Partner Solution Brief to share with your team…

DOWNLOAD…AWS_TDT_Product_Brief_Thumb01

Treehouse Dataflow Toolkit (TDT) is Copyright © 2024 Treehouse Software, Inc. All rights reserved.


____Treehouse_AWS_Badges 

Contact Treehouse Software for a Demo Today!

Contact Treehouse Software today for more information or to schedule a product demonstration.

A Treehouse Software Proof of Concept is the low-risk approach to testing mainframe data replication on Cloud and Hybrid Cloud environments

by Joseph Brady, Director of Business Development / Cloud Alliance Leader at Treehouse Software, Inc.

____0_Mainframe_To_Cloud

Many Treehouse Software customers have discovered the value of saving weeks, or even months, in their mainframe modernization initiatives by engaging in a Rocket Data Replicate and Sync (RDRS) Proof of Concept (POC) for Mainframe-to-Cloud data replication. Depending on the complexity of the customer’s project, an RDRS POC can be completed in as little as 10 business days after the product is installed and all connectivity is set up between the mainframe and Cloud environments.

How does it work?

  1. Treehouse Software provides documentation beforehand that outlines all of the requirements and agenda for the POC, and Treehouse technicians assist in downloading and installing RDRS.
  2. The customer provides a representative subset of z/OS or z/VSE mainframe data (e.g., Db2, Adabas, VSAM, IMS/DB, CA IDMS, CA DATACOM, etc.), a use case, and goals for the POC, and the Treehouse team mentors the customer’s technical team via remote screen-sharing sessions.
  3. The application is executed at the customer’s facilities, in a non-production environment, and a limited-scope implementation of RDRS is conducted to prove that the product meets the customer’s desired use case.

By the end of the POC, customers will have replicated mainframe data on their Cloud target, tested out product capabilities, and demonstrated a successful, repeatable data replication process, with documented results. After the POC, the customer has all the connectivity and processes in place to begin setting up the production phase of their mainframe data modernization project. The minimal cost and resource requirements make an RDRS POC a high-ROI step in the customer’s mainframe modernization journey.

About RDRS…

Many Cloud and Systems Integration partners are recommending RDRS for mainframe data modernization projects. RDRS focuses on changed data capture (CDC) when transferring information between mainframe data sources and Cloud targets. Through an innovative technology, changes occurring in any mainframe application data are tracked and captured, and then published to a variety of RDBMS and other targets.

RDRS utilizes a Windows-based GUI Control Board, which is ideal for non-mainframe programmers. While mainframe experts are required in the design/architecture phase during the POC and occasionally during implementation, the requirement for their involvement is limited. The RDRS Control Board acts as a single point of administration, data modeling and mapping, script generation, and monitoring. Comprehensive monitoring and logging of all data movements ensure transparency across all data exchange processes.

Additionally, once RDRS is up and running, the customer’s legacy mainframe environment can continue as long as needed, while they replicate data – in real time and bi-directionally – on the new Cloud platform. Now the enterprise can quickly take advantage of the latest Cloud services, such as advanced analytics, ML/AI, etc., as well as move data to a variety of highly available and secure databases and data stores.


__TSI_LOGO

Contact Treehouse Software Today…

Contact us to discuss how a Treehouse Software POC can accelerate your mainframe Cloud and hybrid Cloud data modernization journey.

Treehouse Software helps mainframe customers quickly take advantage of Amazon Redshift via Kafka and S3

by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software, Inc.

Treehouse Software’s customer base of enterprises using mainframes to store mission-critical data is asking for ways to take advantage of the most advanced Cloud-based data warehouses. Recently, one of our customers sent us an inquiry asking how tcVISION can be used to move some of their data to Amazon Redshift, the fully managed data warehouse from AWS, for reporting and analytics using their existing Business Intelligence (BI) tools. They also expressed concern about the scalability of the data warehouse, since they may increase their data movement quite quickly in the near future.

Fortunately, one of our Cloud experts has been testing out the fastest and cleanest ways to use tcVISION to pump z/OS mainframe data into Amazon Redshift. His testing proved that tcVISION fully supports an AWS best-practices framework of Amazon MSK (Kafka)-to-Amazon S3-to-Amazon Redshift. To achieve this within the framework, tcVISION is used to convert mainframe-based data (Adabas, VSAM, Db2, etc.) to JSON format and publish it to Kafka topics (Amazon MSK), then ultimately to Amazon Redshift, as seen here…

____01_Amazon_Redshift_tcVISION
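To sketch the first hop of that framework (Kafka to S3), the snippet below drains JSON records from a topic and lands them in S3 as newline-delimited objects ready for a Redshift COPY. In practice this role is usually played by a managed sink (e.g., an S3 sink connector); all names here are hypothetical, and this is not tcVISION code.

```python
# Hedged sketch of the MSK (Kafka) -> S3 hop: micro-batch JSON records into
# S3 objects that a Redshift COPY can later bulk-load. Names are hypothetical.
import uuid
import boto3
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "mainframe.cdc",                             # hypothetical MSK topic
    bootstrap_servers=["msk-broker:9092"],
    value_deserializer=lambda b: b.decode("utf-8"),
)
s3 = boto3.client("s3")

batch = []
for record in consumer:
    batch.append(record.value)                   # each value is one JSON document
    if len(batch) >= 500:                        # micro-batch for efficient COPY
        s3.put_object(
            Bucket="cdc-staging-bucket",
            Key=f"customer/{uuid.uuid4()}.json",
            Body="\n".join(batch).encode("utf-8"),
        )
        batch.clear()
```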

About Amazon Redshift… 


Your journey to Amazon Redshift begins today with Treehouse Software and tcVISION…

____Treehouse_AWS_Badges

Treehouse Software is an AWS Technology Partner, and tcVISION is validated AWS Qualified Software that helps customers replicate their mainframe data between a vast array of source databases and Cloud technologies, including Amazon S3, Amazon MSK (Kafka), Amazon RDS, Amazon Redshift, DynamoDB, and many more.

tcVISION focuses on changed data capture (CDC) when transferring information between mainframe data sources and AWS. Through an innovative technology, changes occurring in any mainframe application data are tracked and captured, and then published to AWS-based data stores. From there, the door is open for the enterprise to take advantage of the latest and most popular AWS AI and ML resources.

Fill out the Treehouse Software Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for an online tcVISION demonstration.


__AWS_On_White

Further reading: tcVISION is featured on the AWS Partner Network Blog…

The AWS Partner blog talks about tcVISION’s Mainframe-to-AWS data replication capabilities, including a technical overview, security, high availability, scalability, and a step-by-step example of the creation of tcVISION metadata and scripts for replicating mainframe data on AWS. Read the blog here: AWS Partner Network (APN) Blog: Real-Time Mainframe Data Replication to AWS with tcVISION from Treehouse Software.

AI is “IN” and Treehouse Software can help mainframe customers who want to add advanced intelligence to their enterprise data on AWS

by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software, Inc.

____0_Cloud_AI

Artificial Intelligence (AI) and Machine Learning (ML) are not only revolutionizing the computing and business worlds in general; the value proposition for Treehouse Software customers who are replicating large amounts of mainframe data on AWS can also be profound. AI/ML is proving to be a powerful force in improving customer experience, optimizing business operations, and accelerating innovation. For example, this powerful, predictive technology can help improve customer engagement and conversion by creating personalized web experiences based on data that reveals individual preferences and behaviors. Additionally, business forecasting AI and ML tools can accurately predict demand and manage, optimize, and augment supply chain decisions by combining historical time series data with additional variables, such as new product features, pricing, and holiday demand.

AWS makes AI and ML technologies available at your fingertips… 

AWS is making incredible leaps in creating the most sophisticated AI tools available today — all literally at one’s fingertips. Never before has such powerful and useful technology been so easily available to so many, bringing the deepest set of ML services and supporting Cloud infrastructures instantly into the hands of developers and data scientists.

We are hearing first-hand how customers want to improve customer experience, optimize business processes, and accelerate innovation. For these purposes, businesses can use ready-made, purpose-built AI services, or customized models with AWS AI and ML offerings.


Examples of leading AI products available on AWS:


Your journey to AWS begins today with Treehouse Software and tcVISION…

____Treehouse_AWS_Badges

Treehouse Software is an AWS Technology Partner, and tcVISION is validated AWS Qualified Software that helps customers replicate their mainframe data between a vast array of source databases and Cloud technologies, including Amazon S3, Amazon MSK (Kafka), Amazon RDS, Amazon Redshift, DynamoDB, and many more.

tcVISION focuses on changed data capture (CDC) when transferring information between mainframe data sources and AWS. Through an innovative technology, changes occurring in any mainframe application data are tracked and captured, and then published to AWS-based data stores. From there, the door is open for the enterprise to take advantage of the latest and most popular AWS AI and ML resources.

Fill out the Treehouse Software Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for an online tcVISION demonstration.


__AWS_On_White

Further reading: tcVISION is featured on the AWS Partner Network Blog…

The AWS Partner blog talks about tcVISION’s Mainframe-to-AWS data replication capabilities, including a technical overview, security, high availability, scalability, and a step-by-step example of the creation of tcVISION metadata and scripts for replicating mainframe data on AWS. Read the blog here: AWS Partner Network (APN) Blog: Real-Time Mainframe Data Replication to AWS with tcVISION from Treehouse Software.

Try the no-risk approach to testing out mainframe data replication on the Cloud with a tcVISION Proof of Concept

by Joseph Brady, Director of Business Development / Cloud Alliance Leader at Treehouse Software, Inc.

____01_Mainframe_To_Cloud

Many Treehouse Software customers have discovered that they can save weeks, or even months, in their mainframe modernization initiatives by doing a tcVISION Proof of Concept (POC) for Mainframe-to-Cloud data replication. Depending on the complexity of the customer’s project, a tcVISION POC can be completed in as little as 10 business days after the product is installed and all connectivity is set up between the mainframe and Cloud environments. Treehouse Software provides documentation beforehand that outlines all of the requirements and the agenda for the POC, and Treehouse technicians assist in downloading and installing tcVISION.

The customer provides a representative subset of z/OS or z/VSE mainframe data (e.g., Db2, Adabas, VSAM, IMS/DB, CA IDMS, CA DATACOM, etc.), a use case, and goals for the POC, and the Treehouse team mentors the customer’s technical team via remote screen-sharing sessions. The application is executed at the customer’s facilities, in a non-production environment, and a limited-scope tcVISION implementation is conducted to prove that the product meets the customer’s desired use case.

By the end of the POC, customers will have replicated mainframe data on their Cloud target, tested out product capabilities, and demonstrated a successful, repeatable data replication process, with documented results. After the tcVISION POC, the customer has all the connectivity and processes in place to begin setting up the production phase of their mainframe data modernization project. The minimal cost, in terms of human resources and time, makes a tcVISION POC a high-ROI step in the customer’s mainframe modernization journey.

A key advantage for customers is that once tcVISION is up and running, their legacy mainframe environment can continue as long as needed, while they replicate data – in real time and bi-directionally – on the new Cloud platform. Now the enterprise can quickly take advantage of the latest Cloud services, such as analytics, machine learning, and artificial intelligence (AI), as well as move data to a variety of highly available and secure databases and data stores.

About tcVISION…

___tcVISION_V7_Diagram_Marketing

Many Cloud and Systems Integration partners are recommending tcVISION from Treehouse Software for Mainframe-to-Cloud modernization projects. tcVISION focuses on changed data capture (CDC) when transferring information between mainframe data sources and Cloud targets. Through an innovative technology, changes occurring in any mainframe application data are tracked and captured, and then published to a variety of RDBMS and other targets.

Additionally, tcVISION utilizes a Windows-based GUI Control Board, which is ideal for non-mainframe programmers. While mainframe experts are required in the design/architecture phase during the POC and occasionally during implementation, the requirement for their involvement is limited. The tcVISION Control Board acts as a single point of administration, data modeling and mapping, script generation, and monitoring. Comprehensive monitoring and logging of all data movements ensure transparency across all data exchange processes.

Further reading…

AWS-Partner_Qualified_Software-badge

Treehouse Software is an AWS Technology Partner, and tcVISION is validated AWS Qualified Software. The AWS Partner Network published a blog about tcVISION, which describes how tcVISION allows legacy mainframe environments to continue operating while replicating data on highly available and secure AWS targets.


__TSI_LOGO

Contact Treehouse Software for a tcVISION Demo Today…

Simply fill out our tcVISION Demonstration Request Form and a Treehouse representative will contact you to set up a time for your requested demonstration.

5-Minute Video: Connecting a Mainframe IDMS Database to an AWS SQL Server Database with tcVISION

Treehouse Software is the worldwide distributor of tcVISION, the leading tool for using changed data capture (CDC) when transferring information between most mainframe data sources (IBM Db2, IBM VSAM, IBM IMS/DB, Software AG Adabas, CA IDMS, CA Datacom, or even sequential files) and Cloud and open systems-based databases and applications.

The following video takes a quick look at how tcVISION’s repository is used to import a mainframe IDMS schema and build out a target system on AWS:


__tsi_logo_400x200

Interested in seeing a live, online demo of tcVISION?

Just fill out the Treehouse Software tcVISION Demonstration Request Form and a Treehouse representative will contact you to set up a time for your online tcVISION demonstration.

7-Minute Video: Manage All of Your Mainframe-to-Cloud Data Replication Tasks from tcVISION’s Control Board

Treehouse Software is the worldwide distributor of tcVISION, the leading tool for using changed data capture (CDC) when transferring information between most mainframe data sources (IBM Db2, IBM VSAM, IBM IMS/DB, Software AG Adabas, CA IDMS, CA Datacom, or even sequential files) and Cloud and open systems-based databases and applications.

The following video takes a brief look at how the tcVISION Control Board (a Windows-based GUI) functions as a central point of administration, data mapping and modeling, script generation, and overall monitoring. In this example, we show how to manage connectivity and data replication between a mainframe database and PostgreSQL on AWS using the tcVISION Control Board:


Further reading: tcVISION is featured on the AWS Partner Network Blog showing a walk-through of data replication between Mainframe and Amazon Aurora…

AWS Partner Network (APN) Blog: Real-Time Mainframe Data Replication to AWS with tcVISION from Treehouse Software.


__tsi_logo_400x200

Interested in seeing a live, online demo of tcVISION?

Just fill out the Treehouse Software tcVISION Demonstration Request Form and a Treehouse representative will contact you to set up a time for your online tcVISION demonstration.

tcVISION Video Demonstrations: Real-time Mainframe Data Replication on Three Popular Cloud SQL Databases

Treehouse Software is the worldwide distributor of tcVISION, the leading tool for using changed data capture (CDC) when transferring information between most mainframe data sources (IBM Db2, IBM VSAM, IBM IMS/DB, Software AG Adabas, CA IDMS, CA Datacom, or even sequential files) and Cloud and open systems-based databases and applications. Changes occurring in the mainframe application data are then tracked and captured, and published to a variety of targets.

This video demonstrates real-time data replication between Db2 and PostgreSQL on AWS:

This video demonstrates real-time data replication between Db2 and Google Cloud SQL:

This video demonstrates real-time data replication between Db2 and Azure SQL:


Further reading: tcVISION is featured on the AWS Partner Network Blog showing a walk-through of data replication between Mainframe DB2 z/OS and Amazon Aurora…

AWS Partner Network (APN) Blog: Real-Time Mainframe Data Replication to AWS with tcVISION from Treehouse Software.


__tsi_logo_400x200

Interested in seeing a live, online demo of tcVISION?

Just fill out the Treehouse Software tcVISION Demonstration Request Form and a Treehouse representative will contact you to set up a time for your online tcVISION demonstration.

Replicating Enterprise Mainframe Data to Cloud-based SQL Databases with tcVISION

by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software, Inc.

Treehouse Software has been helping enterprise mainframe customers since 1982, and in recent years, we have been developing a strong presence in the Mainframe-to-Cloud data replication market space. This blog takes a quick look at three of the most popular Treehouse-supported Cloud-based SQL database services…

Amazon RDS, a collection of managed services that makes it simple to set up, operate, and scale databases in the Cloud. Users can control the type of database, as well as where data is stored. Specific database formats that are supported include Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle Database, and SQL Server:

Google Cloud SQL, a fully managed relational database service for MySQL, PostgreSQL, and SQL Server. You can connect with nearly any application, anywhere in the world. Cloud SQL automates backups, replication, and failover to ensure your database is reliable, highly available, and flexible to your performance needs:

Microsoft Azure SQL Database, a part of the Azure SQL family, is an always-up-to-date, fully managed relational database service built for the Cloud:


Wherever you want to target your mainframe data on the Cloud, Treehouse Software helps to make the process easy…

Treehouse Software is the worldwide distributor of tcVISION, the leading tool for using changed data capture (CDC) when transferring information between most mainframe data sources (IBM Db2, IBM VSAM, IBM IMS/DB, Software AG Adabas, CA IDMS, CA Datacom, or even sequential files) and Cloud and open systems-based databases and applications. Changes occurring in the mainframe application data are then tracked and captured, and published to a variety of targets.

tcVISION_Overall_Diagram_Cloud_OS

Additionally, tcVISION supports bi-directional data replication, where changes on either platform are reflected on the other (e.g., a change to a PostgreSQL table in the Cloud is reflected back on the mainframe), allowing the customer to modernize their application on the Cloud or open systems without disrupting the existing critical work on the legacy system. Note that tcVISION’s bi-directional replication writes directly to the mainframe database, thereby bypassing all mainframe business logic, so this architecture requires careful planning, as well as thorough and repeated testing.

Sales and technical leaders at the major Cloud platform companies, as well as systems integrators, are engaging with Treehouse Software to take advantage of our tcVISION data replication solution, helping them tap into the mainframe data that customers want made available on new technologies.


Further reading: tcVISION is featured on the AWS Partner Network Blog showing a walk-through of data replication between Mainframe DB2 z/OS and Amazon Aurora…

AWS Partner Network (APN) Blog: Real-Time Mainframe Data Replication to AWS with tcVISION from Treehouse Software.


__tsi_logo_400x200

Interested in seeing a live, online demo of tcVISION?

Just fill out the Treehouse Software tcVISION Demonstration Request Form and a Treehouse representative will contact you to set up a time for your online tcVISION demonstration.