Government and Infrastructure Customers are Looking to Modernize Their Crucial Mainframe Data on Highly Available, Scalable, and Secure Cloud Databases

by Joseph Brady, Director of Business Development / Cloud Alliance Leader at Treehouse Software, Inc.


Everyone has seen the recent headlines about how aging and outdated technology nearly crippled the airline industry. As a result, modernizing and securing information systems has once again taken center stage as a top priority. Even before the airline IT disaster, the COVID-19 crisis was a critical event that made modernization a strategic imperative for the government, supply chain, healthcare, utilities/energy, financial, and defense industries. All of these sectors have critical data residing on a variety of long-standing mainframe databases (often still updated by COBOL applications), including Db2, VSAM, IMS/DB, Adabas, IDMS, Datacom, and sequential files. Unlocking the value of this important data can be difficult, because the data is often used by numerous interlinked and dependent programs that have been in place for decades.


“The Federal Aviation Administration’s 30-year-old hazard-notification system recently had its first crash ever to cause a nationwide grounding of flights. The incident is focusing a bright light on the outdated federal computer systems that, IT experts say, are increasingly vulnerable to failure and cyberattack.” – Source: Christian Science Monitor Daily

Read the entire article here: Bringing US up to code: How outdated software has become a safety issue


As a result of this renewed push to modernize IT systems, Treehouse Software has seen a significant increase in requests from Cloud platform partners, government agencies, and other infrastructure customers to evaluate modernization solutions that replicate data, in real time, on highly available and secure Cloud-based databases, data warehouses, and other targets. Fortunately, Treehouse has the deep mainframe expertise and software tools to help.

Since 1983, Treehouse Software has been working with many of these mainframe enterprises in the areas of data migration, security, control, auditing, and performance enhancement. Treehouse has also expanded its capabilities to focus on new requirements for modernizing legacy mainframe databases on various Cloud and open systems platforms with the tcVISION mainframe data replication product. tcVISION is the primary tool in Treehouse Software’s “data-first” approach, whereby immediate data replication to the Cloud helps customers get on the fast track to meeting spikes in demand for vital information, especially in times of crisis.

Some examples of popular Cloud databases supported by tcVISION are Amazon RDS for PostgreSQL, Google Cloud SQL for SQL Server, and Azure SQL Database. A complete list of data sources and targets that are supported by tcVISION can be found here.

Replicating mainframe data on the Cloud can happen within days during a tcVISION Proof of Concept (POC)…

After setup and installation, a tcVISION POC takes approximately 10 business days, with the customer providing a small subset of data and a use case for the POC. A Treehouse Software technician will assist in downloading and installing tcVISION and conducting a limited-scope implementation of a tcVISION application. This application uses a small subset of customer data and executes on the customer's systems, usually in a non-production environment. A document is provided beforehand for the customer to fill out their requirements, use cases, and agenda for the POC.

By the end of the 10-day POC, customers can replicate and test mainframe data on their Cloud target database.  It can happen that fast!


Further Reading…


Treehouse Software and AWS published a blog about tcVISION’s Mainframe-to-AWS data replication capabilities:

https://aws.amazon.com/blogs/apn/real-time-mainframe-data-replication-to-aws-with-tcvision-from-treehouse-software/


Treehouse Software and Confluent recently co-authored a blog on modern data management for hybrid and multi-Cloud environments:

https://www.confluent.io/blog/modern-data-management-for-hybrid-and-multi-cloud-architectures/



Contact Treehouse Software for a tcVISION Demo Today…

Simply fill out our tcVISION Demonstration Request Form and a Treehouse representative will be contacting you to set up a time for your requested demonstration.

Mainframe-to-Cloud Data Replication with tcVISION: Recommendations for Roadmapping Your Deployment on a Cloud Environment

by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software


Careful planning must occur for a Mainframe-to-Cloud data modernization project, including how a customer’s desired Cloud environment will look. This blog serves as a general guide for organizations planning to replicate their mainframe data on Cloud platforms using Treehouse Software’s tcVISION.

A successful move to the Cloud requires a number of considerations and solutions, both during migration and after, in order to modernize an application on the Cloud. Some examples include:

Personnel Resource Considerations

Staffing for Mainframe-to-Cloud data replication projects depends on the scale and requirements of your replication project (e.g., bi-directional data replication projects will require more staffing).  

Most customers deploy a data replication product with Windows- and Linux-knowledgeable staff at varying levels of seniority.  For the architecture and setup tasks, we recommend senior technical staff to handle complex requirements around the mainframe, Cloud architecture, networking, security, complex data requirements, and high availability.  Less senior staff are effective for the more repeatable deployment tasks, such as mapping new database/file deployments.  Business and systems staff are rarely required, but can be necessary for more complex deployment tasks.  For example, bi-directional replication requires matching keys on both platforms, and their input might be needed.  Other activities requiring their input include PII considerations and the specifics of data transformation and data verification requirements.

An example of staffing for a very large deployment might be one part-time project manager, a part-time mainframe DBA/systems programmer, 1-2 staff to set up and deploy the environment, and an additional 1-2 staff to manage the existing replication processes.

Environment Considerations

As part of the architecture planning, your team needs to decide how many deployment tiers are needed for your replication project.  Much like with applications, you may want Dev, QA, and Prod tiers.  For each of these tiers, you will need to decide the level of separation.  For example, you might combine Dev and QA, but not Prod.  Many customers will keep production as a distinct environment.  Each environment will have its own set of resources, including mainframe managers (possibly on separate LPARs), Cloud VMs (e.g., EC2) for replication processing, and managed Cloud RDBMSs (such as AWS RDS).
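To make the tier decisions concrete, here is a minimal sketch in Python, purely illustrative, of how the resources for a combined Dev/QA tier and a distinct Prod tier might be documented. All names and sizes (LPAR names, instance types, database identifiers) are hypothetical placeholders, not prescriptions:

```python
# Illustrative only: tier layout captured as data. Every identifier below
# (LPAR names, instance sizes, RDS identifiers) is a hypothetical placeholder.
REPLICATION_TIERS = {
    "dev_qa": {                            # Dev and QA combined
        "mainframe_lpar": "LPARDEV",       # hypothetical test LPAR
        "replication_vm": "t3.large",      # modest EC2 size for testing
        "target_rds": "repl-devqa",        # hypothetical RDS identifier
        "multi_az": False,
    },
    "prod": {                              # production kept distinct
        "mainframe_lpar": "LPARPRD",
        "replication_vm": "m5.xlarge",
        "target_rds": "repl-prod",
        "multi_az": True,                  # HA for production targets
    },
}

for tier, resources in REPLICATION_TIERS.items():
    print(tier, resources["replication_vm"], resources["target_rds"])
```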

After the required QA testing, changes are deployed to the production environment.  Object promotion and test procedures should be detailed and documented, allowing less experienced personnel to handle some testing tasks.  Adherence to detailed processes and extended testing is most important when deploying bi-directional replication, due to the high impact of errors and difficult remediation.

Rollout Planning

A data replication product is typically deployed using Agile methods with sprints, which allows business value to be realized incrementally.  The first phase is typically a planning/architecture phase during which the technical architecture and deployment process are defined.  Files for replication are deployed in groups during sprint planning.  Initial sprint deployments might be low-value file replications, to shield the business from any interruptions due to process issues.  Once the team is satisfied that the process is effective, replication is working correctly, and data is verified on the source and targets, wide-scale deployments can start.  The number of files to deploy in a sprint will depend on the customer’s requirements; an example would be 20 mainframe files per 2-3-week sprint.  Technical personnel and business users need to work together to determine which files and what deployment order will have the greatest business benefit.

Security

For security, both on-premises and to the major Cloud environments, there are several considerations:

  • Data will be replicated between a source and target. The security of PII data must be considered.  In addition, rules such as HIPAA, FIPS, etc. will govern specific security requirements.
  • The path of the data must be considered, whether it is a private path, or whether the data traverses the internet. For example, when going from on-premises to the Cloud, the major Cloud providers offer a VPN option that encrypts data going over the internet.  More secure options are also available, such as AWS Direct Connect and Azure ExpressRoute.  With these options, the on-premises network is connected directly to the Cloud provider edge location via a telecom provider, and the data goes over a private route rather than the internet.
  • Additionally, Cloud services such as S3, Azure Blob Storage, and GCP buckets default to routing service connections over the internet. Creating a private endpoint (e.g., AWS PrivateLink) allows for a private network connection within the Cloud provider’s network.  Private connections that do not traverse the internet provide better security and privacy.
  • Protecting data at rest is important for both the source and target environments. The modern z/OS mainframe has advanced pervasive encryption capabilities: https://www.redbooks.ibm.com/redbooks/pdfs/sg248410.pdf.  The major Cloud providers all offer extensive at-rest encryption capabilities.  Turning on encryption for Cloud storage and databases is often just a parameter setting, and the Cloud provider takes care of the encryption, keys, and certificates automatically.
  • Protecting data in transit is equally important. There are often multiple transit points to encrypt and protect.  The first is the transit from the mainframe, on-premises, to the Cloud VM instance; a mainframe data replication product should protect this leg with TLS 1.2, using keys and certificates on both the mainframe and the Cloud.  The second is from the Cloud VM to the Cloud target database or service.  Encryption here may be less critical, since these services often run in a private environment; however, it can be enabled as required.  (A sketch of some of these settings follows this list.)
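As a rough illustration of how little code some of these settings require on AWS, here is a minimal boto3 sketch. The bucket, database, VPC, and route table identifiers are all hypothetical; it enables default at-rest encryption on a staging bucket, creates an encrypted target database, and adds a private S3 endpoint so that traffic stays on the provider's network rather than the internet:

```python
# A minimal sketch, assuming AWS and boto3; all names/IDs are hypothetical.
import boto3

s3 = boto3.client("s3")
ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# Default at-rest encryption (SSE-KMS) for a hypothetical staging bucket.
s3.put_bucket_encryption(
    Bucket="replication-staging-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}]
    },
)

# Encrypted target database: at-rest encryption is one parameter at creation.
rds.create_db_instance(
    DBInstanceIdentifier="replication-target",   # hypothetical
    DBInstanceClass="db.m5.large",
    Engine="postgres",
    MasterUsername="repladmin",
    MasterUserPassword="change-me",              # use a secrets manager in practice
    AllocatedStorage=100,
    StorageEncrypted=True,                       # the at-rest encryption switch
)

# Gateway VPC endpoint so S3 traffic stays within the AWS network.
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",               # hypothetical
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0123456789abcdef0"],     # hypothetical
)
```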

High Availability

  • During CDC processing, high availability must be maintained in the Cloud environment. The data replication product should keep track of its processing position, typically in two places.  The first is a Restart file, which records the mainframe log position, target processing position, and uncommitted transactions.  The second is a container stored on Linux or Windows that holds committed but not-yet-processed transactions.  Both need to be on highly available storage, preferably storage that spans Availability Zones (AZs), such as Amazon Elastic File System (EFS) or Amazon FSx for Windows File Server.
  • The Amazon EC2 instance (or other Cloud instance) can be part of an Auto Scaling Group spread across AZs, with a minimum and maximum of one Amazon EC2 instance (see the sketch after this list).
  • Upon failure, a replacement Amazon EC2 instance running the replication product’s administrator function is launched and communicates its IP address to the product’s mainframe administrator function. The mainframe then starts communicating with the replacement Amazon EC2 instance.
  • Once the Amazon EC2 instance is restarted, it continues processing at the next logical restart point, using a combination of the LUW and Restart files.
  • For production workloads, Treehouse Software recommends turning on Multi-AZ target and metadata databases.
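Here is a minimal boto3 sketch of the min=max=1 Auto Scaling Group pattern described above. The group, launch template, and subnet names are hypothetical; spreading the group across two subnets in different AZs lets AWS relaunch the single replication instance in a surviving AZ after a failure:

```python
# A minimal sketch, assuming AWS and boto3; names and subnet IDs are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="replication-manager-asg",        # hypothetical
    LaunchTemplate={"LaunchTemplateName": "replication-manager-lt",  # hypothetical
                    "Version": "$Latest"},
    MinSize=1,
    MaxSize=1,                 # exactly one instance, but self-healing across AZs
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",       # two subnets, two AZs
    HealthCheckType="EC2",
    HealthCheckGracePeriod=300,  # seconds to allow the agent to start up
)
```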

Scalable Storage

  • With scalable storage provided on most Cloud platforms, the customer pays only for what is used. The data replication product requires file-based storage that can grow in size if target processing stops for an unexpected reason.  For example, Amazon EFS and Amazon FSx provide serverless elastic file systems that let the customer share file data without provisioning or managing storage (see the sketch below).
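A minimal boto3 sketch of creating such a file system; the token and tag values are hypothetical. With elastic throughput, no capacity is provisioned up front, and the file system can absorb queued CDC data if the target stalls:

```python
# A minimal sketch, assuming AWS and boto3; token and tag values are hypothetical.
import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(
    CreationToken="repl-workfiles-1",      # idempotency token, hypothetical
    PerformanceMode="generalPurpose",
    ThroughputMode="elastic",              # scales with demand; pay for what is used
    Encrypted=True,                        # at-rest encryption
    Tags=[{"Key": "Name", "Value": "replication-workfiles"}],
)
print(fs["FileSystemId"])                  # mount targets would be added per AZ
```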

Analytics

  • All top Cloud platform providers give customers broad and deep portfolios of purpose-built analytics services optimized for unique analytics use cases. Cloud analytics services allow customers to analyze data on demand, and help streamline the business intelligence process of gathering, integrating, analyzing, and presenting insights to enhance business decision making.
  • A data replication product should replicate data to targets that can easily be consumed by various Cloud-based analytics services. For example, mainframe database data can be replicated to the various Cloud “buckets” in JSON, CSV, or AVRO format, which allows for consumption by the various Cloud analytics services.  Bucket types include Amazon S3, Azure Blob Storage, Azure Data Lake Storage, and GCP Cloud Storage.  Other supported analytics-oriented targets include Kafka, Elasticsearch, Hadoop, and AWS Kinesis.
  • Kafka has become a common target and can serve as a central data repository. Most customers target Kafka using JSON-formatted replicated mainframe data (see the consumer sketch after this list).  Kafka can be installed on-premises or consumed as a managed service, such as Confluent Cloud, Amazon Managed Streaming for Apache Kafka (MSK), or Azure Event Hubs.
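As an illustration of the consumption side, here is a minimal Python sketch using the kafka-python client. The topic, broker, and field names are hypothetical; it reads JSON-formatted replicated rows of the kind described above for downstream analytics:

```python
# A minimal sketch using kafka-python (assumed installed); topic, broker,
# and field names are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "mainframe.orders",                     # hypothetical topic
    bootstrap_servers=["broker1:9092"],     # hypothetical broker
    group_id="analytics-loader",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for record in consumer:
    row = record.value                      # one replicated mainframe row
    print(row.get("ORDER_ID"), row.get("STATUS"))   # hypothetical fields
```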

Monitoring

  • Monitoring is a critical part of any data replication process. There are several levels of monitoring at various points in a data replication project.  For example, each node of the replication (the mainframe, network communication, Cloud VM instances such as EC2, and the target Cloud database service) can require its own level of monitoring.  The monitoring process will also differ between development or QA and a full production deployment.
  • A data replication product should also have its own monitoring features. One important area to measure is performance, and it is important to determine where any performance bottleneck is located: the mainframe process, the network, the transformation computation process, or the target database.  A performance monitor helps detect where the bottleneck is occurring so the customer can drill down into specifics.  For example, if the bottleneck is the input data, areas to examine are the mainframe replication product component’s performance or the network connection.  The next step is to monitor the area where the bottleneck is occurring using the data replication product’s statistics, mainframe monitoring tools, or Cloud monitoring such as Amazon CloudWatch.
  • A data replication product should also allow the customer to monitor processing functions during the replication process. The data replication product should also have extensive logs and traces that allow for detailed monitoring of the data replication process, and produce detailed replication statistics that include a numeric breakdown of processing by table, type of operation (insert, update, delete), and where these operations occurred (mainframe or target database).
  • Amazon CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing customers with a unified view of AWS resources, applications, and services that run on AWS or on-premises servers. You can use CloudWatch to set high-resolution alarms, visualize logs and metrics side by side, take automated actions, troubleshoot issues, and discover insights to optimize your applications and ensure they are running smoothly.
  • Some customers are satisfied with basic monitoring that polls every five minutes, while others need more detailed monitoring and can choose polling that occurs every minute.
  • CloudWatch allows customers to record metrics for EC2 and other Amazon Cloud Services and display them in a graph on a monitoring dashboard. This provides visual notifications of what is going on, such as CPU per server, query time, number of transactions, and network usage.
  • Given the dynamic nature of AWS resources, proactive measures, including dynamic re-sizing of infrastructure resources, can be initiated automatically. Amazon CloudWatch alarms can notify the customer, for example, that CPU usage is too high, and an auto scaling trigger can be set up to launch another EC2 instance to address the load. Additionally, customers can set alarms to recover, reboot, or shut down EC2 instances if something out of the ordinary happens (see the alarm sketch after this list).
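A minimal boto3 sketch of such an alarm; the instance ID and SNS topic ARN are hypothetical. It fires when the replication VM's average CPU stays above 80% for five consecutive one-minute periods, the scenario described above:

```python
# A minimal sketch, assuming AWS and boto3; instance ID and topic ARN are
# hypothetical placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="replication-vm-cpu-high",            # hypothetical
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=60,                   # one-minute detailed monitoring
    EvaluationPeriods=5,
    Threshold=80.0,              # percent CPU
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```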

Disaster Recovery

  • IT disasters such as data center failures or cyberattacks can not only disrupt business, but also cause data loss and impact revenue. Most Cloud platforms offer disaster recovery solutions that minimize downtime and data loss by providing extremely fast recovery of physical, virtual, and Cloud-based servers.
  • A disaster recovery solution must continuously replicate machines (including operating system, system state configuration, databases, applications, and files) into a low-cost staging area in a target Cloud account and preferred region.
  • Unlike snapshot-based solutions that update target locations at distinct, infrequent intervals, a Cloud based disaster recovery solution should provide continuous and asynchronous replication.
  • Consult with your Cloud platform provider to make sure you are adhering to their respective best practices.
  • Example: https://docs.aws.amazon.com/whitepapers/latest/disaster-recovery-workloads-on-aws/introduction.html

Artificial Intelligence and Machine Learning

  • Many organizations lack the internal resources to support AI and machine learning initiatives, but fortunately the leading Cloud platforms offer broad sets of machine learning services that put machine learning in the hands of every developer and data scientist. For example, AWS offers SageMaker, GCP has AI Platform, and Microsoft Azure provides Azure AI.
  • Applications that are good candidates for AI or ML are those that need to determine and assign meaning to patterns (e.g., systems used in factories that govern product quality using image recognition and automation, or fraud detection programs in financial organizations that examine transaction data and patterns).

The list goes on…

  • Treehouse Software and our Cloud platform and migration partners can advise and assist customers in designing their roadmaps into the future, taking advantage of the most advanced technologies in the world.
  • Successful customer goals are top priority for all of us, and we can continue to work with our customers on a consulting basis even after they are in production.

Of course, each project will have unique environments, goals, and desired use cases. It is important that specific use cases are determined and documented prior to the start of a project and a tcVISION POC. This planning will allow the Treehouse Software team and the customer to develop a more accurate project timeline, have the required resources available, and realize a successful project.

Your Mainframe-to-Cloud Data Migration Partner…

Treehouse Software is a global technology company and Technology Partner with AWS, Google Cloud, and Microsoft. The company assists organizations with migrating critical workloads of mainframe data to the Cloud.


More About tcVISION from Treehouse Software…


tcVISION supports a vast array of integration scenarios throughout the enterprise, providing easy and fast data migration for mainframe application modernization projects. This innovative technology offers comprehensive abilities to identify and capture changes occurring in mainframe and relational databases, then publish the required information to an impressive variety of targets, both Cloud and on-premises.

tcVISION acquires data in bulk or via CDC methods from virtually any IBM mainframe data source (Software AG Adabas, IBM Db2, IBM VSAM, CA IDMS, CA Datacom, and sequential files), and transforms and delivers it to a wide array of Cloud and Open Systems targets, including AWS, Google Cloud, Microsoft Azure, Confluent, Kafka, PostgreSQL, MongoDB, etc. In addition, tcVISION can extract and replicate data from a variety of non-mainframe sources, including Adabas LUW, Oracle Database, Microsoft SQL Server, IBM Db2 LUW and Db2 BLU, IBM Informix, and PostgreSQL.



Contact Treehouse Software for a tcVISION Demo Today…

Simply fill out our tcVISION Demonstration Request Form and a Treehouse representative will be contacting you to set up a time for your requested demonstration.

Mainframe VSAM Change Data Capture (CDC) to Cloud and Open Systems with tcVISION from Treehouse Software

by Joseph Brady, Director of Business Development and AWS and Cloud Alliance Leader at Treehouse Software, Inc.


Treehouse Software is the worldwide distributor of tcVISION, the innovative software product that allows immediate data replication between an impressive array of Mainframe sources and Cloud and Open Systems targets. This blog focuses on tcVISION’s support of VSAM mainframe data sources (batch and CICS on z/OS, and CICS on z/VSE).

tcVISION performs VSAM Change Data Capture (CDC) either via its own “DBMS-Extensions”, or via IBM’s CICS VR product. tcVISION has separate DBMS-Extensions to capture changes from CICS (using the CICS External Interface) and from batch (via a JCL wrapper). All captured changes, regardless of whether they are captured by tcVISION or CICS VR, are written to the z/OS Logstream on the mainframe. tcVISION then reads the Logstream and transfers the transactions to a tcVISION server running in the Cloud or on-premises, which is responsible for queueing, transforming, and applying the captured changes to the specified target.
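tcVISION's apply logic is internal to the product, but as a rough conceptual illustration of what the apply step amounts to, here is a minimal Python sketch using pyodbc. The connection string, change-record layout, table, and column names are all hypothetical; it applies one captured change to a SQL Server target like the one in the video below:

```python
# Illustrative only: NOT tcVISION internals. Applies one hypothetical captured
# VSAM change record to SQL Server via pyodbc (assumed installed).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=target-host;DATABASE=ReplTarget;UID=repl;PWD=change-me"  # hypothetical
)
cur = conn.cursor()

# Hypothetical change record: op is Insert, Update, or Delete.
change = {"op": "U", "key": "00042", "name": "ACME CORP"}

if change["op"] == "I":
    cur.execute("INSERT INTO CUSTOMER (CUST_ID, NAME) VALUES (?, ?)",
                change["key"], change["name"])
elif change["op"] == "U":
    cur.execute("UPDATE CUSTOMER SET NAME = ? WHERE CUST_ID = ?",
                change["name"], change["key"])
elif change["op"] == "D":
    cur.execute("DELETE FROM CUSTOMER WHERE CUST_ID = ?", change["key"])

conn.commit()   # commit the applied change on the target
```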

Additionally, when planning VSAM CDC, there are a number of operational items to consider, such as the volume of batch transactions and data changes that occur during periods when the VSAM file is offline.

In this instructional video, tcVISION is shown capturing changes from VSAM on z/OS and writing them to SQL Server on Windows:

 



Contact Treehouse Software Today for a tcVISION Demonstration…

No matter where you want your mainframe data to go – the Cloud, Open Systems, or any LUW target – tcVISION from Treehouse Software is your answer.

Just fill out the Treehouse Software Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for your online tcVISION demonstration.

How to Replicate Mainframe Data to a Big Data Environment via Kafka with tcVISION

by Joseph Brady, Director of Business Development and AWS and Cloud Alliance Leader at Treehouse Software, Inc.

tcVISION from Treehouse Software allows enterprise customers to replicate data between mainframe, Cloud, or Hybrid Cloud while maintaining their legacy environments, and one of the more popular targets for mainframe modernization that we have been seeing is Apache Kafka®.


What is Kafka? 

Kafka is an open-source distributed event streaming platform used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. A data pipeline processes and moves data from one system to another, and a streaming application is an application that consumes streams of data.

Kafka is reliable, stable, flexible, robust, and scales well with numerous consumers, working seamlessly with most popular data warehouses and data lakes like Hadoop, Redshift, S3, BigQuery, Azure, etc. Kafka can also be used for real-time analytics, as well as to process real-time streams to collect Big Data.
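For a feel of what landing a replicated row in Kafka looks like, here is a minimal Python sketch using the kafka-python client. The broker, topic, and field names are hypothetical; it publishes one mainframe row as JSON, the format most customers use for Kafka targets:

```python
# A minimal sketch using kafka-python (assumed installed); broker, topic,
# and field names are hypothetical.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["broker1:9092"],                # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# One hypothetical replicated mainframe row, tagged with its operation type.
row = {"ACCOUNT_ID": "00042", "BALANCE": "1034.50", "OP": "UPDATE"}
producer.send("mainframe.accounts", value=row)         # hypothetical topic
producer.flush()                                       # ensure delivery
```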

See how tcVISION easily connects mainframe systems to Kafka…

Kafka handles massive volumes of data and remains responsive, making it a preferred platform when the volume of mainframe data is very large.

Kafka is a supported target in tcVISION, and in this instructional video, tcVISION is shown synchronizing data in real time from Db2 on z/OS via Kafka to a Big Data environment:

Additional Reading: Treehouse Software is a Confluent technology partner and we recently co-authored a blog entitled, “Enterprise Mainframe Change Data Capture (CDC) to Apache Kafka with tcVISION and Confluent”.



Contact Treehouse Software Today…

No matter where you want your mainframe data to go – the Cloud, Open Systems, or any LUW target – tcVISION from Treehouse Software is your answer.

Just fill out the Treehouse Software Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for your online tcVISION demonstration.

New Faces at Treehouse Software

by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software, Inc.


Treehouse Software is growing and on the move! We are proud to have many staff members who have been here for 20+ years, and we have recently brought on several experienced business, mainframe, and Cloud experts. Meet our newest team members:

John Szakach, Chief Operating Officer

John joined Treehouse as a Business Strategy Consultant and in 2021 was promoted to Chief Operating Officer. While new to Treehouse, John brings over 40 years of relevant work experience to the organization and is an AWS Certified Cloud Practitioner as well as a Certified Project Management Professional. John has held a variety of management roles in different industries including VP of Organizational Effectiveness, VP of Quality Assurance, and VP of New Product Development. He has also held positions as Director of Flight Standards and Quality Control, and Director of Operations. In addition to over 51 years of total flight experience, including 20 years as a pilot for United Airlines, he has received numerous awards including the United Airlines Captain of the Year and the FAA Master Pilot Award, the FAA’s highest award for safety and compliance. John has a Bachelor’s Degree in aviation management.

Dan Miley, Product Support

Dan is a software engineer with deep experience in IBM Assembler, COBOL, JCL, IDMS, and SAP ECC. He has worked with some of the world’s largest organizations, including more than 10 years as president and consultant of his own company. Dan has already been instrumental in landing some major Mainframe-to-Cloud data modernization customers for Treehouse Software.

Sasha Efron, Senior Technical Representative

Sasha is a mainframe technical specialist and DBA with over 25 years of experience in systems analysis, design, development, enhancement, testing, implementation, and maintenance of insurance and banking systems, with specialization in Software AG and IBM mainframe technologies. He has also been involved in legacy modernization projects for several worldwide companies.

Joseph Rogan, Senior Technical Representative

Joseph is a senior technology leader with 30+ years of experience working in multiple industries, including transportation (specifically rail), logistics, education, financial services (banking, re-insurance, and trading systems), commercial insurance, and state government. His core competencies include database design and implementation; OLTP, OLAP, and data warehouse design; project planning; and project management. Joseph is also a highly trusted, conceptual business partner and leader with excellent presentation, negotiating, management, mentoring, and strategic planning skills.

Daniel Vimont, Senior Technical Representative

Daniel brings 30+ years of experience in multiple computer languages, databases, frameworks, and distributed processing for mainframe, Cloud, and open systems. He is well versed in the principles of ETL and CDC in mainframe data transformation and migration. Dan is a Certified AWS Cloud Practitioner and has experience designing and developing an AWS SDK (boto3) framework for on-premises invocation and monitoring of AWS services. Additionally, Dan’s versatile background as a data and software engineer, educator, and business advisor is a valuable asset to Treehouse’s vision of being a close partner in our customers’ planning and modernization efforts.

Treehouse Software Experts are Our Best Assets


When asked by prospective customers, “What are your primary differentiators?”, we immediately point to our people who have decades worth of experience in helping mainframe customers with innovative tools, services, and training. Our extensive experience, deep knowledge, and wide-ranging capabilities in mainframe technologies make the company a valued partner for third-party solution providers and a trusted advisor to customers.

We are fortunate to have a staff with a wealth of knowledge and skills that span not only mainframe, but also Cloud, LUW, and Open Systems technologies. Treehouse Software‘s technicians have installed products and trained end users in some of the largest mainframe sites around the world, and our highly rated 24x7 technical support is second to none.

The Treehouse Team Approach

Treehouse Software’s expert staff has proven its ability to work effectively as part of a larger team to meet clients’ complex business goals. AWS, Google, Microsoft, IBM, Oracle, Deloitte, Accenture, Confluent, and other large vendors have selected our expertise, technology, services, and training for their mainframe data modernization practices.



Contact Treehouse Software Today…

Treehouse Software has been helping enterprise mainframe customers since 1982, and in recent years, we have been developing a strong presence in the Cloud market space relating to mainframe data replication and modernization. As a result, Treehouse Software is currently working with technical and sales leaders from all popular Cloud platform companies and major systems integrators to take advantage of our deep mainframe skills and our tcVISION Mainframe-to-Cloud data replication solution.

No matter where you want your mainframe data to go – the Cloud, Open Systems, or any LUW target – Treehouse Software is here to help. Contact us to discuss your needs.

Replicating Mainframe Data on Cloud-based Relational Databases

by Joseph Brady, Director of Business Development / AWS and Cloud Alliance Leader at Treehouse Software, Inc.


Treehouse Software has been helping enterprise mainframe customers since 1982, and in recent years, we have been developing a strong presence in the emerging Cloud market space relating to mainframe data replication and modernization. As a result, Treehouse Software is currently working with technical and sales leaders from all popular Cloud platform companies and major systems integrators to take advantage of our deep mainframe skills and our tcVISION Mainframe-to-Cloud data replication solution.

The Choice is Yours…

tcVISION provides the means for customers to easily replicate relevant data between most mainframe data sources (IBM Db2, IBM VSAM, IBM IMS/DB, Software AG Adabas, CA IDMS, CA Datacom, or even sequential files) and the most popular Cloud platforms, including AWS, Google Cloud, Microsoft Azure, Confluent Cloud, and Oracle Cloud.

Today, customers are finding it easier than ever to set up, operate, and scale relational databases in the Cloud. Here is a quick look at some Cloud relational database systems that tcVISION supports…

Amazon Aurora, a MySQL- and PostgreSQL-compatible relational database built for the Cloud that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open-source databases.

Google Cloud SQL, a fully managed relational database service for MySQL, PostgreSQL, and SQL Server. You can connect with nearly any application, anywhere in the world. Cloud SQL automates backups, replication, and failover to ensure your database is reliable, highly available, and flexible to your performance needs.

Azure SQL Database, an intelligent, scalable relational database service built for the Cloud. Optimize performance and durability with automated, AI-driven features that are always up to date. Focus on building new applications without worrying about storage size or resource management, with serverless compute and Hyperscale storage options that automatically scale resources on demand.

How Does tcVISION Work?

tcVISION focuses on change data capture (CDC) when transferring information between mainframe data sources and Cloud-based databases and applications. Through an innovative technology, changes occurring in any mainframe application data are tracked and captured, and then published to the targets.

tcVISION allows bi-directional, real-time data synchronization, so changes on either platform are reflected on the other (e.g., a change to a Cloud PostgreSQL table is reflected back on the mainframe). The customer can then modernize their applications on the Cloud, open systems, etc., without disrupting the existing critical work on the legacy system.
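This is not tcVISION's internal mechanism, but a conceptual Python sketch of one common bi-directional replication technique: tagging each change event with its origin so it is never echoed back to the platform it came from. All names and fields are hypothetical:

```python
# Conceptual sketch only (not tcVISION internals): origin tagging prevents a
# bi-directional replication loop. All names/fields are hypothetical.
from dataclasses import dataclass

@dataclass
class ChangeEvent:
    table: str       # e.g., "CUSTOMER"
    op: str          # "I", "U", or "D"
    key: str
    origin: str      # "mainframe" or "cloud"

def should_apply(event: ChangeEvent, target: str) -> bool:
    """Apply an event only if it did not originate on the target platform."""
    return event.origin != target

evt = ChangeEvent("CUSTOMER", "U", "00042", origin="cloud")
print(should_apply(evt, target="cloud"))      # False: skip, avoids a loop
print(should_apply(evt, target="mainframe"))  # True: reflect back on mainframe
```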

Here is a high-level walkthrough of tcVISION mainframe data replication on Cloud and open systems…



Contact Treehouse Software Today…

No matter where you want your mainframe data to go – the Cloud, Open Systems, or any LUW target – tcVISION from Treehouse Software is your answer.

Just fill out the Treehouse Software Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for your online tcVISION demonstration.

Why is High Availability so Important for Mainframe Data Modernization on the Cloud?

by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software, Inc.

Many customers embarking on Mainframe-to-Cloud data replication projects with Treehouse Software are looking at high availability (HA) as a key consideration in the planning process. All of the major Cloud platforms have robust HA infrastructures that keep businesses running without downtime or human intervention when a zone or instance becomes unavailable. HA basic principles are essentially the same across all Cloud platforms.

In this blog, our example shows how the AWS Global Infrastructure and HA is architected with Treehouse Software’s tcVISION real-time mainframe data replication product. A well-planned HA architecture ensures that systems are always functioning and accessible, with deployments located in various Availability Zones (AZs) worldwide.

The following example describes tcVISION’s HA architecture on AWS. During tcVISION’s Change Data Capture (CDC) processing for mainframe data replication on the Cloud, HA must be maintained. The Amazon Elastic Compute Cloud (Amazon EC2) instance, which contains the tcVISION Agent, is part of an Auto Scaling Group that is spread across AZs.

tcVISION and AWS overall architecture…


Upon failure, a replacement Amazon EC2 tcVISION Agent is launched and communicates its IP address to the mainframe tcVISION Agent. The mainframe tcVISION Agent then starts communicating with the replacement Amazon EC2 tcVISION Agent.

Once the Amazon EC2 tcVISION Agent is restarted, it continues processing at its next logical restart point, using a combination of the LUW and Restart files. LUW files contain committed data transactions not yet applied to the target database. Restart files contain a pointer to the last captured and committed transaction and queued uncommitted CDC data. Both file types are stored on a highly available data store, such as Amazon Elastic File System (EFS).

tcVISION and AWS HA architecture…


For production workloads, Treehouse Software recommends turning on Multi-AZ target and metadata databases.

To keep all the dynamic data in an HA architecture, tcVISION uses EFS, which provides a simple, scalable, fully managed elastic file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

More information on AWS HA


Treehouse Software helps enterprises immediately start synchronizing their mainframe data on the Cloud, Hybrid Cloud, and Open Systems to take advantage of the most advanced, scalable, secure, and highly available technologies in the world with tcVISION.

tcVISION supports a vast array of integration scenarios throughout the enterprise, providing easy and fast data replication for mainframe application modernization projects and enabling bi-directional data replication between mainframe, Cloud, Open Systems, Linux, Unix, and Windows platforms.




Contact Treehouse Software for a Demo Today…

Just fill out the tcVISION Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for your tcVISION demonstration. This will be a live, on-line demonstration that shows tcVISION replicating data from the mainframe to a Cloud target database.

tcVISION from Treehouse Software: Replicate Data Between Mainframe, Cloud, or Hybrid Cloud While Maintaining Your Legacy Environment

by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software, Inc.


Visit Treehouse Software’s dedicated tcVISION website, where customers can learn how to keep data in sync in hybrid IT architectures with Z mainframe, distributed, and Cloud platforms through instructional videos, blog articles, and slide shows: https://www.tcvision.com/



Contact Treehouse Software Today…

Treehouse Software is the worldwide distributor of tcVISION, which provides mainframe data replication between Db2, Adabas, VSAM, IMS/DB, CA IDMS, CA Datacom, or sequential files, and many Cloud and Open Systems targets, including AWS, Google Cloud, Microsoft Azure, Kafka, PostgreSQL, etc. Just fill out the Treehouse Software Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for your online tcVISION demonstration.

What are the Benefits of Replicating Mainframe Data on Cloud or Hybrid Cloud Systems?

by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software

Enterprise customers with mainframe systems have begun moving data to the Cloud or hybrid Cloud (a mixed computing, storage, and services environment made up of on-premises infrastructure, private Cloud services, and public Cloud) to benefit from new and powerful technologies that deliver significant business benefits and competitive advantage. Compared to the number of mainframe shops still in the planning stages of their Cloud projects, the number of existing adopters is still relatively small.

Today, it is easier than ever for customers to take advantage of cutting edge, Cloud-based technologies, changing the way they manage, deploy, and distribute mission-critical data currently residing on mainframe systems. During the planning phase of a Cloud or hybrid Cloud modernization strategy, some benefits that are quickly discovered include:

Trade Capital Expense for Variable Expense – Instead of having to invest heavily in data centers and servers before knowing how they will be used, customers pay only when they consume computing resources, and only for how much they consume.

Global Deployments – Cloud platforms span many geographic regions globally. Enterprises can easily deploy applications in multiple regions around the world with just a few clicks. This means there can be lower latency and a better experience for customers at minimal cost.

Economies of Scale – By using Cloud computing, customers can achieve a lower variable cost than they can get on their own, because usage from hundreds of thousands of customers is aggregated in the Cloud. Providers such as AWS, Google Cloud, etc. can achieve higher economies of scale, which translates into lower pay-as-you-go prices.

Scale of Services – Cloud-based products offer a broad set of global services including compute, storage, databases, analytics, machine learning and AI, networking, mobile, developer tools, management tools, IoT, security, and enterprise applications. These services help organizations move faster, lower IT costs, and scale.

World Class Security – All major Cloud platforms offer advanced and strict security that complies with the most stringent government and private sector requirements.

Extreme High Availability (HA) – Major Cloud platforms span many geographic regions around the world.  By designing services and applications to be redundant across regions, HA is enhanced far beyond a single on-premises data center.

Testing at Scale – Cloud servers and services can be created and charged on demand for a specific amount of time.  This allows customers to create temporary large-scale test environments prior to deployment that are not practical for on-premises environments.  Large scale testing reduces deployment risks and helps to provide a better customer experience.

Auto Scaling and Serverless Deployments – Major Cloud platforms have many serverless and autoscaling options available, allowing for scalable computing capacity as required.  Customers pay only for the compute time they consume – there is no charge when the code is not running. Another example is the ability for a Cloud database to automatically start up, shut down, and scale capacity up or down based on the application’s needs.

Customer Agility and Innovation – In a Cloud computing environment, new IT resources are only a click away, which means that customers reduce the time to make those resources available to developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.

Many companies that haven’t started their modernization journeys yet are looking for tools that allow their legacy mainframe environments to continue operating while replicating data, in real time, on a variety of Cloud and open systems platforms. Treehouse Software is the worldwide distributor of tcVISION, a software tool that provides an easy and fast approach for Cloud and hybrid Cloud projects, enabling bi-directional data replication between mainframe sources (Db2 z/OS, Db2 z/VSE, Adabas, VSAM, IMS/DB, CA IDMS, CA Datacom, etc.) and many Cloud and open systems targets (AWS, Google Cloud, Microsoft Azure, Kafka, PostgreSQL, etc.).

If your enterprise is planning on a Mainframe-to-Cloud data modernization project, we would welcome the opportunity to help get you moving immediately with an online demonstration of tcVISION. Contact Treehouse Software for a tcVISION demonstration today!

Quickly Begin Replicating Mainframe Data on Cloud and Open Systems During a tcVISION Proof of Concept

by Joseph Brady, Director of Business Development / Cloud Alliance Leader at Treehouse Software, Inc.


Customers can start moving mainframe data within days during a tcVISION POC…

An online tcVISION Proof of Concept (POC) takes approximately 10 business days, with the customer providing a representative subset of data, use cases, and goals for the POC. A Treehouse Software consultant will assist in downloading and installing tcVISION, and conduct a limited-scope implementation of a tcVISION application. This application uses customer data and executes on the customer's systems, in a non-production environment. A document is provided beforehand that outlines the requirements and agenda for the POC.

By the end of the POC, customers can begin replicating mainframe data to their Cloud or Open Systems target database.  It can happen that fast!

About tcVISION

More Cloud, Open Systems, and Systems Integration partners are recommending tcVISION, Treehouse Software’s Mainframe-to-Cloud data replication product, for modernization projects. tcVISION focuses on change data capture (CDC) when transferring information between mainframe data sources and Cloud and Open Systems databases and applications. Through an innovative technology, changes occurring in any mainframe application data are tracked and captured, and then published to a variety of RDBMS and other targets.


Further reading…

Treehouse Software is an AWS, Google Cloud, and Microsoft Technology Partner, and the AWS Partner Network published a blog about tcVISION, which describes how tcVISION allows legacy mainframe environments to continue, while replicating data on highly available and secure Cloud platforms.



Contact Treehouse Software for a tcVISION Demo Today…

Simply fill out our Product Demonstration Request Form and a Treehouse representative will be contacting you to set up a time for your requested demonstration.