by Joseph Brady, Director of Business Development / Cloud Alliance Leader at Treehouse Software, Inc.
Many Treehouse Software customers have discovered that they can save weeks or months in their mainframe modernization initiatives by doing a tcVISION Proof of Concept (POC) for Mainframe-to-Cloud data replication. Depending on the complexity of the customer’s project, a tcVISION POC generally takes as little as 10 business days once the product is installed and all connectivity is set up between the mainframe and Cloud environments. Treehouse Software provides documentation beforehand that outlines all of the requirements and the agenda for the POC, and Treehouse technicians assist in downloading and installing tcVISION.
The customer provides a representative subset of z/OS or z/VSE mainframe data (e.g., Db2, Adabas, VSAM, IMS/DB, CA IDMS, CA Datacom, etc.), a use case, and goals for the POC, and the Treehouse team mentors the customer’s technical team via remote screen-sharing sessions. The application runs on the customer’s own facilities, in a non-production environment, and a limited-scope tcVISION implementation is conducted to prove that the product meets the customer’s desired use case.
By the end of the POC, customers will have replicated mainframe data on their Cloud target, tested product capabilities, and demonstrated a successful, repeatable data replication process with documented results. After the tcVISION POC, the customer has all of the connectivity and processes in place to begin setting up the production phase of their mainframe data modernization project. The minimal cost, in terms of human resources and time, makes a tcVISION POC a high-ROI step in the customer’s mainframe modernization journey.
A key advantage for customers is that once tcVISION is up and running, their legacy mainframe environment can continue operating as long as needed, while data is replicated, in real time and bi-directionally, on the new Cloud platform. The enterprise can then quickly take advantage of the latest Cloud services, such as analytics, machine learning, and artificial intelligence (AI), as well as move data to a variety of highly available and secure databases and data stores.
About tcVISION…
Many Cloud and Systems Integration partners are recommending tcVISION from Treehouse Software for Mainframe-to-Cloud modernization projects. tcVISION focuses on changed data capture (CDC) when transferring information between mainframe data sources and Cloud targets. Through an innovative technology, changes occurring in any mainframe application data are tracked and captured, and then published to a variety of RDBMS and other targets.
Additionally, tcVISION utilizes a Windows-based GUI Control Board, which is ideal for non-mainframe programmers. Mainframe experts are required in the design/architecture phase of the POC and occasionally during implementation, but their required involvement is limited. The tcVISION Control Board acts as a single point of administration, data modeling and mapping, script generation, and monitoring. Comprehensive monitoring and logging of all data movements ensure transparency across all data exchange processes.
Further reading…
Treehouse Software is an AWS Technology Partner, and tcVISION is validated AWS Qualified Software. The AWS Partner Network published a blog about tcVISION, which describes how tcVISION allows legacy mainframe environments to continue operating while replicating data on highly available and secure AWS targets.
Contact Treehouse Software for a tcVISION Demo Today…
Simply fill out our tcVISION Demonstration Request Form, and a Treehouse representative will contact you to set up a time for your requested demonstration.
by Joseph Brady, Director of Business Development / Cloud Alliance Leader at Treehouse Software, Inc.
Treehouse Software, Inc. is pleased to announce that we were chosen by US Foods for their mainframe data modernization initiatives using the tcVISION Mainframe-to-Cloud and Open Systems data replication product.
Treehouse Software is the worldwide distributor of tcVISION, the leading tool for using change data capture (CDC) to synchronize mainframe data with real-time, bi-directional data replication. tcVISION’s intuitive data modeling and mapping and its ease of migrating data made it the ideal choice for helping to modernize the large mainframe environment at US Foods.
“The entire Treehouse Software team is excited about working with US Foods to make their modernization initiatives a success!” – George Szakach, CEO and President at Treehouse Software
About US Foods
With a promise to help its customers Make It, US Foods is one of America’s great food companies and a leading foodservice distributor, partnering with approximately 250,000 restaurants and foodservice operators to help their businesses succeed. With 70 broadline locations and more than 80 cash and carry stores, US Foods and its 28,000 associates provide customers with a broad and innovative food offering and a comprehensive suite of e-commerce, technology, and business solutions. US Foods is headquartered in Rosemont, Ill. Visit https://www.usfoods.com/ to learn more.
Interested in seeing a live, online demo of tcVISION?
Simply fill out our tcVISION Demonstration Request Form, and a Treehouse representative will contact you to set up a time for your requested demonstration.
by Joseph Brady, Director of Business Development / Cloud Alliance Leader at Treehouse Software, Inc.
Everyone has seen the recent headlines about how aging and outdated technology nearly crippled the airline industry. As a result, modernizing and securing information systems has again taken center stage as a top priority. Even before the airline IT disaster, the COVID-19 crisis forced modernization to become a strategic imperative for the government, supply chain, healthcare, utilities/energy, financial, and defense industries. All of these sectors have critical data residing on a variety of long-standing mainframe databases (often still updated by COBOL applications), including Db2, VSAM, IMS/DB, Adabas, IDMS, Datacom, and sequential files. Unlocking the value of this important data can be difficult, because the data is often utilized by numerous interlinked and dependent programs that have been in place for decades.
“The Federal Aviation Administration’s 30-year-old hazard-notification system recently had its first crash ever to cause a nationwide grounding of flights. The incident is focusing a bright light on the outdated federal computer systems that, IT experts say, are increasingly vulnerable to failure and cyberattack.” – Source: Christian Science Monitor Daily
As a result of this renewed push to modernize IT systems, Treehouse Software has seen a significant increase in requests from Cloud platform partners, government agencies, and other infrastructure customers to evaluate modernization solutions that replicate data, in real time, on highly available and secure Cloud-based databases, data warehouses, etc. Fortunately, Treehouse has the deep mainframe expertise and software tools to help.
Since 1983, Treehouse Software has worked with many of these mainframe enterprises in the areas of data migration, security, control, auditing, and performance enhancement. Treehouse has also expanded its capabilities to focus on new requirements for modernizing legacy mainframe databases on various Cloud and open systems platforms with the tcVISION mainframe data replication product. tcVISION is the primary tool in Treehouse Software’s “data-first” approach, whereby immediate data replication to the Cloud helps customers get on the fast track to meeting spikes in demand for vital information, especially in times of crisis.
Replicating mainframe data on the Cloud can happen within days during a tcVISION Proof of Concept (POC)…
After setup and installation, a tcVISION POC takes approximately 10 business days, with the customer providing a small subset of data and a use case for the POC. A Treehouse Software technician assists in downloading and installing tcVISION and in conducting a limited-scope implementation of a tcVISION application. This application uses a small subset of customer data and executes on the customer’s facilities, usually in a non-production environment. A document is provided beforehand for the customer to fill in their requirements, use cases, and agenda for the POC.
By the end of the 10-day POC, customers can replicate and test mainframe data on their Cloud target database. It can happen that fast!
Further Reading…
Treehouse Software and AWS published a blog about tcVISION’s Mainframe-to-AWS data replication capabilities:
Contact Treehouse Software for a tcVISION Demo Today…
Simply fill out our tcVISION Demonstration Request Form, and a Treehouse representative will contact you to set up a time for your requested demonstration.
by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software, Inc.
Many Treehouse Software mainframe modernization customers have requirements for continuous, near-real-time replication of mainframe data in order to keep a synchronized copy of the data on the Cloud. These customers are using tcVISION from Treehouse Software for changed data capture (CDC) for this synchronization, which allows changes occurring in any mainframe application data to be tracked and captured, and then published to a variety of AWS targets, including Amazon Simple Storage Service (S3). Some of these customers are now also asking us to recommend the best Cloud-based tools and methods to monitor and gain insight into these complex data processes. While working with a current tcVISION customer, our technicians have been testing two particularly good, fully managed AWS services that work hand-in-hand to address this need:
Amazon Athena
Since tcVISION supports Amazon S3 as a target, customers modernizing their mainframe systems on AWS can use Amazon Athena for monitoring and analysis of CDC processing from an S3 bucket.
Amazon Athena is a serverless, interactive analytics service built on open-source frameworks, supporting open-table and file formats. Athena provides a simplified, flexible way to analyze data from an S3 Bucket, as well as many other data sources, including on-premises data sources or other Cloud systems. Athena is built on open-source Trino and Presto engines and Apache Spark frameworks, with no provisioning or configuration effort required.
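Since Athena speaks standard SQL over files in S3, a monitoring query can be scripted as well as run interactively. Below is a rough Python sketch (not taken from the tcVISION documentation; the database, table, and column names are illustrative assumptions) that submits a bulk-load statistics query via boto3:

```python
# Minimal sketch: submit an Athena query over tcVISION CDC output in S3.
# Database, table, columns, and bucket paths are illustrative assumptions.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

query = """
    SELECT table_name,
           COUNT(*)       AS rows_loaded,
           MAX(load_time) AS last_load
    FROM   cdc_statistics
    GROUP  BY table_name
    ORDER  BY rows_loaded DESC
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "mainframe_cdc"},               # assumed
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # assumed
)
print("Query execution ID:", response["QueryExecutionId"])
```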
Figure 1: Example of an Athena query showing bulk-load statistics per table
Amazon QuickSight
Once Athena is set up for monitoring an S3 bucket, users can easily view their CDC processing and analytics with Amazon QuickSight. QuickSight utilizes advanced machine learning-powered insights and intuitive dashboards, so end users can make fast, well-informed, data-driven business decisions.
Figure 2: Example of Amazon QuickSight monitoring the throughput of our data to Snowflake
Figure 3: Example of Amazon QuickSight pie chart showing the resulting rows loaded for each Snowflake table
Figure 4: Example of Amazon QuickSight chart showing statistics for our data bulk-load into Snowflake
Figure 5: Example of Amazon QuickSight chart showing our load time into Snowflake per table
View the Amazon QuickSight video here…
Interested in seeing a live, online demo of tcVISION?
by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software, Inc.
Many medium-to-large enterprises use mainframe systems that house vast amounts of mission-critical data, encompassing historical, customer, logistics, and other information. Each mainframe site is unique and can have decades’ worth of customizations, requiring innovative approaches to establishing data replication on Cloud and open systems platforms. Fortunately for these customers, Treehouse Software has been in the mainframe software market since 1982, bringing deep experience in mainframe, Cloud, and open systems technologies, as well as delivering the tcVISION mainframe data replication product. Today, Treehouse Software is helping many enterprise mainframe customers accelerate digital transformation and successfully leverage Hybrid Cloud initiatives on the IBM Z platform, storing sensitive data on a private Cloud or local data center while simultaneously leveraging leading technologies on a managed public Cloud.
Treehouse Software’s tcVISION solution focuses on changed data capture (CDC) when transferring information between mainframe data sources and Cloud and open systems-based databases and applications. Changes occurring in the mainframe application data are tracked and captured, and then published to a variety of targets. Additionally, tcVISION supports bi-directional data replication, where changes on either platform are reflected on the other (e.g., a change to a PostgreSQL table in the Cloud is reflected back on the mainframe), allowing the customer to modernize their application on the Cloud or open systems without disrupting the existing critical work on the legacy system. tcVISION’s bi-directional replication writes directly to the mainframe database, bypassing all mainframe business logic, so this architecture requires careful planning, as well as thorough and repeated testing.
Plan carefully…
The following section offers some real-world customer examples, as well as considerations and recommendations for planning bi-directional replication in any mainframe/RDBMS environment. Bi-directional replication is by its nature a very complicated undertaking, so customers must be fully educated in all environments, software, and processes before attempting to write data back to a mainframe database. It is always recommended that customers use the minimum degree of bi-directional replication required to accomplish their goal, and no more. An overblown project with unnecessary bi-directional data replication invites undue complexity and delays.
Real-world customer examples…
Treehouse Software has many customers performing bi-directional data replication, and each scenario is vastly different from the others, even when some share the same sources and targets. For example, some customers run a master/master configuration that must handle frequent collisions, while others replicate uni-directionally in one direction and then “flip a switch” to replicate uni-directionally in the other. Another example is a customer with a “grand circle,” where data passes through multiple applications before it finally makes its way back to an RDBMS staging database that tcVISION replicates to the mainframe.
Example of a Treehouse customer’s bi-directional data replication environment using tcVISION:
There are many planning and implementation stages that go into a successful mainframe replication environment, and performance testing is a vital part of a successful project. For example, customers should run performance tests on how long it takes tcVISION to read a database log, transfer data, process data, and so on. During testing at one of our reference customer sites, we found a significant difference in how long their test and prod LPARs took to transmit data to the Cloud, depending on whether the mainframe TCP/IP stack used a 32-bit or 128-bit setting.
At another site, where we are helping a large government agency perform bi-directional replication on mainframe data, the original goal was for a significant percentage of mainframe objects to have bi-directional replication. It was determined, however, that extracting the business logic from the existing mainframe application for use in the downstream application would be impossible. They have therefore decided to use a middleware product to perform the “write-back” to the mainframe database. Given the complexity of the mainframe application, this has proven the safest way for them to proceed.
Because of the variety of customer scenarios as described above, before any site can attempt bi-directional data replication, it is crucial that they have a well-tested uni-directional process with operational controls in place for a significant time period. “Operational controls” means processes to restart scripts, evaluation of failed transactions, orchestration of mainframe/non-mainframe DBMS changes, etc.
Please contact Treehouse Software to discuss your Mainframe-to-Cloud and Open Systems modernization plans. We can help put in place a roadmap to modernization success.
Contact Treehouse Software Today for a tcVISION Demo…
No matter where you want your mainframe data to go – the Cloud, open systems, or any LUW target – tcVISION from Treehouse Software is your answer.
by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software
Careful planning must occur for a Mainframe-to-Cloud data modernization project, including how a customer’s desired Cloud environment will look. This blog serves as a general guide for organizations planning to replicate their mainframe data on Cloud platforms using Treehouse Software‘s tcVISION.
A successful move to the Cloud requires a number of post-migration considerations and solutions in order to fully modernize an application on the Cloud. Examples of these considerations and solutions include:
Personnel Resource Considerations
Staffing for Mainframe-to-Cloud data replication projects depends on the scale and requirements of your replication project (e.g., bi-directional data replication projects will require more staffing).
Most customers deploy a data replication product with Windows- and Linux-knowledgeable staff at varying levels of seniority. For the architecture and setup tasks, we recommend senior technical staff to handle the complex requirements around the mainframe, Cloud architecture, networking, security, complex data requirements, and high availability. Less senior staff are effective for the more repeatable deployment tasks, such as mapping new database/file deployments. Business staff and systems staff are rarely required, but can be necessary for more complex deployment tasks. For example, bi-directional replication requires matching keys on both platforms, and their input might be needed there. Other activities that may call for their input include PII considerations and the specifics of data transformation and data verification requirements.
An example of staffing for a very large deployment might be one very part-time project manager, a part-time mainframe DBA/systems programmer, 1–2 staff to set up and deploy the environment, and an additional 1–2 staff to manage the existing replication processes.
Environment Considerations
As part of the architecture planning, your team needs to decide how many tiers of deployment are needed for your replication project. Much like with applications, you may want Dev, QA, and Prod tiers. For each of these tiers, you will need to decide the level of separation. For example, you might combine Dev and QA, but not Prod; many customers keep production as a distinct environment. Each environment will have its own set of resources, including mainframe managers (possibly on separate LPARs), Cloud VMs (e.g., EC2) for replication processing, and managed Cloud RDBMSs (such as Amazon RDS).
After the required QA testing, changes are deployed to the production environment. Object promotion and test procedures should be detailed and documented, allowing less experienced personnel to take on some testing tasks. Adherence to detailed processes and extended testing is most important when deploying bi-directional replication, due to the high impact of errors and the difficulty of remediation.
Rollout Planning
A data replication product is typically deployed using Agile methods with sprints, which allows business value to be realized incrementally. The first phase is typically a planning/architecture phase, during which the technical architecture and deployment process are defined. Files for replication are deployed in groups during sprint planning. Initial sprint deployments might be low-value file replications to shield the business from any interruptions due to process issues. Once the team is satisfied that the process is effective, replication is working correctly, and data is verified on the source and targets, wide-scale deployments can start. The number of files to deploy in a sprint will depend on the customer’s requirements; an example would be to deploy 20 mainframe files per two-to-three-week sprint. Technical personnel and business users need to work together to determine which files and which deployment order will have the greatest business benefit.
Security
For security, both on-premises and to the major Cloud environments, there are several considerations:
Data will be replicated between a source and a target, and the security of any PII data must be considered. In addition, regulations such as HIPAA, FIPS, etc. will govern specific security requirements.
The path of the data must be considered: whether it follows a private path, or whether it traverses the internet. For example, when going from on-premises to the Cloud, the major Cloud providers offer a VPN option that encrypts data going over the internet. More secure options are also available, such as AWS Direct Connect and Azure ExpressRoute. With these options, the on-premises network is connected directly to a Cloud provider edge location via a telecom provider, and the data travels over a private route rather than the internet.
Additionally, Cloud storage services such as Amazon S3, Azure Blob Storage, and GCP buckets route service connections over the internet by default. Creating a private endpoint (e.g., AWS PrivateLink) allows for a private network connection within the Cloud provider’s network. Private connections that do not traverse the internet provide better security and privacy.
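As a rough sketch of what this looks like in practice (the VPC ID, route table ID, and region below are placeholder assumptions, not values from any customer environment), a gateway endpoint for S3 can be created with a single boto3 call, so that replication traffic to the bucket never leaves the AWS network:

```python
# Minimal sketch: create a private (gateway) endpoint for S3 so that traffic
# from the replication VM to the bucket stays on the AWS network instead of
# traversing the public internet. All resource IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",             # assumed VPC for the replication VM
    ServiceName="com.amazonaws.us-east-1.s3",  # S3 in the same region
    RouteTableIds=["rtb-0123456789abcdef0"],   # assumed route table to update
)
print("Created endpoint:", response["VpcEndpoint"]["VpcEndpointId"])
```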
Protecting data at rest is important for both the source and target environments. The modern z/OS mainframe has advanced pervasive encryption capabilities: https://www.redbooks.ibm.com/redbooks/pdfs/sg248410.pdf. The major Cloud providers all offer extensive at-rest encryption capabilities. Turning on encryption for Cloud storage and databases is often just a parameter setting, and the Cloud provider takes care of the encryption, keys, and certificates automatically.
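For example (a minimal sketch, with an assumed bucket name), default at-rest encryption on an S3 bucket really is a single parameterized API call:

```python
# Minimal sketch: turn on default server-side encryption (SSE-S3/AES256) for
# an S3 bucket; AWS then manages keys and encrypts new objects automatically.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="my-cdc-target-bucket",  # assumed bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```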
Protecting data in transit is equally important, and there are often multiple transit points to encrypt and protect. The first is the transit from the mainframe, through the on-premises network, to the Cloud VM instance. A mainframe data replication product should provide protection via TLS 1.2, utilizing keys and certificates on both the mainframe and the Cloud. The second is from the Cloud VM to the Cloud target database or service. Encryption here may be less critical, since these services often run within a private environment; however, it can be enabled as required.
High Availability
During CDC processing, high availability must be maintained in the Cloud environment. The data replication product should keep track of its processing position, typically in two kinds of files. The first is a Restart file, which keeps track of the mainframe log position, the target processing position, and uncommitted transactions. The second is a container stored on Linux or Windows that holds committed but not-yet-processed transactions. Both need to reside on highly available storage, preferably storage that spans Availability Zones (AZs), such as Amazon Elastic File System (Amazon EFS) or Amazon FSx for Windows File Server.
The Amazon EC2 instance (or other Cloud instance) can be part of an Auto Scaling Group spread across AZs, with a minimum and a maximum of one Amazon EC2 instance, as sketched below.
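A minimal sketch of that configuration in boto3 follows (the launch template name and subnet IDs are placeholder assumptions). Pinning the group to exactly one instance means a failed replication server is replaced automatically rather than scaled out:

```python
# Minimal sketch: an Auto Scaling Group pinned to exactly one instance and
# spread across two AZs, so a failed replication server is relaunched
# automatically in a healthy zone. Names and IDs are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="replication-server-asg",
    LaunchTemplate={"LaunchTemplateName": "replication-server"},  # assumed template
    MinSize=1,
    MaxSize=1,
    DesiredCapacity=1,
    # Two subnets in different Availability Zones; the replacement instance
    # can launch in either zone.
    VPCZoneIdentifier="subnet-0123456789abcdef0,subnet-0fedcba9876543210",
)
```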
Upon failure, a replacement Amazon EC2 instance running the replication product’s administrator function is launched and communicates its IP address to the product’s mainframe administrator function. The mainframe then resumes communication with the replacement Amazon EC2 instance.
Once the Amazon EC2 instance is restarted, it continues processing at the next logical restart point, using a combination of the LUW and Restart files.
For production workloads, Treehouse Software recommends turning on Multi-AZ target and metadata databases.
Scalable Storage
With the scalable storage provided on most Cloud platforms, the customer pays only for what is used. The data replication product requires file-based storage for its working files, which can grow in size if target processing stops for an unexpected reason. For example, Amazon EFS and Amazon FSx provide serverless elastic file systems that let the customer share file data without provisioning or managing storage. A sketch of creating such a file system follows.
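As an illustration (a minimal sketch; the token name and region are assumptions), an encrypted, elastically sized EFS file system takes one API call to create:

```python
# Minimal sketch: create an Amazon EFS file system for the replication
# product's working files. EFS grows and shrinks automatically, so storage
# never needs to be provisioned up front.
import boto3

efs = boto3.client("efs", region_name="us-east-1")

fs = efs.create_file_system(
    CreationToken="replication-work-area",  # idempotency token (assumed name)
    PerformanceMode="generalPurpose",
    Encrypted=True,                         # at-rest encryption from day one
)
print("File system ID:", fs["FileSystemId"])
```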
Analytics
The top Cloud platform providers all give customers broad and deep portfolios of purpose-built analytics services optimized for a wide range of analytics use cases. Cloud analytics services allow customers to analyze data on demand, and they help streamline the business intelligence process of gathering, integrating, analyzing, and presenting insights to enhance business decision making.
A data replication product should replicate data to data stores that can easily be consumed by the various Cloud-based analytics services. For example, mainframe database data can be replicated to the various Cloud “buckets” in JSON, CSV, or AVRO format, which allows for consumption by those services. Bucket types include AWS S3, Azure Blob Storage, Azure Data Lake Storage, and GCP Cloud Storage. Several other analytics-oriented targets are also supported, including Kafka, Elasticsearch, Hadoop, and AWS Kinesis.
Kafka has become a common target and can serve as a central data repository. Most customers target Kafka using JSON-formatted replicated mainframe data. Kafka can be installed on-premises or consumed as a managed service, such as Confluent Cloud, Amazon MSK (Managed Streaming for Apache Kafka), or Azure Event Hubs.
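As a rough illustration of the consuming side (a minimal sketch using the kafka-python package; the topic name, broker address, and record fields are assumptions, not tcVISION specifics), downstream applications simply deserialize the JSON change records from the topic:

```python
# Minimal sketch: consume JSON-formatted CDC records from a Kafka topic.
# Topic, broker, and payload shape are illustrative assumptions.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "mainframe.cdc.customers",         # assumed topic name
    bootstrap_servers="broker1:9092",  # assumed broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for record in consumer:
    change = record.value
    # Assumed payload shape: an operation type plus the changed row's data.
    print(change.get("operation"), change.get("table"), change.get("data"))
```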
Monitoring
Monitoring is a critical part of any data replication process, and there are several levels of monitoring at various points in a data replication project. For example, each node of the replication (the mainframe, the network communication, Cloud VM instances such as EC2, and the target Cloud database service) can require its own level of monitoring. The monitoring process will also differ between development or QA and a full production deployment.
A data replication product should also have its own monitoring features. One important area to measure is performance, and it is important to determine where any performance bottleneck is located: it could be the mainframe process, the network, the transformation computation process, or the target database. A performance monitor helps detect where the bottleneck is occurring, so the customer can drill down into specifics. For example, if the bottleneck is the input data, the areas to examine are the performance of the mainframe replication product component and the network connection. The next step is to monitor the area where the bottleneck is occurring, using the data replication product’s statistics, mainframe monitoring tools, or Cloud monitoring such as Amazon CloudWatch.
A data replication product should also allow the customer to monitor processing functions during the replication process. It should likewise have extensive logs and traces that allow for detailed monitoring of the data replication process, and it should produce detailed replication statistics that include a numeric breakdown of processing by table, type of operation (insert, update, delete), and where these operations occurred (mainframe or target database).
CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing customers with a unified view of AWS resources, applications, and services that run on AWS, and on-premises servers. You can use CloudWatch to set high resolution alarms, visualize logs and metrics side by side, take automated actions, troubleshoot issues, discover insights to optimize your applications, and ensure they are running smoothly.
Some customers are satisfied with basic monitoring that polls every five minutes, while others need more detailed monitoring and can choose polling that occurs every minute.
CloudWatch allows customers to record metrics for EC2 and other Amazon Cloud Services and display them in a graph on a monitoring dashboard. This provides visual notifications of what is going on, such as CPU per server, query time, number of transactions, and network usage.
Given the dynamic nature of AWS resources, proactive measures, including dynamic re-sizing of infrastructure resources, can be initiated automatically. Amazon CloudWatch alarms can notify the customer, for example with a warning that CPU usage is too high, and an auto-scaling trigger can be set up to launch another EC2 instance to address the load. Additionally, customers can set alarms to recover, reboot, or shut down EC2 instances if something out of the ordinary happens.
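A minimal sketch of such an alarm follows (the instance ID and SNS topic ARN are placeholder assumptions). It samples CPU at one-minute intervals, matching the detailed monitoring option described above, and notifies an operations topic when the instance runs hot:

```python
# Minimal sketch: raise a CloudWatch alarm when the replication server's CPU
# stays above 80% for five consecutive one-minute periods. IDs/ARNs are
# placeholders; the alarm action could equally drive an auto-scaling policy.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="replication-server-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=60,                 # one-minute (detailed) monitoring
    EvaluationPeriods=5,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # assumed
)
```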
Disaster Recovery
IT disasters such as data center failures or cyberattacks can not only disrupt business, but also cause data loss and impact revenue. Most Cloud platforms offer disaster recovery solutions that minimize downtime and data loss by providing extremely fast recovery of physical, virtual, and Cloud-based servers.
A disaster recovery solution must continuously replicate machines (including operating system, system state configuration, databases, applications, and files) into a low-cost staging area in a target Cloud account and preferred region.
Unlike snapshot-based solutions that update target locations at distinct, infrequent intervals, a Cloud based disaster recovery solution should provide continuous and asynchronous replication.
Consult with your Cloud platform provider to make sure you are adhering to their respective best practices.
Many organizations lack the internal resources to support AI and machine learning initiatives, but fortunately the leading Cloud platforms offer broad sets of machine learning services that put machine learning in the hands of every developer and data scientist. For example, AWS offers SageMaker, GCP has AI Platform, and Microsoft Azure provides Azure AI.
Applications that are good candidates for AI or ML are those that need to determine and assign meaning to patterns (e.g., systems used in factories that govern product quality using image recognition and automation, or fraud detection programs in financial organizations that examine transaction data and patterns).
The list goes on…
Treehouse Software and our Cloud platform and migration partners can advise and assist customers in designing their roadmaps into the future, taking advantage of the most advanced technologies in the world.
Successful customer goals are top priority for all of us, and we can continue to work with our customers on a consulting basis even after they are in production.
Of course, each project will have unique environments, goals, and desired use cases. It is important that specific use cases are determined and documented prior to the start of a project and a tcVISION POC. This planning will allow the Treehouse Software team and the customer to develop a more accurate project timeline, have the required resources available, and realize a successful project.
Your Mainframe-to-Cloud Data Migration Partner…
Treehouse Software is a global technology company and Technology Partner with AWS, Google Cloud, and Microsoft. The company assists organizations with migrating critical workloads of mainframe data to the Cloud.
Further reading on tcVISION from AWS, Google Cloud, and Confluent:
tcVISION supports a vast array of integration scenarios throughout the enterprise, providing easy and fast data migration for mainframe application modernization projects. This innovative technology offers comprehensive abilities to identify and capture changes occurring in mainframe and relational databases, then publish the required information to an impressive variety of targets, both Cloud and on-premises.
tcVISION acquires data in bulk or via CDC methods from virtually any IBM mainframe data source (Software AG Adabas, IBM Db2, IBM VSAM, CA IDMS, CA Datacom, and sequential files), and transforms and delivers it to a wide array of Cloud and Open Systems targets, including AWS, Google Cloud, Microsoft Azure, Confluent, Kafka, PostgreSQL, MongoDB, etc. In addition, tcVISION can extract and replicate data from a variety of non-mainframe sources, including Adabas LUW, Oracle Database, Microsoft SQL Server, IBM Db2 LUW and Db2 BLU, IBM Informix, and PostgreSQL.
Contact Treehouse Software for a tcVISION Demo Today…
Simply fill out our tcVISION Demonstration Request Form, and a Treehouse representative will contact you to set up a time for your requested demonstration.
by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software, Inc.
Treehouse Software was recently invited by Microsoft Azure Mainframe Modernization technical teams to do a presentation and demonstration of tcVISION, our innovative Mainframe-to-Cloud data replication software product.
In this video, we show an overview of the product, then demonstrate replication of mainframe data on Azure SQL:
by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software, Inc.
Treehouse Software was recently invited by AWS mainframe modernization technical teams to do a presentation and demonstration of tcVISION, our innovative Mainframe-to-Cloud data replication software product.
In this video, Chris Rudolph, Treehouse Software’s tcVISION Product Manager, shows an overview of the product, then demonstrates replication of mainframe data on Amazon RDS for PostgreSQL:
Contact Treehouse Software Today for a tcVISION Demonstration…
No matter where you want your mainframe data to go – the Cloud, Open Systems, or any LUW target – tcVISION from Treehouse Software is your answer.
by Joseph Brady, Director of Business Development and AWS and Cloud Alliance Leader at Treehouse Software, Inc.
Treehouse Software is the worldwide distributor of tcVISION, the innovative software product that allows immediate data replication between an impressive array of mainframe sources and Cloud and Open Systems targets. This blog focuses on tcVISION‘s support of VSAM mainframe data sources (batch and CICS on z/OS, and CICS on z/VSE).
tcVISION performs VSAM Change Data Capture (CDC) either via its own “DBMS-Extensions” or via IBM’s CICS VR product. tcVISION has separate DBMS-Extensions to capture changes from CICS (using the CICS External Interface) and from batch (via a JCL wrapper). All captured changes, regardless of whether they are captured by tcVISION or CICS VR, are written to the z/OS Logstream on the mainframe. tcVISION then reads the Logstream and transfers the transactions to a tcVISION server running in the Cloud or on-premises, which is responsible for queueing, transforming, and applying the captured changes to the specified target.
Additionally, when planning VSAM CDC there are a number of operational items to consider, such as volume of batch transactions, data changes that occur during periods of time while the VSAM file is offline, etc.
In this instructional video, tcVISION is shown capturing changes from VSAM on z/OS and writing them to SQL Server on Windows:
Contact Treehouse Software Today for a tcVISION Demonstration…
No matter where you want your mainframe data to go – the Cloud, Open Systems, or any LUW target – tcVISION from Treehouse Software is your answer.
by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software, Inc.
Many customers embarking on Mainframe-to-Cloud data replication projects with Treehouse Software are looking at high availability (HA) as a key consideration in the planning process. All of the major Cloud platforms have robust HA infrastructures that keep businesses running without downtime or human intervention when a zone or instance becomes unavailable. HA basic principles are essentially the same across all Cloud platforms.
In this blog, our example shows how the AWS Global Infrastructure and HA are architected with Treehouse Software’s tcVISION real-time mainframe data replication product. A well-planned HA architecture ensures that systems are always functioning and accessible, with deployments located in multiple Availability Zones (AZs) worldwide.
The following example describes tcVISION‘s HA architecture on AWS. During tcVISION’s Change Data Capture (CDC) processing for mainframe data replication on the Cloud, HA must be maintained. The Amazon Elastic Compute Cloud (Amazon EC2) instance that hosts the tcVISION Agent is part of an Auto Scaling Group spread across AZs.
tcVISION and AWS overall architecture…
Upon failure, a replacement Amazon EC2 instance running the tcVISION Agent is launched and communicates its IP address to the mainframe tcVISION Agent. The mainframe tcVISION Agent then resumes communication with the replacement Amazon EC2 tcVISION Agent.
Once the Amazon EC2 tcVISION Agent is restarted, it continues processing at its next logical restart point, using a combination of the LUW and Restart files. LUW files contain committed data transactions not yet applied to the target database. Restart files contain a pointer to the last captured and committed transaction and queued uncommitted CDC data. Both file types are stored on a highly available data store, such as Amazon Elastic File System (EFS).
tcVISION and AWS HA architecture…
For production workloads, Treehouse Software recommends turning on Multi-AZ target and metadata databases.
To keep all of the dynamic data in an HA architecture, tcVISION uses Amazon EFS, which provides a simple, scalable, fully managed elastic file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as files are added and removed, eliminating the need to provision and manage capacity to accommodate growth.
More information on AWS HA
Treehouse Software helps enterprises immediately start synchronizing their mainframe data on the Cloud, Hybrid Cloud, and Open Systems to take advantage of the most advanced, scalable, secure, and highly available technologies in the world with tcVISION…
tcVISION supports a vast array of integration scenarios throughout the enterprise, providing easy and fast data replication for mainframe application modernization projects and enabling bi-directional data replication between mainframe, Cloud, Open Systems, Linux, Unix, and Windows platforms.
Contact Treehouse Software for a Demo Today…
Just fill out the tcVISION Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for your tcVISION demonstration. This will be a live, on-line demonstration that shows tcVISION replicating data from the mainframe to a Cloud target database.