What are the Benefits of Replicating Mainframe Data on Cloud or Hybrid Cloud Systems?

by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software

Enterprise customers with mainframe systems have begun moving data to the Cloud or hybrid Cloud (a mixed computing, storage, and services environment made up of on-premises infrastructure, private Cloud services, and public Cloud) to benefit from new and powerful technologies that deliver significant business benefits and competitive advantage. Still, compared to the number of mainframe shops that are in the planning stages of their Cloud projects, existing adopters remain relatively few.

Today, it is easier than ever for customers to take advantage of cutting edge, Cloud-based technologies, changing the way they manage, deploy, and distribute mission-critical data currently residing on mainframe systems. During the planning phase of a Cloud or hybrid Cloud modernization strategy, some benefits that are quickly discovered include:

Trade Capital Expense for Variable Expense – Instead of having to invest heavily in data centers and servers before customers know how they are going to use them, they pay only when they consume computing resources, and pay only for how much they consume.

Global Deployments – Cloud platforms span many geographic regions globally. Enterprises can easily deploy applications in multiple regions around the world with just a few clicks. This means there can be lower latency and a better experience for customers at minimal cost.

Economies of Scale – By using Cloud computing, customers can achieve a lower variable cost than they can get on their own, because usage from hundreds of thousands of customers is aggregated in the Cloud. Providers such as AWS, Google Cloud, and others can achieve higher economies of scale, which translates into lower pay-as-you-go prices.

Scale of Services – Cloud-based products offer a broad set of global services including compute, storage, databases, analytics, machine learning and AI, networking, mobile, developer tools, management tools, IoT, security, and enterprise applications. These services help organizations move faster, lower IT costs, and scale.

World Class Security – All major Cloud platforms offer advanced and strict security that complies with the most stringent government and private sector requirements.

Extreme High Availability (HA) – Major Cloud platforms span many geographic regions around the world.  By designing services and applications to be redundant across regions, HA is enhanced far beyond a single on-premises data center.

Testing at Scale – Cloud servers and services can be created and charged on demand for a specific amount of time.  This allows customers to create temporary large-scale test environments prior to deployment that are not practical for on-premises environments.  Large scale testing reduces deployment risks and helps to provide a better customer experience.

Auto Scaling and Serverless Deployments – Major Cloud platforms have many serverless and autoscaling options available, allowing for scalable computing capacity as required.  Customers pay only for the compute time they consume – there is no charge when the code is not running. Another example is the ability for a Cloud database to automatically start up, shut down, and scale capacity up or down based on the application’s needs.

Customer Agility and Innovation – In a Cloud computing environment, new IT resources are only a click away, which means that customers reduce the time to make those resources available to developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.

Many companies that haven't yet started their modernization journeys are looking for tools that allow their legacy mainframe environments to continue while replicating data, in real time, to a variety of Cloud and open systems platforms. Treehouse Software is the worldwide distributor of tcVISION, a software tool that provides an easy and fast approach for Cloud and hybrid Cloud projects, enabling bi-directional data replication between mainframe sources and many targets, including (mainframe): Db2 z/OS, Db2 z/VSE, Adabas, VSAM, IMS/DB, CA IDMS, CA Datacom, etc. and (Cloud and open systems): AWS, Google Cloud, Microsoft Azure, Kafka, PostgreSQL, etc.

If your enterprise is planning on a Mainframe-to-Cloud data modernization project, we would welcome the opportunity to help get you moving immediately with an online demonstration of tcVISION. Contact Treehouse Software for a tcVISION demonstration today!

Starting a Mainframe Data Replication Project? Consider Your Use Cases Carefully

by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software

___Data_Center_To_Targets_Overview

Planning for Real-time and Bi-directional Mainframe Data Replication with tcVISION

A customer’s mainframe data may be utilized by many interlinked and dependent programs that have been in place for years, sometimes decades, so unlocking the value of this legacy data can be difficult. Therefore, a mainframe data modernization project requires careful planning, beginning with identifying use cases for the project and a Proof of Concept (POC) of data replication software, such as tcVISION, the Mainframe-to-Hybrid Cloud and Open Systems data replication product from Treehouse Software.

This blog serves as a general guide for organizations planning to replicate their mainframe data on Cloud and/or Open Systems platforms using tcVISION.

Questions and Considerations for Your Use Cases

A general principle should be to prove out the data replication technology (tcVISION) by using identified use cases. Listed below are some examples of questions and use cases to assist customers in planning and experiencing successful Mainframe-to-Cloud and/or Open Systems data replication projects:

  • What is/are the mainframe source database(s)? Obvious, yes, but the software solution vendor and outside consultants need this information.
  • What are the critical issues you need to test? Are there any areas that you believe would be challenging for the vendor? Examples include specific transformations, data types (e.g., BLOB), required CDC SLAs, field/column changes, specific security requirements, data volume requirements, etc. Document all of the critical test items.
  • Select the minimal set of files (generally 3-10) with representative conditions that will enable you to test all of your critical items. If there are multiple source databases, ensure specific test use cases are defined for each source.
  • What are the target databases? Will you be replicating to a Cloud database manager, such as AWS RDS? Are there additional requirements to replicate to S3, Azure BLOB, GCP Cloud Storage, Kinesis, and Kafka? What needs to be tested?
  • What are your bulk or initial load requirements? tcVISION can load data directly from mainframe databases, mainframe unloads, or image copies. What are the data volumes? Do you have sufficient bandwidth between your mainframe and your on-premises or Cloud VM to handle your volume requirements?
  • What are your Change Data Capture (CDC) requirements?
  • Do you have plans for bi-directional replication in the future? What are your specific requirements? Since bi-directional replication can be complex and can greatly lengthen a modernization project, the customer generally performs one bi-directional use case as conceptual proof. Will this suffice for your organization?
  • What are your specific high availability requirements? Can they be handled by a technical discussion, or is a specific use case required?
  • What are your general security requirements for data at rest and data in transit? Do you have any specific security regulations to follow, such as HIPAA or FIPS? What are your PII / data masking requirements?
  • What are your schema requirements? For example, tcVISION creates a default schema based on your input mainframe data. Major changes to the default schema usually require a staging database.
  • Do you have staff available to perform the required tasks for the project? For example, for the length of a tcVISION POC you will need part-time staff, 2-4 hours per day. A part-time mainframe administrator will generally require 2-8 elapsed hours. Other staff will include Windows/Linux/Cloud administrators. 2-4 hours of project management may also be required.
  • Are business data transformations required? tcVISION handles minor transformations via point-and-click configuration (e.g., date format transformations). Major transformations can require C++ or product scripting.
  • Are there any triggers or stored procedures? tcVISION’s CDC replication processing applies changes to databases that may invoke these database features, so they should be identified up front.
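
On the bandwidth and data volume question above, a rough arithmetic sketch can help size the initial bulk load. The figures and the 70% link-efficiency derating below are illustrative assumptions, not tcVISION measurements:

```python
def bulk_load_hours(data_gb, bandwidth_mbps, efficiency=0.7):
    """Rough initial-load transfer time: gigabytes of source data over a
    link of the given megabits per second, derated for protocol overhead."""
    megabits = data_gb * 8 * 1000          # GB -> megabits (decimal units)
    effective_mbps = bandwidth_mbps * efficiency
    return megabits / effective_mbps / 3600  # seconds -> hours

# e.g., a hypothetical 500 GB initial load over a 100 Mbps link
print(round(bulk_load_hours(500, 100), 1))  # about 15.9 hours
```

An estimate like this is only a starting point for the POC conversation; compression, parallel streams, and mainframe I/O limits can all shift the real number.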

Of course, each project will have unique environments, goals, and desired use cases. It is important that specific use cases are determined and documented prior to the start of a project and a tcVISION POC. This planning will allow the Treehouse Software team and the customer to develop a more accurate project timeline, have the required resources available, and realize a successful project.

More About tcVISION from Treehouse Software…

__Plans_To_Reality

tcVISION supports a vast array of integration scenarios throughout the enterprise, providing easy and fast data migration for mainframe application modernization projects. This innovative technology offers comprehensive abilities to identify and capture changes occurring in mainframe and relational databases, then publish the required information to an impressive variety of targets, both Cloud and on-premises.

tcVISION acquires data in bulk or via CDC methods from virtually any IBM mainframe data source (Software AG Adabas, IBM Db2, IBM VSAM, IBM IMS/DB, CA IDMS, CA Datacom, and sequential files), then transforms and delivers it to a wide array of Cloud and Open Systems targets, including AWS, Google Cloud, Microsoft Azure, Confluent, Kafka, PostgreSQL, MongoDB, etc. In addition, tcVISION can extract and replicate data from a variety of non-mainframe sources, including Adabas LUW, Oracle Database, Microsoft SQL Server, IBM Db2 LUW and Db2 BLU, IBM Informix, and PostgreSQL.


__TSI_LOGO

Contact Treehouse Software for a tcVISION Demo Today…

Simply fill out our Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for your requested demonstration.

What’s Your Mix?

Real-time and Bi-directional Data Replication Between Mainframes and Virtually Any Target

Cloud004_Swirl

tcVISION from Treehouse Software — Enterprise ETL and Real-Time Data Replication Through Change Data Capture (CDC)

Planning a data replication project between Mainframe, Cloud, and Open Systems platforms? tcVISION supports a vast array of integration scenarios throughout the enterprise, providing easy and fast data migration for mainframe application modernization projects. This innovative technology offers comprehensive abilities to identify and capture changes occurring in mainframe and relational databases, then publish the required information to an impressive variety of targets, both Cloud and on-premises.

tcVISION acquires data in bulk or via CDC methods from virtually any IBM mainframe data source (Software AG Adabas, IBM Db2, IBM VSAM, IBM IMS/DB, CA IDMS, CA Datacom, and sequential files), then transforms and delivers it to a wide array of Cloud and Open Systems targets, including AWS, Google Cloud, Microsoft Azure, Confluent, Kafka, PostgreSQL, MongoDB, etc. In addition, tcVISION can extract and replicate data from a variety of non-mainframe sources, including Adabas LUW, Oracle Database, Microsoft SQL Server, IBM Db2 LUW and Db2 BLU, IBM Informix, and PostgreSQL.

Whatever your mix, Treehouse Software has got it covered with tcVISION.


__TSI_LOGO

Moving the right data to the right place at the right time

Visit the Treehouse Software website for more information on tcVISION, or contact us to discuss your needs.

The Mainframe-to-Hybrid Cloud Wave has Arrived, and Treehouse Software is Helping Customers Begin the Ride!

by Joseph Brady, Director of Business Development / AWS and Cloud Alliance Lead at Treehouse Software and Andy Jones, Certified AWS Solutions Architect at Treehouse Software

There are many pioneering organizations with mainframe systems that have already begun their movement to the Cloud, and are now taking advantage of the new and powerful technologies delivering significant business benefits and competitive advantage, including:

Trade Capital Expense for Variable Expense – Instead of having to invest heavily in data centers and servers before you know how you’re going to use them, you can pay only when you consume computing resources, and pay only for how much you consume. https://aws.amazon.com/pricing/

Global Deployments – The AWS Cloud spans 22 geographic regions globally. Enterprises can easily deploy applications in multiple regions around the world with just a few clicks. This means you can provide lower latency and a better experience for your customers at minimal cost. https://aws.amazon.com/about-aws/global-infrastructure/regions_az/

Economies of Scale – By using Cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the Cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay-as-you-go prices. https://aws.amazon.com/economics/

Scale of Services – Amazon Web Services offers a broad set of global Cloud-based products including compute, storage, databases, analytics, networking, mobile, developer tools, management tools, IoT, security, and enterprise applications. These services help organizations move faster, lower IT costs, and scale. https://aws.amazon.com/products/

World Class Security – AWS security compliance is second to none and complies with the most stringent government and private sector requirements. https://aws.amazon.com/compliance/programs/

Extreme High Availability (HA) – The AWS Cloud spans 69 Availability Zones within 22 geographic Regions around the world (https://aws.amazon.com/about-aws/global-infrastructure/regions_az/). By designing your services and applications to be redundant across AWS Availability Zones or Regions, HA is enhanced far beyond a single on-premises data center. https://aws.amazon.com/marketplace/solutions/infrastructure-software/high-availability

Testing at Scale – AWS servers and services can be created and charged on demand for a specific amount of time. This allows customers to create temporary large-scale test environments prior to deployment that are not practical for on-premises environments. Large-scale testing reduces deployment risks and helps to provide a better customer experience. https://aws.amazon.com/

Auto Scaling and Serverless Deployments – AWS has many serverless and autoscaling options available, allowing for scalable computing capacity as required.  For example, AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running. Another example is Amazon Aurora Serverless, which is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL-compatible edition), where the database will automatically start up, shut down, and scale capacity up or down based on your application’s needs. https://aws.amazon.com/serverless/
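
As an illustration of the serverless model, a Lambda function is simply a handler the platform invokes on demand. Here is a minimal Python sketch; the event shape and field names are hypothetical, not tied to any particular AWS trigger:

```python
def handler(event, context):
    """Minimal AWS Lambda-style handler: code runs only when invoked,
    so compute is billed only for the duration of this execution.
    The "name" field in the event is a made-up example input."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}"}

# Invoked locally the same way the platform would call it:
print(handler({"name": "Treehouse"}, None))
```

The same pay-per-invocation idea applies whether the trigger is an API call, a queue message, or a database change event.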

Customer Agility and Innovation – In a Cloud computing environment, new IT resources are only a click away, which means that you reduce the time to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower. https://aws.amazon.com/architecture/

Infrastructure as Code – AWS CloudFormation provides a common language for you to describe and provision all the infrastructure resources in your Cloud environment. CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all regions and accounts. This gives you a single source of truth for your AWS resources. https://aws.amazon.com/cloudformation/ 

However, compared to the number of mainframe shops in enterprises that are just beginning to plan their moves to the Cloud, existing adopters’ numbers are still relatively small.

As an example of the current boom in overall Cloud growth, Gartner projects that the worldwide public Cloud market will grow to a staggering $331B by 2022. By the end of 2019, more than 30% of technology providers’ new software investments will shift from Cloud-first to Cloud-only, further reducing license-based software spending and increasing subscription-based Cloud revenue. (Source: Forbes)

Treehouse Software is a Trusted Partner on Your Mainframe-to-Cloud Journey

Treehouse Software is a well-established company that has served mainframe customers since 1982. We are currently developing a strong presence in the emerging Cloud market space related to mainframe data migration, primarily through our partnership with Amazon Web Services (AWS). AWS knows that most large enterprises run mainframe systems housing vast amounts of historical, customer, and logistics data, and it has helped us bring our tcVISION solution to the AWS Marketplace. tcVISION provides real-time replication between a variety of mainframe and non-mainframe sources, including (Mainframe): VSAM, IMS, Db2, CA Datacom, Adabas, CA IDMS, and flat files; and (Non-mainframe): AWS RDS databases, AWS Aurora, AWS S3, AWS Kinesis, PostgreSQL, MySQL, Kafka, MongoDB, Hadoop, Oracle, Microsoft SQL Server, IBM Db2 LUW and Db2 BLU, IBM Informix, SAP HANA, and many more.

AWS sales and technical leaders within various verticals (GovCloud, Nonprofit, K12/Higher Ed, Automotive, etc.) are also beginning to engage with Treehouse Software to learn more about our unique skills and solution that can help them tap into this potential goldmine of massive amounts of legacy data that needs to be moved to AWS. 

Treehouse Software Helps Customers Begin Moving Their Mainframe Data to the Cloud Immediately

Treehouse Software specializes in providing data replication for enterprise customers who want a fully developed and automated way to move data from their mainframe systems to the Cloud. Treehouse Software’s tcVISION is a low risk option that allows customers to immediately begin moving data to the Cloud while they work on the sometimes massive complexity of application migration. Our experience has shown that projects can become stalled while the application side is being figured out. For example, Treehouse Software recently became involved in a project with a government agency that was facing the complexity of a “big bang” migration, which is slowing the project. We are now presenting them with our tcVISION data replication solution option, where they can replicate data to AWS while maintaining their current environment for modernization and migration of their applications. 

Additionally, Treehouse Software’s decades of experience developing software and working in the IBM mainframe environment, combined with selling and supporting a comprehensive automated data replication product, make us a desirable partner for AWS and many Cloud migration companies.

AWS recently published a blog about tcVISION, our Mainframe-to-Cloud data replication product: https://aws.amazon.com/blogs/apn/real-time-mainframe-data-replication-to-aws-with-tcvision-from-treehouse-software/ 

Additionally, here is a blog about Treehouse Software’s extensive mainframe experience: https://treehousesoftware.wordpress.com/2019/09/12/treehouse-softwares-differentiator-weve-been-helping-enterprise-mainframe-sites-since-1982/

If your enterprise is planning on riding the wave with a Mainframe-to-Cloud migration project, we would welcome the opportunity to help get you moving immediately with an online presentation and demonstration of our tcVISION data replication solution. Contact Treehouse Software today!

Treehouse Software Customer Success: Element Fleet Management Corp. Real-time, Bi-directional Synchronization Between IBM Mainframe Datacom and PostgreSQL

Element_Building

As a leading global fleet management company, Element provides a comprehensive range of fleet services that span the total fleet lifecycle, helping customers optimize their strategies at every stage – from acquisition and financing to program management and vehicle remarketing. These services, together with exceptional tools, technologies and consulting, empower customers to realize extraordinary results.

Business Background

Element Fleet Management has over one million vehicles under management and serves more than 2,800 customers. The company employs over 2,700 staff members across 12 offices, and transaction sizes range from $500,000 to $40 million. Fifty countries are served through the Element-Arval Global Alliance. Top industries served by Element Fleet Management include agriculture, business services, chemical, construction, consumer products, education and non-profit, energy, food and beverage, insurance, manufacturing, pharmaceutical and healthcare, professional services, telecommunications, transportation, and utilities.

Business Issue

Much of Element’s mission-critical data is stored in a legacy IBM mainframe database. The cost to maintain these legacy databases is high, and they lack the features required for modernizing the data architecture. The data is utilized by an extensive number of interlinked programs that depend on this legacy data. Element required a solution that would allow the legacy environment to continue operating while providing real-time replicated data on a modern database and platform, enabling the company to use modern programming and data tools to stay competitive in its market.

Element_Diagram

Technology Solution

Treehouse Software and the customer developed a phased plan to prove out and implement tcVISION. The project started with a Proof of Concept to establish the technology’s effectiveness and technical suitability. This was followed by an architecture phase to define the solution, including configuration management, environments and promotion, high availability, security, disaster recovery, monitoring, operations procedures, script customization, data replication mapping, and training plans. After the architecture was defined, the production deployment phase proceeded in incremental, sprint-like deployments, with additional files deployed into production every few weeks.

This project enabled the tcVISION product to synchronize mission critical financial IBM mainframe Datacom data to a modern PostgreSQL database. Bi-directional, real-time data synchronization enables changes on either platform to be reflected on the other platform (e.g., a change to a PostgreSQL table is reflected on mainframe Datacom). This allows their business to modernize the application on PostgreSQL without disrupting the existing critical work on the legacy system.

NOTE: Many modern tools are using the PostgreSQL database, which greatly enhances business agility (e.g., other divisions within Element are already using Amazon Web Services (AWS), so having tcVISION in place now makes mainframe data delivery to the AWS cloud an easy option). See a step-by-step video of Mainframe-to-PostgreSQL on AWS here. 

Successful, real-time mainframe data replication to an open systems application allows web and mobile access and updates. Element expressed satisfaction with the results and the solution, especially the reduced mainframe MIPS costs and the enhanced ability to respond to changes in the business environment.


Find out more about tcVISION — Enterprise ETL and Real-Time Data Replication Through Change Data Capture

tcVISION supports a vast array of integration scenarios throughout the enterprise, providing easy and fast data migration for mainframe application modernization projects and enabling bi-directional data replication between mainframe, Linux, Unix, and Windows platforms. This innovative technology offers comprehensive abilities to identify and capture changes occurring in mainframe and relational databases, then publish the required information to an impressive variety of targets, both on-premises and Cloud-based.

tcVISION_Connection_Overview_Web01

tcVISION acquires data in bulk or via change data capture methods, including in real time, from virtually any IBM mainframe data source (Software AG Adabas, IBM Db2, IBM VSAM, IBM IMS/DB, CA IDMS, CA Datacom, even sequential files), then transforms and delivers it to virtually any target. In addition, the same product can extract and replicate data from a variety of non-mainframe sources, including PostgreSQL, Adabas LUW, Oracle Database, Microsoft SQL Server, IBM Db2 LUW and Db2 BLU, and IBM Informix.


__TSI_LOGO

Visit the Treehouse Software website for more information on tcVISION, or contact us to discuss your needs.

 

Where Do You Want to Go?

Enterprise ETL and Real-Time, Bidirectional Data Replication Between Virtually Any Source and Target.

Mainframe_to_Cloud

Planning a data replication project between mainframe, Linux, Unix, and Windows platforms? We now have downloadable data sheets that individually focus on supported data replication sources and targets.


Find out more about tcVISION — Enterprise ETL and Real-Time Data Replication Through Change Data Capture

tcVISION supports a vast array of integration scenarios throughout the enterprise, providing easy and fast data migration for mainframe application modernization projects and enabling bi-directional data replication between mainframe, Linux, Unix, and Windows platforms. This innovative technology offers comprehensive abilities to identify and capture changes occurring in mainframe and relational databases, then publish the required information to an impressive variety of targets, both on-premises and Cloud-based.

___tcVISON_Big_Data_002

tcVISION acquires data in bulk or via change data capture methods, including in real time, from virtually any IBM mainframe data source (Software AG Adabas, IBM Db2, IBM VSAM, IBM IMS/DB, CA IDMS, CA Datacom, even sequential files), then transforms and delivers it to virtually any target. In addition, the same product can extract and replicate data from a variety of non-mainframe sources, including Adabas LUW, Oracle Database, Microsoft SQL Server, IBM Db2 LUW and Db2 BLU, IBM Informix, and PostgreSQL.


__TSI_LOGO

Visit the Treehouse Software website for more information on tcVISION, or contact us to discuss your needs.

 

TREETIP: Did You Know About tRelational’s Schema Auto-Generation Feature?

by Joseph Brady, Manager of Marketing and Technical Documentation at Treehouse Software, Inc.

tRelational, the data analysis, modeling, and mapping component of Treehouse Software’s Adabas-to-RDBMS product set, provides three options for developing RDBMS data models and mapping Adabas fields to RDBMS columns:

Option 1: Auto-generation

Option 2: Importation of existing RDBMS schema elements

Option 3: Detailed definition and manipulation of schema and mapping elements using tRelational

The Auto-generation function can be an extremely useful productivity aid. By simply invoking this function and specifying an Adabas file structure, a fully functional corresponding RDBMS schema (Tables, Columns, Primary Keys, Foreign Key relationships and constraints) and appropriate mappings are created virtually instantaneously. The table and column names, datatypes, lengths, and mappings/transformations are all automatically tailored specifically for the RDBMS product, platform, and version; the user need not be an expert in the RDBMS.
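
To illustrate the concept (this is not tRelational’s actual mapping logic, and the format codes and type rules below are invented for the example), schema auto-generation amounts to walking the source file’s field definitions and emitting corresponding DDL:

```python
# Toy sketch of schema auto-generation: map source field definitions
# to RDBMS column types and emit CREATE TABLE DDL. The type mapping
# and field layout here are illustrative, not tRelational's rules.
TYPE_MAP = {"A": "VARCHAR({len})",   # alphanumeric
            "P": "NUMERIC({len})",   # packed decimal
            "B": "INTEGER"}          # binary

def generate_ddl(table, fields):
    """fields: list of (name, format_code, length) tuples."""
    cols = [f"  {name} {TYPE_MAP[fmt].format(len=length)}"
            for name, fmt, length in fields]
    return f"CREATE TABLE {table} (\n" + ",\n".join(cols) + "\n);"

print(generate_ddl("EMPLOYEE", [("EMP_ID", "P", 8), ("NAME", "A", 30)]))
```

The real product additionally derives keys, constraints, and MU/PE sub-tables, which a sketch this small cannot show.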

tRelational’s schema auto-generation simply requires specification of an Adabas file structure and associated options…

tRe_AutoGen

The auto-generated model can be immediately used to generate both RDBMS DDL and parameters for the Data Propagation System (DPS) component. Within minutes of identifying the desired Adabas file or files to tRelational, the physical RDBMS schema can be implemented on the target platform and DPS can begin materializing and propagating data to load into the tables.

It is important to note that these modeling options complement each other and can be used in combination to meet any requirements. Auto-generated schema elements can be completely customized “after the fact”, as can imported elements. Auto-generation can be used at the file level to generate complete tables and table contents at the field level, making it easy to then manually define and map one or more columns within a table, or even to denormalize MU/PE structures into a set of discrete columns.


About Treehouse Software’s tRelational / DPS Product Set

tReDPSMainDiagram01

tRelational / DPS is a robust product set that provides modeling and data transfer of legacy Adabas data into modern RDBMS-based platforms for Internet/Intranet/Business Intelligence applications. Treehouse Software designed these products to meet the demands of large, complex environments requiring product maturity, productivity, feature-richness, efficiency and high performance.

The tRelational component provides complete analysis, modeling and mapping of Adabas files and data elements to the target RDBMS tables and columns. DPS (Data Propagation System) performs Extract, Transformation, and Load (ETL) functions for the initial bulk RDBMS load and incremental Change Data Capture (CDC) batch processing to synchronize Adabas updates with the target RDBMS.
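
Conceptually, the incremental CDC batch step replays captured changes against the target. A toy sketch follows, under assumed event shapes; DPS’s actual log and parameter formats are not shown here:

```python
# Toy sketch of incremental CDC batch processing: replay captured
# insert/update/delete events against a target table (a dict here).
def apply_cdc_batch(target, events):
    """Apply a batch of change events, keyed by primary key."""
    for ev in events:
        if ev["op"] in ("insert", "update"):
            target[ev["key"]] = ev["row"]
        elif ev["op"] == "delete":
            target.pop(ev["key"], None)
    return target

table = {"100": {"NAME": "SMITH"}}
apply_cdc_batch(table, [
    {"op": "update", "key": "100", "row": {"NAME": "SMYTHE"}},
    {"op": "insert", "key": "200", "row": {"NAME": "JONES"}},
    {"op": "delete", "key": "100"},
])
print(table)  # only the inserted row 200 survives
```

Applying events in capture order, as above, is what keeps the target consistent with the sequence of Adabas updates.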

Visit the Treehouse Software website for more information on tRelational / DPS, or contact us to discuss your needs.

3270 Application Modernization Using OpenLegacy, Part 2: Creating the Trail File

by Frank Griffin, Senior Software Developer for Treehouse Software 

This is the second in a series of blog posts concerning the Treehouse partnership with OpenLegacy (See Part 1 here). As discussed last time, OpenLegacy is a way to wrap legacy mainframe applications for presentation to Java, Web, and Mobile clients.

We’ve already covered the conceptual workings of wrapping 3270 applications in general, so this time we’ll examine the first steps of wrapping an actual 3270 application in detail.

If you haven’t already downloaded OpenLegacy, you’ll need to do so in order to follow along in this Proof of Concept. Go to http://www.openlegacy.com and select Resources → Download. You’ll need to provide your name and an email address, which will get you a userid and password for the download site at ftp://ftp.openlegacy.org.

CICS is one of the bulwarks of legacy 3270 applications. These days, it also supports a variety of other communications protocols, but for 3270 it acts very much like a Windows/DOS/Unix command line. You type a transaction name followed by optional arguments, and hit ENTER. CICS looks up the transaction name in a table to find the application program associated with it, and runs that program. The program writes to the terminal, either in line mode or using formatted 3270 screens, and may request further input from the user and provide additional terminal output.
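
Conceptually, that dispatch step is a table lookup. Here is a toy Python model; the program names in the table are illustrative, and nothing here reflects CICS internals:

```python
# Toy model of CICS transaction dispatch: the first token of the input
# line is looked up in a table mapping transaction IDs to programs.
TRANSACTION_TABLE = {
    "CESN": "DFHSNP",   # sign-on transaction (program name illustrative)
    "CEMT": "DFHEMTP",  # master terminal transaction (name illustrative)
}

def dispatch(command_line):
    """Run the program registered for the transaction ID, if any."""
    tranid, *args = command_line.split()
    program = TRANSACTION_TABLE.get(tranid.upper())
    if program is None:
        return f"Transaction '{tranid}' is not recognized"
    return f"running {program} with args {args}"

print(dispatch("CEMT INQUIRE PROG(DFHZNEP)"))
```

The real dispatcher, of course, also handles security, terminal ownership, and task scheduling around that lookup.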

CICS provides a number of utility transaction names with the product. We’ll be looking at two of them today: CESN (sign on to CICS as a known user), and CEMT (general multi-purpose transaction for querying and controlling aspects of CICS). We’ll construct a demo application by connecting to CICS, signing on with the CESN transaction and issuing the transaction CEMT INQUIRE PROG(xxxxxxxx) to obtain information about a specific CICS transaction program. The input to the application will be a userid, password, and application program name. The output will be some of the data associated with that program.

Open the OpenLegacy IDE, and choose New → OpenLegacy Project. Select “Screens” as the backend solution type, “Integration Web” for the frontend solution type, “Mainframe” for the host type, and supply your host’s name or IP address:

[Screenshot: OpenLegacy_Screens01]

[Screenshot: OpenLegacy_Screens02]

Now, we need to run an emulation session to “teach” OpenLegacy how to navigate. Right-click on the project and select OpenLegacy → run-emulation. This will set several things in motion. It will start an internal web server within the IDE on port 1512 and launch an instance of your default browser with a URL of http://localhost:1512. When the browser connects to the web server, it will execute a servlet that will use the open-source s3270 scripting emulator to connect to your mainframe and report the 3270 datastream to the servlet, which will convert the screen to HTML and send it to the browser.
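That screen-to-HTML step can be pictured as a simple transformation from the emulator's character buffer into markup. Here is a minimal, hypothetical Java sketch of the idea (not OpenLegacy's actual servlet code):

```java
// Minimal, hypothetical sketch of converting a 3270-style character
// buffer (rows of fixed-width text) into HTML, roughly the job the
// servlet performs between the emulator and the browser.
public class ScreenToHtml {
    public static String render(String[] rows) {
        StringBuilder html = new StringBuilder("<pre>\n");
        for (String row : rows) {
            // Escape the characters HTML treats specially.
            html.append(row.replace("&", "&amp;")
                           .replace("<", "&lt;")
                           .replace(">", "&gt;"))
                .append('\n');
        }
        return html.append("</pre>").toString();
    }

    public static void main(String[] args) {
        String[] screen = { "CICS SIGNON", "Userid . . .", "Password . ." };
        System.out.println(render(screen));
    }
}
```

The real servlet also has to track fields, attributes, and cursor position, but the shape of the work is the same: characters in, markup out.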

The result is that your browser instance will appear to be a 3270 screen, and you’ll be able to interact with it as if it were a real terminal or terminal emulator. As you do, the servlet is sitting in the middle, logging all terminal activity between you and the mainframe. On our mainframe, the first screen you’ll see is the standard VTAM Logon screen:

[Screenshot: OpenLegacy_Screens03]

We log on to CICS (the VTAM logon screen and the command you use may differ at your site), and depending on how CICS is configured, you may need to type “CESN” explicitly, or CESN may be started for you automatically:

[Screenshot: OpenLegacy_Screens04]

Fill in a valid Userid and Password, and hit ENTER:

[Screenshot: OpenLegacy_Screens05]

Clear the screen (the ESC key in the browser emulator), since CICS transactions have to be invoked from a clear screen. Then type “CEMT INQUIRE PROG(DFHZNEP)” (DFHZNEP is a dummy network error program supplied with CICS) and hit ENTER:

[Screenshot: OpenLegacy_Screens06]

[Screenshot: OpenLegacy_Screens07]

The CEMT transaction has displayed all sorts of information about the DFHZNEP program, which is what we wanted.

At this point, we could continue with 3270 navigation to exit and sign off from CICS, but for simplicity in this example, we’ll just end the emulation here. This will cause OpenLegacy to disconnect the TN3270 session, which drops us out of CICS; if this were TSO, we’d have to follow through or risk leaving disconnected TSO sessions active.

Note that depending on the size of your browser window you may have to scroll down to make the “Logoff | Flip” links visible. Now click on “Logoff”, which will disconnect the terminal, shut down the emulator, and create the trail file which describes everything you’ve done:

[Screenshot: OpenLegacy_Screens08]

The trail file is an XML document which describes in detail your interactions with the mainframe. It will be the input to the next phase of generating the application, which will be the subject of the next post.
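To give a feel for what “an XML document describing your interactions” means, here is an invented trail-like fragment and the standard Java DOM calls that could read it. The element names are hypothetical, chosen for illustration only; OpenLegacy's actual trail schema will differ:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Hypothetical trail-like XML (the real OpenLegacy schema differs).
// The point is only that each screen visit and each field you typed
// is recorded and can be read back with standard DOM parsing.
public class TrailReader {
    static final String TRAIL =
        "<trail>" +
        "<screen name='signon'><field row='10' col='25'>MYUSER</field></screen>" +
        "<screen name='cemt'><field row='1' col='1'>CEMT INQUIRE PROG(DFHZNEP)</field></screen>" +
        "</trail>";

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(TRAIL.getBytes(StandardCharsets.UTF_8)));
        NodeList screens = doc.getElementsByTagName("screen");
        System.out.println("Screens recorded: " + screens.getLength());
    }
}
```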

Request a Free, Online Demo of Treehouse Software’s Mainframe Real-Time and Bidirectional Data Replication / Integration Products

Did you know that Treehouse Software offers online demonstrations of the most complete and flexible portfolio of solutions available anywhere for real-time, bidirectional data replication and integration between mainframe and LUW data sources?

You can see how Treehouse Software’s popular tcACCESS and tcVISION products efficiently and cost-effectively use ETL, CDC, SQL, XML, and SOA technologies for data replication / integration, in an interactive demonstration with our skilled technical experts.

Integrate mainframe data and applications with LUW data sources…

tcACCESS is a comprehensive software solution that enables two-way integration between IBM mainframe systems and client/server, Web, and SOA technologies, without requiring mainframe knowledge or programming effort. A proven platform that facilitates SQL-based integration of mainframe data sources and programs into LUW applications, tcACCESS uses industry standards such as SQL, ODBC, JDBC, and .NET. SQL queries to access mainframe data can be easily created using drag and drop techniques — no programming required.

[Image: tcACCESS_Diagram01]

tcACCESS is a modular software solution. It consists of a base system that can either be implemented as a CICS transaction or as a VTAM application. The base system provides its own communication modules. The heart of the system is the tcACCESS SQL Engine which allows access to mainframe data sources using SQL statements. tcACCESS offers Listener components on the mainframe and on the client, as well as scheduling and security functions. Batch processors automate the information exchange processes between distributed applications.

Enable ETL and bi-directional data replication between mainframe and LUW platforms…

tcVISION allows the exchange of data between heterogeneous databases, from legacy non-relational mainframe sources to standard RDBMSs, in batch or real-time, via CDC (change data capture). With tcVISION, complex replication scenarios can be implemented with ease, including bi-directional “master/master” replication requirements.
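Conceptually, CDC replication is a stream of change events captured on the source and replayed against the target. The toy Java sketch below illustrates that idea only; in-memory maps stand in for the databases, and none of this is tcVISION's actual API:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration of change data capture: change events captured on
// the source are replayed against the target, keeping it in sync.
// In-memory maps stand in for the mainframe source and RDBMS target.
public class CdcSketch {
    enum Op { INSERT, UPDATE, DELETE }

    record Change(Op op, String key, String value) {}

    static void apply(Map<String, String> target, List<Change> stream) {
        for (Change c : stream) {
            switch (c.op()) {
                case INSERT, UPDATE -> target.put(c.key(), c.value());
                case DELETE -> target.remove(c.key());
            }
        }
    }

    public static void main(String[] args) {
        Map<String, String> target = new HashMap<>();
        apply(target, List.of(
            new Change(Op.INSERT, "CUST001", "ACME"),
            new Change(Op.UPDATE, "CUST001", "ACME CORP")));
        System.out.println(target.get("CUST001")); // prints "ACME CORP"
    }
}
```

Bi-directional “master/master” replication adds the harder problems of loop suppression and conflict resolution on top of this basic apply loop.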

[Image: tcVISION_simple]

tcVISION considerably simplifies mainframe data exchange processes. The structure of the existing mainframe data is analyzed by tcVISION processors, then automatically and natively mapped to the target. The data mapping information is presented in a user-friendly and transparent format, even for users with no mainframe knowledge.

See for yourself, right at your desk…

[Image: DemoRequest]

Tell us about your challenges. If you have a project where our mainframe data replication and integration products could be of assistance, our skilled sales and technical staff would be happy to set up a free, online demo. Simply fill out our short Treehouse Software Demo Request Form.

Treehouse Software Customer Case Studies Available Online

[Image: CustomerLogos]

Read about real world application of Treehouse Software products on our Customer Case Studies web page.

Here, you’ll find out how tcACCESS and tcVISION (data integration and replication) and tRelational/DPS and DPSync (ADABAS-to-RDBMS data migration) have been implemented and are being used at some of the largest enterprise sites in the world.

To learn more about how to become another Treehouse Software customer success, contact us today!