TREETIP: tcVISION Supports Data Replication to MongoDB


The tcVISION cross-system integration platform is a robust, proven, and mature solution that is constantly under development to meet the requirements of new technologies, including support for MongoDB.


MongoDB is among the leading NoSQL databases on the market and has been developed for the needs of today’s information technology. MongoDB supports a data model with dynamic schemas and, with GridFS, is especially well suited to storing large amounts of data. It provides automatic failover through built-in replication. MongoDB also offers native, idiomatic drivers for nearly all programming languages and frameworks.

Find out more about MongoDB here: https://www.mongodb.com
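To give a concrete, purely illustrative sense of the target side, the short Python sketch below writes and queries documents with MongoDB’s official pymongo driver. The database, collection, and field names are invented for this example and do not represent any particular tcVISION target layout.

```python
# Illustrative only: writing and querying documents with pymongo.
# Database, collection, and field names are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
accounts = client["replication_demo"]["accounts"]

# Dynamic schema: documents in the same collection need not share identical fields.
accounts.insert_one({"account_id": 1001, "name": "ACME CORP", "balance": 2500.75})
accounts.insert_one({"account_id": 1002, "name": "SMITH, J.",
                     "balance": 120.00, "branch": "VIENNA"})  # extra field is fine

for doc in accounts.find({"balance": {"$gt": 500}}):
    print(doc["account_id"], doc["balance"])
```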

In addition to support for MongoDB, tcVISION features connectivity to other output targets, such as Hadoop (see previous blog about Hadoop support), Adabas LUW, DB2 BLU, and EXASOL. Additionally, new input sources include z/OS VSAM Logstream (CICS and Coupling Facility / Shared VSAM), z/OS VSAM Batch Extension, z/OS DBMS to Logstream, CA IDMS v17, CA Datacom CDC, IMS Active Log, and SMF data.

Find out more about tcVISION — Enterprise ETL and Real-Time Data Replication Through Change Data Capture

tcVISION provides easy and fast data migration for mainframe application modernization projects and enables bi-directional data replication between mainframe, Linux, Unix and Windows platforms.

[Diagram: tcVISION overview]

tcVISION acquires data in bulk or via change data capture methods, including in real time, from virtually any IBM mainframe data source (Software AG Adabas, IBM DB2, IBM VSAM, IBM IMS/DB, CA IDMS, CA Datacom, even sequential files), and transforms and delivers it to virtually any target. In addition, the same product can extract and replicate data from a variety of non-mainframe sources, including Adabas LUW, Oracle Database, Microsoft SQL Server, IBM DB2 LUW and DB2 BLU, IBM Informix, and PostgreSQL.



Visit the Treehouse Software website for more information on tcVISION, or contact us to discuss your needs.

TREETIP: tcVISION Allows for Surprisingly Innovative Uses


Treehouse Professional Services consultants help Adabas / Natural customers in a variety of ways, including DBA services, performance tuning, change management implementation and training, and data replication planning and training. Our experience and long history of service to the Adabas / Natural community helps us create innovative solutions for our customers’ challenges.

Recently, Treehouse Senior Technical Representative Chris Rudolph assisted a customer with a tricky data replication problem. The customer uses tcVISION to perform bi-directional replication between Adabas and an RDBMS during the phase-in of a new application. Unfortunately, the new application incorrectly updated certain columns in the RDBMS, which were then replicated to Adabas. The customer attempted to address the issue by running a series of ADASEL reports against the Adabas PLOG and manually checking for “bad” transactions, a very time-consuming process that pulled the Adabas DBA away from her normal duties.

Chris explained that tcVISION could expedite the process by replicating all transactions for the Adabas file to a journal table that captures the “before” and “after” values of the problematic columns. The developers working on the new application could then identify invalid values, correct the application, and patch the data themselves, allowing the Adabas DBA to return to her normal duties.

The journal table now includes columns to display the “before” and “after” values of the corrupted column, Adabas transaction time, end transaction time, operation and Adabas userid. The customer’s developers immediately recognized immense value from being able to query the journal table to find bad data, patch the data, prove that corruption is no longer taking place, and verify that all corrupted instances of the data have been patched. Journal tables have been added for all replicated Adabas files, and the developers now rely on the journal tables for all of their data patches.
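To illustrate how such a journal table can be queried (the table and column names below are hypothetical, not the customer’s actual schema), the following sketch uses SQLite as a stand-in for the customer’s RDBMS:

```python
# Illustrative only: querying a journal table like the one described above.
# Table and column names are hypothetical; SQLite stands in for the actual RDBMS.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE customer_journal (
        adabas_isn     INTEGER,
        amount_before  NUMERIC,
        amount_after   NUMERIC,
        operation      TEXT,     -- e.g. INSERT / UPDATE / DELETE
        adabas_userid  TEXT,
        tran_time      TEXT,
        end_tran_time  TEXT
    )""")
con.executemany(
    "INSERT INTO customer_journal VALUES (?, ?, ?, ?, ?, ?, ?)",
    [(1, 100.00, 105.00, "UPDATE", "APPUSER", "2016-05-01 10:00", "2016-05-01 10:00"),
     (2, 250.00, -999.00, "UPDATE", "APPUSER", "2016-05-01 10:05", "2016-05-01 10:05")])

# Find updates whose "after" value is invalid (here: negative amounts),
# so developers can patch the data and prove no new corruption is occurring.
bad = con.execute("""
    SELECT adabas_isn, amount_before, amount_after, adabas_userid, tran_time
    FROM customer_journal
    WHERE operation = 'UPDATE' AND amount_after < 0""").fetchall()
print(bad)
```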


Find out more about tcVISION — Enterprise ETL and Real-Time Data Replication Through Change Data Capture

tcVISION provides easy and fast data migration for mainframe application modernization projects and enables bi-directional data replication between mainframe, Linux, Unix and Windows platforms.

[Diagram: tcVISION overview]

tcVISION acquires data in bulk or via change data capture methods, including in real time, from virtually any IBM mainframe data source (Software AG Adabas, IBM DB2, IBM VSAM, IBM IMS/DB, CA IDMS, CA Datacom, even sequential files), and transforms and delivers it to virtually any target. In addition, the same product can extract and replicate data from a variety of non-mainframe sources, including Adabas LUW, Oracle Database, Microsoft SQL Server, IBM DB2 LUW and DB2 BLU, IBM Informix, and PostgreSQL.



Visit the Treehouse Software website for more information on tcVISION, or contact us to discuss your needs.

PRODUCT SPOTLIGHT: tcVISION v6 Overview and Updates


Several exciting new features and updates are now in tcVISION v6, including the new output targets Adabas LUW, DB2 BLU, EXASOL, Hadoop, and MongoDB. Additionally, new input sources include z/OS VSAM Logstream (CICS and Coupling Facility / Shared VSAM), z/OS VSAM Batch Extension, z/OS DBMS to Logstream, CA IDMS v17, CA Datacom CDC, IMS Active Log, and SMF data.

Another recently announced feature is the tcVISION “Direct Loader” for BULK_LOAD processing. The function does not require output to a sequential file; instead, the loader utility for the target DBMS is called via API, with the data passed directly. The Direct Loader supports PostgreSQL, Microsoft SQL Server, and DB2 LUW / DB2 BLU. Its advantage is the elimination of disk access for writing and reading the sequential loader data file. File output is still supported (e.g., where loader data is to be distributed to other machines).
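As a rough illustration of the idea (not the tcVISION Direct Loader itself), the following Python sketch streams rows straight into PostgreSQL’s COPY interface through the psycopg2 driver, so no intermediate loader file is written or re-read. The table name and connection string are placeholders.

```python
# Sketch of the "direct load" idea for PostgreSQL (not the tcVISION Direct Loader):
# rows are streamed straight into COPY via the driver API, so no intermediate
# sequential loader file is written to disk or re-read. Table and DSN are placeholders.
import io
import psycopg2

rows = [(1, "ACME CORP", "2500.75"),
        (2, "SMITH, J.", "120.00")]

# Build an in-memory, tab-delimited stream in COPY text format.
buf = io.StringIO("".join("\t".join(map(str, r)) + "\n" for r in rows))

with psycopg2.connect("dbname=target user=loader") as con:
    with con.cursor() as cur:
        cur.copy_expert("COPY accounts (id, name, balance) FROM STDIN", buf)
```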

Finally, as mentioned in a previous Treehouse Blog, tcVISION v6 also brings the newly enhanced web statistics functionality and web server. Any standard web browser (Internet Explorer, Firefox, Opera, Chrome, Safari, etc.) can access this server.

This valuable feature enables users to view data from the tcVISION Manager Monitor, and statistical and operational information from the tcVISION Manager network.
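Because the information is served over plain HTTP, it can also be fetched programmatically. The snippet below is purely hypothetical: the host, port, and path are placeholders, since the actual URLs depend on how the tcVISION web server is configured at a given site.

```python
# Hypothetical example: fetching tcVISION Manager statistics over HTTP.
# Host, port, and path are placeholders; the real URLs depend on how the
# tcVISION web server is configured at your site.
import urllib.request

url = "http://tcvision-manager.example.com:8080/statistics"  # placeholder URL
with urllib.request.urlopen(url, timeout=10) as resp:
    print(resp.status)
    print(resp.read(500).decode("utf-8", errors="replace"))  # first part of the response
```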


Find out more about tcVISION — Enterprise ETL and Real-Time Data Replication Through Change Data Capture

tcVISION provides easy and fast data migration for mainframe application modernization projects and enables bi-directional data replication between mainframe, Linux, Unix and Windows platforms.

[Diagram: tcVISION overview]

Visit the Treehouse Software website for more information on tcVISION, or contact us to discuss your needs.

Customer Success: tcVISION / Hadoop Integration


BAWAG P.S.K. is one of the largest and most profitable banks in Austria, with more than 1.6 million private and business customers and a well-known brand in the country. Its business strategy is oriented toward low risk and high efficiency, with business segments in Retail Banking and Small Business, Corporate Lending and Investments, and Treasury Services and Markets. At the center of the BAWAG P.S.K. business strategy is the offering of easy-to-understand, transparent, and first-rate products and services that meet the requirements of its customers.

BAWAG P.S.K. (Bank für Arbeit und Wirtschaft und Österreichische Postsparkasse Aktiengesellschaft), Vienna, operates its IT on the z/OS operating system. The corporate data is stored in DB2 databases, and Oracle is the database platform for the open systems environments. In connection with another project, BAWAG P.S.K. already had a client component of tcVISION installed. Magister Markus Lechner, Head of IT Applications: “tcVISION was already in use and we had good experiences as far as functionality and support are concerned. Because we were in the planning process for the implementation of another project, we included tcVISION in the list of software solutions. The goal of this project was the reduction of the load on the IBM mainframe and, as a result, the reduction of costs. The intention was to offload data from our core database system to a less expensive system in real time and to provide read access to that data from the new infrastructure. The reason was constantly increasing CPU costs on the mainframe caused by the growing transaction load of online banking, mobile banking, and self-service devices. A large percentage of that load was caused by read-only transactions.”

Markus Lechner continues: “After the tcVISION presentation, we arranged a Proof of Concept. The important aspects of the POC were not only the functionality of tcVISION within the project; we also wanted to see whether our expectations regarding performance and CPU consumption on the mainframe would be met. In addition to tcVISION, we also evaluated another product. All of our expectations were met to our full satisfaction during the POC, and we made the decision to go ahead with tcVISION.”

After a short implementation period, the project has now been in production for one year. Markus Lechner describes the project: “The primary objective of the project with tcVISION was the reduction of CPU load on the mainframe to reduce our costs. Our intention was to offload data from our core database system to a less expensive system in real time and to provide read access to that data from the new infrastructure.

[Figure: tcVISION with Hadoop]

We use tcVISION for the real-time replication, and we use Apache Hadoop as a cost-efficient system for storing the data. In addition to the primary usage scenario, we also benefit from covering additional use cases, including real-time event handling and stream processing, analytics based on real-time data, and the ability to report on and analyze structured and unstructured data with excellent performance. The system can be operated inexpensively on commodity hardware and has no scalability limitations. Compared to the savings, the replication costs (CPU consumption) of tcVISION are very low. The support provided was excellent during both the implementation phase and the production phase. Inquiries by telephone or e-mail received prompt responses, and problems that came up during this period were solved as quickly as possible, even when the tcVISION software had to be extended.” There are additional plans to extend the use of tcVISION in the future, including real-time replication from Oracle into the data lake.

Magister Markus Lechner draws a conclusion: “tcVISION enables us to significantly reduce our mainframe costs through real-time replication to a less expensive environment. tcVISION performs a very economical log-file-based replication. In addition, we are now in a position to implement numerous application cases based upon the replicated data that would have been too expensive and resource-intensive on the mainframe. Real-time event handling, real-time analytics, and real-time fraud prevention are only a few of the use cases that we currently cover.”


tcVISION

[Diagram: tcVISION overview]

tcVISION enables bidirectional replication for DB2, Oracle, and SQL Server running on Linux/Unix/Windows, and synchronizes each data source, first by doing a bulk load from source(s) to target and then by replicating only changes (and only committed changes) from source(s) to target. There can therefore never be ambiguity as to whether a query against the target database involves uncommitted data.
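The “committed changes only” behavior can be illustrated with a minimal sketch (an assumed record format, not tcVISION’s internal design): captured changes are buffered per transaction and forwarded to the target only when a commit is seen, while rolled-back work is discarded.

```python
# Minimal sketch (assumed record format, not tcVISION internals): buffer captured
# changes per transaction and forward them to the target only on COMMIT, so
# uncommitted or rolled-back work never reaches the target database.
from collections import defaultdict

def apply_committed(change_stream, apply_to_target):
    pending = defaultdict(list)          # transaction id -> buffered changes
    for rec in change_stream:
        tx = rec["tx_id"]
        if rec["type"] == "CHANGE":
            pending[tx].append(rec)
        elif rec["type"] == "COMMIT":
            for change in pending.pop(tx, []):
                apply_to_target(change)  # e.g. INSERT/UPDATE/DELETE on the target
        elif rec["type"] == "ROLLBACK":
            pending.pop(tx, None)        # discard uncommitted work

stream = [
    {"tx_id": 1, "type": "CHANGE", "op": "UPDATE", "key": 7, "after": {"bal": 10}},
    {"tx_id": 2, "type": "CHANGE", "op": "INSERT", "key": 9, "after": {"bal": 55}},
    {"tx_id": 2, "type": "ROLLBACK"},
    {"tx_id": 1, "type": "COMMIT"},
]
apply_committed(stream, lambda c: print("apply", c["op"], c["key"], c["after"]))
```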

Read other tcVISION customer success stories here.



Visit the Treehouse Software website for more information on tcVISION, or contact us to discuss your needs.

TREETIP: tcVISION Supports Hadoop

by Joseph Brady, Manager of Marketing and Technical Documentation at Treehouse Software, Inc.


Hadoop and Big Data are revolutionizing data processing. Because of increasing digitalization, the Internet, the rising importance of social media, and the Internet of Things, data diversity is growing to dimensions that did not exist before.

To process and maintain large and diverse data sets in a meaningful way, new technologies (such as Hadoop) have been developed. What is Hadoop? Hadoop is a free, Java-based programming framework that supports the processing of large data sets in a distributed computing environment. It is part of the Apache project sponsored by the Apache Software Foundation.

Enterprises with heterogeneous IT infrastructures, especially larger corporations across all industry sectors and public institutions, very often include mainframe technology. These enterprises now face the challenge of integrating existing mainframe data into a Hadoop platform, in real time.

Data integration technology has also evolved greatly over the past decades. Today, a standard ETL solution is not sufficient; data integration must now encompass the entire data exchange process in terms of replication and synchronization. Data exchange has become a time-critical process, and near real-time is increasingly the only accepted method for meeting up-to-date information requirements in the growing co-existence of mainframe and Hadoop technologies.

The tcVISION Solution

An important part of the added value of modern IT systems is the latency-free integration of data and processes across transactional and analytical areas. tcVISION, the cross-system integration platform from Treehouse Software, is unique, efficient, and reliable. With tcVISION, mainframe data can be integrated quickly and easily, in near real time, into Hadoop-based operational applications or Business Intelligence and analytics.

The tcVISION solution is proven and mature, and is constantly under development to meet the requirements of new technologies, including support for Hadoop in Version 6.

The main focus of the tcVISION integration platform is real-time synchronization that integrates mainframe data into Hadoop-based solutions.

[Figure: tcVISION Hadoop solution]

The tcVISION Technology Components

The tcVISION integration platform consists of a variety of state-of-the-art technology components, which cover much more than simply an ETL process.

  1. Data exchange in the sense of real-time synchronization becomes a single-step operation with tcVISION.
  2. No additional middleware is required.
  3. Modern change data capture technologies allow an efficient selection of the required data from the source system, with a focus on changed data. The data exchange process is reduced to the necessary minimum, which results in lower costs for cross-system data integration.
  4. tcVISION also supports the fast and efficient load of large volumes of mainframe data into Hadoop. In this context the processor costs of the mainframe are low and negligible.
  5. An integrated Data Repository guarantees an overall cross-platform and transparent data management. Mainframe knowledge is not required.
  6. tcVISION includes a rule engine to transform data into a target-compliant format, or allows user-specific processing via supplied APIs (a simple illustration follows this list).
  7. The integrated staging concept supports the offload of changed data in “Raw Format” to less expensive processor systems. This reduces mainframe processor resources to a minimum. The preparation of the data for the target system can be performed on a less expensive platform (Linux, UNIX or MS-Windows).
  8. The transfer to and feeding of data into Hadoop is part of the tcVISION data exchange process. No intermediate files are required.
  9. The exchange of large volumes of data between a production mainframe environment and Hadoop can run in parallel processes to reduce latencies to a minimum.
  10. The tcVISION integration platform contains comprehensive control mechanisms and monitoring functions for an automated data exchange.
  11. tcVISION has been designed in a way that Hadoop-based projects can be deployed with total project autonomy and maximum reduction of mainframe resources.
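As a simple illustration of the rule-engine idea mentioned in item 6, a captured source record can be transformed into a target-compliant row as sketched below. The field names, rules, and syntax are invented for this example and are not tcVISION’s rule language.

```python
# Invented field names and rules, illustrating rule-based transformation of a
# captured source record into a target-compliant row (not tcVISION's rule syntax).
from datetime import datetime

RULES = {
    "CUST-NAME":  lambda v: v.strip().title(),                      # trim and re-case
    "BALANCE":    lambda v: int(v) / 100,                           # implied two decimals
    "BIRTH-DATE": lambda v: datetime.strptime(v, "%Y%m%d").date(),  # YYYYMMDD -> date
}

TARGET_COLUMNS = {"CUST-NAME": "customer_name",
                  "BALANCE": "balance",
                  "BIRTH-DATE": "birth_date"}

def transform(source_record):
    """Apply each field's rule and rename it to its target column."""
    return {TARGET_COLUMNS[f]: RULES[f](v) for f, v in source_record.items()}

print(transform({"CUST-NAME": "  ACME CORP ", "BALANCE": "250075",
                 "BIRTH-DATE": "19870412"}))
```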

With tcVISION, data synchronization between mainframe and Hadoop pays off

  • Near real-time replication of mainframe data to Hadoop allows actual real-time analytics, or the relocation of mainframe applications (e.g., Internet applications such as online banking and e-government) to Hadoop, with synchronous data on both platforms.
  • Because of the concentration on changed data, the costs of the data exchange are greatly reduced.
  • The utilization of mainframe resources is reduced to a level that minimizes costs for mainframe know-how and mainframe MIPS.
  • Data exchange processes can be deployed and maintained with tcVISION without mainframe knowledge, so costs are saved and Hadoop projects can be developed and put into production faster.
  • The near real-time replication of tcVISION from mainframe to Hadoop allows the relocation of BI reporting and analytic applications to the more cost-efficient and, for these applications, more powerful Hadoop platform.

Visit the Treehouse Software website for more information on tcVISION, or contact us to discuss your needs.

Mainframe CDC from Treehouse Software

The globalization of markets, increase of data volumes, and high demand for up-to-date information require new data transfer and exchange solutions for heterogeneous IT architectures, and as many customers have discovered, Treehouse Software has the right product (or combination of products) to meet any conceivable mainframe data migration, replication, or integration requirement. To meet many of these needs, Treehouse Software’s proven and mature tcVISION product moves as little data as possible and as much as necessary. tcVISION is an innovative software solution that processes changed data in real time, at intervals, or on an event-driven basis.

[Figure: change data capture]

The tcVISION solution focuses on change data capture (CDC) when transferring information between mainframe data sources and LUW databases and applications. Changes occurring in any mainframe application data are tracked and captured, and then published to a variety of RDBMS and other targets.

[Diagram: tcVISION overview]

tcVISION enables bidirectional replication for DB2, Oracle, and SQL Server running on Linux/Unix/Windows, and synchronizes each data source, first by doing a bulk load from source(s) to target and then by replicating only changes (and only committed changes) from source(s) to target. There can therefore never be ambiguity as to whether a query against the target database involves uncommitted data.

Read some tcVISION customer success stories here.


Visit the Treehouse Software website for more information on tcVISION, or contact us to discuss your needs.

 

TREETIP: Did You Know About tRelational’s Schema Auto-Generation Feature?

by Joseph Brady, Manager of Marketing and Technical Documentation at Treehouse Software, Inc.

tRelational, the data analysis, modeling, and mapping component of Treehouse Software’s Adabas-to-RDBMS product set, provides three options for developing RDBMS data models and mapping Adabas fields to RDBMS columns:

Option 1: Auto-generation

Option 2: Importation of existing RDBMS schema elements

Option 3: Detailed definition and manipulation of schema and mapping elements using tRelational

The Auto-generation function can be an extremely useful productivity aid. By simply invoking this function and specifying an Adabas file structure, a fully functional corresponding RDBMS schema (tables, columns, primary keys, foreign key relationships and constraints) and the appropriate mappings are created virtually instantaneously. The table and column names, datatypes, lengths, and mappings/transformations are all automatically tailored for the specific RDBMS product, platform, and version; the user need not be an expert in the RDBMS.

tRelational’s schema auto-generation simply requires specification of an Adabas file structure and associated options…

[Screenshot: tRelational auto-generation options]

The auto-generated model can be immediately used to generate both RDBMS DDL and parameters for the Data Propagation System (DPS) component. Within minutes of identifying the desired Adabas file or files to tRelational, the physical RDBMS schema can be implemented on the target platform and DPS can begin materializing and propagating data to load into the tables.

It is important to note that these modeling options complement each other and can be used in combination to meet any requirements. Auto-generated schema elements can be completely customized after the fact, as can imported elements. Auto-generation can be used at the file level to generate complete tables, or at the field level to generate table contents, making it easy to then manually define and map one or more columns within a table, or even to denormalize MU/PE structures into a set of discrete columns.
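As a toy sketch of the mapping idea (this is not tRelational’s algorithm, and the file and field names are invented), an Adabas file description can be turned into a parent table plus a child table for each MU field, or the MU occurrences can instead be denormalized into discrete columns:

```python
# Toy sketch (not tRelational): map an Adabas file description to relational DDL.
# By default an MU (multiple-value) field becomes a child table with a foreign key;
# alternatively its occurrences can be denormalized into discrete columns.
FILE_DESC = {
    "name": "EMPLOYEES",
    "fields": [
        {"name": "PERSONNEL-ID", "type": "CHAR(8)", "key": True},
        {"name": "FIRST-NAME",   "type": "CHAR(20)"},
        {"name": "PHONE",        "type": "CHAR(15)", "mu": True},  # multiple-value field
    ],
}

def generate_ddl(fd, denormalize_mu=0):
    table = fd["name"].lower()
    parent_cols, child_tables = [], []
    for fld in fd["fields"]:
        col = fld["name"].lower().replace("-", "_")
        if fld.get("mu"):
            if denormalize_mu:   # e.g. phone_1 .. phone_N as discrete columns
                parent_cols += [f"{col}_{i} {fld['type']}"
                                for i in range(1, denormalize_mu + 1)]
            else:                # default: child table with foreign key and sequence
                child_tables.append(
                    f"CREATE TABLE {table}_{col} (personnel_id CHAR(8) "  # assumed key column
                    f"REFERENCES {table}, seq INT, {col} {fld['type']});")
        else:
            pk = " PRIMARY KEY" if fld.get("key") else ""
            parent_cols.append(f"{col} {fld['type']}{pk}")
    return [f"CREATE TABLE {table} ({', '.join(parent_cols)});"] + child_tables

for stmt in generate_ddl(FILE_DESC):   # or generate_ddl(FILE_DESC, denormalize_mu=3)
    print(stmt)
```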


About Treehouse Software’s tRelational / DPS Product Set

[Diagram: tRelational / DPS overview]

tRelational / DPS is a robust product set that provides modeling and data transfer of legacy Adabas data into modern RDBMS-based platforms for Internet/Intranet/Business Intelligence applications. Treehouse Software designed these products to meet the demands of large, complex environments requiring product maturity, productivity, feature-richness, efficiency and high performance.

The tRelational component provides complete analysis, modeling and mapping of Adabas files and data elements to the target RDBMS tables and columns. DPS (Data Propagation System) performs Extract, Transformation, and Load (ETL) functions for the initial bulk RDBMS load and incremental Change Data Capture (CDC) batch processing to synchronize Adabas updates with the target RDBMS.

Visit the Treehouse Software website for more information on tRelational / DPS, or contact us to discuss your needs.