tcVISION and Amazon Web Services (AWS) – A Winning Combination

Why use tcVISION with AWS?

[Diagram: tcVISION mainframe-to-AWS replication]

IBM mainframes are in use today at many of the world’s largest corporations.  They often contain key, mission-critical data that is difficult to exploit with the latest technologies, due to the proprietary nature and data formats of their databases.  tcVISION changes this, allowing your corporation to exploit this data with the power of the world’s leading cloud platform – AWS.

tcVISION is a software tool that enables data replication between legacy mainframe, cloud, and open-systems environments.  It supports the key legacy mainframe databases in use today, and has the intelligence to convert the proprietary data formats of Adabas, Datacom, IDMS, VSAM, IMS, and Db2 using data wizards specific to each database.  It handles the required transformations (EBCDIC to ASCII, endianness, copybooks to columns, embedded dependencies), converting these proprietary formats for target databases such as Hadoop, Oracle, PostgreSQL, Aurora, and MySQL.
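
Two of the transformations named above can be illustrated with a short sketch.  This is not tcVISION code, just a standard-library Python illustration of EBCDIC decoding and big-endian integer conversion (cp037 is assumed as the EBCDIC code page; real deployments vary by region and site):

```python
# Minimal sketch of two mainframe-data transformations, using only the
# Python standard library. Not tcVISION's implementation.

EBCDIC_CODEC = "cp037"  # common US/Canada EBCDIC code page; an assumption

def ebcdic_to_str(raw: bytes) -> str:
    """Decode an EBCDIC byte string into a Python (Unicode) string."""
    return raw.decode(EBCDIC_CODEC)

def big_endian_to_int(raw: bytes, signed: bool = True) -> int:
    """Mainframe binary fields are big-endian; decode to a native int."""
    return int.from_bytes(raw, byteorder="big", signed=signed)

# b'\xc8\xc5\xd3\xd3\xd6' is "HELLO" in EBCDIC code page 037
print(ebcdic_to_str(b"\xc8\xc5\xd3\xd3\xd6"))
print(big_endian_to_int(b"\x00\x00\x01\x00"))
```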

[Diagram: tcVISION connection overview]

tcVISION provides ETL bulk-load functionality and uses native change data capture (CDC) mechanisms, keeping mainframe databases and modern databases in constant real-time synchronization.  Its support for bi-directional replication allows changes to be captured and applied in two different environments, including modern databases.  tcVISION is a powerful tool for data and application modernization, allowing legacy mainframe teams to continue their work while modern teams use leading-edge technology on the same data.  It gives modern data warehouses and databases access to key corporate data residing on legacy systems.

While tcVISION can be utilized in your corporate datacenter, using tcVISION with AWS has several key advantages:

  • Rapid global deployments to 18 key geographic regions
  • Database as a Service – Relational Database Instances (for Oracle, PostgreSQL, MySQL, SQL Server, MariaDB, and Aurora), greatly simplifying administration
  • Powerful, low cost, highly available databases, such as Aurora
  • Extreme high availability using availability zones
  • Numerous infrastructure options to optimize the environment and enhance performance
  • Bursting capabilities to accommodate peak loads
  • Controlling costs through infrastructure optimization
  • Moving capital expense to operational expenses
  • Extreme security at every layer, maintained by world class security experts

“Security, scalability, resiliency, recoverability, and cost of applications in the cloud are better than what almost any private enterprise could achieve on its own”:

https://www.cloudcomputing-news.net/news/2017/may/17/cloud-computing-goes-beyond-tipping-point-financial-services-says-dtcc/

Rapid Global Connectivity

[Image: global connectivity]

The AWS Cloud spans 55 Availability Zones within 18 key geographic Regions around the world.  When establishing an EC2 instance (a virtual Windows or Linux instance) or a Relational Database instance, the geographic region and availability zone are parameters.  In minutes, you can establish tcVISION instances with a fully functional database instance in key global business centers, including the Americas, Europe, and Asia.  Once the data is replicated, your global customers can benefit from the low latency of data center proximity. Global data can be synchronized over Amazon’s high-speed global network using technologies such as CloudFront and S3 Transfer Acceleration. Whatever your company’s size, leveraging these capabilities gives your customers low-latency global access to the key corporate data residing on your IBM mainframe.

Powerful and Cost Effective “Relational Database as a Service” Instances

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud.  Amazon RDS is available on several database instance types – optimized for memory, performance, or I/O – and provides you with six familiar database engines to choose from: Amazon Aurora, MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server.

tcVISION enables you to migrate or perform real-time replication between your on-premises mainframe and Amazon RDS databases.  Your business can rapidly deploy databases globally within minutes with minimal administration requirements.

Amazon RDS databases offer several advantages.  One especially powerful database is Amazon Aurora, which can be established with either a MySQL- or PostgreSQL-compatible API.  Your business can leverage these key advantages:

  • Serverless Aurora: With Aurora Serverless, no database size pre-allocation is required.  The database automatically scales up and down based on load, and you pay only for the resources utilized.
  • High availability: HA is built into the architecture of Aurora, with six copies of data maintained across three Availability Zones – two copies in each Availability Zone (a data center within a region) – boosting availability with minimal disruption even if an entire Availability Zone is down. In addition, the database is continuously backed up to Amazon S3, so you can take advantage of the high durability of S3 (99.999999999%) for your backups.
  • Performance: Amazon Aurora provides 5X the throughput of standard MySQL and 3X the throughput of standard PostgreSQL running on the same hardware.
  • Manageability: Amazon Aurora simplifies administration by handling routine database tasks such as provisioning, patching, backup, recovery, failure detection, and repair.
  • Storage Scaling: Aurora storage scales automatically up to 64TB per database instance, growing and rebalancing I/O across the fleet to provide consistent performance.

Scalable and High Performance

[Image: scalability]

Using AWS tools, Auto Scaling, and Elastic Load Balancing, your application can scale up or down based on demand.

Unlike a traditional data center, in the AWS infrastructure environment you pay only for what you use, and your company is charged only for actual resource utilization.  Users can bring up EC2 virtual Windows or Linux instances for any length of time. For example, if a mainframe database bulk load takes only two hours to complete, the tcVISION EC2 instances can be brought up for the two hours needed and shut down when the processing is finished.  AWS EC2 instances can be dynamically stopped and restarted while retaining their data on AWS Elastic Block Store. This way, your business pays only for what is needed, and the instance sizes (CPU, memory, and I/O bandwidth) can be optimized for the required mainframe data migration or data replication process.
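
A back-of-envelope calculation illustrates the pay-per-use point.  The hourly rate below is a made-up placeholder, not an actual AWS price:

```python
# Compare a tcVISION EC2 instance run only for a daily two-hour bulk-load
# window against one left running 24x7. Hypothetical rate, integer cents
# to keep the arithmetic exact.

RATE_CENTS_PER_HOUR = 20      # hypothetical on-demand rate ($0.20/hour)
HOURS_PER_MONTH = 730

always_on_cents = RATE_CENTS_PER_HOUR * HOURS_PER_MONTH   # never stopped
windowed_cents = RATE_CENTS_PER_HOUR * 2 * 30             # 2 hours x 30 days

print(f"always-on: ${always_on_cents / 100:.2f}/month")
print(f"windowed:  ${windowed_cents / 100:.2f}/month")
```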

AWS resources, such as EC2 instances, storage, and databases, can be rapidly provisioned and de-provisioned.  This allows your corporation to approach the architecture with an agile mentality.  Once the initial design and architecture are complete, they can be rapidly tested and adjusted as required to optimize all aspects, including security, availability, performance, and operational readiness. The AWS environment allows users to experiment with various architectures and sizings.  For example, a customer can test various databases and EC2 instance sizes and types, and make changes based on test results to optimize sizing as required.

tcVISION CDC replication typically does not need consistently high levels of CPU, but it benefits significantly from full access to very fast CPUs when mainframe database update activity spikes.  T2 burstable instances are engineered specifically for these use cases.

Using AWS Auto Scaling, you maintain optimal application performance and scalability, even when workloads are periodic, unpredictable, or continuously changing.  AWS Auto Scaling continually monitors your applications to make sure they are operating at your desired performance levels.  When demand spikes, AWS Auto Scaling automatically increases the capacity of constrained resources, so you maintain a high quality of service.

Monitoring of Environmental Health


In traditional data center environments, vendor supplied infrastructure monitoring tools are often deployed to monitor the infrastructure environmental health.  They measure and alert on environmental factors such as CPU utilization, memory usage, network bandwidth, and disk space.

In the AWS environment, Amazon CloudWatch monitoring is integrated with AWS resources such as EC2 instances and Relational Database instances. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources. This allows you to set alerts at user-specified thresholds and take proactive measures, such as notifying individuals and taking automated actions on AWS resources.  Given the dynamic nature of AWS resources, proactive measures, including dynamic re-sizing of infrastructure resources, can be initiated automatically.
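
As an illustration, a CloudWatch alarm definition along these lines (all names, thresholds, instance IDs, and ARNs here are hypothetical) could notify an operator when a tcVISION EC2 instance's CPU stays high for 15 minutes:

```json
{
  "AlarmName": "tcvision-manager-cpu-high",
  "Namespace": "AWS/EC2",
  "MetricName": "CPUUtilization",
  "Dimensions": [{ "Name": "InstanceId", "Value": "i-0123456789abcdef0" }],
  "Statistic": "Average",
  "Period": 300,
  "EvaluationPeriods": 3,
  "Threshold": 80,
  "ComparisonOperator": "GreaterThanThreshold",
  "AlarmActions": ["arn:aws:sns:us-east-1:111122223333:tcvision-alerts"]
}
```

The alarm evaluates the average CPU over three consecutive five-minute periods and publishes to an SNS topic when it stays above 80 percent.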

World Class Security

[Image: security]

The AWS environment is enabled for security at every layer.

AWS resources are contained within your Virtual Private Cloud (VPC).  The Amazon VPC enables you to launch AWS resources into an isolated virtual private network that you define.  The Amazon VPC provides features that you can use to increase and monitor security, and security groups act as a firewall for associated EC2 instances, controlling the inbound and outbound traffic.  Network access control lists (ACLs) act as a firewall for associated subnets controlling inbound and outbound traffic at the subnet level.  Flow logs capture information about the IP traffic going to and from network interfaces in your VPC.

AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely.  You can create users in IAM and assign them individual security credentials (access keys, passwords, and multi-factor authentication devices).  You can create roles in IAM and manage permissions to control which operations can be performed by the entity or AWS service that assumes the role.  If you already manage user identities outside of AWS, you can use an IAM identity provider instead of creating IAM users in your AWS account.  With an identity provider, you can manage your user identities outside of AWS and give these external identities permission to use AWS resources in your account.
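
For example, a minimal IAM policy along these lines (the bucket name is hypothetical) could grant a replication instance's role access to a single staging bucket and nothing else, following the least-privilege approach described above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-staging-bucket/*"
    }
  ]
}
```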

Data can be encrypted at rest and in transit.  Data encryption capabilities are available in AWS storage and database services such as EBS, S3 Glacier, Oracle RDS, SQL Server RDS, and Redshift, among many others.  Flexible key management options, including AWS Key Management Service (KMS), allow you to choose whether to have AWS manage the encryption keys or to keep complete control over the keys yourself.



Contact Treehouse Software Today for an Online Demonstration of Mainframe-to-AWS Cloud Data Replication


Request a tcVISION live demo, where we show tcVISION replicating data from a mainframe to AWS Aurora in the Cloud. Just fill out our demo request form, and a Treehouse representative will be in touch to schedule a convenient date and time.

Treehouse Software Customers are Looking Upwards to Mainframe-to-Cloud Data Replication

The search is on for a mature, easy-to-implement Extract, Transform, and Load (ETL) solution for migrating mission-critical data to the cloud.

[Diagram: mainframe to cloud]

Treehouse Software’s tcVISION supports a vast array of integration scenarios throughout the enterprise, providing easy and fast data migration for mainframe application modernization projects, and enabling data replication between mainframe, Linux, Unix and Windows platforms. This innovative technology offers comprehensive abilities to identify and capture changes occurring in mainframe and relational databases, then publish the required information to an impressive variety of targets, both on-premise and cloud-based.

Mainframe-to-Cloud Use Case Example…

BAWAG P.S.K. is one of the largest banks in Austria, with more than 1.6 million private and business customers and is a well-known brand in the country. Their business strategy is oriented towards low risk and high efficiency.

BAWAG was looking to reduce the load on their IBM mainframe and as a result, reduce costs. The project involved offloading data from their core database system to a less expensive system, in real-time, and to provide read access from that system to the new infrastructure. The primary motivator for this data migration was the constantly increasing CPU costs on the mainframe caused by the growing transaction load of online banking, mobile banking, and the use of self service devices.

BAWAG ultimately migrated their online banking application to the cloud using tcVISION. Real-time event handling, real-time analytics, and real-time fraud prevention are only a few of the use cases that the bank’s solution currently covers.

[Diagram: BAWAG replication architecture]

The bank decided to use tcVISION to migrate z/OS DB2 data into a Hadoop data lake (a storage repository that holds raw data in its native format). 20 million transactions were processed within 15 minutes.

Cost Reductions Seen Immediately

BAWAG is now seeing a 35-40 percent reduction in MIPS consumption for online processing during business hours. After hours, consumption is lower, because the mainframe workload is mainly batch processing. Currently, approximately 30 GB of changed data (uncompressed) is replicated from DB2 per day.

In addition to the primary usage scenario, BAWAG can also cover additional use cases, including real-time event handling and stream processing, analytics based on real-time data, and reporting and analysis of structured and unstructured data with excellent performance. The system can be operated inexpensively on commodity hardware and has no scalability limitations. Compared to the savings, the replication costs (tcVISION’s CPU consumption) are now very low.

Additionally, BAWAG plans to extend its use of tcVISION in the future, including implementation of real-time replication from Oracle into the data lake.


Find out more about tcVISION — Enterprise ETL and Real-Time Data Replication Through Change Data Capture

tcVISION acquires data in bulk or via change data capture methods, including in real time, from virtually any IBM mainframe data source (Software AG Adabas, IBM DB2, IBM VSAM, IBM IMS/DB, CA IDMS, CA Datacom, even sequential files), transforming and delivering it to virtually any target. In addition, the same product can extract and replicate data from a variety of non-mainframe sources, including Adabas LUW, Oracle Database, Microsoft SQL Server, IBM DB2 LUW and DB2 BLU, IBM Informix, and PostgreSQL.



Visit the Treehouse Software website for more information on tcVISION, or contact us to discuss your needs.

TREETIP: tcVISION now supports IAM Files – a solution of Innovation Data Processing


“The Innovation Access Method (IAM) is a transparent alternative to VSAM KSDS, ESDS, and (as a cost option) to AIX and RRDS files. The savings achieved over VSAM can vary, but IAM typically uses 40-80% less I/Os, 20-70% less CPU and 30-70% less DASD. Savings in batch elapsed times and online response times can range from 20-80%.” (INNOVATION Data Processing)

Users of IAM files can now replicate data in real time with tcVISION into any DBMS or NoSQL system in the distributed systems environment, as well as into Cloud and Hadoop platforms. Thus, the latest technology can easily be used for analytics and business intelligence. tcVISION also supports bi-directional synchronization between IAM files and distributed DBMSs, so data between Oracle, SQL Server, DB2 LUW, Informix, and Adabas LUW can be synchronized in real time without programming effort. The transparent integration of IAM files in a heterogeneous system environment can be implemented easily and quickly.


Treehouse Proof of Concept: Bi-directional Replication Between Adabas and SQL Server

Chris Rudolph and Kevin Heimbaugh, Senior Technical Representatives for Treehouse Software, visited a customer site (a large retail and distribution company) to perform a five-day proof of concept (POC) of tcVISION with bi-directional replication between Software AG’s Adabas and Microsoft SQL Server.

Chris and Kevin initially met with the customer team, consisting of the DBA, Applications Manager, and a technical applications person. The agenda for the week was set:

  • Import metadata from several Adabas files
  • Bulk load the Adabas data into SQL Server
  • Set up replication from Adabas to SQL Server
  • Add the bi-directional replication back to Adabas

Additionally, there were a few other items the customer wanted the Treehouse team to address, including support for date formats; timestamps for bi-directional replication to avoid update conflicts; using Predict views to define multiple SQL Server tables; and support for MUs and PEs (Adabas multiple-value fields and periodic groups). Chris noted that everything on the customer’s list is easily supported, and there are several options for the update scenarios that can be used.

[Diagram: tcVISION Adabas-to-SQL Server replication]

After the tcVISION components were installed, the POC began by using tcVISION’s Control Board to define a metadata repository database in SQL Server. Once that was set, the teams moved on to import the first Adabas file’s metadata using tcVISION’s Metadata Import Wizard. As part of this process, tcVISION generated Adabas-to-SQL Server schemas and field-to-column links, and created the target tables in SQL Server. Bulk Transfer scripts were created using a wizard to read the Adabas file on the mainframe and load the data into SQL Server using the SQL Server bulk loader. Chris created a control script to show how tcVISION can concurrently bulk transfer multiple Adabas files into SQL Server. This required increasing the tcVISION Manager’s VSE partition size to successfully test multiple load scripts executing in parallel.

The teams moved on to define the real-time change data capture (CDC) scripts necessary to process the Adabas PLOG.  The tcVISION scripts use a two-phase approach to queue captured Adabas transactions on the open platform, then transform and apply the transactions to SQL Server.  The scripts were set up to automatically generate detailed logs to track the PLOG transactions captured, SQL statements successfully applied to SQL Server, failed SQL statements, and informational items such as auto-corrected data and transactions rejected due to processing rules.
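
The two-phase approach can be sketched in miniature. This toy Python example (not tcVISION’s actual code) queues captured change records first, then transforms them into SQL statements in a second pass:

```python
# Toy two-phase CDC sketch: phase 1 queues raw change records,
# phase 2 transforms them into SQL and "applies" (collects) them.
from collections import deque

queue = deque()

def capture(change):
    """Phase 1: queue a raw (op, table, row) change record."""
    queue.append(change)

def apply_all():
    """Phase 2: drain the queue, transforming each record into SQL."""
    applied = []
    while queue:
        op, table, row = queue.popleft()
        if op == "INSERT":
            cols = ", ".join(row)
            vals = ", ".join(f"'{v}'" for v in row.values())
            applied.append(f"INSERT INTO {table} ({cols}) VALUES ({vals})")
        elif op == "DELETE":
            key, val = next(iter(row.items()))
            applied.append(f"DELETE FROM {table} WHERE {key} = '{val}'")
    return applied

capture(("INSERT", "customers", {"id": "1", "name": "A"}))
capture(("DELETE", "customers", {"id": "1"}))
print(apply_all())
```

Separating capture from apply means the mainframe-side capture is never blocked by the target database, which is the motivation for queuing on the open platform.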

Now that several tables were defined and loaded, the bi-directional process was set up. SQL Server CDC was enabled for each table to be replicated. The team made a change within SQL Server and verified that it showed up in the SQL Server CDC tables. The SQL Server-to-Adabas mappings were defined in the tcVISION metadata repository, including the “back update check” to ensure only non-tcVISION transactions are captured, and the scripts on both Windows and the mainframe were defined to create the LUWs from the SQL Server CDC and apply the changes to Adabas.
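
The “back update check” idea can be illustrated with a toy filter. Here, changes applied by the replicator are assumed to carry an origin marker (the marker name is hypothetical) so they are not echoed back to the source:

```python
# Toy illustration of loop prevention in bi-directional replication:
# changes the replicator itself applied are filtered out of the target's
# CDC stream so they are not sent back to the mainframe.
REPLICATOR_USER = "tcvision"   # hypothetical origin marker; products vary

def changes_to_replicate(cdc_rows):
    """Keep only changes that did not originate from the replicator."""
    return [row for row in cdc_rows if row["changed_by"] != REPLICATOR_USER]

cdc = [
    {"id": 1, "changed_by": "app_user"},   # genuine SQL Server-side change
    {"id": 2, "changed_by": "tcvision"},   # echo of a mainframe-origin change
]
print(changes_to_replicate(cdc))
```

Without such a check, every replicated change would reappear in the target's CDC stream and bounce back and forth indefinitely.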

CDC from SQL Server to Adabas was successfully tested. Chris then showed the ability to create Journal replication where each change can be captured by replication type. The team spent time creating a few more mappings so multiple file / table updates could be tested, in addition to doing updates while the scripts were stopped to simulate a lost connection. This included setting up a new script to process copied PLOG datasets created by the ADARES utility.

The team defined the remainder of their Adabas files to the metadata repository. Some were set up for bi-directional replication, and others for unidirectional replication and Journal replication. Everything worked as expected at the wrap-up meeting, where the team provided management a live demonstration of tcVISION and the items accomplished. The final tcVISION presentation and demo went very well, and everyone was pleased with the progress made during the week.



tRelational / DPS Adabas-to-Oracle Success in South Africa

by Hans-Peter Will, Senior Technical Representative and Joseph Brady, Manager of Marketing and Technical Documentation at Treehouse Software, Inc.

Recently, Hans-Peter Will, Senior Technical Representative for Treehouse Software, traveled to South Africa to assist our partner Bateleur Software (pty) Ltd. with setting up a large public-sector customer’s data replication implementation using our Adabas-to-RDBMS tool set, tRelational / DPS (Data Propagation System).

Arriving in Johannesburg, Peter met with representatives from Bateleur and the IT organization, where the key players discussed how tRelational / DPS would be used in the project. The customer initially wanted to populate sample data into Oracle, so Peter configured tRelational / DPS to process one of the smaller Adabas files to generate some data. He also recommended running an analysis with tRelational to determine whether the file contained a unique key. Peter took this opportunity to show the customer what other benefits they could realize from the analysis information. Interestingly, they used a personnel file for the analysis, and Peter was immediately able to show that 23 of the records had no gender entry and 180 of the records had no surname. The customer was very pleased to see these revelations in the first analysis, and looked forward to identifying other data quality issues before commencing data replication.

Customer Replication Scenario with Treehouse Software Product Set…

[Diagram: customer replication scenario with the Treehouse Software product set]

The next step was to build the target structure in accordance with the Oracle DBA’s requirements. The DBA had specified that all columns were to be defined as VARCHAR2, except the date information. After the first model was completed, DDL and DPS parameters were generated and a quick materialization of data accomplished the desired result.

At the subsequent kickoff meeting, Peter provided a complete tRelational / DPS overview and discussed the target structure with the attendees and the Oracle DBA. The rest of the day was spent doing Adabas file implementations, analysis and modeling.

Setup was then completed for transferring extracted and transformed Adabas data into the customer’s Windows environment. Adabas Vista is used, so one logical Adabas file was actually split into two files stored on different Adabas databases, and the customer wanted to combine them into the same target table in Oracle. While there was no unique descriptor, it was discovered that three fields in combination would make a unique key, enabling a model to be created that combines data from the separate physical files into a single Oracle table.
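
The composite-key approach can be sketched briefly. This toy Python check (the field names are hypothetical) verifies that three fields in combination uniquely identify each record even though no single field does:

```python
# Toy composite-key check: when no single field is a unique descriptor,
# a tuple of several fields can serve as the row key.

def composite_key(row):
    # hypothetical three-field key
    return (row["branch"], row["dept"], row["emp_no"])

def is_unique_key(rows, key_fn):
    """True if key_fn yields a distinct key for every row."""
    keys = [key_fn(r) for r in rows]
    return len(keys) == len(set(keys))

rows = [
    {"branch": "01", "dept": "HR", "emp_no": "100"},
    {"branch": "01", "dept": "HR", "emp_no": "101"},
    {"branch": "02", "dept": "HR", "emp_no": "100"},
]
# no single field is unique here, but the three-field tuple is
print(is_unique_key(rows, composite_key))
```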

The team proceeded with file implementation, modeling, mapping, DPS executions, and resolving data issues. Various issues were encountered, such as invalid tab characters within the data, negative personnel numbers, duplicates in unique keys (maintained by the application), and the need to add an extra column to the output. These issues were resolved quickly by the customer’s staff.

Within a day, all the files were materialized and the PLOG copy process was modified so that from that point forward, every PLOG copy would automatically be processed through DPS Propagation to update the RDBMS on the target Windows machine.

The next day, Peter was asked by the customer, “How many of the files have been processed so far?” Peter was pleased to report that every file had been processed and was propagating successfully. The happy customer remarked that they had never had a project completed this far ahead of the deadline.

Throughout the project, Peter never personally laid hands on a customer keyboard, but instead sat with staff, effectively training them and handing over comprehensive knowledge of tRelational / DPS. The customer was very excited to learn that their personnel can now easily use the product set to do any remaining work on their own.

A few days later, we received an e-mail from Bateleur:

“I had a very pleasant meeting with the customer today. They used tRelational to reject the non-unique keys, reran the Materialization, and reran DPS plus update into Oracle. The month-end update of Oracle that was taking nearly three days to complete now takes five minutes! Everyone delighted!”

Bjørn (Sam) Selmer-Olsen, Managing Director, Bateleur Software (pty) Ltd


About Treehouse Software’s tRelational / DPS Product Set

[Diagram: tRelational / DPS]

tRelational / DPS is a robust product set that provides modeling and data transfer of legacy Adabas data into modern RDBMS-based platforms for Internet/Intranet/Business Intelligence applications. Treehouse Software designed these products to meet the demands of large, complex environments requiring product maturity, productivity, feature-richness, efficiency and high performance.

The tRelational component provides complete analysis, modeling and mapping of Adabas files and data elements to the target RDBMS tables and columns. DPS (Data Propagation System) performs Extract, Transformation, and Load (ETL) functions for the initial bulk RDBMS load and incremental Change Data Capture (CDC) batch processing to synchronize Adabas updates with the target RDBMS.

Visit the Treehouse Software website for more information on tRelational / DPS, or contact us to discuss your needs.

TREETIP: tcVISION Supports Data Replication to MongoDB


The tcVISION cross-system integration platform is a robust, proven, and mature solution that is constantly under development to meet the requirements of new technologies, including support for MongoDB.


MongoDB is among the leading NoSQL databases in the market and has been developed for the needs of today’s information technology. MongoDB supports a data model with dynamic schemata and is especially suitable for storing large amounts of data using GridFS. It provides automatic failure protection through integrated replication. MongoDB also offers native, idiomatic drivers for nearly all programming languages and frameworks.

Find out more about MongoDB here: https://www.mongodb.com

In addition to support for MongoDB, tcVISION features connectivity to other output targets, such as Hadoop (see previous blog about Hadoop support), Adabas LUW, DB2 BLU, and EXASOL. Additionally, new input sources include z/OS VSAM Logstream (CICS and Coupling Facility / Shared VSAM), z/OS VSAM Batch Extension, z/OS DBMS to Logstream, CA IDMS v17, CA Datacom CDC, IMS Active Log, and SMF data.


Treehouse Software Partners with Stone Bond Technologies to offer comprehensive enterprise Data Virtualization

Press Contacts: Joe Brady (+1.724.759.7070 x110; jbrady@treehouse.com); Irma Barrientos (+1 713.300.8882; ibarrientos@stonebond.com)

Pittsburgh, PA; Houston, TX, November 17, 2016. Treehouse Software, Inc., of Sewickley, PA, and Stone Bond Technologies, L.P., of Houston, TX, today announced a partnership to bring agile integration and data virtualization capabilities to every enterprise. Now mainframe data sources can be combined into virtual schemata with relational databases, ERP data, Cloud sources, and social media by combining Stone Bond’s Enterprise Enabler® with Treehouse’s tcACCESS.

Since the mid-1990s, Treehouse has been a global leader in mainframe data migration, replication and integration, offering robust and flexible solutions for ETL, CDC and real-time, multidirectional replication between databases on various platforms. With Enterprise Enabler, Treehouse customers benefit from a single platform for enterprise and Cloud integration to deliver integration projects with up to 90% time savings over other solutions.

Treehouse will provide worldwide product sales, marketing and support for Enterprise Enabler, bringing leading-edge integration capabilities to its hundreds of existing customers and to new customers pursuing initiatives such as Agile BI, Master Data Management, Logical Data Warehouses and Web Services.

One of the fastest-growing data virtualization platforms on the market, Enterprise Enabler has been leading the innovation of data management, data federation and data virtualization for nearly 15 years. To serve its expanding base of customers, Stone Bond has grown its set of AppComm™ connectors to accommodate the widest range of enterprise and line-of-business data sources. Treehouse’s tcACCESS effectively becomes the “Mainframe AppComm”, providing full read/write access to a wide range of mainframe data sources on both z/OS and z/VSE platforms.

Wayne Lashley, Treehouse Chief Business Development Officer, said, “Our reputation for capably solving mainframe data integration challenges now extends to other modern enterprise data sources including SAP, Salesforce.com and Microsoft SharePoint. With the push of a button, our customers can securely expose these disparate virtualized sources to popular BI tools like Tableau and to other applications in standard formats like REST and JSON. Current, past and new customers can look to Treehouse to address any imaginable integration scenario.”

Commented Tom Sieger, Stone Bond SVP, Corporate Strategy and Business Development, “Embracing mainframe data sources bolsters the ‘enterprise’ in Enterprise Enabler. We are excited to work with Treehouse to grow Enterprise Enabler sales into new markets and geographies.”



About Treehouse Software, Inc.
Privately-held Treehouse Software was founded in 1982, and has hundreds of customers worldwide that benefit from the company’s industry-leading products and outstanding technical support. The traditional strengths of the company in performance management, security and software configuration management have been complemented by visionary leadership in mainframe data replication and integration and application modernization.


About Stone Bond Technologies, L.P.
Headquartered in the heart of Houston, Texas, Stone Bond Technologies was founded in 2002 with a commitment to innovation that continues today. More than just adding features to existing offerings, the company’s approach to information business systems is to make an enterprise’s data agile: to move data efficiently and easily, without barriers. Stone Bond’s client base continues to expand from its roots in the Energy industry to Financial Services, Life Sciences, Information Technology, Hospitality, Manufacturing and more.