TDT: Much more than a mere “data connector” for Snowflake

by Joseph Brady, Director of Business Development at Treehouse Software, Inc. and Dan Vimont, Director of Innovation at Treehouse Software, Inc.

____0_TDT_Snowflake_Splash

Over the past few months, we have been rolling out information on Treehouse Dataflow Toolkit (TDT), a state-of-the-art, fully automated offering for data transfer from Kafka pipes to Analytics/ML/AI frameworks.  TDT is a set of proprietary microservices that assures highly-available, auto-scalable, and event-driven data transfers to your data science teams’ favorite analytics frameworks, such as Snowflake, Amazon Redshift, Amazon Athena/S3, Amazon S3 Express One Zone buckets, as well as Amazon Aurora PostgreSQL, all the while adhering to AWS’s and Snowflake’s recommended best practices for massive data loading. Make no mistake, TDT is MUCH more than merely a “connector”.

In this blog, we will focus on how TDT handles data transfers to perhaps the most complex environment: Snowflake.  Of all TDT’s functions and features, our Snowflake connectivity offers the biggest “value added” to customers, because Snowflake has quickly become a top choice for enterprises looking for a Cloud platform on which they can mobilize data at near-unlimited scale and performance and bring advanced ML/AI capabilities to bear.

Snowflake overview video…

Connectivity using Snowflake’s best practices vs. traditional ODBC…

TDT’s innovative Lambda-based (microservices) approach enables faster data flow than ODBC-based solutions, the standard approach behind most “roll your own” efforts and “we have a connector for that” offerings.

To load massive quantities of data into a target, TDT uses Snowflake’s (hugely scalable) bulk load utilities, not ODBC. It is vital to note that Snowflake is NOT a relational (OLTP) database, so performing CDC transfers to it via ODBC (with update, insert, and delete transactions) goes directly against “best practices” advice from Snowflake, and would almost assuredly result in unwieldy bottlenecks.
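To make the contrast concrete, the sketch below shows the kind of S3-to-Snowflake bulk-load path that Snowflake recommends (and that TDT automates end to end). It is illustrative only, not TDT’s actual generated code: the account, warehouse, stage, storage integration, bucket, and table names are hypothetical placeholders, and the target table is assumed to hold each incoming JSON record in a single VARIANT column.

```python
# Illustrative only: Snowflake's recommended bulk-load pattern -- stage files in S3,
# then COPY them into the target table in bulk, rather than trickling rows over ODBC.
# All object names (LOAD_WH, MY_DB, LANDING_STAGE, MY_S3_INTEGRATION, CUSTOMER_DELTA,
# and the bucket) are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="tdt_loader",
    password="...",
    warehouse="LOAD_WH",
    database="MY_DB",
    schema="STAGING",
)

cur = conn.cursor()

# External stage pointing at the S3 prefix where the JSON change records land.
cur.execute("""
    CREATE STAGE IF NOT EXISTS LANDING_STAGE
      URL = 's3://my-landing-bucket/customer/'
      STORAGE_INTEGRATION = MY_S3_INTEGRATION
      FILE_FORMAT = (TYPE = JSON)
""")

# Bulk-load everything currently staged; CUSTOMER_DELTA is assumed to have a
# single VARIANT column that receives each JSON record as-is.
cur.execute("COPY INTO CUSTOMER_DELTA FROM @LANDING_STAGE")

cur.close()
conn.close()
```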

____0_TDT_Snowflake01

TDT loads data into Snowflake “delta tables”, which inherently retain the entire history of the source data since source-to-target synchronization began (perfect for time-based trend/predictive/prescriptive analytics). Again, TDT adheres to Snowflake’s best-practices recommendation to bulk load massive quantities of data by pulling it from S3…

____0_TDT_Snowflake02

Publishing both bulk-load and CDC data to a reliable and scalable framework like Kafka allows you to maintain a broad array of options to ultimately feed your legacy data to any number of JSON-friendly ETL tools, target data stores, and data analytics packages (some of which have not even been invented yet!). 
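As a rough illustration of how open that Kafka layer is (this is not TDT code), a minimal Python consumer using the confluent-kafka client could tap the same JSON stream; the broker address, consumer group, and topic name below are hypothetical placeholders.

```python
# Illustrative only: a bare-bones consumer of the JSON change records published
# to Kafka. Broker, group, and topic names are hypothetical placeholders.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker1:9092",
    "group.id": "downstream-analytics",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["mainframe.customer.changes"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        record = json.loads(msg.value())
        # Hand the record to whatever JSON-friendly tool comes next:
        # an ETL job, a target data store writer, an analytics pipeline, etc.
        print(record)
finally:
    consumer.close()
```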

The “build vs buy” question is put to rest…

The Snowflake-specific target DDL, metadata, and resources that TDT automatically produces for staging data in Snowflake are complex enough that the “buy” option is easy to justify in the “build vs buy” conversations customers have. A decision by an enterprise not to use TDT, but instead to build its own Kafka-to-Snowflake solution, could result in any or all of the following:

  • accumulation of technical debt
  • extensive/unpredictable time to production
  • ongoing resource planning to maintain home-grown technologies
  • potential vendor lock-in for maintenance of custom-made technologies designed and developed by consultants
  • managing a mix of manual and automated functions
  • tracking cobbled-together components created by multiple staff and consultants
  • limited agility for future customization and innovation
  • problems adhering to evolving best practices over time
  • higher costs for future growth/scaling
  • potential lack of proper security/ongoing security updates
  • your organization has now become an enterprise software development company, whether or not you intended that, and whether or not you realize it!

Simply put, TDT is a self-contained, turn-key solution that can eliminate months, or years, of research and development time and costs. With TDT, high-speed and massive data movement to Snowflake takes minutes to ramp up.

Download the TDT AWS Partner Solution Brief to share with your team…

DOWNLOAD…AWS_TDT_Product_Brief_Thumb01

Treehouse Dataflow Toolkit (TDT) is Copyright © 2024 Treehouse Software, Inc. All rights reserved.

____Treehouse_AWS_Badges 

Contact Treehouse Software for a Demo Today!

Contact Treehouse Software today for more information or to schedule a product demonstration.

Does your data science team want to accelerate insights and bring advanced ML/AI capabilities to your mainframe data with Amazon Redshift? Sure they do—and Treehouse Software enables that…

by Joseph Brady, Director of Business Development at Treehouse Software, Inc. and Dan Vimont, Director of Innovation at Treehouse Software, Inc.

We are beginning to see a pleasant and welcome trend with Treehouse customers who are looking to modernize their valuable mainframe legacy data on the Cloud: they are including their data science teams in the important planning phase of architecting new Cloud environments and targets. This is especially vital for customers who want to incorporate advanced analytics and ML/AI in their strategic data usage plans on the Cloud. Who can contribute a better understanding of how the data will ultimately be used than your resident data scientists?

____0_Amazon_Redshift

We have heard from many of these data scientists that a primary item on their “wish lists” is a fully managed, AI-powered, massively parallel processing (MPP) architecture from which to extract maximum value and insights. They specifically mention Amazon Redshift as the Cloud data warehouse (which is much more than a data warehouse) of choice for driving digitization across the enterprise and helping personalize customer experiences. Redshift uses SQL to analyze structured and semi-structured data across data warehouses, operational databases, and data lakes, using AWS-designed hardware and ML to deliver high performance at any scale. To that question, we can answer with a resounding, “Yes, Treehouse Software has you covered with Redshift connectivity!”

The Treehouse Software solution…

Enterprise customers come to Treehouse Software because we bring not only proven mainframe data replication tools, but also deep subject matter expertise in mainframe technologies and the know-how to target relevant AWS offerings, such as Redshift and S3 (including S3 Express One Zone; see our recent blog on S3 Express One Zone).

The Rocket Data Replicate and Sync (RDRS) solution allows customers’ legacy mainframe environments to operate normally while their data is replicated on AWS. The technology focuses on change data capture (CDC) when transferring information between mainframe data sources and Cloud-based databases and applications. Through an innovative set of technologies, changes occurring in any mainframe datastore are tracked and captured, and ultimately published to Redshift.

____0_Mainframe_To_Redshift

How does it work?

  1. We start at the source – the mainframe – where an agent (with a very small footprint) extracts data (in the context of either bulk-load or CDC processing).
  2. The raw data is securely passed from the mainframe to RDRS, which speedily transforms mainframe-formatted data into Unicode/JSON and publishes the results to a Kafka topic.
  3. Our efficient, autoscaling microservices take it from there. Treehouse Dataflow Toolkit functions consume the data from Kafka and land it in S3 buckets, where Treehouse’s proprietary crawler technology is used to automatically prepare landing tables, views, and additional infrastructure in Redshift.  Then the mainframe data is loaded into Redshift (all the while adhering to AWS’s recommended “best practices” for massive data loading, thus assuring the shortest and surest loads).  The inherent reliability and scalability of the entire pipeline infrastructure assure near-real-time synchronization between mainframe sources and Redshift target tables. (A simplified sketch of this S3-then-COPY pattern follows below.)
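The sketch below is a heavily simplified, hypothetical illustration of that S3-then-COPY pattern; the actual TDT microservices add batching, autoscaling, schema management, and error handling. The bucket, table, IAM role, and workgroup names are placeholders.

```python
# Simplified sketch of the recommended Redshift bulk-load pattern:
# batch JSON records into S3, then issue a single COPY -- never row-by-row inserts.
# All names (bucket, table, IAM role, workgroup) are hypothetical placeholders.
import json
import boto3

s3 = boto3.client("s3")
redshift_data = boto3.client("redshift-data")

def land_and_copy(records: list, batch_id: str) -> None:
    # 1. Land the batch in S3 as newline-delimited JSON.
    key = f"landing/customer/{batch_id}.json"
    body = "\n".join(json.dumps(r) for r in records)
    s3.put_object(Bucket="my-landing-bucket", Key=key, Body=body.encode("utf-8"))

    # 2. Bulk-load the batch into the Redshift delta (staging) table via COPY.
    redshift_data.execute_statement(
        WorkgroupName="analytics-wg",   # hypothetical Redshift Serverless workgroup
        Database="analytics",
        Sql=f"""
            COPY staging.customer_delta
            FROM 's3://my-landing-bucket/{key}'
            IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
            FORMAT AS JSON 'auto'
        """,
    )
```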

Redshift tables and views: something for everybody

Within this framework, the Redshift staging tables (often referred to as “delta tables”) are constantly accruing historical data, ideally suited for data scientists looking to do trend analysis, predictive analytics, ML, and AI work.  For business analysts and others who prefer structured data representations of potentially complex hierarchical data, the Treehouse framework also automatically provides structured user-views that give the data the look and feel of a SQL database.
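To make the distinction concrete, the sketch below shows one common way to derive a current-state user-view from a history-accruing delta table. It is a hypothetical illustration, not TDT’s actual generated DDL; the table, view, column, and workgroup names are all placeholders.

```python
# Hypothetical illustration: deriving a "latest state" user-view over a
# history-accruing delta table. Not TDT's generated DDL; all names are placeholders.
import boto3

redshift_data = boto3.client("redshift-data")

CREATE_VIEW_SQL = """
CREATE OR REPLACE VIEW user_views.customer AS
SELECT *
FROM (
    SELECT d.*,
           ROW_NUMBER() OVER (
               PARTITION BY customer_id           -- key of the source record
               ORDER BY change_timestamp DESC     -- most recent change first
           ) AS rn
    FROM staging.customer_delta d
) latest
WHERE rn = 1
  AND change_type <> 'DELETE';   -- hide records whose latest change was a delete
"""

redshift_data.execute_statement(
    WorkgroupName="analytics-wg",   # hypothetical Redshift Serverless workgroup
    Database="analytics",
    Sql=CREATE_VIEW_SQL,
)
```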

…as innovations move faster along the timeline, keep your options open!

Publishing both bulk-load and CDC data to a reliable and scalable framework like Kafka allows you to maintain a broad array of options to ultimately feed your legacy data to any number of JSON-friendly ETL tools, target datastores, and data analytics packages (some of which may not even have been invented yet!).  In addition to Redshift, the Treehouse Dataflow Toolkit also currently targets Snowflake, Amazon DynamoDB, and Amazon Athena/S3.

Video – Introduction to Data Warehousing on AWS with Amazon Redshift…


__TSI_LOGO

Contact Treehouse Software today to discuss your project, or to schedule a demo of our Mainframe-to-AWS real-time and bi-directional data replication solution. 

Treetip: Treehouse Software can help enterprise mainframe customers accelerate their data analytics, machine learning, and AI journeys by targeting the new Amazon S3 Express One Zone

by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software, Inc.

Treehouse Software specializes in helping enterprise customers with Mainframe-to-Cloud, Multi-Cloud, and Hybrid Cloud data modernization projects. Many times, our customers not only discuss strategies for replicating their mainframe data, but also their plans for what they want to do with that data on the Cloud side.  This makes it important for our team to stay current on the latest Cloud offerings that can benefit our customers’ enterprise modernization planning. Consequently, a very exciting announcement caught our attention during the 2023 AWS re:Invent conference: the general availability of a new type of S3 storage service referred to as the Amazon S3 Express One Zone Storage Class.

For those unfamiliar, Amazon S3 (“Simple Storage Service”) is the basic file storage service of AWS, and as such it forms a foundational pillar of the entire AWS world. Amazon S3 Express One Zone introduces a new type of S3 bucket called a “directory bucket”, which is purpose-built to deliver consistent, single-digit-millisecond data access for an enterprise’s most frequently used data and latency-sensitive applications. The new S3 directory buckets allow customers to store data in a single Availability Zone (AZ) that they specifically select, as opposed to the default of three AZs for standard S3. This eliminates the latency associated with spreading data across multiple AZs, providing applications with lower-latency storage. S3 directory buckets also follow a different request-scaling model compared to traditional buckets, and their authentication is based on sessions rather than on a per-request basis. Bottom line: less compute time spent waiting on storage means lower costs.
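For readers who want to see what a directory bucket looks like from the SDK side, here is a minimal boto3 sketch. The bucket name, Availability Zone ID, Region, and object key are hypothetical, and the exact parameters should be checked against the current AWS documentation.

```python
# Minimal sketch: creating an S3 Express One Zone "directory bucket" and writing
# an object to it with boto3. Names, Region, and AZ ID are hypothetical; check
# current AWS documentation for exact parameter requirements.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Directory bucket names embed the target Availability Zone ID and end in --x-s3.
bucket_name = "my-express-bucket--use1-az4--x-s3"

s3.create_bucket(
    Bucket=bucket_name,
    CreateBucketConfiguration={
        "Location": {"Type": "AvailabilityZone", "Name": "use1-az4"},
        "Bucket": {"Type": "Directory", "DataRedundancy": "SingleAvailabilityZone"},
    },
)

# The SDK transparently handles the session-based (CreateSession) authentication
# model that directory buckets use, so ordinary object operations look the same.
s3.put_object(
    Bucket=bucket_name,
    Key="landing/customer/batch-0001.json",
    Body=b'{"customer_id": 42, "change_type": "INSERT"}',
)
```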

S3 Express One Zone is ideally suited for services such as Amazon SageMaker Model Training, Amazon Athena, Amazon EMR, and AWS Glue Data Catalog to accelerate Machine Learning (ML) and interactive analytics workloads. With S3 Express One Zone, storage automatically scales up or down based on consumption and need, and customers no longer need to manage multiple storage systems for low-latency workloads.

So, why is S3 Express One Zone important to Treehouse mainframe modernization customers?

____0_Mainframe_To_S3ExpressOneZone

Amazon S3 Express One Zone just made the Amazon S3 targeting in the Treehouse Dataflow Toolkit (TDT) potentially much more potent and valuable to our enterprise mainframe customers.  When an enterprise uses TDT to land its mission-critical data in Express One Zone (directory bucket) Athena/S3 targets, that data becomes more directly accessible and manipulable by the various AWS ML and AI tools. In short, if customers choose, Express One Zone Athena/S3 becomes an intermediate data store for big data processing workloads and advanced analytics.

So, when we are asked, “What should Treehouse Software be doing to respond to the burgeoning interest in ML, Generative AI, etc.?”, the answer is: we are doing exactly what we need to be doing.  AI and ML frameworks are the newest incentive for people to use RDRS (Rocket Data Replicate and Sync, formerly called tcVISION) and TDT from Treehouse Software to replicate their mainframe data to advanced data analytics frameworks, or possibly into super-charged S3 Express One Zone buckets.

Video – Deep Dive Introduction to Amazon S3 Express One Zone Storage Class:


__TSI_LOGO

Contact Treehouse Software today to discuss your project, or to schedule a demo of our Mainframe-to-AWS real-time and bi-directional data replication solution. 

What is meant by “Regional Data Sovereignty” when replicating enterprise data on AWS?

by Joseph Brady, Director of Business Development and Cloud Alliance Leader at Treehouse Software, Inc.

I have recently been taking some classes in preparation for an AWS certification. In some of these classes, an example scenario has been used that speaks to an issue I’ve often heard mentioned by Treehouse mainframe customers: that of “Regional Data Sovereignty”. For example, a customer might have government compliance requirements that financial information in Frankfurt cannot leave Germany, and many other countries have similar restrictions and regulatory controls in place.

Fortunately, Regional Data Sovereignty is a critical part of the design of the AWS Global Infrastructure. Within this infrastructure, AWS Regions address data that is subject to the local laws and statutes of the country in which the Region is located. With the understanding that the customer’s data and applications live and run in various geographical Regions, there are four business factors a customer should consider when choosing a Region:

  1. Compliance. Before any other factors, customers must first look at their regional compliance requirements to determine if data must live within certain geographical boundaries.
  2. Proximity. How close a Region is to the enterprise’s customer base is another major factor, because of possible latency issues between countries.  The Region closest to the customer base is generally the best choice.
  3. Feature availability. Sometimes the closest Region may not have all the AWS features a business needs. Every year, AWS releases thousands of new features and products specifically to answer customer requests and needs. But sometimes those new services require new physical hardware that AWS has to build, so a service might become available one Region at a time.
  4. Pricing. Even when the hardware is equal from one Region to the next, some locations are more expensive to operate in. For example, the same workload could be significantly more expensive to run in Sao Paulo than out of Oregon in the United States.

Additionally, events such as natural disasters can cause customers to lose connection to a data center, so a High Availability (HA) cutover plan should also be considered. The customer can run a second data center, but real estate prices alone, let alone the duplicate expense of hardware, employees, electricity, heating and cooling, and security, can make that prohibitive. Most businesses simply end up storing backups somewhere and hoping the disaster never comes. And “hope” is not a good business plan. I recently covered how Treehouse Software can help provide an HA framework for mainframe customers in another blog.

Let’s take a look at the AWS Global Infrastructure and how its Regions are distributed worldwide…

____AWS_Global_Infrastructure

AWS Regions are built closest to the areas of highest business traffic demand, such as Paris, Tokyo, Sao Paulo, Dublin, and Ohio. Inside each Region are multiple data centers that have all the compute, storage, and other services customers need to run their applications. By utilizing AWS Regions for high availability of their business services, customers can be assured of minimal downtime of operations. Regions are connected to each other through AWS’s high-speed private global network, which bypasses the public Internet, and the customer’s business decision maker chooses which Regions they want to use. Each Region is isolated from every other Region in the sense that absolutely no data goes in or out of the customer’s environment in that Region without explicit permission for that data to be moved. These elements should be part of all critical strategic and security conversations when planning the global distribution and availability of an enterprise’s data on AWS.

Video – AWS Global Infrastructure explained…


__TSI_LOGO

Contact Treehouse Software today to discuss your project, or to schedule a demo of our Mainframe-to-AWS real-time and bi-directional data replication solution. 

So, you want to bring Snowflake’s advanced ML/AI capabilities to bear on your mainframe data? Treehouse Software enables that…

by Dan Vimont, Director of Innovation at Treehouse Software, Inc. and Joseph Brady, Director of Business Development at Treehouse Software, Inc.

The exploding popularity of advanced data analytics platforms such as Snowflake, where an ever-expanding array of machine learning and artificial intelligence tools are available to generate vital insights from your enterprise’s data, has quickly transformed the world of data processing.  Your data science teams are sitting there at their Snowflake consoles, eagerly awaiting the arrival of critical data from your mainframes to supercharge their predictive analytics and generative AI frameworks.

They’re waiting…

So, what’s the hold-up?

Oh yeah, getting legacy data out of ancient mainframe datastores and into Cloud analytics frameworks is HARD, right?

Um, no, actually — it’s not.

The Treehouse Software solution…

____0_Mainframe_To_Snowflake01

How does it work?

  1. We start at the source — the mainframe — where an agent (with a very small footprint) extracts data (in the context of either bulk-load or CDC processing).
  2. The raw data is securely passed from the mainframe to MDR (Treehouse Mainframe Data Replicator, powered by Rocket® Software), which speedily transforms mainframe-formatted data into Unicode/JSON and publishes the results to a Kafka topic.
  3. Our efficient and autoscaling microservices take it from there. Treehouse Dataflow Toolkit functions consume the data from Kafka, automatically prepare landing tables, views, and additional infrastructure in Snowflake, and then land the data in Snowflake (all the while adhering to Snowflake’s recommended “best practices” for massive data loading, thus assuring shortest and surest loads).

Snowflake tables and views: something for everybody

Within this framework, the Snowflake staging tables are constantly accruing historical data, ideally suited for data scientists looking to do trend analysis, predictive analytics, ML, and AI work.  For business analysts and others who prefer structured data representations of potentially complex hierarchical data, the Treehouse framework also automatically provides structured user-views.

… and the world keeps on changing, so keep your options open!

Publishing both bulk-load and CDC data to a reliable and scalable framework like Kafka allows you to maintain a broad array of options to ultimately feed your legacy data to any number of JSON-friendly ETL tools, target datastores, and data analytics packages (some of which may not even have been invented yet!).  In addition to Snowflake, the Treehouse Dataflow Toolkit also currently targets Amazon Redshift, Amazon DynamoDB, and Amazon Athena/S3.


__TSI_LOGO

Contact Treehouse Software today to discuss your project, or to schedule a demo.