Treehouse Software is Helping Higher Education Customers Modernize Long-standing Mainframe Data on the Cloud

by Joseph Brady, Director of Business Development / Cloud Alliance Lead at Treehouse Software, Inc.

[Image: higher education data]

The Business Issue

Many higher education institutions have large volumes of mission-critical and historical data stored in legacy mainframe databases (Db2, Adabas, IMS, IDMS, Datacom, VSAM, etc.). The cost to maintain these databases is high, and they lack the features required for modernizing the data architecture. Additionally, the data is utilized by an extensive number of interlinked programs that depend on these legacy structures.

Colleges and universities are searching for a solution that allows them to unlock their mainframe data within a Cloud-based data store, such as Amazon Simple Storage Service (Amazon S3), where they can use a wide array of analytics and machine learning services for easy access to all relevant data, without compromising security or governance.

Once mainframe data is on AWS, an institution can innovate quickly, creating new functions at Cloud speed, such as serving mobile users via Amazon API Gateway or voice devices such as Amazon Alexa.
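As a rough illustration of that pattern, the sketch below shows a minimal AWS Lambda handler, written in Java with the AWS SDK for Java v2, that Amazon API Gateway could invoke to serve replicated data from Amazon S3 to mobile clients. The bucket name, key layout, and handler shape are assumptions made up for this example; they are not part of tcVISION:

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;
    import software.amazon.awssdk.core.ResponseBytes;
    import software.amazon.awssdk.services.s3.S3Client;
    import software.amazon.awssdk.services.s3.model.GetObjectRequest;
    import software.amazon.awssdk.services.s3.model.GetObjectResponse;

    import java.util.Map;

    // Minimal Lambda handler that API Gateway could front to serve
    // replicated mainframe data that has been landed in Amazon S3.
    public class ReplicatedDataHandler implements RequestHandler<Map<String, String>, String> {

        private final S3Client s3 = S3Client.create();

        @Override
        public String handleRequest(Map<String, String> input, Context context) {
            // Hypothetical bucket and key layout for the replicated data
            GetObjectRequest request = GetObjectRequest.builder()
                    .bucket("university-mainframe-data")
                    .key("replicated/" + input.getOrDefault("dataset", "enrollment") + ".json")
                    .build();
            ResponseBytes<GetObjectResponse> bytes = s3.getObjectAsBytes(request);
            return bytes.asUtf8String();
        }
    }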

Additionally, data security is one of the biggest challenges facing most higher education organizations. Beyond the certifications and best practices that are part of having data reside on the AWS Cloud platform, there are also many security features and services designed to help an organization stay compliant with industry best practices and regulations.

The Solution: Mainframe-to-Cloud Data Replication 

Treehouse Software recently helped a large university that required a solution allowing its legacy mainframe database to continue operating while replicating data in real time on AWS. By using Treehouse Software’s tcVISION Mainframe-to-Cloud data replication product, the university was able to immediately utilize some of the most advanced Cloud tools and services in the world.

[Diagram: tcVISION AWS overall architecture]

tcVISION enables the university to synchronize mainframe data to Amazon RDS for PostgreSQL. Furthermore, bi-directional, real-time data synchronization enables changes on either platform to be reflected on the other (e.g., a change to a PostgreSQL table is reflected back on the mainframe database). This allows the university to modernize the application on PostgreSQL without disrupting the existing critical work on the legacy system, and modern tools can now be used in the PostgreSQL environment, greatly enhancing business agility.
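To illustrate what this unlocks on the PostgreSQL side, the following is a minimal Java sketch that reads replicated rows over standard JDBC, assuming the PostgreSQL JDBC driver is on the classpath. The endpoint, credentials, and student_enrollment table are placeholders invented for this example, not the university's actual schema:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Reads rows that tcVISION has replicated into Amazon RDS for PostgreSQL.
    public class ReplicaQuery {
        public static void main(String[] args) throws Exception {
            // Placeholder endpoint and credentials for the RDS instance
            String url = "jdbc:postgresql://example-endpoint.us-east-1.rds.amazonaws.com:5432/universitydb";

            try (Connection conn = DriverManager.getConnection(url, "app_user", "app_password");
                 PreparedStatement stmt = conn.prepareStatement(
                         // Hypothetical table kept in sync from the mainframe
                         "SELECT student_id, course_code, enrolled_on FROM student_enrollment WHERE course_code = ?")) {
                stmt.setString(1, "CS101");
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.printf("%s %s %s%n",
                                rs.getString("student_id"),
                                rs.getString("course_code"),
                                rs.getDate("enrolled_on"));
                    }
                }
            }
        }
    }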

Moving Forward…

Having on-demand, Cloud-based services available helps IT teams build secure environments for the university's mission-critical applications, freeing staff to focus on student success and to plan for growth or increased seasonal demand.

tcVISION provides the quality of service required by enterprise data workloads for security, availability, and scalability, and university staff and students can look forward to quickly and affordably accessing Cloud compute, storage, and application services.

Replicating mainframe data on the Cloud can happen within days during a tcVISION Proof of Concept (POC)…

[Diagram: tcVISION overall architecture with general Cloud targets]

An online tcVISION POC takes approximately 5-10 business days, with the customer providing the use case and goals for the POC. A Treehouse Software consultant will assist in downloading and installing tcVISION and in conducting a limited-scope implementation of a tcVISION application. This application uses customer data and runs in the customer's environment, usually non-production. A document outlining the requirements and agenda for the POC is provided beforehand.

By the end of the POC, customers can begin replicating mainframe data to their Cloud target database. It can happen that fast!

Further Reading…


Treehouse Software is an AWS Technology Partner, and the AWS Partner Network published a blog post about tcVISION, our Mainframe-to-Cloud data replication product, describing how tcVISION allows legacy mainframe environments to continue operating while replicating data on highly available and secure Cloud platforms:

https://aws.amazon.com/blogs/apn/real-time-mainframe-data-replication-to-aws-with-tcvision-from-treehouse-software/



Contact Treehouse Software for a tcVISION Demo Today…

Simply fill out our Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for your requested demonstration.

Customers are discovering that they can quickly begin replicating their mainframe data to the Cloud during tcVISION Proof of Concepts (POCs)

by Joseph Brady, Director of Business Development / Cloud Alliance Lead at Treehouse Software, Inc.

[Image: data flow to the Cloud]

Modernizing long-standing mainframe systems has become a strategic imperative at many government, education, healthcare, financial, and retail organizations. As a result, these organizations are looking for solutions that allow their legacy environments to continue, while replicating data, in real time, on Cloud-based platforms, such as AWS, Microsoft Azure, Google Cloud, etc. This “data-first” approach allows organizations to quickly take advantage of advanced Cloud technologies, such as big data analytics, artificial intelligence (AI), rapid global database deployments, high-level security, etc., while keeping the mainframe and Cloud sides synchronized.

With new IT modernization initiatives at the forefront, Treehouse Software is seeing a significant upswing in requests for online demonstrations and POCs of tcVISION, our Mainframe-to-Cloud data replication product.

You can start moving your mainframe data to the Cloud within days during a tcVISION POC…

[Diagram: tcVISION overall architecture with general Cloud targets]

An online tcVISION POC takes approximately 5-10 business days, with the customer providing the use case and goals for the POC. A Treehouse Software consultant will assist in downloading and installing tcVISION and in conducting a limited-scope implementation of a tcVISION application. This application uses customer data and runs in the customer's environment, usually non-production. A document outlining the requirements and agenda for the POC is provided beforehand.

By the end of the POC, customers can begin replicating mainframe data to their Cloud target database. It can happen that fast!



Contact Treehouse Software for a tcVISION Demo Today…

Simply fill out our Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for your requested demonstration.

High Availability Requirements for Mainframe Data Modernization — Running tcVISION in Global Availability Zones

by Joseph Brady, Director of Business Development / Cloud Alliance Lead at Treehouse Software, Inc.

Many customers embarking on Mainframe-to-Cloud data replication projects with Treehouse Software are looking at high availability (HA) as a key consideration in the planning process. The goal with HA is to ensure that systems are always functioning and accessible, with deployments located in various Availability Zones (AZs) worldwide. Having an HA architecture in place protects against data center, availability zone, server, network, and storage subsystem failures to keep businesses running without downtime or human intervention.

In this blog post, we will give a high-level overview of the tcVISION HA architecture, using AWS as an example. However, the basic principles of HA are essentially the same across all Cloud platforms.

Example of the tcVISION HA Architecture on AWS

During tcVISION’s Change Data Capture (CDC) processing for Mainframe-to-Cloud data replication, HA must be maintained in the AWS environment. The Amazon Elastic Compute Cloud (Amazon EC2) instance that hosts the tcVISION Manager is part of an Auto Scaling group spread across multiple AZs.

[Diagram: tcVISION HA architecture on AWS]
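As a rough sketch of how such a self-healing group might be defined with the AWS SDK for Java v2: the group below pins exactly one tcVISION Manager instance across subnets in two different AZs, so that a failed instance is replaced automatically. The group name, launch template, and subnet IDs are placeholders:

    import software.amazon.awssdk.services.autoscaling.AutoScalingClient;
    import software.amazon.awssdk.services.autoscaling.model.CreateAutoScalingGroupRequest;
    import software.amazon.awssdk.services.autoscaling.model.LaunchTemplateSpecification;

    public class TcVisionManagerGroup {
        public static void main(String[] args) {
            try (AutoScalingClient autoScaling = AutoScalingClient.create()) {
                autoScaling.createAutoScalingGroup(CreateAutoScalingGroupRequest.builder()
                        .autoScalingGroupName("tcvision-manager-asg")            // placeholder name
                        .launchTemplate(LaunchTemplateSpecification.builder()
                                .launchTemplateName("tcvision-manager-template") // placeholder template
                                .version("$Latest")
                                .build())
                        // min = max = 1: exactly one active Manager, relaunched on failure
                        .minSize(1)
                        .maxSize(1)
                        .desiredCapacity(1)
                        // Placeholder subnet IDs in two different Availability Zones
                        .vpcZoneIdentifier("subnet-0aaa1111,subnet-0bbb2222")
                        .build());
            }
        }
    }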

Upon failure, a replacement Amazon EC2 tcVISION Manager instance is launched, and it communicates its IP address to the mainframe tcVISION Manager. The mainframe tcVISION Manager then resumes communication with the replacement Amazon EC2 tcVISION Manager.

Once the Amazon EC2 tcVISION Manager is restarted, it continues processing at its next logical restart point, using a combination of the LUW and Restart files. LUW files contain committed data transactions that have not yet been applied to the target database. Restart files contain a pointer to the last captured and committed transaction, along with queued uncommitted CDC data. Both file types are stored on a highly available data store, such as Amazon Elastic File System (Amazon EFS).

For production workloads, Treehouse Software recommends enabling Multi-AZ deployments for the target and metadata databases.
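With the AWS SDK for Java v2, for instance, Multi-AZ can be enabled on an existing RDS instance with a single call; the instance identifier below is a placeholder:

    import software.amazon.awssdk.services.rds.RdsClient;
    import software.amazon.awssdk.services.rds.model.ModifyDbInstanceRequest;

    public class EnableMultiAz {
        public static void main(String[] args) {
            try (RdsClient rds = RdsClient.create()) {
                rds.modifyDBInstance(ModifyDbInstanceRequest.builder()
                        .dbInstanceIdentifier("tcvision-target-postgres") // placeholder identifier
                        .multiAZ(true)          // provision a synchronous standby in another AZ
                        .applyImmediately(true) // apply now, not at the next maintenance window
                        .build());
            }
        }
    }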

To keep all of the dynamic data in an HA architecture, tcVISION uses Amazon EFS, which provides a simple, scalable, fully managed elastic file system for use with AWS Cloud services and on-premises resources. Amazon EFS is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as files are added and removed, which eliminates the need to provision and manage capacity to accommodate growth.


Treehouse Software can help organizations immediately start moving their mainframe data to the Cloud and take advantage of the most advanced, scalable, secure, and highly available technologies in the world with tcVISION

[Diagram: tcVISION overall architecture with general Cloud targets]

tcVISION supports a vast array of integration scenarios throughout the enterprise, providing easy and fast data migration for mainframe application modernization projects and enabling bi-directional data replication between mainframe, Cloud, Open Systems, Linux, Unix, and Windows platforms.

View the Unequalled List of Environments Supported by tcVISION Here



[Badges: AWS Select Partner, Google Cloud Partner]

Contact Treehouse Software for a Demo Today…

Just fill out the Treehouse Software Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for your tcVISION demonstration. This will be a live, online demonstration that shows tcVISION replicating data from the mainframe to a Cloud target database.

Now, more than ever, enterprises with mainframes are looking to modernize their legacy systems

by Joseph Brady, Director of Business Development / Cloud Alliance Lead at Treehouse Software, Inc.

Rapidly changing global health, economic, and political conditions are making fast access to the most current information more important than ever for official agencies and the public.  As a result, modernizing information systems is taking center stage and top priority, especially for organizations with critical mainframe data residing on a variety of long-standing databases, often still updated by COBOL applications! These databases include Db2, VSAM, IMS/DB, Adabas, IDMS, Datacom, or even sequential files. Unlocking the value of this important data can be difficult, because the data can be utilized by numerous interlinked and dependent programs that have been in place for many years, and sometimes decades.

Many organizations are now looking for modernization solutions that allow their legacy mainframe environments to continue, while replicating data in real time on highly available Cloud-based platforms (AWS, Google Cloud, Microsoft Azure, etc.). With a “data-first” approach, immediate data replication to the Cloud is enabling government, healthcare, supply chain, financial, and a variety of public service organizations to meet spikes in demand for vital information, especially in times of crisis.

Treehouse Software can help organizations immediately start moving their mainframe data to the Cloud and take advantage of the most advanced technologies in the world with tcVISION

[Diagram: tcVISION overall architecture with general Cloud targets]

Whether an enterprise needs to take advantage of the latest Cloud services, such as big data analytics, artificial intelligence (AI), rapid global database deployments, high-level security, etc., or move data to a variety of newer Cloud or Open Systems databases, the transition doesn’t have to be a sudden big bang.

A Phased Approach

Treehouse Software has extensive mainframe experience and subject matter experts to help organizations incrementally replicate their mainframe data to the Cloud and other modern systems, while keeping both sides synchronized.

Treehouse Software’s expert technical representatives help customers develop a phased plan that includes installation and implementation of the tcVISION mainframe data modernization product, script customization, data replication mapping, high availability, security, monitoring, training, etc.

Once the architecture is defined, the production deployment phase begins with incremental, sprint-like deployments, and additional files are then deployed into production on a regular basis.

This phased plan enables tcVISION to synchronize critical mainframe data to a Cloud or Open Systems database. Bi-directional, real-time data synchronization allows changes on either platform to be reflected on the other (e.g., a change to a PostgreSQL table is reflected back on the mainframe). The customer can then modernize their application on the Cloud, Open Systems, etc., without disrupting the existing critical work on the legacy system.

Additionally, tcVISION customers see drastically reduced mainframe MIPS costs and an increased ability to respond quickly to changes in the business environment.

Enterprise ETL and Real-time, Bi-directional Data Replication Through Change Data Capture with tcVISION

tcVISION uses an intuitive Windows GUI for administration, mapping and modeling, script generation, and monitoring. The product focuses on change data capture (CDC) when transferring information between mainframe data sources and modern databases and applications. Through an innovative technology, changes occurring in any mainframe application data are tracked and captured, and then published to a variety of targets.
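Conceptually, the apply side of CDC replays each captured change against the target database as parameterized SQL. The sketch below illustrates only that general pattern; it is not tcVISION's internal API, and the account table and event shape are invented for the example:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Conceptual illustration of the CDC "apply" step: each change event
    // captured from the mainframe source is replayed against the target.
    public class CdcApply {

        // Simplified change event: operation type, key, and new value
        record ChangeEvent(String op, String accountId, double balance) {}

        static void apply(Connection target, ChangeEvent ev) throws SQLException {
            String sql = switch (ev.op()) {
                case "INSERT" -> "INSERT INTO account (account_id, balance) VALUES (?, ?)";
                case "UPDATE" -> "UPDATE account SET balance = ? WHERE account_id = ?";
                case "DELETE" -> "DELETE FROM account WHERE account_id = ?";
                default -> throw new IllegalArgumentException("Unknown op: " + ev.op());
            };
            try (PreparedStatement stmt = target.prepareStatement(sql)) {
                switch (ev.op()) {
                    case "INSERT" -> { stmt.setString(1, ev.accountId()); stmt.setDouble(2, ev.balance()); }
                    case "UPDATE" -> { stmt.setDouble(1, ev.balance()); stmt.setString(2, ev.accountId()); }
                    case "DELETE" -> stmt.setString(1, ev.accountId());
                }
                stmt.executeUpdate();
            }
        }
    }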

tcVISION – Supported Sources and Targets

tcVISION supports a vast array of integration scenarios throughout the enterprise, providing easy and fast data migration for mainframe application modernization projects and enabling bi-directional data replication between mainframe, Cloud, Open Systems, Linux, Unix, and Windows platforms.

View the Unequalled List of Environments Supported by tcVISION Here



[Badges: AWS Select Partner, Google Cloud Partner]

Contact Treehouse Software for a Demo Today…

Just fill out the Treehouse Software Product Demonstration Request Form and a Treehouse representative will contact you to set up a time for your tcVISION demonstration. This will be a live, online demonstration that shows tcVISION replicating data from the mainframe to a Cloud target database.

Can OpenLegacy help connect PHP with DB2 on the mainframe?

Recently, a poster to a mainframe technical discussion forum asked the question: How can we connect PHP (a popular server-side scripting language designed for web development) with DB2 on the mainframe? Treehouse Senior Software Developer Frank Griffin replied, describing how OpenLegacy could be the answer.  

You have some options here. If all you want is access to the raw DB2 data, JDBC or ODBC access will work fine for you, although you will have to write either C ODBC or Java JDBC code that can be called from PHP to do the deed.
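For example, a minimal Java JDBC program along those lines might look like the sketch below, assuming the IBM Data Server Driver for JDBC (db2jcc) is on the classpath; the URL, credentials, and CUSTOMER table are placeholders. A PHP script could run it via shell_exec() and parse the CSV it prints:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    // Minimal JDBC access to DB2 on the mainframe; prints CSV to stdout
    // so a PHP script can shell out to it and parse the result.
    public class Db2Query {
        public static void main(String[] args) throws Exception {
            // Placeholder host, port (446 is the usual DRDA port), location, and credentials
            String url = "jdbc:db2://mainframe.example.com:446/DSNLOC01";
            try (Connection conn = DriverManager.getConnection(url, "DB2USER", "DB2PASS");
                 PreparedStatement stmt = conn.prepareStatement(
                         "SELECT CUSTNO, NAME FROM CUSTOMER WHERE REGION = ?")) {
                stmt.setString(1, args.length > 0 ? args[0] : "EAST");
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("CUSTNO") + "," + rs.getString("NAME"));
                    }
                }
            }
        }
    }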

If you already have mainframe code accessible via the network (CICS/IMSDC/TSO via TN3270, CICS via TCP/IP) that accesses your data and adds business logic to the mix, you can use the FOSS* OpenLegacy project to mate this logic to your PHP app.

The simplest approach involves using OL to navigate through the green screens (CICS, IMS/DC, TSO, or whatever) to get to the data you want. This is done via the OSS** s3270 scripting 3270 emulator, which creates an XML “trail” file documenting the interactions between yourself and the legacy application over the TN3270 connection. Once you’ve navigated through the series of screens that exposes the data you want to find, you trigger a portion of OpenLegacy which analyzes those screens, identifies logon sequences and unprotected fields on those screens which are linked to client-supplied input data, and generates Java classes. Those classes can, at some later time, use the “trail” file to re-drive the emulator and access the legacy application with client-supplied values for some of the input, in order to obtain the transaction output associated with those input values.

If your mainframe apps are better modularized, i.e., if you have separated the business logic from the display logic, and the business logic can be invoked through a CICS COBOL program designed to obtain its input and provide its output via a CICS COMMAREA, OL can analyze the source code and generate Java classes that invoke those programs directly, without screen-scraping.

And, if what you really want is JDBC, OL can generate a series of Java classes that do all of the JDBC work for you and provide you with methods that simply provide the legacy data with no hint as to where it came from.

OL can also layer additional access software. Once the fundamental Java classes that access the legacy data are in place, OL can generate Java apps that use those classes on behalf of clients using all sorts of modern APIs, including SOA/SOAP, REST/JSON, and Mobile. All of this happens with the push of a button.

All of this is FOSS. You can download the OL code and start using it immediately, and it can do all that I’ve described, out-of-the-box. OL makes their money from selling an Enterprise Edition that includes support and some security and management pretties.

If the only access path from PHP to Java is direct invocation of Java code, you’d have to write a Java stub to interact with the OL classes, but this is going to be a *lot* simpler than trying to write JDBC or ODBC applications on your own.
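Such a stub can be very small. In the sketch below, CustomerInquiryService is a hypothetical stand-in for whatever classes OpenLegacy actually generates for your project; the stub just calls the generated service and emits JSON on stdout for the PHP side to consume:

    // Hypothetical stand-in for an OpenLegacy-generated service class; in a
    // real project this would be generated from the recorded 3270 "trail".
    class CustomerInquiryService {
        record Result(String customerNumber, String name) {}

        Result lookup(String customerNumber) {
            // The generated implementation would drive the emulator here
            return new Result(customerNumber, "PLACEHOLDER NAME");
        }
    }

    public class LegacyBridge {
        public static void main(String[] args) {
            String custNo = args.length > 0 ? args[0] : "000123";
            CustomerInquiryService.Result result = new CustomerInquiryService().lookup(custNo);
            // Emit JSON so PHP can shell out to this stub and parse the output
            System.out.printf("{\"custno\":\"%s\",\"name\":\"%s\"}%n",
                    result.customerNumber(), result.name());
        }
    }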

*FOSS = Free Open-Source Software

**OSS = Open-Source Software



OpenLegacy is the world’s first and only lightweight, non-intrusive solution for automated legacy modernization and enterprise application integration. With its standards-based, open-source platform, OpenLegacy enables enterprises to rapidly extend legacy systems to mobile, web, and cloud applications, delivering risk-free, high-impact results that solve immediate business needs.

[Diagram: OpenLegacy high-level overview]

OpenLegacy’s standard tools rapidly extract the services and information from within legacy systems into an editable format that puts the power of integration into the enterprise’s hands, without the expensive handcuffs of vendor lock-in. Once a business process is exposed — which can be done in minutes — the output can automatically be transformed into stand-alone mobile, web, and cloud applications, and connected with other solutions. Most importantly, no changes are required to the legacy system in order for OpenLegacy to work — the process is risk-free.

Contact Treehouse today for more information!

Treehouse Software Partners with OpenLegacy

By Wayne Lashley, Chief Business Development Officer for Treehouse Software

Mainframe legacy systems, be they Software AG Adabas/Natural, CA IDMS/ADS, CA Datacom/Ideal or VSAM/COBOL, have several attributes in common:

  • They represent an enormous investment and body of proprietary business knowledge and process that are not easily replaced;
  • They are mission-critical, reliable and well-managed;
  • As originally conceived, they do not lend themselves readily to today’s common technology standards and practices: Java, REST, RPC, Web Services and mobile and Web apps — not to mention whatever is coming next.

The last point is a principal rationale for the rise of the legacy modernization industry in recent years. Various ways and means of “modernizing” legacy applications have emerged, and these are well-known and well-documented in the industry, and even here in the Treehouse Software blog and its predecessor, the Treetimes newsletter. After a few years of shakeout and consolidation, it would seem that there is little new under the sun in terms of legacy modernization practices and practitioners. But that’s not so.

It was back in 2013 when we first encountered OpenLegacy, a new entrant in the field and one with a novel approach. Nobody else in the modernization biz seems to be offering a standards-based, open-source toolkit/platform that can open up any IBM z or i legacy environment to provide Web and mobile interfaces, Web services and APIs—without migrating the legacy application or changing its code. With OpenLegacy, the value of the legacy environment can be fully leveraged in today’s technologies — and in whatever comes next.

We’ve been watching the company and its offerings evolve over the months, and Treehouse Senior Software Developer Frank Griffin participated in a technical evaluation and went through OpenLegacy training. We were all pretty impressed—so much so that we recently inked a partnering agreement with OpenLegacy to help market and deliver their solutions in North America. In the post below, Frank discusses one aspect of how OpenLegacy can be used to open up mainframe 3270 applications. Stay tuned for more posts from Frank as our OpenLegacy journey continues.



Treehouse Software is now partnering with OpenLegacy to provide access to legacy IBM mainframe applications for a range of non-mainframe devices, including mobile phones and tablets as well as “heavier” clients like Service-Oriented Architecture (SOA) consumers.

Unlike many modernization approaches, which require a commitment to migrate the legacy application in some way, or at the very least require changes to legacy application code, OpenLegacy adds modern potential client populations alongside the existing application clients with no changes required to legacy code.

OpenLegacy is open-source, and is built using standards-based protocols and other open-source components wherever possible. The starter version is free to download, so you can get started with your testing immediately.

OpenLegacy can interact with legacy applications in several ways, but for simplicity in this initial post I’ll concentrate on just one: access to 3270-based applications.

We’ll assume that you have a 3270 application which can be made to display data required by a non-legacy client by navigating through one or more screens. The data need not all appear on a single screen, but can be spread over several screens. All you have to know is how to navigate through the application to get the data you want displayed.

OpenLegacy development starts with a developer environment which allows developers to describe:

(a) any input parameters needed to display the desired data

(b) the navigation process to arrive at a screen containing data to be captured

(c) the location on that screen of the data to be captured, as well as a description of that data

The OpenLegacy development suite runs as a plugin for the Eclipse Integrated Development Environment (IDE), or as a self-contained IDE installation, which has several advantages: it allows OpenLegacy to piggyback on the rich GUI that Eclipse provides, and it gives customer developers a familiar IDE environment.

The OpenLegacy plugin opens several custom windows within Eclipse:

[Screenshot: OpenLegacy plugin windows within Eclipse]

In the upper right, you can see a window with the current state of the 3270 emulator, which initializes at the standard initial logon screen for VTAM applications. In the upper left, you can see the directory structure which is automatically created for you when you use the OpenLegacy wizard to create a project. In project creation, you specify the target mainframe host, so OpenLegacy can open a TN3270 session to that host.

Navigation directions are entered by having OpenLegacy open a browser-based 3270 emulator which uses a captive copy of the s3270 emulator to interact with your mainframe system under OpenLegacy’s control. You use the browser-based emulator exactly as you would a 3270 terminal to log on, enter requests and input data, and do whatever a 3270 user would do to get to a screen of interest. However, because the 3270 emulator is running under OpenLegacy’s control, OpenLegacy is recording every keystroke needed for the navigation.

You can see the browser-based emulator in the following screenshot:

[Screenshot: the browser-based 3270 emulator]

The browser application captures your keystrokes and feeds them to the 3270 emulator, also capturing the input and output from the emulator. These traces are referred to by OpenLegacy as “trails”, and are saved as XML files. When your project actually executes, this file is used instead of browser keystrokes to provide input to the emulator and scrape data from the emulator output.

When you’ve arrived at a screen containing data of interest, you switch from the browser window back to the OpenLegacy Eclipse GUI, and let OpenLegacy analyze the screen contents, identify the fields on the screen, and allow you to select those of interest and identify them with names and datatypes.

This navigation/selection cycle continues until you’ve identified all of the data you wish captured. At that point, you literally just push a button, and OpenLegacy will generate a complete rich web Java application which will accept whatever input is needed for the navigation through the legacy application, and then use the captive terminal emulator behind the scenes to log into the application, navigate to the screens of interest, and capture the fields of interest.

For Java developers (depending on what you’ve asked for), OpenLegacy also generates front-end web/Java code to use this application to accept input and return output using a number of mobile and web protocols, including SOA, HTTP GET/POST, or REST/JSON, making the legacy application immediately accessible to a wide range of modern devices.

Once compiled and exposed to the client community via a servlet container, these web/mobile applications accept requests in the specified input protocol, extract the input argument data, call the core Java code to drive the legacy application filling in fields and simulating keystrokes as needed, and collecting the desired output from screen fields as previously directed. The collected output is then formatted by the front-end application according to the requirements of the protocol involved, and returned as a response. Your device has no idea that it is interacting with a 3270 application, and your 3270 application has no idea that it is interacting with anything other than a 3270 terminal.
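To make that flow concrete, here is a conceptual servlet sketch of the same request cycle. This is not actual OpenLegacy-generated code; LegacyScreenDriver is a hypothetical stand-in for the generated core classes, and the javax.servlet API is assumed:

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import java.io.IOException;

    // Hypothetical stand-in for the generated core class that replays the trail
    class LegacyScreenDriver {
        static String lookupName(String custNo) {
            return "PLACEHOLDER NAME"; // the real class would drive the 3270 emulator
        }
    }

    // Conceptual sketch of the generated front-end flow described above
    public class CustomerLookupServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            String custNo = req.getParameter("custno");          // 1. extract client input
            String name = LegacyScreenDriver.lookupName(custNo); // 2. drive the legacy navigation
            resp.setContentType("application/json");             // 3. format per the protocol
            resp.getWriter().printf("{\"custno\":\"%s\",\"name\":\"%s\"}%n", custNo, name);
        }
    }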

Oh, and did I mention that the Eclipse container includes a servlet container, and (since it was designed to allow developers to write/generate code and immediately compile and test it) a few additional mouse-clicks compile the generated code, create JARs and WARs, and deploy them to that servlet container? The result is that in as little as an hour from starting this process, you can be interfacing to your legacy application from your phone for testing. All you need is someone who knows how to navigate the legacy application and someone who knows how to navigate the OpenLegacy Eclipse GUI: two completely separate skill sets.

Regardless of what you are considering for the future for your legacy applications, OpenLegacy provides an immediate way to vastly increase your client population without touching a line of legacy code.

If this just seems like screen-scraping to you, look more closely. The differentiator here is the transparent creation of those front-end interfaces with no need for you to understand those technologies. Development using OpenLegacy doesn’t require you to know anything other than how to use the existing application.

In future posts, I’ll discuss other ways to access legacy applications using OpenLegacy via mainframe protocols that bypass 3270 emulation.

Contact Treehouse today for more information!

50th Anniversary of the Mainframe – “IBM 360 Announcement Era”

By George Szakach, President of Treehouse Software

I don’t specifically remember any grand announcement from IBM about the 360 mainframe in 1964. If I heard about it, I probably thought, “Oh well, another computer from IBM”. I’m not even sure we called them mainframes back then, because that’s all there was, i.e., there were no “non-mainframes” (yet).

I didn’t have time to ponder it much. I was busy converting IBM 705 and 1401 programs from assembler and COBOL into Burroughs COBOL at the time. For some of the assembler programs, the company (Westinghouse Transformer Division) didn’t even have the source. We had to somehow interpret the object deck (punched cards of course), and figure out the logic and then make something COBOLish out of it. I did that for two years and automated various aspects of the conversion process, which led some of the older and smarter staff to encourage me to go away and do some work for the Burroughs compiler development division in either Pasadena or Paoli near Philadelphia.

So I sent out my resume and immediately got several responses, the most interesting one being from UNIVAC in Blue Bell, PA outside of Philly. I went to an interview and I was interesting to the boss, because I had been stationed in the army at Fort Huachuca, AZ (where I worked on an IBM 709) and where he also had been stationed. I noticed a memo on the secretary’s typewriter that had me somehow 90% done with the first phase (named Scan/Scramble) of the COBOL Compiler for the UNIVAC 490/494, even though I didn’t yet work there. They offered me $10 more per month than I wanted. Hired!


That was back in the era when UNIVAC dwarfed IBM. But IBM was growing. And they had this new 360 thing. Of course, UNIVAC had to have a competitive product, and they announced the 9200 and 9300 and eventually a 9400. I was made the leader of the group to develop the 9300 COBOL Compiler. The 9300 instruction set, I found out later, was about 99% identical to that of the IBM 360.

One day in 1966, UNIVAC had everybody (hundreds of staff) assemble in the auditorium where they announced this new 9300 technology. I remember most people freaking out because we were going to have to speak in this new language called Hexadecimal. I thought that would be cool, twice as cool as Octal. People were still blown away with Octal where you don’t get to use an 8 or a 9. And binary was old by now. You can’t get too creative with 1s and zeros. So I thought the idea of adding the letters A, B, C, D, E, and F to the number sequence, following 0-9, would be really neat. 10 hex would mean 16 decimal. Awesome. I think we lost many employees over that. They were not ready for 16 “digits”, and 8-bit bytes instead of 6.

We did finish the compiler. It took two years. It took a zillion punched cards. We had a partially working computer (if one worked, they’d sell it to a customer and make a new one). We made trips to out-of-town sites to get the thing done, such as Bethlehem Steel and the UNIVAC 1108 development office in Minneapolis.

The 9300 had a card reader, card punch, printer, 16k bytes of memory, and four tape drives. There was not yet any spinning disc storage device, and we viewed with amusement a nearby computer with a spinning drum. There were no terminals for another 5-10 years, so we carried cards and huge listings around.

I led 22 different people, 15 max at any time, and 9 for the bulk of the effort. The building was about a half mile long, and was quickly maxed out with staff, so they moved the compiler people to an old “Kmart type of place” nearby, called the Atlantic Thrift Center. Hamburgers across the street cost $1.25, so I’d usually get the grilled cheese and ham sandwich, which was only $.75. Most people had beers with lunch back then. Add 25 cents. An attractive young lady named Emilie also worked there at UNIVAC. Hmmm.

It was in that place that I remember working with (or near to) Dr. Grace Hopper, a delightful person that I understood to somehow be employed by UNIVAC while still being in the Navy. She liked me, responding positively to some of my memos on ideas for automating the production of compilers. Some years later, I visited her in the Pentagon when switching jobs, to ask her advice. She did help me find that next job in the DC area. As most people know, Dr. Hopper is no longer with us.

Footage of Dr. Grace Hopper explaining nanoseconds the way only she could…

That’s a little bit of what I remember about that “IBM 360 announcement era” or, in my case, the “UNIVAC 9300 announcement”.

By the way, it was a few years later when one of the other “mainframe biggees”, RCA, got out of the business; its computer division was bought out by UNIVAC’s parent, Sperry Rand. Years after that, Burroughs bought out Sperry. The combined outfit decided it needed a new name, and that is when UNISYS was created. Oh, and IBM got bigger.