The year was 1993, and that meant one thing: Old Iron was finally ready for scrap-yard retirement after nearly fifteen years of faithful service to the university. Technically, the MVS-based mainframe had been well past its prime for quite a few years, but since it was used primarily as a data repository for research projects, no one seemed to mind. What they would mind, however, was any downtime in the transition to the new, UNIX-based research computer, so it was up to Todd M. Lewis to figure out how to ensure things went smoothly.

To give researchers the opportunity to learn their way around UNIX and adjust their processes for the new environment without disrupting ongoing work on Old Iron, Todd set up a migration process that would pull MVS data sets from the backup system (so as not to interfere with “live” data sets users might be using) and copy them to an archive on the UNIX server. From there, users could check out copies from the archive and work with them under UNIX to hone their processes. If they screwed up the data, they could simply check it out again.

Back in those days, the university was a large SAS shop, and most of the research projects used SAS’s data management and statistical capabilities quite extensively. As such, it seemed only logical to use SAS on the mainframe to manage the migration itself. Not only could SAS track which data sets had been transferred, which were scheduled to be transferred, and so on, but it also had a communications module that could transfer the data sets themselves.

Following the trend of other universities, the UNIX computer was named “gibbs” in honor of the father of physical chemistry himself, Josiah Gibbs. Using the SAS connection module, it took all of three lines to establish a connection from Old Iron to gibbs.

  %let gibbs='74.50.106.245';
  options remote=gibbs comamid=tcp;
  signon gibbs;

That worked like a charm and zipped data right over. But shortly after developing the code, Todd realized something: using 74.50.106.245 meant going through the public interface, a mere 10 Mbit connection shared by everybody across the university. Since Old Iron and gibbs lived four feet from each other in the same room, it seemed only logical to connect the two machines through their unused fiber interfaces. That way, data could be transferred without taking network bandwidth from users.

Todd found a fiber link, plugged gibbs directly into Old Iron, configured their respective fiber interfaces, and changed the value of the "gibbs" variable to the fiber's private IP address.

  %let gibbs='10.0.3.44';
  options remote=gibbs comamid=tcp;
  signon gibbs;

A quick test confirmed that the data sets migrated to their new home without a hitch, so Todd turned on the SAS migration management application, which would transfer data 24 hours a day, 7 days a week. Whenever MVS users updated their data, the system would pick up the change and copy the new version of the data set into the UNIX archive. Life was good.

Soror Universitas

A little more than a year later, Todd got a call from Doug, his counterpart at a sister university, who had found himself in a familiar situation: they had an ancient MVS-based mainframe named Rusty Iron that they wanted to migrate to a newer UNIX-based server. The kicker was that they couldn’t have any downtime.

It was Doug’s lucky day. Not only did the universities happen to be running the same outdated version of SAS, but they also had identical backup systems. And as an added bonus, Todd had developed the UNIX code to be POSIX-compliant, so porting would be a non-issue.

A few weeks later, when everything seemed ready to go, Todd made a single tweak to the migration routine, changing the “gibbs” variable to 192.168.4.33, the UNIX system’s IP address, and then ran the script. But instead of a successful run, he got an error:

gibbs: invalid userid or password

He verified the credentials, manually logged in, and ran the script again. Then he changed the permissions and ran it again. Then he recreated the account and ran it once more. And again. And again. No matter what he did, the SAS routine simply refused to connect to 192.168.4.33.

Out of frustration and a lack of any other ideas, Todd decided to change the name of the variable.

  %let die_gibbs_die='192.168.4.33';
  options remote=die_gibbs_die comamid=tcp;
  signon die_gibbs_die;

This time, it ran just fine. He did a double take and ran it again. Changing the variable name back to gibbs, however, brought back the same connection error.

What's in a Name

With no websites or forums to consult about the strange behavior, Todd turned to the SAS documentation, which stated:

Do not choose a macro name that is also a valid host name
on your network. SAS first attempts to reach a network host
with the value of the REMOTE= option (in this example, MYNODE).

Todd did another double take. While it certainly explained the problem at hand, it also meant that if somebody registered a host with the same name as a macro variable in production code, that code would suddenly stop working (at best) or start connecting to the wrong machine (at worst).
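The lookup order the documentation describes can be pictured with a short Python sketch. This is a simulation of the behavior, not SAS internals; the dictionary standing in for DNS and the addresses are taken from the story, purely for illustration:

```python
# Simulate how SAS resolves the REMOTE= option, per the quoted docs:
# the literal name is first tried as a network host, and the macro
# variable's value is used only if no host by that name exists.

def resolve_remote(name, macros, dns):
    """Return the address SAS would connect to for REMOTE=name."""
    if name in dns:                  # a registered host by that name wins...
        return dns[name]
    return macros.get(name)          # ...the macro value is only a fallback

dns = {"gibbs": "74.50.106.245"}     # the public interface, via DNS
macros = {
    "gibbs": "10.0.3.44",            # the private fiber link (ignored!)
    "die_gibbs_die": "10.0.3.44",
}

# "gibbs" is a real hostname, so the macro variable is silently ignored:
print(resolve_remote("gibbs", macros, dns))          # 74.50.106.245

# No host named "die_gibbs_die" exists, so the macro value is used:
print(resolve_remote("die_gibbs_die", macros, dns))  # 10.0.3.44
```

Under these rules, renaming the variable to something no sane administrator would ever register as a hostname is exactly why Todd's workaround succeeded.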

And then he realized something else. Because the hostname “gibbs” at his university resolved to 74.50.106.245, SAS had been ignoring his macro variable all along: the transfer routines had been running for more than a year with a perfectly good, dedicated fiber link lying under the floor between Old Iron and gibbs, never having sent a single bit through it.

Even though there were only a few days before Old Iron was to be shut down for good, he renamed their "gibbs" variable and finally moved a few data sets across the fiber. Not that it really mattered by that point.
