One of the advantages of working at a large organization is that they're very serious about the integrity of their "mission-critical" systems. By the time a code change makes it through the Development, Integration, Testing, Quality Assurance, and Staging environments, it's practically guaranteed to be bug-free. And if a defect does manage to sneak into Production, no single person can be blamed: it's the fault of The Process for allowing the problem to occur.

A few years back, Josh joined a Certain Telecommunications Company that depended on a "mission-critical" system for its day-to-day operations. He expected to work with a three-stage, or at a minimum, a two-stage production deployment process. Instead, he found himself amidst the classic Zero-Stage Deployment Process.

After a developer finished writing his code on his workstation, he would cede his cubicle to a tester for testing. Once the tester approved the changes, the developer would move the new code directly into production. Bugs would often ensue, requiring a rapid repeat of the cycle and leaving a very small pool of people to blame for the mishap: the developer and/or the tester.

Shortly after Josh started, management gave in and agreed to build a Quality Assurance environment. They were very serious about the new QA environment and wanted to use it not only to test code changes before production, but to usher in an "Aura of Quality." And the best way to do this, they decided, was to make sure that QA was rigorously clean.

The first step in the Clean QA Environment initiative was to build a new network. They didn't want any part of the existing (and potentially unclean) network interfering with QA's operations, so they allowed no sharing of any kind: no bridges, no VPNs, not even a connection to the Internet. This meant that deployments had to be burned to a CD, carried over to the QA lab, and then installed on the isolated network.

They even went so far as to install a "white room" that required fingerprint and optical scans from two executives, a careful dodging of laser-beam alarms, and an acrobatic dance across a pressure-sensitive floor just to access the mainframe console. At least, I assume they did; no server room is truly complete without that.

Of course, through all this, the production environment remained untouched and accessible from anywhere. The global LDAP server that handled all authentication was set up so that everyone had all access: if one could check his email, he had shell access to any production box. This was only "temporary" at first but became "indefinite" as the years passed. As for the new "Aura of Quality," I think a lot of people feel it with the Zero-Point-Five-Stage Deployment Process that came out of all of this: developers push their code to production, make sure it works, and then go through the rigors of bringing it to QA.
