If there's one thing the new development manager has, it's tenacity. Joshua had been maintaining his company's overly complex software for a while, and he found Dave's eagerness and dedication to learning admirable.

Dave's training was fast and brutal. On his first day, he was thrust into a task that only the QA lead had done before: deploying the latest software to the build server. Dave had to learn it because he'd serve as the backup whenever the QA lead was out (and, coincidentally, the QA lead was out on Dave's first day).

Fortunately, the QA lead had recorded some high-level instructions for deployments in the company's internal wiki (a scripted take on the same steps is sketched after the list):

To build server:

  1. Determine what version the new server is
  2. Run buildserver.bat VERSION_NO
  3. Make sure the build succeeded. Run automation tests.

To deploy server onto QA:

  1. Log into QA server
  2. Take down server
  3. Run necessary SQL commands
  4. Copy over new jar files
  5. Bring server back up

To deploy server onto production:

  1. Log onto production server
  2. Take down server
  3. Run necessary SQL commands
  4. Copy over new jar files
  5. Bring server back up
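Nothing but the headings distinguishes the QA procedure from the production one; the five steps beneath each are word-for-word identical. A wrapper script that refuses to run without an explicit target forces the operator to say which environment they mean. Below is a minimal sketch in Python; the hostnames, paths, and service commands are all invented for illustration, since the wiki records no actual commands:

    #!/usr/bin/env python3
    """Hypothetical deploy helper. Hostnames, paths, and service
    commands are placeholders; the real steps lived only in the wiki."""
    import argparse
    import glob
    import subprocess
    import sys

    # Assumed hostnames -- the story never names the actual servers.
    SERVERS = {
        "qa": "qa-server",
        "production": "prod-server",
    }

    def deploy(host: str) -> None:
        # Mirror the wiki's steps: take the server down, run the SQL,
        # copy over the new jar files, bring the server back up.
        subprocess.run(["ssh", host, "service appserver stop"], check=True)
        subprocess.run(["ssh", host, "/opt/app/run_migrations.sh"], check=True)
        jars = glob.glob("build/*.jar")
        if not jars:
            sys.exit("No jars under build/ -- make sure the build succeeded.")
        subprocess.run(["scp", *jars, f"{host}:/opt/app/lib/"], check=True)
        subprocess.run(["ssh", host, "service appserver start"], check=True)

    def main() -> None:
        parser = argparse.ArgumentParser(description="Deploy the latest build")
        # Requiring an explicit target makes "deploy everywhere" impossible
        # to do by accident, headings or no headings.
        parser.add_argument("environment", choices=sorted(SERVERS))
        args = parser.parse_args()
        if args.environment == "production":
            if input("Really deploy to PRODUCTION? Type 'yes': ").strip() != "yes":
                sys.exit("Aborted.")
        deploy(SERVERS[args.environment])

    if __name__ == "__main__":
        main()

Run as "deploy.py qa" it touches only the QA box; "deploy.py production" stops and asks for confirmation first.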

When one of the developers came to Dave saying "I just made a massive checkin that changes how virtually everything is cached," Dave was ready to zip into action. The developer went on to explain that there was a high probability that the changes would break existing functionality on several pages and that they'd need at least a week to test it all. With a smile and a can-do attitude, Dave said he'd get it ready and email him when it was done.

Several hours later, the production server went down. Everyone scrambled to figure out what had gone wrong, and the same developer who had asked Dave to deploy the changes went to Dave's office to make sure everything was OK.

Without missing a beat or showing the slightest hint of worry, Dave cheerfully said, "OK, just give me one sec... OK, try it again." The developer excused himself and verified that the production server was, in fact, online again.

Anxieties eased and production back to normal, the developer had time to catch his breath and investigate the source of the problem. He returned to Dave's office and began, "I'm really sorry to keep bothering you today, but something isn't sitting quite right with me. When you were deploying my changes to QA-"

Halfway through the question, the answer dawned on him. "W-when you were deploying, w-which environments did you d-deploy to?"

Dave retained his smile, but his eyes narrowed. Through that smile, he asked, "What do you mean? I followed the documentation."

What Dave hadn't done, though, was stop after the "To deploy server onto QA" section. He'd totally ignored the headings and followed every step in the plan. The potentially-everything-breaking change? After some panicked smoke testing in production, it appeared that everything was working OK. They had gotten lucky.

Everybody won. The developer's change worked. The business didn't have to run its usual expensive testing cycle. Dave executed the deployment perfectly (aside from jumping the gun on production). And he took the opportunity to update the documentation (and increase the font size of the headings).

The moral? Always deploy changes straight to production without testing them.

Wait, that can't be right.
