Admin
Say we have a large, complex production environment running. On day D it all runs a beautiful, immutable, monolithic build where every little bit of code was compiled in one big operation from one precise point in version control.
On day D+1 it's suddenly revealed that the VP of Sales has committed the company to being able to interact with a particular out-of-house web service. And the web service turns out to be incompatible on some deep level with the existing code -- most of the required middleware is already there, but the web service insists, for perverse reasons known only to its developers, that it will sometimes send a "305 Use Proxy" response, and the HTTP engine deep down at the bottom of the stack is not prepared to deal with that.
Now some developer spends a number of days recoding the HTTP engine such that it can react properly to 305. This involves rather a lot of hand-refactoring of code that underlies 75% of everything directly or indirectly, and who knows what little corner case may have accidentally been broken in the process. There is no way a complete rebuild based on that change will end up in production without at least a month of meticulous QA.
But the salesman promised that we'd have the web service integration up and running by tomorrow. What to do? Take just the server that provides the new feature, compile it against the bleeding-edge, semi-trusted source tree, and then install a single instance of that in production, where it will interact with the existing, tested servers from the old build. The new server is still high risk, of course, but at least if it fails, the failure will be contained; we can configure everything such that existing customers without insane web services will not be handled by that instance at all. Then start working a fresh complete build through QA at a saner pace.
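The 305 handling the poster says the HTTP engine lacks could be sketched like this (a minimal, hypothetical helper; the function name, return shape, and header handling are my own illustration, not any real stack's API):

```python
def resolve_request_target(status, headers, original_url):
    """Decide how to retry a request after a 305 Use Proxy response.

    Per RFC 2616, a 305 response tells the client to repeat the same
    request through the proxy named in the Location header. A real
    HTTP engine would also cap redirect loops and validate the proxy
    URL; this sketch only captures the core decision.
    """
    if status == 305:
        proxy = headers.get("Location")
        if proxy is None:
            raise ValueError("305 response without a Location header")
        # Retry the ORIGINAL url, but route it through the given proxy.
        return {"url": original_url, "proxy": proxy}
    # Any other status: no proxy rerouting needed.
    return {"url": original_url, "proxy": None}
```

The point of the anecdote is that a tiny contract like this still lives at the very bottom of the stack, so retrofitting it touches everything above it.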
In that particular case I posit that such a piecewise deployment is the best one can make of a bad situation. However, it does mean that Production now runs a chimera of software, where different components come from different points in the source control history. It's not a single monolithic build anymore.
If I understand you correctly, your very vocabulary will refuse to deal with the resulting situation at all, so the process you advocate (whether it is based on whiteboards, or spreadsheets, or specialized tracking software) will be unable to help, not even with managing the task of getting Production back to a purer, less confusing state.
(The problem is of course not limited to production environments. For example, if one has the time to spare, one would want to do some integration testing of the chimera configuration before letting it loose in production, so we'll need to speak about running a chimera in a staging environment.)
Admin
For f%^*'s sake, shoot the salesman. Now.
Admin
To summarize the scenario you described: a "highly experimental" feature needs to be introduced in production to solve the problem for a known subset of users.
This is not an uncommon scenario, and is actually a SOP at some places. There are two good ways to address this (isolation by configuration and isolation by instance) and both are compatible with this process.
The scenario you described is isolation by instance, and requires multiple instances of the software running (load balanced environment, downloaded software, etc). Some instances become "stable", others become "highly experimental"; this yields two, distinct production environments as well. To do this, two versions are created from two different source trees:
Eventually (after sufficient testing), the changes are merged in and the "highly experimental" instances can be shut down. From a deployment standpoint, builds of "highly experimental" releases can be promoted through a different set of environments ("highly experimental dev", "highly experimental testing", etc) or just be promoted straight to "highly experimental prod". The latter is obviously reserved for emergencies.
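Isolation by instance as described above comes down to a routing rule in front of the instance pools. A rough sketch, assuming a customer enrollment list and pool names that are purely illustrative:

```python
# Hypothetical load-balancer routing rule: customers enrolled in the
# experimental feature are sent to the "highly experimental" instance
# pool; everyone else stays on instances running the stable build.
EXPERIMENTAL_CUSTOMERS = {"acme-corp"}  # assumed enrollment list

POOLS = {
    "stable": ["app-01", "app-02"],   # tested monolithic build
    "experimental": ["app-x1"],       # single contained instance
}

def pick_pool(customer_id):
    """Return the list of instances allowed to serve this customer."""
    if customer_id in EXPERIMENTAL_CUSTOMERS:
        return POOLS["experimental"]
    return POOLS["stable"]
```

This is what contains the blast radius: a failure of `app-x1` only affects customers who were explicitly routed to it.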
The other (and slightly easier to manage) isolation looks like this.
This allows "highly experimental" features to be enabled/disabled/tested through configuration instead of through a release process. Of course, it also requires a relatively sane code base to pull off, which is why isolation by instance is used just as often.
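Isolation by configuration boils down to a feature flag guarding the experimental code path. A minimal sketch, assuming a flag name and handler functions that are my own invention:

```python
import json

# Hedged sketch of isolation by configuration: the experimental code
# path is chosen by a config flag rather than a separate release.
# Flag name and handlers are illustrative, not from the article.
CONFIG = json.loads('{"use_experimental_proxy_support": false}')

def do_it_normally(request):
    return f"stable path handled {request}"

def do_it_experimentally(request):
    return f"experimental path handled {request}"

def handle_request(request, config=CONFIG):
    # One deployed build; behavior switches per environment/customer
    # by editing configuration, not by shipping new binaries.
    if config.get("use_experimental_proxy_support"):
        return do_it_experimentally(request)
    return do_it_normally(request)
```

The "relatively sane code base" caveat is exactly the requirement that both paths can coexist in one build without the experimental changes destabilizing the stable path.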
Isolation by instance requires some mad release management skillz, but you end up with a "dashboard" that looks like this:
Admin
Rolling out to 27 sites, each with its own customized configuration and set of features, we always had an installer to keep contention at bay.
His name was Doug.
Admin
In your own words, a build might be something that "can't be built in the first place". Have you considered that the problem might lie in your own confusing nomenclature, rather than just your colleague's insistence on using the more usual definition?
Admin
We don't have builds or all those components; is that bad? Only one big executable that comes with everything needed compiled in...
Admin
Send $10 to the first person on the list, and $5 to the second. THIS IS ENTIRELY LEGAL! Remove the first person, and add yourself to the end of the list. THIS IS ENTIRELY LEGAL!
Admin
Whoooooosh!
Admin
Anonymous_coward said that dumping a zipfile of the entire filesystem in production is the easy way, not that it is the correct way.
Admin
I work in a small environment where we do everything ourselves, so I'm really having trouble following your stages:
- "integration" - Writing the code?
- "testing" - Unit tests
- "staging" - No idea
- "production" - Making it live
Can anyone explain this for me please.
Admin
And put it in job applications.
You know what I hate? Going through every acronym in a job application that I'm not familiar with, just to find out it's something I've been doing all my life. Am I too old?
Admin
If there was an appropriate time for Nagesh to be crowing about CMM Level 5, that time would be now.
Admin
I just threw up a little in my mouth.
Admin
Integration: This is where you put everyone's little pieces together, and realize that they don't fit.
Testing: This is where you prod it and poke it until it falls apart. Then, when it's fallen apart, you realize that it won't fit together again.
Staging: This is where you make sure the gaffa tape holds, and pretend to the customer that it falling apart is a feature if it doesn't.
Production: The customer gets to keep all the pieces when it breaks.
Admin
You forgot the last step: PROFIT!
Admin
Blogging done right
Admin
Or, mapping concepts to your likely environment
Integration: Build all projects that make up your product (may be only one). Run the unit tests.
Testing: Take that build and try it out.
Staging: You likely don't have that.
Production: Put it on the live server.

Better would be:
Integration: Run an automated build of everything on the build server, including a run of all unit tests.
Testing: Take that build and do whatever manual testing you want to get in before hitting customers. Run automated tests (that are not unit tests) if you have them.
Staging: Maybe have an extra server to let select customers try the new version before general deployment.
Production: Put it on the live server.
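The "better" variant above describes promoting one immutable build artifact through a fixed sequence of environments rather than rebuilding at each stage. A tiny sketch of that promotion order (environment names taken from the post; the function itself is illustrative):

```python
# The environment sequence from the post: a build is promoted forward,
# never rebuilt per environment, so what you test is what you ship.
PIPELINE = ["integration", "testing", "staging", "production"]

def promote(build_id, current_env):
    """Return the next environment for a build, or None if it's live.

    build_id is carried along unchanged -- the same artifact moves
    through every stage.
    """
    i = PIPELINE.index(current_env)
    if i + 1 < len(PIPELINE):
        return PIPELINE[i + 1]
    return None  # already in production
```

The key property is that `build_id` never changes between stages: promotion moves an artifact, it doesn't recompile one.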
Admin
Thanks Mark I understand now. Anonymous Cow-Herd I kind of got yours, apart from the staging bit, but now it makes more sense. Cheers.
Admin
The part I hate is going in for the job interview. The culture around FRESH has not only reinvented the technology of RELIC, but reinvented the (spoken) language around it as well. So how do you convince these FRESH-fluent interviewers that you understand the concepts, when you don't know how to speak their language?
I can be a good bullshitter from time to time, but never when it really counts.
Admin
Admin
Whimsy aside,
That's a pretty sensible set of definitions. If you're doing something customer-specific, staging might be where the customer does user acceptance testing before approving it for production.
Admin
Very strange article.
In my line we are heavily audited/examined (we have Deloitte and various state/federal entities looking through our work item tracking/source control/releases/QA process/etc at least once a year) and need to have an accountable system.
When a release goes through initial QA and a defect is found, we do another build. A release may have 20 builds or 2000.
However, once the release makes it past a certain checkpoint, defects found in a release are addressed in subsequent release(s). This is during the final QA and deployment process and may occur before or after a release is deployed to customers. Normally, low impact defects found in this period are not show stoppers and the release proceeds forward. Sometimes, with a significant defect, a release is finished but never deployed and a new (minor or point) release which addresses the defects is created - in which case go back to the top of the previous paragraph and start over.
So imagine you are having lunch with me. My phone rings. I answer. You hear half the conversation ... “Fine! I guess we’ll just do a new release for QA.” ... Alex's head explodes ....
Take a chill pill dude.
Admin
Admin
Admin
As long as it's apples and apples. Throw them a pear and they'll die of hunger.
Admin
I'd just like to thank you for this article - nothing much else to say, I just wanted to give you some feedback and there is no "like" button ;-)
I spent countless hours of meetings and more annoying "if you have a minute" meetings interrupting my zone, ending in more confusion on this subject, starting the conversation from scratch over and over again.
Admin
I advocate using the ARGH process
Apply Random Guesswork Heuristics
Admin
Admin
This is an excellent article, explaining in a concise way the basic concepts of release management. With a little tweaking, it could, and maybe should, be an introductory chapter to any explanation of release management.
Admin
You do the experimental work on a Branch in your source control system, and give that customer a release from the Branch.
Our version numbering system consists of two numbers: An external, ever incrementing version that marketing increment as they see politically fit to keep customers happy (e.g. do we go from 3.5 to 3.5.1, 3.6 or 4.0? Depends on how we want customers to perceive the new release). An internal version: major.branch.subbranch.build. Uniquely identifies the release in a meaningful way, and we can query source control for the exact version of code files which produced it (first step of the build machine? Add a label to every file with the version of the build we're doing.)
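The internal major.branch.subbranch.build scheme described above parses naturally into a comparable tuple, which is what makes it useful for querying source control. A sketch (the field names and helper are my own, not the poster's tooling):

```python
from collections import namedtuple

# The internal version scheme from the post: major.branch.subbranch.build.
# As a namedtuple, versions compare element-wise, so ordering builds
# within a branch falls out for free.
Version = namedtuple("Version", "major branch subbranch build")

def parse_internal(version_string):
    """Parse e.g. '3.1.0.214' into Version(3, 1, 0, 214)."""
    parts = version_string.split(".")
    if len(parts) != 4:
        raise ValueError(f"expected major.branch.subbranch.build, got {version_string!r}")
    return Version(*map(int, parts))
```

Labeling every file with this string at build time (as the poster describes) is what makes the mapping from a deployed binary back to exact source revisions a simple query.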
Admin
There's nothing inherently wrong in using different terminology between release management and configuration management. But they'd better not be so different that the terminology itself invites confusion ...
But if the crazy new way requires changes to basic framework/utility code that things which are not supposed to crash depend on, you will need to copy-paste clone all of that code in order to be sure that DO_IT_NORMALLY() indeed does it exactly normally. That results in a source tree with rampant code duplication, which entails maintenance problems all of its own.
Admin
How and where to branch is a source-control issue, which is distinct from release management (though they obviously inform one another). The point here is that getting the experimental feature to work at all requires deep source changes that threaten the stability of the existing non-experimental features. Because everything that serves non-experimental customers has to remain stable, there must be a run-time interface between code from the stable build and an experimental build somewhere.
Admin
Good point; from a configuration management standpoint, they don't need to be separate environments. I guess in this case, I would set the deployment plan like...
Obviously for later environments, you could simply use artifacts for that build instead of compiling.
Indeed. There are a ton of architectural patterns that can help, but they need to be in place early on. IOW, you shouldn't add a factory pattern for "emergency" isolations - that can come later (and be done carefully).
Admin
If there is so much confusion between the contextual definitions of an ambiguously defined word, then the best solution isn't to write a lengthy article discussing the difference between the meanings. It is to assign a DIFFERENT word the less common / more formal definition. No confusion thereafter.
My brain was constantly confused by the article. I liked it, but I had to read it twice, just because my already trained mind kept assigning the default definition to each and every instance of the word "build" - the commonplace, vernacular one, the (I get the feeling) "wrong" one. But that's just what we've used since we advanced from QBasic to C, way before high school and college... that's the way it is for most of us my age...
Admin
Every SVN commit should be a release. You should release 50 times a day! How else can you get immediate feedback from customers on what you're working on?
Don't believe me? We've done it for years! http://engineering.imvu.com/ If you actually get your ENTIRE code base under test, it works, and it's fantastic!
Admin
The common-place vernacular sense of "build" has to do with assembling pieces of timber and fired clay into a house. Any other meaning you might want is idiosyncratic jargon, and you're not going to make yourself understood by waving phrases like "the default definition" about without actually specifying which meaning you're going for.
Admin
omg, do you work where I work? wait no, we aren't fixing our deployment process.
Admin
I suggest you have a look at the book on Continuous Delivery:
http://continuousdelivery.com/
the build pipeline pretty much addresses most of the points you talk about by introducing visibility in the release process and tying all these aspects together: building, testing, deploying.
Admin
this is one of the best articles i've read in a long time, very well written and full of meat, not just all talk. thanks!
Admin
I like this article, any thoughts on using tools such as Plutora for release management? http://www.plutora.com/release-management-software/