The billing application was slow. And not slow in the taking-30-seconds-to-start-up sense, but slow in the ridiculously-freaking-slow sense. Loading an invoice took between ten and fifteen minutes. Updating a line item on an invoice took up to a minute. And saving the invoice back to the database took even longer than loading it in the first place. Clearly, things couldn't stay this way – upwards of 25 minutes to update a single invoice was completely unacceptable. They needed an expert. They needed... The Optimizer.

Kent had earned his reputation by making a simple but dramatic improvement in another application's performance. A database server had been set up with the default cache size, 64KB. Seeing that it could have been using a full 16MB, he changed the setting and performance improved considerably. Simple as it was, he was immediately hailed as a hero, and because of that, the powers that be wanted to put him on the billing application.

For Kent, this was going to be a much bigger challenge. The billing system was notorious not only for its slowness but for its bugginess. Throwing more hardware at the problem would only take them so far, and the fact that some invoices loaded more quickly than others hinted that horsepower wasn't the real issue. Even more troubling, some of the prices just didn't work. Most prices were "marketing rounded" ($599.99 as opposed to $600.00), but for the numbers that didn't work, the staff had workarounds: for $700, for example, they used $699.95, because $699.99 didn't work right. And the highest price they could have was $999.99 because, as the previous developer had told them, the hardware wasn't powerful enough for a thousands digit.

Kent knew where to start – in the database. Searching the application code by variable name was usually an exercise in how soon he'd be reaching for the aspirin; the database was where the answers were. It didn't take him long to find the columns DOLLARS and SENSE [sic]. OK, sure, using floating-point numbers for money has well-known pitfalls, so maybe splitting a price into two integer columns was an intentional choice. The data certainly looked OK: nothing in DOLLARS was above 999, and nothing in SENSE was above 99. Now that he had his column names, Kent could start looking at the actual code.
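To be fair to whoever designed that schema, the instinct isn't crazy. As a quick illustration – mine, not the billing app's – binary floating point can't represent most decimal prices exactly, and naively splitting a float back into dollars and cents really can lose a penny:

#include <stdio.h>

int main(void) {
    /* the classic decimal-in-binary surprise */
    printf("%.17f\n", 0.1 + 0.2);                  /* 0.30000000000000004 */

    /* splitting a floating-point price into dollars and cents
       can come out a penny short: */
    double price = 0.29;
    int dollars = (int)price;                      /* 0 */
    int cents = (int)((price - dollars) * 100.0);  /* 28, not 29! */
    printf("%d dollars, %d cents\n", dollars, cents);
    return 0;
}

Integer dollar-and-cent columns sidestep all of that, which is about the kindest thing that can be said for this schema.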

His search led him to some scary code comments: "Some prices are wrong, I don't know why," "Fix this," and "OH GOD WHAT HAVE I DONE." OK, that last one is an exaggeration, but the comments were very troubling, and worse, Kent couldn't make heads or tails of the code that accompanied them. His descent took him further and further into the twisty passages of the codebase, which, as it happened, was written in assembly. To make the whole system run fast, of course.

Kent eventually stumbled upon a file that stored what was essentially a giant two-dimensional array. In pseudocode (as the original assembler would be too painful), the file looked like so:

decimal[1000, 100] prices =
    {{ 0.00, 0.01, ..., 0.98, 0.99 },
     { 1.00, 1.01, ..., 1.98, 1.99 },
     { 2.00, 2.01, ..., 2.98, 2.99 },
     { 3.00, 3.01, ..., 3.98, 3.99 },
     { 4.00, 4.01, ..., 4.98, 4.99 },
     ...
     {999.00, 999.01, ..., 999.98, 999.99}};

OK, Kent thought, scratching his head. Why would every valid price be sitting in a giant array? The array took nearly 80KB of memory, which partly explained why the binary was so large – but the binary still seemed much larger than that could account for. Kent peeked at the last assembler log and learned why: the array wasn't declared once and referenced everywhere, it was included in full at every point of use. Which was over 100 times.
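What makes the table doubly absurd is that every entry is a trivial function of its own indices, so there was never anything to store – much less to include 100+ times. A sketch in C (the function name and signature are mine; the original, of course, was assembly):

#include <assert.h>

/* every cell of the 1000x100 table satisfies prices[d][c] == d + c/100.0,
   so the whole array can be replaced by one line of arithmetic */
double price_at(int dollars, int cents) {
    assert(dollars >= 0 && dollars <= 999);
    assert(cents >= 0 && cents <= 99);
    return dollars + cents / 100.0;
}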

As for how this price lookup was used, here's some more pseudocode:

pointer=&array;  // location of the start of the giant price table
offset=0;
price_offset=price_in_dollars + price_in_cents;           // dead code: an abandoned first attempt (see below)
price_offset=(price_in_dollars * 100) + price_in_cents;   // the real index: the whole price, in pennies
for(x=0; x<price_offset; x++) {                           // walk up to that index... one penny at a time
    offset++;
}
return *(pointer + offset);

As far as I'm concerned, the ability to write assembly code requires some kind of superhuman intelligence. Reading this code, however, leads me to believe otherwise. Kent's predecessor clearly struggled with it: he had apparently never heard of labels (which let you give a name to a memory address) and instead hand-tracked the memory address of every line of code. This, in turn, led to lots of duplicated and nonsensical lines.

For example, in the snippet above, "price_offset" is set twice. The dead assignment "price_offset=price_in_dollars + price_in_cents" was left in because Kent's predecessor couldn't simply delete code: removing an instruction would shift the memory addresses of all subsequent instructions, which would obviously break every one of those hard-coded addresses.

In some cases – specifically, when the instructions he wanted to remove were an even number of bytes long – he could effectively perform no operation while still occupying two bytes: he simply jumped to the address of the next instruction. But for those pesky instructions that left a remainder of one byte, the programmer just couldn't figure out a no-operation instruction that took up a single byte. Apparently, he never bothered to look up NOP, the one-byte instruction that does exactly that.
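The story never names the hardware, but x86 makes the trick concrete. Purely as an illustration (these are genuine x86 encodings, though nothing says the billing system ran on x86), here are the byte patterns involved, written out as C data:

/* genuine x86 encodings, shown as data purely for illustration */
unsigned char jmp_next[] = { 0xEB, 0x00 };  /* JMP rel8, displacement 0: a two-byte do-nothing */
unsigned char nop[]      = { 0x90 };        /* NOP: the one-byte instruction he never found */
unsigned char jmp_skip[] = { 0xEB, 0x10 };  /* the same JMP rel8 can skip up to 127 dead bytes */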

As it turned out, it never occurred to the original programmer that the two-byte no-op he used (a relative jump to the very next instruction) could just as easily jump any number of bytes forward. And this is where the loop in the preceding code comes in. To convert two integers (one for dollars, one for *ahem* cents) into a single decimal – or, more precisely, a pointer into the aforementioned array – the programmer incremented once for each penny. So the second-worst-case scenario, $999.99, meant 99,999 trips through the loop: several hundred thousand instructions once you count the increment, compare, and jump in every pass, plus the overhead of updating a counter that was never kept in a register. Even in assembler, this is damn slow.
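For contrast, here is roughly what the entire lookup collapses to once you let the processor do arithmetic. This is a sketch in C with invented names, not Kent's actual fix:

/* the loop above exists only to copy price_offset into offset, one penny
   at a time; plain index arithmetic does the same job in constant time */
double lookup_price(const double *prices, int price_in_dollars, int price_in_cents) {
    int price_offset = (price_in_dollars * 100) + price_in_cents;
    return prices[price_offset];   /* one multiply, one add, one load */
}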

What's the worst-case scenario, you may be wondering? A price that is "invalid" – like the aforementioned $699.99. Those prices didn't work for the simplest of reasons: the array's values had been entered by hand, and some had been missed. That's right: all 100,000 entries (0–999 dollars, 0–99 cents) had been typed in manually by the previous developer.
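Even granting the premise that the table had to exist at all, a dozen lines of generator code would have produced all 100,000 entries without missing a single one. Something like this hypothetical C program could have emitted the whole file:

#include <stdio.h>

/* emit the complete 1000x100 price table in the pseudocode's format:
   no hand-typing, no missed entries, no "invalid" prices */
int main(void) {
    printf("decimal[1000, 100] prices =\n");
    for (int d = 0; d < 1000; d++) {
        printf("    { ");
        for (int c = 0; c < 100; c++)
            printf("%d.%02d%s", d, c, c < 99 ? ", " : " ");
        printf("}%s\n", d < 999 ? "," : ";");
    }
    return 0;
}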

Saving prices back to the database worked much like the lookups: the code marched through the huge array one cent at a time until it found an entry matching the price, then split the result into DOLLARS and SENSE. And if the price was one of the invalid/missing numbers, it never found a match and spun in an infinite loop. Because of this, the employees guarded their "bad prices" list fiercely.
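The sane save path is one division and one remainder – no searching, and nothing to hang on. A sketch, assuming the price is carried around as whole cents (the column names come from the story; everything else here is my invention):

/* split a price held as whole cents into the two database columns;
   this cannot loop, let alone loop forever, on any input */
void save_price(long total_cents, long *dollars_col, long *sense_col) {
    *dollars_col = total_cents / 100;   /* DOLLARS */
    *sense_col   = total_cents % 100;   /* SENSE */
}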

With an understanding of how it all worked, fixing the slowdown took all of an afternoon. The system now ran dramatically faster, supported prices up to $999,999,999.99 (ideal, should they ever branch out into yachts), and no longer had any "bad" prices lying in wait to cause infinite loops. Oh, and as a result of the changes, the binary shrank to less than 50KB.
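That new ceiling implies a change of representation, too: $999,999,999.99 is 99,999,999,999 cents, which doesn't fit in 32 bits, so Kent's version presumably stored something like a 64-bit count of cents. A sketch of that assumption:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* 99,999,999,999 cents exceeds 2^32, so a 64-bit integer is needed */
    int64_t max_price = INT64_C(99999999999);
    printf("$%" PRId64 ".%02" PRId64 "\n",
           max_price / 100, max_price % 100);   /* $999999999.99 */
    return 0;
}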

Looking back, Kent is proud of the work he did. After three-plus months spent turning his predecessor's code into something sane, the only lingering side effect is an occasional recurring nightmare: Kent gets hired somewhere new, only to discover that this same predecessor built the application he's now maintaining.
