Admin
That's still $100 I'd rather spend on something else than on being required to put more memory in my computer because some programmer thought his time was worth more than mine. Yes, that's a little selfish of me, but think about it: do you value your time more than mine? I'm sure you do, just as I value my time AND money over those of some other programmer I don't even know :)
Admin
Years ago I heard one of the best pieces of programming advice I've ever heard: A co-worker said that his mother always told him, "Never walk through a room without picking something up." He applied this principle to programming: Every time he had to make a change to a program, he would make a little extra effort to do some clean-up, so that the program was now more efficient, more maintainable, or otherwise generally better than it was before.
The beauty of this approach is that you don't have to devote hundreds of man-hours to a dedicated clean-up project. You just tack a few extra percent onto every project, probably an amount that would be lost in the general estimation errors anyway.
Unfortunately, I've been in many jobs over the years where management forbids this approach, on the reasoning that fixing anything not directly related to the current change introduces the risk that you will create new bugs. This is true, of course, but the inevitable result of that philosophy is that you doom yourself to an entropy spiral: Every change makes the system a little worse, which means that every change becomes more difficult to make as the structure becomes more rickety and unstable, until the whole thing collapses.
Admin
Let's please stick to the topic at hand: Commodore PET BASIC and VIC-20 architecture.
Admin
For crying out loud, you could be going bankrupt while you're optimizing your code.
Admin
Answer #1: Buy a magic silver stake to kill the zombie. (Oh, wait, my wallet was stolen, I don't have any money. Bummer.)
Answer #2: All the computer stores in the world are out of stock on memory chips. What do you do now?
Admin
It's the principle of the thing, though. Two years from now, somebody who didn't write the code is going to have to maintain code that is full of crap, and they're going to have a lot of trouble finding and fixing errors. Even if it's 'not that big a deal' to add a couple of gigs of RAM to the box, it's much better to get the job done right the first time and not have to deal with issues caused by bad code later.
Admin
Anyone who said there was nothing in the article indicating a memory leak is the real WTF. Go learn how to debug software.
The server needed to be rebooted regularly, and with increasing frequency. When it got to the point of rebooting every day, they decided to do something about it.
Hey, guys, if there isn't a memory leak, then rebooting doesn't help. Period.
The only thing a reboot does is release resources that are no longer needed. If the server can run through the day, including, of course, the peak hours, and after a while -- originally a few days -- gets to the point where it needs to be rebooted, and rebooting helps, then there are unneeded resources sitting in memory. That is, a memory leak.
So, to all of you who think buying memory would have solved the problem and there was no evidence of memory leaks... well, LEARN something. It might help you one day.
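For what it's worth, the classic shape of the leak being described looks something like this minimal Java sketch (the class and method names here are invented for illustration, not taken from the article):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only -- the names are invented, not from the article.
// The classic web-app leak: per-request data parked in a static collection
// that nothing ever clears. Memory climbs with traffic until a restart
// (or an OutOfMemoryError) throws it all away.
public class SessionTracker {
    private static final List<byte[]> RECENT_REQUESTS = new ArrayList<>();

    static void recordRequest(byte[] requestSnapshot) {
        RECENT_REQUESTS.add(requestSnapshot); // added on every request...
        // ...but never removed, so the heap only grows. Rebooting "fixes" it
        // by discarding the collection along with the rest of the process.
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            recordRequest(new byte[10_240]); // simulate ~10 KB retained per request
        }
        System.out.println("Retained snapshots: " + RECENT_REQUESTS.size());
    }
}
```

Adding RAM to a box running code like this only stretches the interval between restarts; it never flattens the growth.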
Admin
Bah!! An 8051 has 128 -Bytes- (not KB) of memory. Some of this is unavailable because it is memory-mapped to chip registers. Try fitting the heap and stack of your fancy C program into one of these.
Admin
It could just as easily be a memory leak as a scalability problem.
Admin
Ok well then I pick the $60 and immediate fix.
Oh wait, it's not immediately fixed. It just got some duct tape on it.
Ok, so it's been another 5 months and it's crashing daily again. Oh crap, what now? $60 more and another immediate "fix"?
Ok, 5 months later and it's crashing daily again...
Repeat ad nauseam.
Sorry but the 5 month fix is going to be worth way more in the long run. The app will run better, customers will be happier and you won't have to worry about the server crashing. We might even be able to create another site and put it on the same server because of the extra resources we saved.
As for boxed apps vs. web apps, I still think it's relevant. Both the boxed app that requires the end user to have 8 GB of RAM and the web app that barely functions are going to piss off the end user. The result is the same: users who don't like and don't use your product.
So now you are the manager. Which do you pick: income flowing from end users, or nobody using your website and getting laid off when the company "restructures"?
Admin
But it only cut the memory in half. So where the old version would require 4GB, the new version would require 2GB. Not a lot of difference. And I suspect that once they reach that, they could probably use a second server as well.
Admin
I agree. A business is about making money. It may be a bad implementation, but if they can fix the problem with a cheap purchase, it is the right thing to do. That is hard for us to accept, since we take pride in our work. But sometimes the cost of perfection is too high.
Admin
Buy a second server? You can buy a lot of servers with 25 months of paid work. You forget that the memory saved was under 50%. If the application lasts for 5 years and you have a total of 4 servers, the improved version would still require 2 servers. Would you rather spend 25 months rewriting the code, or spend a month's pay on a couple of servers?
Admin
I did not say that nothing should be done as a stop gap while the real fix takes place.
Admin
If you have a boss that doesn't understand the difference between a stopgap and a permanent solution, make sure that he'll take your word or go get another boss.
Admin
Actually, they probably won't... because they won't have any more problems, so he'll be forgotten. A pity, really.
Admin
Regarding just getting it working the cheapest way possible...
My brother is a CPA. He has a small office with ~10 employees. Each PC has its own licensed copy of the various accounting programs, but all are mounted from the network server.
Everything was running Win-98. Over the years, each annual upgrade of the accounting software would suck up more and more system resources, until the only way to do two things was to exit the current program, start the other one, and then switch back when done.
I warned him numerous times to upgrade his hardware gradually, but he complained that it would "cost a lot of money", and that the programs he used weren't guaranteed to work with the new OS.
He put me in touch with the place that made the accounting software. I chatted with the lead developer, who essentially told me that they didn't care about RAM usage; they just had to get the software out the door for tax season each year, and they just couldn't worry about their customers' PCs. If someone left the software running too long and it ran out of memory, the solution was to tell the customer to reboot. They were also extremely far behind the curve (3 years) in supporting each new OS, which made upgrading your OS when purchasing a new PC something of a challenge.
Then it finally hit the wall. Most of the apps sucked up so much ram that they wouldn't even load. He wound up buying new machines. But wait, Win-98 didn't have drivers for the newer hardware, so he had to upgrade to XP. But wait, the apps wouldn't run correctly under XP. Ok, compatibility mode. But wait, printing didn't work quite right. Ok, get just the right printer drivers. But wait, the network file server wouldn't talk to Win-XP correctly. Ok, upgrade the server and the OS to a server version of windows.
And so it went for 3 weeks. During tax season. When the work was piled high and nobody could get anything done because of all the problems.
It wound up costing him more than $100K in lost business because he didn't want to do $20K worth of hardware/OS upgrades while the apps he needed kept getting kludgier by the day.
There's nothing he can do about the apps being behind the OS curve, but he now upgrades 1/3 of his hardware annually - it's essentially become a fixed annual cost.
There are real costs associated with putting out crappy software, and taking shortcuts just pushes the cost onto someone else.
Admin
Economics: you fail it!
Management spent 2 man-years worth of time that could have been used on other projects. Those man-years aren't free.
Admin
Can you imagine how.... um.... strained working conditions would be with Wilhelm following that meeting....
The real WTF IMHO is .... why didn't anyone else including "bob" think of that before said meeting....
Admin
How about the $60 fix, then fixing the memory leaks in chunks? Release every month or so until it's stable and nice again (I'd guess about 2-3 months).
Admin
64k? 32k? 16k? 1k?
Pah, luxury!
Admin
As for the article, I'm going to make an assumption of my own, and then back it up with logic: 2 gigs of RAM would have allowed them to survive for at least 20 months.

If the problem is purely memory leakage -- that is, the entirety of the 250 megs they saved was from memory leaks -- then that's 250 megs per day lost. 2 gigs = 250 megs app overhead + (250 megs per day) * 7 days. Put in a cron job to restart every Sunday morning, and they survive indefinitely. I'd wager that it was, in fact, closer to 500 megs of overhead and only a few megs of leaked memory, but that only improves the figures.

On the other hand, if it's a scalability problem -- more pages and more users mean more memory -- then my simple assumption is that in the 5 months it took to work in the fixes, the frequency of reboots did not increase drastically. If we assume that the entirety of the 500 megs was "cached" data (i.e., the app had no overhead at all, with each page/user taking up a portion of that full 500), then the user base can continue to grow linearly for 4 times as long (20 months) before bumping into the edge of RAM. If they took a few hours to sit down and work out the prospects, they might decide on 4 gigs instead and schedule a server upgrade in 3 years to something that can handle more if they need it. I would be surprised if a web server continued to grow indefinitely like that, however. My experience is that they tend to plateau eventually.
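The arithmetic in that estimate is easy to sanity-check. Here is a rough Java sketch using the same assumed numbers (a 2 GB ceiling, 250 MB of overhead, 250 MB leaked per day -- the commenter's assumptions, not figures from the article):

```java
// Back-of-the-envelope estimate of how long added RAM buys you,
// under the commenter's assumed numbers (not figures from the article).
public class LeakBudget {
    public static void main(String[] args) {
        double totalRamMb    = 2048; // assumed: 2 GB after the upgrade
        double appOverheadMb = 250;  // assumed: steady-state footprint
        double leakPerDayMb  = 250;  // assumed worst case: all the savings were leaks

        double daysUntilFull = (totalRamMb - appOverheadMb) / leakPerDayMb;
        System.out.printf("Days between required restarts: %.1f%n", daysUntilFull);
        // ~7.2 days -- hence the suggestion of a weekly scheduled restart.
    }
}
```

Under those assumptions the box lasts about a week between restarts, which is where the weekly restart-by-cron suggestion comes from; a smaller leak rate only lengthens that interval.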
Admin
It depends on the application. It all depends on the complexity and the number of users. At my old work, we once ran a small J2EE server on a desktop that was used by someone else to program, browse the internet and email. The server ran in the background while it was in development, and was actually used quite a bit before we moved it to a dedicated server. The machine was running Windows 2000 with 256MB RAM!
You are talking about the memory limit for a single PHP process. The numbers above are for all the processes combined, including the actual Java-based web server.
Admin
Naw, he's an M$ OS developer...
Admin
Too many companies today take the quick and dirty way out for fixing problems that are core to their business. The fix maybe works for a short while but if they don't address the real problem then it usually comes back - often in a bigger way. Code cruft happens in the real world, but eventually it needs to be cleaned up. Badly architected code needs to be refactored. The sooner the better.
Building more and more badly crafted code on top of the previous badly crafted code is not a solution that will work in the long run, but it is how too many companies work nonetheless.
I agree that they should have upgraded the RAM as a temporary fix, then gone in and cleaned up the code as time allowed. The problem is that most companies would not have let the code cleanup proceed; they would have said, "The extra memory fixed the problem, why should we invest in cleaning up the code?" The server would have had to be rebooted once a week instead of once a day, and as they added functionality it would have just gotten worse and worse.
So, it is a good thing that the code was cleaned up (not sure about the methodology, or the time spent, but it was better than just letting it get worse).
There is a balance between doing what is necessary in the short term and what will keep you in business in the long term. Any company that doesn't address both short and long term issues will eventually go out of business.
Admin
Management skimped on those 2 years during the initial write with every intention of spending the time to do things correctly once the app was deployed. They made a conscious choice to sacrifice quality for speed to market, hoping that the number of customers turned off by a shitty product wouldn't outweigh the number of customers who wanted to get their hands on it.
Admin
This is going too far...
I've only one bit of memory in my brain, you insensitive clod!
1
0
1
Impressed? Eh!
Admin
Listen, lad. I built this kingdom up from nothing. When I started here, all there was was swamp. Other kings said I was daft to build a castle on a swamp, but I built it all the same, just to show 'em. It sank into the swamp. So, I built a second one. That sank into the swamp. So, I built a third one. That burned down, fell over, then sank into the swamp, but the fourth one... stayed up! And that's what you're gonna get, lad: the strongest castle in these islands!
(But I don't want any of that...)
Admin
I find it particularly hard to believe that they did (new SomeClass()).method(foo, bar); which, while surely a possible WTF, can't cause out-of-memory errors either (and the GC overhead for such cases is typically minimal).
-edit- Dang. I just repeated this: http://thedailywtf.com/Comments/That-Wouldve-Been-an-Option-Too.aspx?pg=3#206640
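To illustrate the point -- SomeClass, foo, and bar are just the placeholders from the comment, not anything from the article -- here is a minimal Java sketch of why a throwaway instance per call can't leak on its own:

```java
// Sketch of the pattern the comment doubts: a throwaway instance per call.
// The temporary SomeClass object is unreachable as soon as method() returns,
// so the garbage collector can reclaim it. By itself this wastes a little
// allocation/GC work but cannot cause an OutOfMemoryError unless something
// (a static cache, a listener list, ...) keeps a reference to it alive.
public class ThrowawayInstanceDemo {
    static class SomeClass {
        long method(long foo, long bar) {
            return foo + bar;
        }
    }

    public static void main(String[] args) {
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            sum += (new SomeClass()).method(i, 1); // temporary dies after each call
        }
        // Heap usage stays flat no matter how many iterations run.
        System.out.println("sum = " + sum);
    }
}
```

Each temporary becomes garbage the moment method() returns; only retaining a reference somewhere turns this pattern into an actual leak.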
Admin
Triple bah! Real programmers cut out or installed jumper wires to modify the logic of a "program"!
Admin
It is not uncommon to have requests take 5 months to come back signed and approved. That's if it's approved. Otherwise you have to repeat the request, and it may take a little longer the second time.
Yes, I worked in a smaller company (50 employees), where I could just call the CEO (he had the money, but officially wasn't even my boss -- I was an independent entity) to tell him about the purchase. And yes, that took about one minute.
Admin
Yea, but if it works you just saved yourself tens of thousands of dollars.
Sure, we'd all like to run nothing but perfect code, but in the real world that's not always an option. I dealt with a system once, a real piece of... work... and the only real solution was to shoot it in the head, bury it in a shallow grave, and start from scratch.
The company that was wedded to the unfortunate system was unwilling to do this, and everyone they called in to fix it told them the same thing. So, eventually they call me in, and I say, "Yea, it's a dog. You should toss it."
They say, "Is there no other possible solution?" I say, "Well, have you tried throwing more hardware at it?" Blank looks all around.
So I went out and spent 10,000 dollars on a new huge kickass server, and the goddamn thing ran like a champ. Yea, it sucked up a ton of memory, yea it was an unmaintainable piece of crap. But I saved them a quarter million dollars, bought them five years time to shop a new solution, pocketed a fat fee, and left a happy customer behind.
When the only answer you're willing to give to someone's question is "Pay me to rewrite it from scratch," you're just trying to drum up business for yourself. There is always another way.
Admin
Hah! Try 768 bytes (512 12-bit words), not of memory but of instruction space! That was just last year, too.
Admin
I wouldn't want to work anywhere where I couldn't. (And I haven't.)
Admin
3.5k!? That would've been heaven! I started with a whopping 1k on my ZX-81.
Admin
There was a post on Slashdot a couple of years ago specifically asking the community about open source solutions for write-only storage. For some federal reporting reasons (SOX?) it was required: logs had to be kept that were write-only. And the system had to be certified, so no home-brew, garage-built solutions; they wouldn't stand up in court.
"I really, really hope you're kidding..."
No, there's more to life than what we expect. We get surprised every day.
Admin
Today. What about tomorrow? Now, if this is an application that doesn't matter, has a short life cycle, or is already beyond hope, then sure, apply some duct tape. However, if this is a mission-critical application with a long expected lifetime, then spending the money required to fix the core problem now will likely save you many times over in the long run. Of course, it all depends on the actual situation, but the story gives the impression that this was an early-life application that made up the core of their business, and thus really should not have had a fundamental problem fixed with duct tape.
A stitch in time saves nine.
It's not about wanting to run perfect code. It's about applying the right solution to the problem. Sometimes that solution is duct tape, sometimes it's not.
This is exactly what is being suggested. Apply the duct tape AND start working on a real solution. TFA implies that only the application of duct tape was warranted. That's what is objectionable.
Admin
???
Totally depends on: lines of code, complexity, and the experience of the developers.
Care to share those details with us? Because I don't remember the article stating anything except that it was complex spaghetti code.
Admin
It's still relevant if you think $60 is the total cost of simply throwing in the memory without fixing the underlying problem. It's in-house, so no, you won't have customers having their time/resources wasted when things screw up again (a possibility you should assume is quite likely if you never bother to fix the crappy code) -- you'll just be wasting your own company's time and resources when it screws up again. Why should those costs not also be considered when deciding not to try to fix the underlying issues? The cost of the RAM is $60. The trick is that the cost of not making any improvements to the software is not $0.
All those costs may still end up being cheaper than a real fix. In most cases, they'd probably be cheaper than this particular redevelopment (5 guys, 5 months). Or, this company may have been large enough, and this app important enough to them, with a long enough expected lifespan, that the thorough overhaul was a good use of their resources (except then they still should've added the RAM to get it running better in the meantime right up front). But that's basically beside the point: there are choices between "do nothing to fix it" and "devote five people to five months of working on reducing memory usage full-time." But the risks of completely ignoring the underlying problem after applying the short-term fix are way out of proportion to the cost of, say, at least looking into what the apps' biggest problems are, and then having the information you need to be able to make a reasonably informed decision about whether or not it's worth fixing some of those problems. Otherwise you're just hoping that your program, which is already exhibiting unexpected behavior, will not exhibit any worse unexpected behavior if things change in the future. 90% of the time this may be the case, but think of it like insurance: spending more money than you "have to" in order to limit the chance of total disaster.
Dealing with risk is important -- it's not something a business can wisely ignore (well, at least if they're not in the sort of monopoly position where it doesn't matter how badly they screw themselves over, because their revenue is pretty much guaranteed). Bad, unpredictable code is uncertainty. Uncertainty is risk. If you're just trying to buy yourself some time to replace it, at least do some actual legwork to verify that it should buy as much time as you expect, and give yourself plenty of margin for error.
Admin
On a similar note, a few years back a friend had written a proxy server for rendering foreign character sets (Chinese/Japanese IIRC) using images (back when Unicode support was very limited, if that good). Being an experienced programmer who knew about high performance HTTP daemons (having just completed the same course as the guys who started Zeus), he'd written his server using select() and non-blocking I/O, and everything was fine on all the servers across the globe his server ran on - except one Solaris box in Australia, which kept failing under load. Yep, resource limit on file descriptors... He was very happy to learn of the ulimit command!
(Captcha: sino. Since this post's about a Chinese application, it seems somehow appropriate!)
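Incidentally, that class of failure -- running out of file descriptors rather than memory -- is easy to watch for from inside the process. A small sketch, assuming a JVM on a Unix-like host where the HotSpot-specific com.sun.management bean is available:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

// Sketch: report open vs. maximum file descriptors so descriptor exhaustion
// (like the Solaris problem described above) shows up before requests fail.
// Assumes a Unix-like host and a JVM exposing com.sun.management beans.
public class FdWatch {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
            com.sun.management.UnixOperatingSystemMXBean unix =
                    (com.sun.management.UnixOperatingSystemMXBean) os;
            long open = unix.getOpenFileDescriptorCount();
            long max  = unix.getMaxFileDescriptorCount();
            System.out.printf("file descriptors: %d open of %d allowed (%.0f%%)%n",
                    open, max, 100.0 * open / max);
        } else {
            System.out.println("Descriptor counts not exposed on this platform.");
        }
    }
}
```

Polling something like this from a monitoring thread (or exposing it as a metric) usually flags a descriptor leak or an over-tight ulimit well before connections start being refused.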
Admin
I think it was Stalin who said: Quantity has a quality of its own.
Solving the problem is the first priority: the site needs to be up. If adding $60 in RAM would make it stay up, then do that.
Then look at actual memory usage and refactoring and such. If the increase in RAM gives you breathing room, maybe a re-design or purchasing another package is better for the company.
First, figure out what the goal is. Next, pick the strategy. Duct tape and chicken wire was the appropriate material for Operation Quicksilver.
Admin
I don't usually complain about the stories that go up here, and I don't even really complain about Manslavery Feck Day (though they are starting to get better now)... but the fact is, if you are using over double the amount of RAM you need because of poor application management, you need an overhaul of how memory is managed.
In fact, if the server only had 1GB, doubling it to 2GB would probably have only reduced the occurrence of the problem, as there were likely unknown memory leaks within the system that would have just taken the application up to using 2GB of RAM anyway.
William did the right thing; the system needed an overhaul. However, the system could have been upgraded to 2GB at the time as well, to improve customer service in the meantime until the problems were actually resolved.
I don't see any WTF in what William did; with more and more quick fixes, the system would likely have just gotten worse and more unmanageable as time went on.
Admin
Yeah, but if the original problems were caused by a memory leak, then increasing memory would have only delayed the problem; it would still eventually exceed the 2GB available.
Admin
Nope. I just watched the Moon landing stuff on Discovery Science last week; the main guidance computer used about 72k of memory. Of course, that was for data, programs AND the OS, which they of course had to write themselves. The program was "written" by knitting copper wires through and around memory core rings: if the wire went through a ring, it was a 1; if it passed outside the ring, it was a 0. The resulting knitted cable was then snaked around and attached to boards that were installed into the spacecraft.
The people doing the knitting weren't even the engineers who designed the program; they contracted women to do it, because their hands were smaller and more dexterous, and they were used to doing that kind of work in the 1960s.
Fascinating bit of history.
Cheers!
Admin
While that was probably an embarrassing moment, I wouldn't consider it a wasted investment. Bob made a point, but that's not the right attitude to have. Leaks are leaks, and adding more RAM is just a band-aid. Just because hardware is cheap and powerful doesn't mean developers can afford to get lazy and write sloppy code.
That's like saying Vista's problems will disappear if you throw enough RAM at it.
Admin
4k of 16 bit magnetic core memory on a Varian 620-L.
Bitswitches on the front, to load the bootstrap loader.
ASR-33 with a tape reader and punch. 110 baud.
I had to disassemble and rewrite the assembler just to get the editor AND the assembler into memory at the same time. I had to create new assembly-language mnemonics to implement commands the designers never thought of. Transfer (no source), incremented, to B: saves an entire 16-bit word over Clear B, Increment B.
Real Programmers Use Bitswitches.