Admin
Well, that will greatly reduce the curriculum of any CS or software engineering course.
Q: What can you do to solve any error? A: Throw in more hardware.
PASS
Admin
@Fedaykin,
Seconded. Wilhelm can be a programmer at our site whenever he wishes. With the advent of GUI-based development tools, people have to deal with too much crapola like this on production systems. Functionally it usually works, but without code optimization by hand, say bye-bye to your performance. I've seen a case where hardware to the tune of $100K was rolled into a datacenter, yet when one of our DBAs took a quick look at the app configuration he quickly spotted that a lot was amiss. He adjusted the number of cursors and tweaked quite a few DB indexes. Result: the app now runs on an IBM X3650 while serving several million queries per day, with average CPU and IO load under 10%. The new $100K hardware? It stood idle for a month or two until a data warehouse app came along that really needed it.
Admin
Would extra memory have fixed the "unreleased resources", and "unclosed connections"?
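Not for long, it wouldn't. More RAM only delays the crash when connections are never given back. For what it's worth, here's a minimal sketch of the kind of fix that actually addresses it, in plain JDBC with a made-up class and query; try-with-resources releases the connection, statement and result set even on the error path:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class OrderLookup {
    private final DataSource dataSource;

    public OrderLookup(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public int countOrders(String customerId) throws SQLException {
        // Every resource opened here is closed automatically, in reverse order,
        // whether the query succeeds or throws - so nothing leaks.
        try (Connection conn = dataSource.getConnection();
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT COUNT(*) FROM orders WHERE customer_id = ?")) {
            stmt.setString(1, customerId);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getInt(1) : 0;
            }
        }
    }
}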
Admin
Luxury. When I started programming, users had to write down the memory values and enter them back in when the computer needed them. And we couldn't afford complete binary, so they had to do it using zeros only. With a screwdriver, wires, and duct tape.
Then our dad and our mother would kill us and dance about on our graves singing hallelujah.
Admin
Can't agree more. If there is a leak, it leaks, no matter how much hardware you throw at it. A developer should never assume that his code has an infinite amount of resources at runtime.
Admin
Uphill both ways?
Admin
Alright...
When I started programming, in -1.7 nBytes mind you, we programmers had to chisel our byte codes with our jaws (then known as bite-code) onto rock tablets, while viewing the results on our monochrome displays (as in one colour: black).
We used to wake up an hour before dawn, work 34-hour day shifts, and get back to sleep while eating broken glass (with SPAM, of course).
Our father would chop off our heads and sip wine from our skulls before killing us and singing hallelujah whilst dancing over our graves...
Admin
I started with 80 bytes of RAM.
Admin
I find the answers preaching the RAM increase as a valid fix quite interesting. The app was leaking resources, not just memory. Sooner or later it will hit a nice, solid wall of having tied up too many resources, and adding more RAM won't fix that.
Tossing in more hardware is a good way to buy more time to fix the real problem, as has been pointed out a few times here already. However, as has been seen on this very site on numerous occasions, the real fix won't happen as long as the system works reasonably well.
Admin
TRWTF is obviously to increase physical server memory!
Admin
Sounds like they ended up with a better system to me....
Admin
That's not necessarily true. Only when there are in fact (big) memory leaks. If your application simply needs 500MB, it's silly to try to run it with less via complicated and unmaintainable hacks, especially when RAM is so cheap.
Admin
Why bother typing any code when I have my super-duper IDE do all the thinking for me?
Hell, why do we still force children to go to school to learn anything? I mean, we could have robots do everything for us, so nobody's head hurts when they're tying their shoelaces or finding a place on a map.
Admin
Precisely. Too many people forget that there's no such thing as a "temporary solution".
Admin
Sort of reminds me of a story a friend of mine told me.
It was a company that sold a service used by travel firms to book air tickets.
They had performance problems, and had decided these were due to the databases being implemented by "morons"; they would roll their own database to show the world how it's done.
The management was convinced; after all, they always strived to hire the most talented programmers, so this would be awesome.
It was very important that it was ready to roll before summer, since people went on vacation then. Which meant: 4 programmers, 9 months = 3 man-years.
The end result? They had to roll out the new version anyway, since they hadn't worked on the old version at all.
Many angry customers later, it turned out that the server park consisted of one machine running both the web server and the database server, on standard disks with a low-end CPU.
Admin
The time to explore the "shove more memory in" option is before this guy is called in. The wrong person has been exposed to ridicule. Unfortunately, this sort of garbage can happen when costs are micromanaged, so you have zero extra hardware budget but some sort of software budget.
Admin
I'm surprised more people aren't recognizing the inefficiency of spending 25 man months to only net a 60% savings.
On one of my early career programming gigs, I was tasked with reducing the memory footprint of a particular app. The "server"'s memory was maxed out; it was the top-of-the-line model from the local computer store (and yes, this did mean it was a desktop running an end-user version of Windows, but that's not the point here), so it could not be upgraded further, and the application fundamentally had to work on just one machine.
When I got the task, the process was "working", but it was taking two weeks to crunch all of the data - the system was effectively running on swap.
In a week's time, I had the process down to using 10% of the computer's memory, and it ran in an hour. The next week, I'd solved some analysis bugs, and further reduced the memory to 5% of the computer's memory, and a 10 minute run time.
Note: I'm a unix guy. My solution did not involve using a unix system at all - although I did bring perl into the process, for some initial data cleanup.
Also note: the original program loaded the entire dataset into memory at once and then processed it into a database. The larger the dataset grew, the more memory it took. The final version I produced performed all of its calculations as it streamed the data into the database, never storing more than one row in memory at a time, so its memory needs would not increase with a larger dataset unless the rows themselves grew longer.
Final note: that was an extreme example. Much more typical optimizations net a 90% memory reduction in a man-month. If the memory reduction doesn't come from either fixing a leak or an overall scalability improvement, I feel unhappy and frustrated at the end of that month.
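For illustration only (my actual code used different tools, as I said), here is the rough shape of that streaming approach, sketched in Java with made-up file, table and column names; the point is that only the current row is ever in memory:

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class StreamingLoader {
    public static void main(String[] args) throws IOException, SQLException {
        // Placeholder input file and JDBC URL (assumes an H2 driver on the classpath).
        Path input = Path.of("dataset.csv");
        String jdbcUrl = "jdbc:h2:./loader-demo";

        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             BufferedReader reader = Files.newBufferedReader(input)) {

            try (Statement ddl = conn.createStatement()) {
                ddl.execute("CREATE TABLE IF NOT EXISTS measurements (sensor VARCHAR(64), reading DOUBLE)");
            }

            try (PreparedStatement insert = conn.prepareStatement(
                    "INSERT INTO measurements (sensor, reading) VALUES (?, ?)")) {
                String line;
                while ((line = reader.readLine()) != null) {
                    // Process and store one row, then let it go: memory use stays
                    // flat no matter how large the input file grows.
                    String[] fields = line.split(",");
                    insert.setString(1, fields[0].trim());
                    insert.setDouble(2, Double.parseDouble(fields[1].trim()));
                    insert.executeUpdate();
                }
            }
        }
    }
}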
Admin
In some programming languages. In others, GC time is relative to the number of objects which need to be freed per unit time, or relative to the number of objects which do not need to be cleaned. Also note that not all garbage collectors freeze the whole application (although I am unaware of any that allow memory to be allocated while the garbage collector is running.)
Of course, GC only helps if the application knows the memory is ready to be freed - and in a memory leak situation, the application does not know the memory is ready to be freed. (In the classic case, the programming language could have determined it, but the languages didn't have detection capabilities. Today, since most of the languages do have said capabilities, memory leaks generally are not of that variety.)
I personally prefer languages which free memory whenever it's clearly no longer needed, and only run the garbage collector when high memory conditions exist or the program is otherwise idle. It's much nicer not having pauses due to GC passes.
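To make the classic case concrete, a tiny Java sketch (names invented): everything in the map stays strongly reachable, so no garbage collector and no amount of extra RAM will ever reclaim it until the code itself removes the entries.

import java.util.HashMap;
import java.util.Map;

public class SessionCache {
    // Entries are added but never removed, so they remain strongly reachable.
    // The GC cannot reclaim reachable objects; this "leak" grows until OutOfMemoryError.
    private static final Map<String, byte[]> CACHE = new HashMap<>();

    public static void remember(String sessionId) {
        CACHE.put(sessionId, new byte[1024 * 1024]); // roughly 1 MB per entry
    }

    public static void main(String[] args) {
        for (int i = 0; ; i++) {
            remember("session-" + i);
        }
    }
}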
Admin
1 KB is like an endless sea... The Motorola MC68HC705J1A microcontroller has 64 bytes of built-in RAM. Now say you're a Real Programmer.
Admin
Well, management's credo is: if it can be fixed for $60, then do it. If it holds for more than a year, it's good. Who knows which company I'll be working for in a year's time? When this stupid thing breaks again, I'll be long gone.
Admin
I must be old. My first programs were on a Sinclair ZX-81 with 1K of RAM. Yes, 1K. And that was shared with the "frame buffer", which was dynamic. You could get OOM errors just by PRINTing or DRAWing on too much of the screen!
(Initially the "frame buffer" was a list of CR chars, just 23 bytes. It grew as you put stuff on the screen.)
Admin
As a temporary solution, more memory (or in other apps, a faster CPU, more HDD, ...) is OK. But if the design is bad, it'll only get worse, since you're building on quicksand. A review with serious fixes in the worst places, like the old dude did, is the only long-term solution for this problem.
The real WTF here is communication, since he couldn't sell his solution.
Admin
TRWTF is that this non-WTF made it on this site.
Admin
After reading the first page of comments, I wonder why there are only the two extreme positions of "just add memory" and "Wilhelm was right".
My approach would be to go after the memory leaks, because those can exhaust any amount of RAM over time, but ignore the rest of the inefficiency.
Admin
Most computers come with plenty of WOM on board, but if you fill it up your /dev/null stops working and you have to send your motherboard back to the factory to get it drained. See e.g. the datasheet for the Signetics 25120 9046xN WOM part.
Admin
Nahh, I didn't. It didn't matter anyway, because all the old CBM machines running MS BASIC tokenized all keywords down to a single byte in the $80-$ff range. Removing the whitespace did make a difference though, you're right there.
Admin
Later on I got the 512 byte expansion memory and 300baud cassette tape interface. Now that were real luxury, that were!
Admin
Later I got the 512-byte expansion memory, which was just huuuuge, so I was really glad I got a cassette interface with it so I could save and load programs instead of entering them by hand every time I switched it on. :-)
Admin
Luxury! I started with the MITS Altair which had 256 bytes of memory. And it was uphill both ways.
Admin
Who says the problem needs to be fixed? Not all software needs to be perfect. If it's leaking memory just a bit at a time, a weekly or monthly reset script is by far the cheaper, easier solution. It's a sad state that that is the way business needs to operate, but... that's the way business needs to operate. In the ideal world you would have all the time and resources to fix it, but in the real world you can't always afford that luxury. 5 developers spent 5 months working on old code that arguably didn't need fixing when they could have been working on new code and patches that did need to be done. The WTF is that no one, not even the boss, did any sort of cost/benefit analysis and asked for other solutions.
Admin
So really, what is the WTF supposed to be in this meaningless story? That interns can be clueless? Of course they didn't re-architect the whole codebase only to reduce the memory footprint. They did it because they wanted stable, maintainable code that didn't crash the server every single day. The only thing Bob did was embarrass everyone at the presentation and come across to everyone as a clueless jerk.
Admin
3.5k???? I started with 1k on a ZX-81, thankyouverymuch.
Admin
In 2001 128MB of PC1600/PC2100 cost about £200. 7 years later it costs less than $10.
So in 7 years' time I would expect 2GB of RAM to easily cost less than $20, as it is currently around $100.
Check out http://www.willus.com/archive/timeline.shtml for the numbers. And think about what a common computer looked like 7 years ago: on the kind of developer machines I was using at work in 2001, 256MB was common and seen as a large amount.
Admin
Only if the memory footprint was increasing while running. If the memory footprint was pretty stable, say going up to 512MB and staying there, then get more memory. If, OTOH, it's going up at a steady rate, there's surely a fix needed.
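A crude way to tell those two cases apart, sketched here with nothing fancier than the standard Runtime API (a real setup would use JMX or whatever monitoring is already on the box): log the used heap over time and see whether it plateaus or keeps climbing.

public class HeapWatch {
    // Prints used heap once a minute. A number that flattens out means the app
    // simply needs that much memory; one that keeps climbing suggests a leak.
    public static void main(String[] args) throws InterruptedException {
        Runtime rt = Runtime.getRuntime();
        while (true) {
            long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            System.out.printf("%tT used heap: %d MB%n", System.currentTimeMillis(), usedMb);
            Thread.sleep(60_000);
        }
    }
}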
Admin
I remember them too - some of that 1K was actually given over to screen memory.
Admin
I understand where you're coming from, there has to be a line drawn somewhere, but I'm confused as to how you can't see the benefit of the investment in the given story. First off, $60 is the cheapo end user version. If it's a server, chances are memory pricing will be significantly higher.
But let's disregard that. This is a corporation; it's expected they pay for the RAM and the optimization. In 5 months' time, their software is optimized and can be distributed to their clients through a patch, 4 programmers have had some seriously old-school, high-impact real-time training, and the software will no doubt be much easier to upgrade now that everything has been commented, streamlined, etc. The bulk of the work is already finished.
Code from scratch in 5 months, on a whim? I'm not so sure. Even if you could come up with a schedule to release it as such, there are always issues cropping up pushing said deadline back.
If nothing else, think of the rapport. All their clients using said software suddenly have this amazing upgrade that doesn't cost them anything, fixes their stability issues and speeds everything up.
Word of that kind can't be easily bought and it spreads fast.
Admin
For an application that someone is selling, that is a valid argument, especially if it has a lot of users who each need to buy RAM, or where it is targeted at a limited platform such as mobile phones. But reading between the lines of the post you were replying to, it was talking about an application running in-house on one machine, where the person who would get the code improved is the same person who needs to buy and install new boxes.
In an ideal world we would all have beautifully tuned software that needs minimal RAM and CPU, but often there are other priorities.
Admin
This from someone who has yet to learn how to use the quote button. Impressive.
Admin
So why not put the memory in so it at least runs better for the 5 months that all the developers are working on the optimisation?
But I am a little concerned by the "2GB of memory for fifty, sixty bucks" remark, in a day when a production web server has 512MB. It doesn't quite make sense.
Admin
So this is a "submitter WTF", right? We're constantly reading here about unmaintainable software that desperately needs rework, yet the pointy haired management types insist that a cheap hardware upgrade will do the job. And so management short-sightedness delays a whole bunch of maintenance overheads that are surely going to rear their ugly head sooner or later and the WTF is complete. But here we have a team that do the right thing and improve the software rather than bolt in more hardware as a temporary band-aid.
Yet, someone thought that this in itself was a WTF and submitted it here. So the writers have crafted a very clever "anti-WTF" where the WTF is actually that someone thought this was a WTF and submitted it.
Right?
Admin
"Wilhelm was given four developers to help with the task."
None of whom asked the same question.
Admin
It's obvious that most people here have no business training whatsoever.
Right. No doubt you'd argue that a patient slashed from shoulder to groin just needs a bandage to fix the problem, right? It's the cost-effective solution, conserves on the time of those ridiculously expensive doctors, and frees up beds for the gunshot wounds. Besides, our legal department can handle any blowback from the possible bad outcomes due to the treatment, and we did provide a help center in India to deal with any concerns by the victim (let them do some work for a change!).
Sounds like the right option to me. I don't have any business training, but I know our stockholders will be pleased.
Admin
Nate: you're paying WAY TOO MUCH for your ram
Kingston 4GB (2 x 2GB) 240-Pin DDR2 SDRAM ECC Registered DDR2 400 (PC2 3200) Dual Channel Kit Server Memory - Retail
http://www.newegg.com/Product/Product.aspx?Item=N82E16820134267
$140
Admin
Here is the opposite story (true, I swear): several years ago, I assisted a company building an internet web app for order entry for a telco. The architecture was active record + business layer + template engine. Eventually, the system grew to 600 forms and was used by several hundred concurrent users. With growing application size and user numbers, the system quickly ran out of memory, and as a quick fix they bought IIRC 2GB of RAM, which was pretty expensive back then, 2000 USD or so. Then I learned about the problem, analysed it, and saw that it came from the template engine, which kept the parse trees of the templates per user. Changing that to a global template cache dramatically reduced the memory footprint... and that fix took only a few hours.
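The fix amounted to something like the following sketch (the real engine and class names were different; this is just the shape of it): parse each template once and share the immutable parse tree across all users instead of keeping a copy per session.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TemplateCache {
    // One parsed template per name, shared by every user and session.
    private static final Map<String, ParsedTemplate> CACHE = new ConcurrentHashMap<>();

    public static ParsedTemplate get(String name) {
        // computeIfAbsent parses each template at most once, not once per user.
        return CACHE.computeIfAbsent(name, TemplateCache::parse);
    }

    private static ParsedTemplate parse(String name) {
        // Stand-in for the real template engine's parser.
        return new ParsedTemplate(name);
    }

    // An immutable parse tree is safe to share between threads.
    public record ParsedTemplate(String name) {}
}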
Admin
I think the WTF on Wilhelm's side is that he didn't even consider the possibility of the hardware fix. As the story is told, he was so stuck in the past that 'no webserver should need more than 512k', even though webservers can easily have GBs today.
His team gets kudos for actually fixing all those problems though. In my experience, being allowed to do that almost never happens. Kludge and band-aid until the whole system is practically a mummy, and hopefully by then there will be a new application available that can completely replace the doddering heap that is left.
Admin
Lyle could've done it with 4K.