Admin
oblig:
no, real programmers use butterflies.
http://xkcd.com/378/
Admin
An Xbox360 or Playstation 3 has 512MB.
Admin
BAH! You don't know what real memory conservation is. My first computer, a SWTP 6800, came with 0K of memory. The motherboard would support up to 4 memory boards (which drew 6 amps), each of which would support 4K. When you purchased a board, you had to tell them to completely fill it, otherwise it would come with only 2K and room for the other 2K - so my system had a massive 2K!
Admin
For web apps, the server's RAM requirements have absolutely no correlation to the end user's RAM requirements -- even if the code is buggy and poorly designed. Just because Google runs on clusters with dozens of GB of RAM and terabytes of disk doesn't mean soccer moms across the globe have to buy more RAM for their computers to use Google Maps to find the nearest coffeehouse.
Cheers!
Admin
I have to admit I agree with the quick-and-easy memory solution. But then I work for a company which would not exist if employees were allowed to go on five-month pursuits of perfection.
While I'm sure that a lot of work went into making that app better, I suspect that actually it was only a few changes in crucial spots that really made any difference.
Redeclaring functions to be static? I'm not convinced those five months were wisely used.
Admin
I don't disagree. As I've previously clarified, there's no problem with adding more RAM to keep things going until the real solution is done. It's the implication that adding more RAM is the only thing that should be done that is the problem.
Admin
This is a very good point. We had some contractors working with us who specialized in our particular content management platform, and they secured a contract with one of our clients to pitch in on the overzealous schedule the client had committed to. This particular contractor was of the ilk that kept a farm of offshore developers ready and willing to whip up low-grade "it works if you don't blow on it too hard" code at minimal cost to themselves so they could maximize their profits.
So anyway, one app this contractor supplied queried the database for origin records joined to destination records that fell within a certain radial distance of the origin. When we got the thing and ran it on production data, it took upwards of 10-30 minutes for a query to complete. When the contractor was asked about it, they insisted "it worked fine for us" and that our database server must just be misconfigured or underpowered. A quick examination of their code revealed that their query was simply too taxing - the working set was easily several million rows because of the number of cross joins, the lack of join conditions, and the fact that the WHERE clause included a calculation with six trigonometric functions. The temp table it created was prohibitively enormous. It took a colleague and me three hours of brainstorming to re-engineer the app to limit the working set: we read some MySQL docs, optimized the table join order, and precomputed a coordinate box minimally containing the radial area of interest to use as a join condition for the destination records. That three-hour investment cost us nothing (my colleague and I are salaried), and query times dropped to 1-10 seconds, roughly two orders of magnitude faster. A new web server would have cost tens of thousands of dollars, plus much more time and effort for deployment and testing - not to mention the time wasted waiting for it to arrive - and the application still would not have been able to scale.
We've since dropped the contractor from our own contracts, and we'll never do business with them again after seeing how they operate - this story was one of many similar problems in which their applications failed to scale to even a remotely reasonable level, if they worked at all. So did they get anything out of their "be sloppy, the hardware will fix it" approach? Arguably, maybe. They got their initial contract money, but they lost easy contract renewals from clients ready to throw gobs of money at them, so in the end they lost out.
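For anyone curious, the bounding-box trick is easy to sketch outside of SQL. This isn't the query we actually shipped - just a rough Python illustration of the idea, with made-up names, assuming plain latitude/longitude coordinates and the haversine formula for the final radius check:

```python
import math

EARTH_RADIUS_KM = 6371.0

def bounding_box(lat, lon, radius_km):
    """Smallest lat/lon box containing a circle of radius_km around (lat, lon).
    Assumes the origin isn't near a pole or the antimeridian."""
    dlat = math.degrees(radius_km / EARTH_RADIUS_KM)
    dlon = math.degrees(radius_km / (EARTH_RADIUS_KM * math.cos(math.radians(lat))))
    return lat - dlat, lat + dlat, lon - dlon, lon + dlon

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def within_radius(origin, candidates, radius_km):
    """Cheap box test first (this is what becomes an indexable join condition),
    expensive trig only on the survivors."""
    lat_min, lat_max, lon_min, lon_max = bounding_box(origin[0], origin[1], radius_km)
    for lat, lon in candidates:
        if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
            if haversine_km(origin[0], origin[1], lat, lon) <= radius_km:
                yield lat, lon
```

In the real thing the box comparison lives in the join condition so the database can use its indexes, and the trig only ever runs on the handful of rows that survive.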
Admin
Or (more likely) suffer an early death at the hands of a velosaorapter.
Admin
Hey, I'd like to side with you out of principle, but if duct tape and chicken wire works without long-term drawbacks, then that's what you use.
...unless of course you're the type to buy an overpriced Mac for the shiny finish on the case.
Admin
That has to be Peachtree by SAGE.
Admin
A few comments on optimization by some inarguably "real programmers":
More computing sins are committed in the name of efficiency (without necessarily achieving it) than for any other single reason - including blind stupidity. -- William A. Wulf, 1972
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. -- Donald E. Knuth, 1974
We follow two rules in the matter of optimization: Rule 1. Don't do it. Rule 2 (for experts only). Don't do it yet - that is, not until you have a perfectly clear and unoptimized solution. -- M. A. Jackson, 1975
Admin
That has to be Peachtree by SAGE.
(Hooray for learning how to quote.)
Admin
No, it could NOT be a scalability problem. If it were a scalability problem, the problem would happen at peak times. Go back and read the problem, and what I said about it, again. The symptoms are very clear.
Admin
If the problem were scalability, there would be no point in rebooting the server. It would crash at peak times, sure, but once the load went down there would be no reason to reboot it. Read what I said again; it's all there. Or, if you want, try to think up a scenario where rebooting the server would help - a non-leaking scenario.
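If arguing about it gets old, the distinction is also easy to measure: chart the process's resident memory over time and compare it to the traffic. A leak climbs even when the load is flat; a capacity problem goes up and down with the load. A rough sketch, assuming Python and the third-party psutil package, with an arbitrary sampling interval:

```python
import time
import psutil  # third-party: pip install psutil

def sample_rss(pid, samples=12, interval_s=300):
    """Print the process's resident set size every few minutes.
    Steady growth under flat traffic points at a leak, not scalability."""
    proc = psutil.Process(pid)
    readings = []
    for _ in range(samples):
        rss_mb = proc.memory_info().rss / (1024 * 1024)
        readings.append(rss_mb)
        print(f"{time.strftime('%H:%M:%S')}  rss = {rss_mb:.1f} MB")
        time.sleep(interval_s)
    return readings
```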
Admin
Your wife isn't into it and Irish Girl turned out to be a guy. Now what do you do?
Admin
The real WTF is that Wilhelm didn't have a quick answer as to why adding memory to fix a memory leak was a bad idea.
Admin
Amazing how we geeks boast about how small we can make things. Is this some sort of reverse-phallic-dilemma? Would Freud have a field day with us or what...
T-
Admin
You are an idiot
Admin
Surely the cheapest, easiest way to reduce the memory needs of the application would be to remove all the comments and unnecessary whitespace from the source code.
Admin
Reminds me of someone I used to work with.
We were talking about refactoring the code because it was slow. Her solution: "Who cares about refactoring when you can just get faster processors?" Evidently, this was a common response from the other Java devs too. (We had a .NET team and a Java team.)
On another note
I have a background in both programming and infrastructure, and believe me, if this was a high-demand app running on any decent server, 2GB of ECC RAM for that sort of server is not $56; it's more likely to cost over $1500. So yeah: $1500 now, and every few months after that, because the memory leak is still there.
Admin
My car seems to be getting only half the mileage from a tank of fuel that it should. According to the "add more RAM" theory, the solution is to add a bigger fuel tank, right?
Admin
Real programmers need 1 byte of RAM. And that one is virtual.
Admin
Time = Money. Technically they DID throw money at the problem. A lot more money than just some more RAM would have cost.
Those programmers could have been working on real bugs, leaks, or new functionality that saved time and money or generated revenue.
Admin
Yay Great Circle distance formula.
Admin
You can go a long way duct-taping people to chicken wire.
Admin
you write that like it's a BAD thing....
A serious aside: there was a middle ground here, where the RAM gets added immediately as a stopgap and the memory leaks still get fixed. Period. Heavy-duty memory optimization was likely at least a month of their time and probably more.
Nevertheless, if the programmers are sticking around, they probably did learn things that will be useful in the future to the company (aside from the 'add more memory' bit). Think of it as hideously expensive training.
Admin
CoderHero:
You, sir, are in the wrong profession. If you only want to solve problems that have good numbers, and not the important problems, you should be an economist. They have cornered the market in that.
Admin
No, but since Google runs thousands of servers, they should watch their RAM and CPU requirements in their software, since they can either buy lighter servers or buy fewer servers. Since electricity and rent are two big factors for them, either is a win.
Admin
An interesting situation. If the company shipped the code with specifications that recommended (say) a 512MB machine, then the people purchasing it would reasonably be irked to be told that they had to add more memory or purchase more hardware to "really" make it work.
Of course, there are other intangible costs, like the reputation of the source, and tangible costs, like the cost of fielding calls to the help desk and lawsuits.
Not that this ever happens ...
Admin
Ask you why you can't spell velociraptor ;-)
Admin
Except the $60 wouldn't have fixed the problem, it would have just made the restart every 4 days rather than every day. A few more months and they'd be back to every day again.
Meanwhile, since the app was being patched, hacked, and patched some more and management was willing to assign a team to it, it was probably important to the business.
Wilhelm's solution actually fixed the resource leak and paid off a lot of the IOUs the developers had been writing through all of those quick hacks.
As a bonus, 4 programmers probably became better through the mentoring process and are now very familiar with the codebase so hopefully they can keep it clean this time.
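Back-of-the-envelope, with made-up numbers since the article never gives the actual leak rate, the RAM-only option just stretches the restart interval linearly:

```python
def days_between_restarts(headroom_mb, leak_mb_per_day):
    """With a steady leak, extra RAM buys time in direct proportion to the leak rate."""
    return headroom_mb / leak_mb_per_day

LEAK = 400.0  # MB leaked per day -- an assumption, not a figure from the article
print(days_between_restarts(400, LEAK))         # ~1 day of headroom today
print(days_between_restarts(400 + 1200, LEAK))  # ~4 days after the cheap upgrade
```

Same leak, same trajectory; the only thing that changes is how long the fuse is.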
Admin
Adding memory and having a script that bounces the process every so often is not the best fix for a memory leak, but it is certainly pragmatic. It really depends on how much it costs to track down the leak and fix it. Or more to the point, there should be a decision as to how much the organisation is willing to spend tracking down the bug.
There's a reason the first question tech support asks is "Have you tried rebooting the computer?" It's cheap!
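For what it's worth, the bounce doesn't even have to be a blind nightly cron job. A few lines of watchdog can restart the process only when it actually gets fat, which buys the same breathing room with less data-loss roulette. A rough sketch only - Python with psutil, and the command and limit are placeholders:

```python
import subprocess
import time
import psutil  # third-party: pip install psutil

CMD = ["/usr/local/bin/leaky-app"]  # placeholder for the real service command
LIMIT_MB = 1500                     # restart before the box starts swapping

def babysit():
    while True:
        child = subprocess.Popen(CMD)
        watcher = psutil.Process(child.pid)
        while child.poll() is None:           # child still running
            if watcher.memory_info().rss > LIMIT_MB * 1024 * 1024:
                child.terminate()             # ask nicely first
                try:
                    child.wait(timeout=30)
                except subprocess.TimeoutExpired:
                    child.kill()              # then stop asking
                break
            time.sleep(60)
        # fall through and start a fresh copy

if __name__ == "__main__":
    babysit()
```

It's still duct tape, but at least it's duct tape with a gauge on it.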
Admin
Heheh truncating the command PRINT to PR to save three bytes. Removing all whitespace. Good times.
Admin
A far better solution would be to make a robot to press the server's reset button every night.
Admin
"The computer had 512MB of ram. What production computer has 512MB?"
One day I will have to tell the story of a disaster on a production system while it was being used publicly by the President of the United States. The ultimate cause of the problem? Memory exhaustion. The system had an amount of memory that was ridiculously small for the time.
I think it was 16MB at a time when typical desktop machines had 64MB bare minimum. Maybe it was 64MB when 256MB was the bare minimum.
Admin
Hey, I found out the full story is public: http://www.boredom.org/cnn/statement.html This is not our side of the story, of course, but it will give you an idea of what I'm talking about.
Admin
It's obvious that most people here have no business training whatsoever.
Five months (someone estimated this at $200,000) to fix the problem, compared to buying a RAM module for $60? How is this even a discussion? How can you possibly argue that the extra $199,940 spent was worth it? Hell, you might as well rewrite the app from scratch.
I've been programming my whole life and while I love the idea of optimising the hell out of something and coming away with a positive result (50% improvement is very impressive), the business side of me could never, ever justify such a huge disparity in time and cost.
I know the extra RAM doesn't actually solve the problem and it's a temporary fix. As such, while the code is being maintained through its normal course, small periods of refactoring / optimisation should be conducted to incrementally improve the application.
But for God's sake, blocking off five months to work on something like this is financial suicide. For a public company, it would be borderline criminally negligent to waste so much money so casually.
What if it took 12 months to fix this? That's $480,000. How about 2 years for almost a million dollars? When do you say "ok, that $60 RAM bank is looking pretty good now"?
Hint: the correct answer is not "we will never say that, we should spend all the time necessary to fix the application, no matter what the cost is". If you disagree with this statement, then I'd advise you never to start your own business, because you will be bankrupt within a year.
Get some perspective, please, my fellow geeks. I know it's cool to hate on "business decisions" and "managers", but this one isn't close. This time, the geeks have it wrong. Trust me.
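If you want the envelope math spelled out (every figure here is this thread's estimate, nothing from the article itself):

```python
# All numbers are rough thread estimates, not anything stated in the article.
RAM_FIX = 60                  # one-off cost of the extra module, USD
TEAM_COST_PER_MONTH = 40_000  # loaded cost of the team, USD/month (assumed)

for months in (5, 12, 24):
    print(f"{months:2d} months of refactoring ~= ${months * TEAM_COST_PER_MONTH:,}"
          f" vs ${RAM_FIX} for the RAM")
```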
Admin
Weak sauce!
1K.
Timex-Sinclair.
If an array or code got too big, it would overwrite the video memory...
Admin
Don't forget that his WHOLE team learned how to do it right. This was "everything would be cleaned up later" time.
"His team still complained about some of the work"? I guess learning how to do it right is too hard... Let the next guy do that... ...Wait, you are outsourcing our work elsewhere? But, we can do that too! When? If you would only let me show you on my next projec... ...OK, then maybe I can do it right at my next job...
Admin
Yes, adding extra memory is arguably a bit of a patch-up job.
However, if the application is creating temporary memory structures that aren't being properly destroyed, you should still add more memory. That way you are not restarting the application on a daily basis (there's a risk of data loss or corruption there) but instead weekly or fortnightly. You've then reduced the heat you're copping from customers and management.
Admin
Or two women 3 months.
Admin
Nope. 4K of RAM and 4K of ROM - and a 250 baud cassette drive for storage. (Model I Level I TRS-80).
Admin
Ha. Proves the intern is still wet behind the ears.
Obviously they were on a tight budget and couldn't afford to procure more hardware.
Admin
ha! I'll see your pitiful VIC-20 and raise you a Sinclair ZX81 with 1k of RAM! And I called myself lucky!!
Admin
We had a situation a while back with a report that kept throwing Out Of Memory exceptions as well. The programmer who coded it insisted that it was a "Complex" report and just needed more memory to generate. (This was happening on systems with 1 gig of RAM trying to load around 1000 records.)
When another programmer finally looked at it we quickly found the source of the problem:
He was using static arrays... five, to be exact... of long... each sized at 20,000,000 elements. He was "planning for future growth". (At that time our largest client had about 20,000 records for that report.)
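The arithmetic on that one is worth spelling out. Assuming 8-byte longs (as in Java or C#), those empty arrays alone chew through most of a 1-gig box before the report reads a single record:

```python
arrays = 5
elements = 20_000_000
bytes_per_long = 8  # assuming 64-bit longs; even 4-byte ints would still be ~380 MB

total_mb = arrays * elements * bytes_per_long / (1024 * 1024)
print(f"{total_mb:,.0f} MB just for the empty arrays")  # ~763 MB
```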
Admin
Yeah, I was being a tad sarcastic. But still, consider that they went to the MOON with 72K of memory, and we're talking about a web app to sell widgets being underpowered at 2GB.