• AC (unregistered) in reply to RGB - my real initials

    oblig:

    no, real programmers use butterflies.

    http://xkcd.com/378/

  • Alan (unregistered) in reply to Anonymous

    An Xbox 360 or PlayStation 3 has 512MB.

  • George Weller (unregistered) in reply to anonymous

    BAH! - you don't know what real memory conservation is. My first computer, an SWTP 6800, had 0K of memory. The motherboard supported up to 4 memory boards (each drawing 6 amps), and each board supported 4K. When you purchased a board, you had to tell them to completely fill it; otherwise it came with only 2K populated and room for the other 2K - so my system had a massive 2K!

  • Too much TV (unregistered) in reply to SomeCoder
    SomeCoder:
    CoderHero:
    SomeCoder:
    Yes, I completely agree with what has been said here. Wilhelm was completely correct in doing what he did. Yes, he was maybe a bit hard nosed about things but I tend to agree with him on one point: programmers are lazy asses these days. ...

    It's not that we're lazy these days. As a developer who works for a living, suppose I have a choice when implementing something: a) an easy, reasonably fast solution that I know will work but requires an extra 2 MB of RAM (maybe temporarily, maybe not); b) a much more complicated algorithm of similar speed (possibly slower) that needs a negligible amount of RAM; c) a very complicated algorithm that is just as fast, maybe faster, with no significant RAM requirement.

    Given that computing power and RAM are both VERY cheap today, I'll choose the easiest solution. My time is expensive compared to computing costs, and a more complex solution will likely have more bugs as well.

    The thing to remember is that 20 or 30 years ago, cheap RAM and cheap computing were not available - a different set of criteria was placed on developers.

    One last point: whatever scale you think your product has to reach may never actually materialize. Just because there's the potential for 50 million simultaneous users doesn't mean it's going to happen.

    Ok, so let me pose a different question to you: What is the end user's time worth to you?

    My time is also expensive. If I have to sit and wait for an app because it requires 4 GB of RAM and I have 1 GB, that's wasting MY time. Sure, your time was saved but which time is more valuable? In this case, I'd say it was the end user's time because if your app thrashes and requires a lot of RAM, he's going to want to use the competing product that doesn't. Or he'll go to a competing website.

    My computer is also expensive. Hardware is relatively cheap but it's not THAT cheap for the average user. You think the soccer mom who uses your app likes to go out and buy sticks of RAM just because some app is bloated? You think she likes to wait while your server crashes because the app leaks memory?

    I don't mean this post as a flame at all, but I think that if more programmers would start caring about the end user's time more than their own time, we'd have a lot more quality apps out there.

    For web apps, the server's RAM requirements have absolutely no correlation to the end user's RAM requirements -- even if the code is buggy and poorly designed. Just because Google runs on clusters with dozens of GB of RAM and terabytes of disk doesn't mean soccer moms across the globe have to buy more RAM for their computers to use Google Maps to find the nearest coffeehouse.

    Cheers!

  • Chris (unregistered)

    I have to admit I agree with the quick-and-easy memory solution. But then I work for a company which would not exist if employees were allowed to go on five-month pursuits of perfection.

    While I'm sure that a lot of work went into making that app better, I suspect that actually it was only a few changes in crucial spots that really made any difference.

    Redeclaring functions to be static? I'm not convinced those five months were wisely used.

  • Fedaykin (unregistered) in reply to richardchaven
    richardchaven:
    Fedaykin:
    Sorry, but the WTF would be just throwing more memory at the problem,...

    I think it was Stalin who said: Quantity has a quality of its own.

    Solving the problem is the first priority: the site needs to be up. If adding $60 in RAM would make it stay up, then do that.

    Then look at actual memory usage and refactoring and such. If the increase in RAM gives you breathing room, maybe a re-design or purchasing another package is better for the company.

    First, figure out what the goal is. Next, pick the strategy. Duct tape and chicken wire was the appropriate material for Operation Quicksilver.

    I don't disagree. As I've previously clarified, there's no problem with adding more RAM to keep things going until the real solution is done. It's the implication that adding more RAM is the only thing that should be done that is the problem.

  • (cs) in reply to phil
    phil:
    But if you want to sell your software to users, can you really reasonably ignore those "vague and fuzzy numbers"? And it's not how much the user's time is worth to them so much as how much the user's time is worth to you -- in other words, if they think your app is slow and bloated and see a better alternative, how much money are you losing compared to if you had built a better program and they were still doing business with you?

    This is a very good point. We had some contractors working with us who specialized in our particular content management platform, and they secured a contract with one of our clients to pitch in on the overzealous schedule the client had committed to. This particular contractor was one of the ilk that kept a farm of offshore developers ready and willing to whip up low-grade "it works if you don't blow on it too hard" code at minimal cost to them, so they could maximize their profits.

    So anyway, one app this contractor supplied had the purpose of querying a database for geographic origin records joined to destination records that fell within a certain radial distance of the origin. When we got the thing and ran it on production data, it took upwards of 10-30 minutes for a query to complete. When the contractor was asked about it, they insisted "it worked fine for us" and that our database server must be misconfigured or underpowered. A quick examination of their code revealed that the query was simply too taxing - the working set was easily several million rows because of the number of cross joins, the lack of join conditions, and the fact that the WHERE clause included a calculation with six trigonometric functions. The temp table it created was prohibitively enormous. It took a colleague and me three hours of brainstorming to re-engineer the app: we read some MySQL docs, optimized the table join order, and precomputed a coordinate box minimally containing the radial area of interest to use as a join condition for the destination records. This 3-hour investment cost us nothing (my colleague and I are salaried), and query times dropped to 1-10 seconds, roughly two orders of magnitude faster. A new web server would have cost tens of thousands of dollars plus considerable deployment and testing effort - not to mention the time wasted waiting for its arrival - and the application still would not have scaled.
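
    The bounding-box trick is worth spelling out. Below is a minimal Python sketch of the idea; since the original SQL isn't shown, the radius units, the earth-radius constant, and the use of a haversine check for the final distance are all assumptions:

        import math

        EARTH_RADIUS_KM = 6371.0  # mean Earth radius; an assumed constant

        def bounding_box(lat, lon, radius_km):
            # Cheap prefilter: a lat/lon box guaranteed to contain the circle.
            dlat = math.degrees(radius_km / EARTH_RADIUS_KM)
            # Longitude degrees shrink with latitude; clamp to avoid the poles.
            cos_lat = max(math.cos(math.radians(lat)), 1e-9)
            dlon = math.degrees(radius_km / (EARTH_RADIUS_KM * cos_lat))
            return lat - dlat, lat + dlat, lon - dlon, lon + dlon

        def haversine_km(lat1, lon1, lat2, lon2):
            # Exact great-circle distance, run only on rows that pass the box.
            a = (math.sin(math.radians(lat2 - lat1) / 2) ** 2
                 + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
                 * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
            return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

        # The four box bounds become plain comparisons the database can serve
        # from an index; the trigonometry runs only on the few survivors.
        min_lat, max_lat, min_lon, max_lon = bounding_box(45.0, -73.5, 25.0)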

    We've since dropped the contractor from our own contracts, and we'll never do business with them again after seeing how they operate - this story was one of many in which their applications failed to scale at even a remotely reasonable level, if they operated at all. So did they get anything out of their "be sloppy, the hardware will fix it" approach? Arguably, maybe. They got their initial contract money, but they lost easy contract renewals from people ready to throw gobs of money at them, so in the end they lost out.

  • Matt (unregistered) in reply to anon
    anon:
    Wilhelm:
    krupa:
    Wilhelm:
    Hypothetical:
    You have only 1 KB of memory. Now what?

    Buy some more memory.

    Did you not read the article?

    Someone stole your wallet. Now what do you do?

    Kill the interviewer and steal his wallet.

    I'm a velosorapter, what do you do now?

    Convince you to self-administer a Voight-Kampff test. You will realize that you are a fake velociraptor and wander into a dark cavern (where you will be eaten by a grue).

    Or (more likely) suffer an early death at the hands of a velosaorapter.

  • HonoredMule (unregistered) in reply to Fedaykin

    Hey, I'd like to side with you out of principle, but if duct tape and chicken wire works without long-term drawbacks, then that's what you use.

    ...unless of course you're the type to buy an overpriced Mac for the shiny finish on the case.

  • BentFranklin (unregistered) in reply to snoofle

    That has to be Peachtree by SAGE.

  • Dave (unregistered) in reply to Wilhelm

    A few comments on optimization by some inarguably "real programmers":

    More computing sins are committed in the name of efficiency (without necessarily achieving it) than for any other single reason – including blind stupidity. -- William A. Wulf, 1972

    We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. -- Donald E. Knuth 1974

    We follow two rules in the matter of optimization: Rule 1. Don’t do it. Rule 2 (for experts only). Don’t do it yet – that is, not until you have a perfectly clear and unoptimized solution. -- M. A. Jackson 1975

  • BentFranklin (unregistered) in reply to snoofle
    snoofle:
    Regarding just getting it working the cheapest way possible...

    My brother is a CPA. He has a small office with ~10 employees. Each PC has its own licensed copy of the various accounting programs, but all are mounted from the network server.

    Everything was running Win-98. Over the years, each annual upgrade of the accounting software would suck up more and more system resources, until the only way to do two things was to exit the current program, start the other one, and then switch back when done.

    I warned him numerous times to upgrade his hardware gradually, but he complained that it would "cost a lot of money", and that the programs he used weren't guaranteed to work with the new OS.

    He put me in touch with the place that made the accounting software. I chatted with the lead developer, who essentially told me that they didn't care about ram usage; they just had to get the software out the door for tax season each year, and that they just couldn't worry about their customers' PCs. If someone left the software running too long and it ran out of memory, the solution was to tell the customer to reboot. They were also extremely far behind the curve (3 years) in supporting each new OS, which made upgrading your OS when purchasing a new PC something of a challenge.

    Then it finally hit the wall. Most of the apps sucked up so much RAM that they wouldn't even load. He wound up buying new machines. But wait, Win-98 didn't have drivers for the newer hardware, so he had to upgrade to XP. But wait, the apps wouldn't run correctly under XP. Ok, compatibility mode. But wait, printing didn't work quite right. Ok, get just the right printer drivers. But wait, the network file server wouldn't talk to Win-XP correctly. Ok, upgrade the server and its OS to a server version of Windows.

    And so it went for 3 weeks. During tax season. When the work was piled high and nobody could get anything done because of all the problems.

    It wound up costing him more than $100K in lost business because he didn't want to do $20K worth of hardware/OS upgrades, even as the apps he needed were getting kludgier by the day.

    There's nothing he can do about the apps being behind the OS curve, but he now upgrades 1/3 of his hardware annually - it's essentially become a fixed annual cost.

    There are real costs associated with putting out crappy software, and taking shortcuts just pushes the cost onto someone else.

    That has to be Peachtree by SAGE.

    (Hooray for learning how to quote.)

  • Daniel (unregistered) in reply to CoderHero
    CoderHero:
    Daniel:
    Anyone who said there was nothing in the article indicating a memory leak is WTF. Go learn how to debug software.

    The server needed to be rebooted regularly, and with increasing frequency. When it got to rebooting every day, they decided to do something about it.

    Hey, guys, if there isn't a memory leak, then rebooting doesn't help. Period.

    The only thing a reboot does is release unneeded resources. If the server can run through the day, including, of course, the peak hours, and after a while -- originally a few days -- gets to the point where it needs to be rebooted, and rebooting helps, then there are unneeded resources in memory. I.e., a memory leak.

    So, to all of you who think buying memory would have solved the problem and there was no evidence of memory leaks... well, LEARN something. It might help you one day.

    It could just as easily be a memory leak as a scalability problem.

    No, it could NOT be a scalability problem. If it were a scalability problem, the failures would happen at peak times. Go back and read the problem description and what I said again. The symptoms are very clear.
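
    For illustration, here is a toy Python sketch of the kind of leak Daniel is describing; every name in it is invented rather than taken from the article:

        _results_cache = {}  # grows forever: nothing ever evicts entries

        def expensive_render(payload):
            # Stand-in for real work; the retained result is what pins memory.
            return payload * 1000

        def handle_request(session_id, payload):
            # Classic leak shape: memory grows with the number of distinct
            # sessions ever seen, so only a restart brings it back down.
            if session_id not in _results_cache:
                _results_cache[session_id] = expensive_render(payload)
            return _results_cache[session_id]

        # A "day" of traffic: every new session leaks a little more.
        for i in range(10_000):
            handle_request(f"session-{i}", "x")
        print(len(_results_cache))  # 10000 entries, none of them reclaimable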

  • Daniel (unregistered) in reply to CoderHero
    CoderHero:
    Daniel:
    Anyone who said there was nothing in the article indicating a memory leak is WTF. Go learn how to debug software.

    The server needed to be rebooted regularly, and with increasing frequency. When it got to rebooting every day, they decided to do something about it.

    Hey, guys, if there isn't a memory leak, then rebooting doesn't help. Period.

    The only thing a reboot does is release unneeded resources. If the server can run through the day, including, of course, the peak hours, and after a while -- originally a few days -- gets to the point where it needs to be rebooted, and rebooting helps, then there are unneeded resources in memory. I.e., a memory leak.

    So, to all of you who think buying memory would have solved the problem and there was no evidence of memory leaks... well, LEARN something. It might help you one day.

    It could just as easily be a memory leak as a scalability problem.

    Actually, ignore my previous answer - it's correct, but there's a stronger counter-argument.

    If the problem were scalability, there would be no point in rebooting the server. It would crash at peak times, sure, but once the load was down, there would be no point in rebooting it. Read again what I said; it's all there. Or, if you want, try to think up a scenario where rebooting the server would help - a non-leaking scenario.

  • Frunobulax (unregistered) in reply to snoofle
    snoofle:
    Nick:
    > Oh this is easy - I'd begin a 5 month optimisation effort.

    There are only 4 months left before the end of the world. Now what do you do?

    Since getting a divorce (around here) takes way longer than 4 months, I'd ask my wife and Irish Girl for a threesome...

    Your wife isn't into it and Irish Girl turned out to be a guy. Now what do you do?

  • ChiefCrazyTalk (unregistered) in reply to Phantom Watson
    Phantom Watson:
    And here I thought I could be the clever one that makes the first 'Wilhelm scream' reference.

    My vote is that the real WTF is that Wilhelm never realized that upgrading hardware was an option.

    The real WTF is that Wilhelm didn't have a quick answer as to why adding memory to fix a memory leak was a bad idea.

  • ChiefCrazyTalk (unregistered) in reply to Frunobulax
    Frunobulax:
    snoofle:
    Nick:
    > Oh this is easy - I'd begin a 5 month optimisation effort.

    There are only 4 months left before the end of the world. Now what do you do?

    Since getting a divorce (around here) takes way longer than 4 months, I'd ask my wife and Irish Girl for a threesome...

    Your wife isn't into it and Irish Girl turned out to be a guy. Now what do you do?

    I think I would still do the Irish girl.

  • Tim (unregistered) in reply to anonymous

    Amazing how we geeks boast about how small we can make things. Is this some sort of reverse-phallic-dilemma? Would Freud have a field day with us or what...

    T-

  • ha (unregistered) in reply to Fedaykin

    You are an idiot

  • QwikFix (unregistered)

    Surely the cheapest, easiest way to reduce the memory needs of the application would be to remove all the comments and unnecessary whitespace from the source code.

  • WizardStan (unregistered) in reply to Daniel
    Daniel:
    If the problem were scalability, there would be no point in rebooting the server. It would crash at peak times, sure, but once the load was down, there would be no point in rebooting it. Read again what I said; it's all there. Or, if you want, try to think up a scenario where rebooting the server would help - a non-leaking scenario.
    The server maintains each session for an extended period of time, say 12 hours, before dropping the object. A few users are no problem. A bunch of users drive memory usage up, but 12 hours later it drops back down. Over the course of 12 hours many users log on and a lot of session objects are created, giving the appearance of a memory leak, until the server dies. If there had been just a little more memory to handle the next few calls, sessions would have started expiring and memory would have been freed. Without any further information on the problem, I don't think that's the case here (especially since default timeouts are usually minutes, not hours), but it is an entirely plausible scenario.
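
    WizardStan's scenario is easy to simulate. In the Python sketch below (all numbers invented), memory climbs for one full TTL and then plateaus - which a true leak never does:

        SESSION_TTL = 720     # invented: sessions live 12 "hours" of logical minutes
        sessions = {}         # session_id -> creation time

        def touch(session_id, now):
            sessions[session_id] = now
            # Expire anything older than the TTL.
            for sid in [s for s, t in sessions.items() if now - t > SESSION_TTL]:
                del sessions[sid]

        for minute in range(2000):
            for user in range(5):               # invented: 5 new sessions per minute
                touch(f"{minute}-{user}", minute)
            if minute % 400 == 0:
                # Climbs until minute ~720, then plateaus near 5 * SESSION_TTL.
                print(minute, len(sessions))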
  • Nate (unregistered)

    Reminds me of someone I used to work with.

    We were talking about refactoring the code because it was slow. Her solution: "Who cares about re-factoring when you can just get faster processors?" Evidently, this was a common response from the other Java devs too. (We had a .NET team and a Java team.)

    On another note

    "So why not pick up another 2GB of memory for fifty, sixty bucks and install that in the server?"
    I have a background in both programming and infrastructure, and believe me, if this was a high-demand app running on any decent server, 2GB of ECC RAM for that sort of server is not $50-60; it's more likely to cost over $1500. So yeah: $1500 now, and every few months after, because the memory leak is still there.

  • Nate (unregistered)

    My car seems to be getting only half the mileage from a tank of fuel that it should. According to the "add more RAM" theory, the solution is to add a bigger fuel tank, right?

  • (cs) in reply to silent d

    Real programmers need 1 byte of RAM. And that one is virtual.

  • I don't read Dilbert because I live it (unregistered) in reply to Fedaykin
    Fedaykin:
    Sorry, but the WTF would be just throwing more memory at the problem, because all that would do is make the time between OOM errors longer. It wouldn't solve the core problem of poor resource management in the system.

    Now, from TFA, I would guess there is a big chance that the effort was way overboard, but the core intent was correct: fix the source of the problem.

    Throwing more memory at the problem is only yet another duct tape and chicken wire solution.

    Time = Money. Technically they DID throw money at the problem - a lot more money than some more RAM would have cost.

    Those programmers could have been working on real bugs, leaks, or new functionality that saved time and money or generated revenue.

  • (cs) in reply to halber_mensch
    halber_mensch:
    the WHERE clause included a calculation with six trigonometric functions

    Yay Great Circle distance formula.

  • ChickenGod (unregistered) in reply to Fedaykin

    You can go a long way duct-taping people to chicken wire.

  • Fetrow (unregistered) in reply to Fedaykin

    Throwing more memory at the problem is only yet another duct tape and chicken wire solution.

    You write that like it's a BAD thing....

    A serious aside: there was a middle ground here, where RAM was immediately added as a stopgap and the memory leaks could then have been fixed. Period. Heavy-duty memory optimization was likely at least a month of their time, and probably more.

    Nevertheless, if the programmers are sticking around, they probably did learn things that will be useful in the future to the company (aside from the 'add more memory' bit). Think of it as hideously expensive training.

  • WizardStan (unregistered) in reply to Nate
    Nate:
    My car seems to be getting only half the mileage from a tank of fuel that it should. According to the "add more RAM" theory, the solution is to add a bigger fuel tank, right?
    Bad analogy. Better analogy: You can only go half as far as you want to on a full tank. You can either stop every so often to fill up (reboot), tear out and rebuild the engine to try and make it more efficient (rewrite), or install a bigger tank. With the cost of gas, a more efficient engine would be better, but imagine if it ran on something with virtually no cost, like water. The cost to rebuild and improve the engine may far outweigh the costs of carrying and using twice as much water.
  • Ec 101 (unregistered) in reply to CoderHero

    CoderHero:
    I don't know about you, but I prefer to take measurements of things I can measure, or reasonably estimate, rather than vague and fuzzy numbers.

    You, sir, are in the wrong profession.

    If you only want to solve problems that have good numbers, and not the important problems, you should be an economist. They have cornered the market in that.

  • quarterbyte (unregistered) in reply to ClaudeSuck.de
    ClaudeSuck.de:
    Real programmers need 1 byte of RAM. And that one is virtual.
    and time-shared.
  • wesley0042 (unregistered) in reply to Too much TV
    Too much TV:
    For web apps, the server's RAM requirements have absolutely no correlation to the end user's RAM requirements -- even if the code is buggy and poorly designed. Just because Google runs on clusters with dozens of GB of RAM and terabytes of disk doesn't mean soccer moms across the globe have to buy more RAM for their computers to use Google Maps to find the nearest coffeehouse.

    Cheers!

    No, but since Google runs thousands of servers, they should watch the RAM and CPU requirements of their software: they can either buy lighter servers or buy fewer servers. Electricity and rent are two big costs for them, so either is a win.

  • Billy Bob (unregistered) in reply to Fedaykin

    An interesting situation. If the company shipped the code with specifications that recommended (say) a 512MB machine, then the people purchasing it would reasonably be irked to be told that they had to add more memory or purchase more hardware to "really" make it work.

    Of course, there are other intangible costs, like the reputation of the source, and tangible costs, like the cost of fielding calls to the help desk and lawsuits.

    Not that this ever happens ...

  • Nitpicker (unregistered) in reply to anon
    anon:
    Wilhelm:
    krupa:
    Wilhelm:
    Hypothetical:
    You have only 1 KB of memory. Now what?

    Buy some more memory.

    Did you not read the article?

    Someone stole your wallet. Now what do you do?

    Kill the interviewer and steal his wallet.

    I'm a velosorapter, what do you do now?

    Ask you why you can't spell velociraptor ;-)

  • Pyro (unregistered) in reply to Salami
    Salami:
    5 months' work for 5 guys vs. $60. You guys who agree with Wilhelm must all be on the receiving end of paychecks and not the ones writing them.

    Except the $60 wouldn't have fixed the problem; it would just have made the restart happen every 4 days rather than every day. A few more months and they'd be back to every day again.

    Meanwhile, since the app was being patched, hacked, and patched some more and management was willing to assign a team to it, it was probably important to the business.

    Wilhelm's solution actually fixed the resource leak and paid off a lot of the IOUs the developers had been writing through all of those quick hacks.

    As a bonus, 4 programmers probably became better through the mentoring process and are now very familiar with the codebase, so hopefully they can keep it clean this time.

  • SurturZ (unregistered) in reply to Fedaykin

    Adding memory and having a script that bounces the process every so often is not the best fix for a memory leak, but it is certainly pragmatic. It really depends on how much it costs to track down the leak and fix it - or, more to the point, there should be a decision about how much the organisation is willing to spend tracking down the bug.
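
    For the record, a rough Python sketch of the kind of bounce script SurturZ means; the server command, the 1.5 GB threshold, and the use of the third-party psutil package are all assumptions:

        import subprocess
        import time

        import psutil  # third-party: pip install psutil

        LIMIT_BYTES = 1_500_000_000   # invented threshold: bounce at ~1.5 GB
        CMD = ["./leaky_server"]      # hypothetical server binary

        while True:
            proc = subprocess.Popen(CMD)
            stats = psutil.Process(proc.pid)
            while proc.poll() is None:          # still running?
                if stats.memory_info().rss > LIMIT_BYTES:
                    proc.terminate()            # the "bounce"
                    proc.wait()
                    break
                time.sleep(60)                  # check once a minute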

    There's a reason the first question tech support asks is "Have you tried rebooting the computer?" It's cheap!

  • SurturZ (unregistered) in reply to anonymous
    anonymous:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.

    16k!?! I started with 3.5k on a VIC-20, thankyouverymuch.

    Heheh, truncating the command PRINT to PR to save three bytes. Removing all whitespace. Good times.

  • Sergey Stadnik (unregistered) in reply to Fedaykin

    A far better solution would be to build a robot that presses the server's reset button every night.

  • JoelKatz (unregistered) in reply to Anonymous

    "The computer had 512MB of ram. What production computer has 512MB?"

    One day I will have to tell the story of a disaster on a production system while it was being used publicly by the President of the United States. The ultimate cause of the problem? Memory exhaustion. The system had an amount of memory that was ridiculously small for the time.

    I think it was 16MB at a time when typical desktop machines had 64MB bare minimum. Maybe it was 64MB when 256MB was the bare minimum.

  • JoelKatz (unregistered) in reply to JoelKatz

    Hey, I found out the full story is public: http://www.boredom.org/cnn/statement.html (not our side of the story, of course, but it will give you an idea of what I'm talking about).

  • Dave G. (unregistered)

    It's obvious that most people here have no business training whatsoever.

    5 months (someone estimated this at $200,000) to fix the problem, compared to buying a RAM module for $60? How is this even a discussion? How can you possibly argue that the extra $199,940 was worth it? Hell, you might as well rewrite the app from scratch.

    I've been programming my whole life, and while I love the idea of optimising the hell out of something and coming away with a positive result (a 50% improvement is very impressive), the business side of me could never, ever justify such a huge disparity in time and cost.

    I know the extra RAM doesn't actually solve the problem; it's a temporary fix. As such, while the code is being maintained through its normal course, small periods of refactoring/optimisation should be conducted to incrementally improve the application.

    But for God's sake, blocking off 5 months of time to work on something like this is financial suicide. For a public company, it would be borderline criminally negligent to burn so much money so casually.

    What if it took 12 months to fix this? That's $480,000. How about 2 years, for almost a million dollars? When do you say "OK, that $60 RAM module is looking pretty good now"?

    Hint: the correct answer is not "we will never say that; we should spend all the time necessary to fix the application, no matter what the cost is". If you disagree, then I'd advise you never to start your own business, because you will be bankrupt within a year.

    Get some perspective, please, my fellow geeks. I know it's cool to hate on "business decisions" and "managers", but this one isn't close. This time, the geeks have it wrong. Trust me.
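
    To make Dave G.'s arithmetic explicit, a throwaway Python calculation; only the $200,000-per-5-months estimate comes from the thread, the rest is multiplication:

        # Only the $200,000 / 5-month estimate comes from the thread.
        burn_per_month = 200_000 / 5          # $40,000 per month for the team

        for months in (5, 12, 24):
            print(f"{months} months: ${burn_per_month * months:,.0f}")
        # 5 months: $200,000   12 months: $480,000   24 months: $960,000
        # ...versus a $60 RAM module, which only postpones the reboots.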

  • Sean Tomlinson (unregistered) in reply to anonymous

    Weak sauce!

    1K.

    Timex-Sinclair.

    If an array or code got too big, it would overwrite the video memory...

  • Rick Lively (unregistered)

    Don't forget that his WHOLE team learned how to do it right. This was "everything would be cleaned up later" time.

    "His team still complained about some of the work"? I guess learning how to do it right is too hard... Let the next guy do that... ...Wait, you are outsourcing our work elsewhere? But, we can do that too! When? If you would only let me show you on my next projec... ...OK, then maybe I can do it right at my next job...

  • Ginger Beer (unregistered) in reply to Fedaykin

    Yes, adding extra memory is arguably a bit of a patch-up job.

    However, if the application is creating temporary memory structures that are not being properly destroyed, then you should add more memory anyway. That way you are not restarting the application on a daily basis (with the attendant risk of data loss or corruption) but weekly or fortnightly, and you have reduced the heat you are copping from customers and management.

  • Mik (unregistered) in reply to taylonr
    taylonr:
    Do you do it now, while it's manageable, or 2 years down the road, when it takes 5 men 12 months to rewrite?

    Or two women 3 months.

  • N Morrison (unregistered) in reply to anonymous

    Nope. 4K of RAM and 4K of ROM - and a 250 baud cassette drive for storage. (Model I Level I TRS-80).

  • (cs)

    Ha. Proves the intern is still wet behind the ears.

    Obviously they were on a tight budget and couldn't afford to procure more hardware.

  • morkk (unregistered) in reply to anonymous
    anonymous:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.

    16k!?! I started with 3.5k on a VIC-20, thankyouverymuch.

    Ha! I'll see your pitiful VIC-20 and raise you a Sinclair ZX81 with 1K of RAM! And I called myself lucky!!

  • Karl V (unregistered)

    We had a situation a while back with a report that kept throwing Out Of Memory exceptions as well. The programmer who coded it insisted that it was a "Complex" report and just needed more memory to generate. (This was happening on systems with 1 gig of RAM trying to load around 1000 records.)

    When another programmer finally looked at it we quickly found the source of the problem:

    He was using static arrays... five to be exact... of long... set to 20000000. He was "Planning for future growth". (At that time our largest client had about 20000 records for that report)
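
    Karl V doesn't name the language, but the arithmetic checks out. A quick back-of-envelope in Python, assuming an 8-byte long:

        arrays = 5
        elements = 20_000_000      # "planning for future growth"
        bytes_per_long = 8         # assumed 8-byte long

        preallocated = arrays * elements * bytes_per_long
        print(f"{preallocated / 2**30:.2f} GiB preallocated")   # ~0.75 GiB

        # Versus sizing for the ~20,000 records the biggest client had:
        needed = arrays * 20_000 * bytes_per_long
        print(f"{needed / 2**20:.2f} MiB actually needed")      # ~0.76 MiB

    On the 1-gig systems mentioned, that preallocation alone plausibly explains the Out Of Memory exceptions.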

  • (cs) in reply to Too much TV
    Too much TV:
    webhamster:
    Hypothetical:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.
    You have only 1 KB of memory. Now what?

    I build an Apollo Command Module and a Lunar Module and fly to the moon. That's a whole lot more memory than those systems had...

    /kids today

    Nope. I just watched the Moon landing stuff on Discovery Science last week: the main guidance computer used about 72K of memory - and that was for data, programs, AND the OS, which they of course had to write themselves. The program was "written" by weaving copper wires through and around magnetic core rings. If the wire went through a ring, that bit was a 1; if it passed outside the ring, it was a 0. The resulting woven cable was then snaked around and attached to boards that were installed into the spacecraft.

    The programmers weren't even the engineers who designed the program; they contracted women to do the weaving, because their hands were smaller and more dexterous, and they were used to doing that kind of work in the 1960s.

    Fascinating bit of history.

    Cheers!

    Yeah, I was being a tad sarcastic. But still, consider that they went to the MOON with 72K of memory, and we're talking about a web app to sell widgets being underpowered at 2GB.
