• Leon (unregistered) in reply to Wilhelm
    Wilhelm:
    krupa:
    Wilhelm:
    Hypothetical:
    You have only 1 KB of memory. Now what?

    Buy some more memory.

    Did you not read the article?

    Someone stole your wallet. Now what do you do?

    Kill the interviewer and steal his wallet.

    Your interviewer is an undead zombie and cannot be killed. Also he has no wallet. Now what do you do?

  • Edward Pearson (unregistered) in reply to Huh

    You would pay a TEAM of people for FIVE months, to do something that could be achieved in <15 minutes and 60 dollars?

    Pray you're never put in charge.

  • Henry (unregistered) in reply to Anon

    OOM Exception....

    The memory is virtual these days...

    So it's leaking or was too big to fit in the virtual memory...

    So putting more memory in the server is the WTF (this only prevents swapping; you do not get more virtual memory).
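
    For what it's worth, here's a minimal sketch (assuming the app in the article runs on a standard HotSpot-style JVM; the class name is made up) of why adding sticks of RAM doesn't buy the app anything by itself:

    public class HeapCeiling {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            // maxMemory() reports the -Xmx ceiling the JVM was started with,
            // not how much physical RAM is in the server.
            System.out.println("Max heap:   " + rt.maxMemory() / (1024 * 1024) + " MB");
            System.out.println("Total heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
            System.out.println("Free heap:  " + rt.freeMemory() / (1024 * 1024) + " MB");
            // "java -Xmx512m HeapCeiling" prints the same ceiling whether the box
            // has 512 MB or 8 GB installed; the extra RAM only reduces swapping.
        }
    }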

  • Wilhelm (unregistered) in reply to poke X, peek (X)
    poke X:
    No, there was a full 64K of RAM. For instance: you could copy the BASIC ROM into the RAM memory underneath it, page it out, and have a RAM copy of BASIC that you could muck about with. Which usually meant "turning the error messages into swear words".

    Oh okay, I didn't realise that. So you could ditch BASIC, and just have a big 64k machine code program instead?

    I seem to remember that you could write to a particular chunk of memory and this would be writing to the video memory, but reading from there didn't work or something. Is that another BASIC-specific thing?

  • Wilhelm (unregistered) in reply to Leon
    Leon:
    Your interviewer is an undead zombie and cannot be killed. Also he has no wallet. Now what do you do?

    Oh this is easy - I'd begin a 5 month optimisation effort.

  • Ayup (unregistered)

    I use the company's Amex. Why do you think I'm footing the bill?

  • (cs) in reply to Salami
    Salami:
    snoofle:
    Salami:
    5 months work for 5 guys vs $60.
    Sorry dude, but adding memory only prolongs the inevitable. ...

    I'd agree with you if the server had a normal amount of memory (which would be at least 2 GB). The first thing to do is to bump up the memory and then see what that gained you. You don't spend 2 man-years on a problem without exhausting all easier options first.

    Maybe I'm just biased because I've always worked for huge, monstrous organizations, but (at least in my little corner of the world) getting permission to buy anything, regardless of how cheap it is, takes (almost) an act of God.

    That said, investigating the cheap quick fix is usually preferred, but a quick profile with JProbe and a few standard Unix utils can tell you whether it's thrashing or leaking like a sieve. The former justifies RAM, the latter the rewrite. Having some hard facts to back up your recommendation invariably improves your chances of getting what you need.

    As an aside, if you know that the code is a world-class WTF, and the manager constantly has to deal with elongated deliverables, then even if the RAM can solve it today, you can still justify the rewrite.

  • spinfire (unregistered)

    A smaller memory footprint is always better due to cache effects.

    Intern Bob is inexperienced so we'll forgive him.

  • Nick (unregistered)

    Oh this is easy - I'd begin a 5 month optimisation effort.

    There are only 4 months left before the end of the world. Now what do you do?

  • Wilhelm (unregistered) in reply to Nick
    Nick:
    > Oh this is easy - I'd begin a 5 month optimisation effort.

    There are only 4 months left before the end of the world. Now what do you do?

    I'm not sure... do you look cute?

  • Alex B (unregistered)

    worst article, ever

  • SomeCoder (unregistered) in reply to Edward Pearson
    Edward Pearson:
    You would pay a TEAM of people for FIVE months, to do something that could be achieved in <15 minutes and 60 dollars?

    Pray you're never put in charge.

    So what happens when your web site goes from thousands of hits per day to millions and millions per day? Then all that extra RAM you bought fills up and you are stuck with the server crashing again.

    I can think of quite a few bloated apps that you must have been in charge of.

  • (cs) in reply to Nick
    Nick:
    > Oh this is easy - I'd begin a 5 month optimisation effort.

    There are only 4 months left before the end of the world. Now what do you do?

    Since getting a divorce (around here) takes way longer than 4 months, I'd ask my wife and Irish Girl for a threesome...

  • Joe (unregistered)

    The real WTF is that they purchased a production server with only 512 MB of RAM. A cheap PC from Wal-Mart comes with 2 gigs.

  • (cs) in reply to Kyle
    Kyle:
    Emotionless? Wilhelm is always screaming.

    HOOOGAN!!

  • (cs) in reply to DJ
    DJ:
    That is the problem with these old-school programmers. Most of them are not that well versed in what is going on beyond what they actually know. They don't really want to admit that someone else can come up with a better solution than what they can come up with. They are good to have on your team, but they are a pain in the A$$ at the same time.
    Yeah, those asshole people that waste all their time harping on "memory management" and "properly documented code" and "runtime efficiency" are always getting in the way of me playing Halo in my cube too. They're always bitching at me because my code is crashing the web servers - DUH! BUY NEW ONES YOU LAZY BUMZ0R3Z! Look, you could have called CDW in the time it took for you to whine at me. Now leave me alone so I can do some REAL work on my MySpace page.
  • jtl (unregistered)

    While optimizing code is all well and good, management cares about the bottom line. If the memory-related slowdown turns out to be negligible after a cheap RAM upgrade, you look pretty stupid for wasting time and money when a quick fix would have kept the trains running on time. You say eventually that wouldn't be enough, but then maybe they buy a new server with twice that much RAM for 100 bucks. You don't know for sure, and worst case they end up optimizing later anyway, but maybe they can get away with it. If the hardware is cheap, don't blow money optimizing the software first.

    I know you all live in Utopias though where it's okay to waste money on non-noticeable improvements, but the intern just pointed out that Wilhelm possibly wasted a lot of money.

    Results are all that matters in business.

  • (cs) in reply to Salami
    Salami:
    5 months work for 5 guys vs $60. You guys who agree with Wilhelm must all be on the receiving end of paychecks and not the ones writing them.

    One of the main problems here is: if it is a memory leak, you will continue to lose memory (duh, right?). So by throwing more RAM at the machine, all you are doing is prolonging the time between server restarts, not preventing them.
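
    For the Java-inclined, the textbook version of that kind of leak is a collection that only ever grows; a rough sketch (all names hypothetical):

    import java.util.HashMap;
    import java.util.Map;

    // Entries go into a static map on every request and nothing ever removes
    // them, so the heap fills up no matter how much RAM you add.
    public class SessionRegistry {
        private static final Map<String, byte[]> SESSIONS = new HashMap<String, byte[]>();

        public static void onRequest(String sessionId) {
            SESSIONS.put(sessionId, new byte[100 * 1024]); // ~100 KB retained per session, forever
        }

        // The missing piece: nothing ever calls SESSIONS.remove(sessionId),
        // so a bigger heap only stretches the time until the next OutOfMemoryError.
    }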

  • CoderHero (unregistered) in reply to SomeCoder
    SomeCoder:
    Yes, I completely agree with what has been said here. Wilhelm was completely correct in doing what he did. Yes, he was maybe a bit hard nosed about things but I tend to agree with him on one point: programmers are lazy asses these days. ...

    It's not that we're lazy these days. As a developer who works for a living, suppose I have a choice when I implement something: a) an easy, reasonably fast solution that I know will work but requires an extra 2 MB of RAM (maybe temporarily, maybe not); b) a much more complicated algorithm of similar (possibly slower) speed that needs a negligible amount of RAM; c) a very complicated algorithm that is just as fast, maybe faster, and has no significant RAM requirement.

    Given that today computing power and RAM are both VERY cheap, I'll choose the solution that's the easiest. My time is expensive compared to computing costs, and a more complex solution will likely have more bugs in it as well.

    The thing to remember is that 20 or 30 years ago, cheap RAM and cheap computing were not available! A different set of criteria was placed on developers.

    One last point: whatever scale you think your product has to reach may not ever actually materialize. Just because there's the potential for 50 million simultaneous users doesn't mean it's going to happen.

  • Phantom Watson (unregistered)

    And here I thought I could be the clever one that makes the first 'Wilhelm scream' reference.

    My vote is that the real WTF is that Wilhelm never realized that upgrading hardware was an option.

  • mauhiz (unregistered)

    From what I read in this article there were no memory leaks, just excessive memory usage. In this case adding memory is by far the best solution - even if the coding standards will have to wait. 512 MB is very little for a J2EE app.

    That may depend on the project life expectancy, but I think the WTF is what the writer of the article meant it to be.

  • Anon (unregistered) in reply to FredSaw

    No, real programmers started with 4k.

    2GB would have increased the time between reboots. While I think a web application should have a good amount of memory for caching purposes, memory leaks and wasted memory usage do need to be fixed.

    That, of course, is the difference in opinion between an engineer and a programmer.

  • anonymous (unregistered) in reply to FredSaw
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.

    16k!?! I started with 3.5k on a VIC-20, thankyouverymuch.

  • Harry (unregistered)

    TRWTF is that unreleased resources could be file handles, database connections, sockets, you name it. Memory isn't the only thing these 'hasty' programs will leak.
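
    In Java terms, the cure for all of those is the same boring pattern: release the resource in a finally block no matter what happens in between. A minimal sketch (the DataSource wiring and the query are hypothetical, just for illustration):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import javax.sql.DataSource;

    public class CustomerDao {
        private final DataSource dataSource;

        public CustomerDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public int countCustomers() throws Exception {
            Connection conn = dataSource.getConnection();
            try {
                Statement stmt = conn.createStatement();
                try {
                    ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM customer");
                    try {
                        rs.next();
                        return rs.getInt(1);
                    } finally {
                        rs.close();      // result sets, statements and connections
                    }                    // all leak if any step above throws
                } finally {
                    stmt.close();
                }
            } finally {
                conn.close();            // without this, the pool slowly runs dry
            }
        }
    }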

  • (cs)

    Sure, I agree that the problem should be fixed rather than swept under the rug. But also keep in mind that devoting that much time and resources to this problem is also "throwing money at the problem" - exponentially greater sums of money.

    It's easy for us to sit here and say "why spend $60 on a quick fix when you can get it right for only $<millions>?"

  • Bob Johnson (unregistered) in reply to ounos
    ounos:
    TRWTF is that he thought "staticalizing instance methods" actually reduces memory footprint.
    Correct me if I'm wrong, but every instance method has an implicit parameter added (I'm assuming this is C# based on a quick google search). Switching an instance method to a static would remove the "this" parameter, and reduce the junk on the stack. It may be a small performance gain, but those bytes can add up (my work is designed for small devices, so I may be nitpicking here).
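
    Roughly what that looks like in Java (which is what the article's app sounds like; the class and method names here are made up):

    public class PriceFormatter {

        // Instance method: callers need a PriceFormatter object, and every
        // call carries a hidden "this" reference.
        public String formatInstance(double price) {
            return String.format("$%.2f", price);
        }

        // Static method: no hidden "this", and no instance required at all.
        public static String formatStatic(double price) {
            return String.format("$%.2f", price);
        }
    }

    // From the caller's side:
    //   new PriceFormatter().formatInstance(9.99);  // allocates an object just to make the call
    //   PriceFormatter.formatStatic(9.99);          // no allocation
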
  • Bernie (unregistered)

    The article doesn't actually state that the memory leaks were fixed. It does state that the memory footprint was reduced. If the leaks weren't fixed, then the app would still eventually run out of memory, only it would take longer (just as if more memory was added)... unless the app now leaks faster.

    Also, there is a thing called Time-Space Trade Off. I'm sure you've heard of it.

    All in all, it is possible that the app is smaller but a lot slower, and leaks memory so much faster that it runs out of memory sooner.
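
    The classic example of that trade-off is a cache: spend memory to avoid repeating slow work. A tiny sketch (the lookup is hypothetical):

    import java.util.HashMap;
    import java.util.Map;

    public class ExchangeRates {
        private final Map<String, Double> cache = new HashMap<String, Double>();

        public double rateFor(String currency) {
            Double cached = cache.get(currency);
            if (cached != null) {
                return cached;                  // costs memory, saves time
            }
            double rate = fetchFromRemoteService(currency);
            cache.put(currency, rate);
            return rate;
        }

        private double fetchFromRemoteService(String currency) {
            return 1.0;                         // stand-in for a slow database or web-service call
        }
    }

    Rip out caches like this one and the footprint shrinks, but every request pays the slow path again - which is exactly the "smaller but a lot slower" outcome I'm talking about.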

  • (cs) in reply to Nerdguy
    Nerdguy:
    Funny story. Years ago I worked at a company that managed the website for a large publishing concern. Let's call that publishing concern "Petrel Publishing". They used an unwieldy CMS written in Tcl/Tk (you can guess which one), running on Solaris boxes. We had recently moved the site to a new set of servers and a new location, and things were going awfully. They'd crash outright every time the traffic went even slightly up. Hard deadlock. Doom. Even with 2 GB per server, and quad UltraSPARC CPUs. So we'd just log in and restart the server and blame the damned CMS, until eventually the bosses "somewhere in EMEA" got tired of it and demanded we fix it post haste.

    So all the young unix nebbishes are sitting around trying to think of reasons that the software sucks, and whether we could get away with cron'ing something up, when our CIO, who'd started programming back when Data General still made hardware, comes in, listens to us for ten minutes, squints a bit and says "What about ulimit?".

    Now, we were all Linux admins. Sure, we'd used Solaris, and adminned boxes that had already been set up, so we were a bit unfamiliar with Solaris' ulimit settings.

    By default, the rlim_fd_max on Solaris is 64. So any process can only open 64 concurrent files. Now, the CMS needed to generate and serve up thousands upon thousands of files, and would just choke when asked to do so.

    Needless to say, we upped it to 8k, and the site never crashed again.

    So here's to you, old-timers.

    Sure, that works now, but what will you do after 50 years when you reach the hard limit? Whatcha gonna do then, huh?

    You hack! You ad hoc shmuck! The cheapest solution in the long term is to spend $inf to get a system that can have an infinite number of open files.

  • Gort (unregistered) in reply to Wilhelm
    Wilhelm:
    Commodore 64s only had 38k of RAM. The rest was write- or read-only memory
    No, it had the full 64k. Bloody n00b BASIC programmers.

    In fact, IIRC, it had 68k of usable RAM because you could store data in the tape/disc I/O buffers. It was a pain to get access to, but if you were desperate...

  • (cs) in reply to Bob Johnson
    Bob Johnson:
    ounos:
    TRWTF is that he thought "staticalizing instance methods" actually reduces memory footprint.
    Correct me if I'm wrong, but every instance method has an implicit parameter added (I'm assuming this is C# based on a quick google search). Switching an instance method to a static would remove the "this" parameter, and reduce the junk on the stack. It may be a small performance gain, but those bytes can add up (my work is designed for small devices, so I may be nitpicking here).

    I was thinking about this also and was surprised that it wasn't getting more notice. Here's my theory:

    1. They have a whole bunch of objects that are cached and live a long time.
    2. Each one of these objects depends on some common utility function.
    3. Instead of using a static method in some other class, each object creates its own instance of this other class.
    4. Each object holds onto this instance forever, instead of just using the method and releasing the object.

    The fact that the author of the article explicitly brought this issue up leads me to believe that this was a big part of the problem. Holding onto thousands (millions?) of useless objects like this definitely could impact memory usage.

    That's my theory, anyway. I've never actually seen anyone do something so stupid before.
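
    In code, what I'm describing would look roughly like this (all names made up):

    // Anti-pattern from steps 3 and 4: every cached object holds its own
    // helper instance forever.
    class CachedOrder {
        private final TaxCalculator calculator = new TaxCalculator(); // one per cached object
        private final double amount;

        CachedOrder(double amount) { this.amount = amount; }

        double total() { return amount + calculator.taxOn(amount); }
    }

    class TaxCalculator {
        double taxOn(double amount) { return amount * 0.08; }
    }

    // After "staticalizing": thousands of cached orders share zero helper instances.
    class LeanOrder {
        private final double amount;

        LeanOrder(double amount) { this.amount = amount; }

        double total() { return amount + TaxUtil.taxOn(amount); }
    }

    final class TaxUtil {
        private TaxUtil() {}
        static double taxOn(double amount) { return amount * 0.08; }
    }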

  • Erlingur (unregistered) in reply to Wilhelm

    Write-only memory? I really, really hope you're kidding...

    Captcha: nobis (seems quite fitting)

  • testx (unregistered)

    TRWTF is that people actually think this story is a WTF.

  • Ozz (unregistered) in reply to anonymous
    anonymous:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.
    16k!?! I started with 3.5k on a VIC-20, thankyouverymuch.
    I started with 1k on a Sinclair ZX81 back in '81. Though, I did have some fun in college in the mid 80s with a 6502-based EMMA (25-key keypad and a 7-segment display - it was programmed in raw hex) but I can't remember how much RAM it had.
  • anon (unregistered) in reply to Wilhelm
    Wilhelm:
    krupa:
    Wilhelm:
    Hypothetical:
    You have only 1 KB of memory. Now what?

    Buy some more memory.

    Did you not read the article?

    Someone stole your wallet. Now what do you do?

    Kill the interviewer and steal his wallet.

    I'm a velociraptor, what do you do now?

  • frustrati (unregistered) in reply to Bob Johnson
    Bob Johnson:
    ounos:
    TRWTF is that he thought "staticalizing instance methods" actually reduces memory footprint.
    Correct me if I'm wrong, but every instance method has an implicit parameter added (I'm assuming this is C# based on a quick google search). Switching an instance method to a static would remove the "this" parameter, and reduce the junk on the stack. It may be a small performance gain, but those bytes can add up (my work is designed for small devices, so I may be nitpicking here).
    It could also be that code like this:
    (new SomeClass()).instanceMethodThatShouldBeStatic();

    Was reduced to:

    SomeClass.instanceMethodThatShouldBeStatic();

    Which should drastically reduce the GC's workload.

    I agree with the countless people who state that Wilhelm should focus on fixing memory leaks (and tons of temporary objects), but not focus too much on getting the app below 512 MB. You can probably fit a webapp in less than 2KB, but why bother? As with so many things, it is about finding the balance.

  • CoderHero (unregistered)

    I think that nobody has bothered to realize that the WTF is that he spent 5 months with 5 people and only got a 50% reduction in memory use (the article doesn't really state that there was a leak).

    With 2 developer years of effort I would have hoped for something closer to an order of magnitude!

  • SomeCoder (unregistered) in reply to CoderHero
    CoderHero:
    SomeCoder:
    Yes, I completely agree with what has been said here. Wilhelm was completely correct in doing what he did. Yes, he was maybe a bit hard nosed about things but I tend to agree with him on one point: programmers are lazy asses these days. ...

    It's not that we're lazy these days. As a developer who works for a living, suppose I have a choice when I implement something: a) an easy, reasonably fast solution that I know will work but requires an extra 2 MB of RAM (maybe temporarily, maybe not); b) a much more complicated algorithm of similar (possibly slower) speed that needs a negligible amount of RAM; c) a very complicated algorithm that is just as fast, maybe faster, and has no significant RAM requirement.

    Given that today computing power and RAM are both VERY cheap, I'll choose the solution that's the easiest. My time is expensive compared to computing costs, and a more complex solution will likely have more bugs in it as well.

    The thing to remember is that 20 or 30 years ago, cheap RAM and cheap computing were not available! A different set of criteria was placed on developers.

    One last point: whatever scale you think your product has to reach may not ever actually materialize. Just because there's the potential for 50 million simultaneous users doesn't mean it's going to happen.

    Ok, so let me pose a different question to you: What is the end user's time worth to you?

    My time is also expensive. If I have to sit and wait for an app because it requires 4 GB of RAM and I have 1 GB, that's wasting MY time. Sure, your time was saved but which time is more valuable? In this case, I'd say it was the end user's time because if your app thrashes and requires a lot of RAM, he's going to want to use the competing product that doesn't. Or he'll go to a competing website.

    My computer is also expensive. Hardware is relatively cheap but it's not THAT cheap for the average user. You think the soccer mom who uses your app likes to go out and buy sticks of RAM just because some app is bloated? You think she likes to wait while your server crashes because the app leaks memory?

    I don't mean this post as a flame at all, but I think that if more programmers would start caring about the end user's time more than their own time, we'd have a lot more quality apps out there.

  • Anonymous Cow-Herd (unregistered) in reply to Edward Pearson
    Edward Pearson:
    You would pay a TEAM of people for FIVE months, to do something that could be achieved in <15 minutes and 60 dollars?

    No. But then solving the problem would not have been achieved in <15 minutes and 60 dollars.

  • Morasique (unregistered) in reply to testx

    Based on the comments, approximately 2 people think this is a WTF

  • (cs) in reply to Huh
    Huh:
    I guess I'm WTF cause I agree with Wilhelm.
    Me, too.

    Since when is "buy more memory" a more favorable solution to fixing your software so that it is more efficient and has less memory leaks?

    People complain about Microsoft Vista and how memory-intensive it is, but you don't hear many people shrugging off the problem by saying "just slap 4 GB in there, and it'll be fine!"

  • silent d (unregistered) in reply to Ozz
    Ozz:
    anonymous:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.
    16k!?! I started with 3.5k on a VIC-20, thankyouverymuch.
    I started with 1k on a Sinclair ZX81 back in '81. Though, I did have some fun in college in the mid 80s with a 6502-based EMMA (25-key keypad and a 7-segment display - it was programmed in raw hex) but I can't remember how much RAM it had.

    1k? You were lucky. When I started we only had a 16-bit abacus, and when that broke down we had to go back to scratching marks on clay tablets, outside in the pouring rain, walking uphill both ways and ... oh, nevermind.

  • CoderHero (unregistered) in reply to SomeCoder
    SomeCoder:
    CoderHero:
    SomeCoder:
    Yes, I completely agree with what has been said here. Wilhelm was completely correct in doing what he did. Yes, he was maybe a bit hard nosed about things but I tend to agree with him on one point: programmers are lazy asses these days. ...

    It's not that we're lazy these days. As a developer who works for a living, suppose I have a choice when I implement something: a) an easy, reasonably fast solution that I know will work but requires an extra 2 MB of RAM (maybe temporarily, maybe not); b) a much more complicated algorithm of similar (possibly slower) speed that needs a negligible amount of RAM; c) a very complicated algorithm that is just as fast, maybe faster, and has no significant RAM requirement.

    Given that today computing power and RAM are both VERY cheap, I'll choose the solution that's the easiest. My time is expensive compared to computing costs, and a more complex solution will likely have more bugs in it as well.

    The thing to remember is that 20 or 30 years ago, cheap RAM and cheap computing were not available! A different set of criteria was placed on developers.

    One last point: whatever scale you think your product has to reach may not ever actually materialize. Just because there's the potential for 50 million simultaneous users doesn't mean it's going to happen.

    Ok, so let me pose a different question to you: What is the end user's time worth to you?

    My time is also expensive. If I have to sit and wait for an app because it requires 4 GB of RAM and I have 1 GB, that's wasting MY time. Sure, your time was saved but which time is more valuable? In this case, I'd say it was the end user's time because if your app thrashes and requires a lot of RAM, he's going to want to use the competing product that doesn't. Or he'll go to a competing website.

    My computer is also expensive. Hardware is relatively cheap but it's not THAT cheap for the average user. You think the soccer mom who uses your app likes to go out and buy sticks of RAM just because some app is bloated? You think she likes to wait while your server crashes because the app leaks memory?

    I don't mean this post as a flame at all, but I think that if more programmers would start caring about the end user's time more than their own time, we'd have a lot more quality apps out there.

    The problem there is that it's almost impossible to measure. I know how to measure my time and what computing requirements cost.

    For Joe Sixpack who's unemployed, I can say his time is worth almost $0. For Bill HighflyingCEO, his time might be worth $100/hour.

    I don't know about you, but I prefer to take measurements of things I can measure, or reasonably estimate, rather than vague and fuzzy numbers.

  • Scurvy (unregistered) in reply to Gort
    Wilhelm:
    Commodore 64s only had 38k of RAM. The rest was write- or read-only memory

    I'm sorry, did you say write-only memory? Whereabouts can you buy that these days?

  • (cs)

    Real programmers free memory with a roundhouse kick to the motherboard.

  • Rob (unregistered)

    As funny as this is, it ignores the fact that the original codebase wouldn't scale if traffic increased. There's a limit to how much RAM you can throw in a machine.

  • Henry (unregistered)

    Adding physical memory to solve a virtual memory issue...

    Worst article ever; it's only a WTF to non-coders.

  • (cs) in reply to Salami

    Delaying a problem doesn't solve it. And pushing the solution to a point in the future only means that you will spend even more money. The program grows and grows, and if nobody takes care of the code you will end up with 100,000 lines to correct instead of 50,000. Who saves money in the end? OK, $60 for 2 GB is one thing, but it's temporary (as usual) and one day the problem will pop up anyway. So better to review/rewrite/correct now instead of tomorrow.

  • SomeCoder (unregistered) in reply to CoderHero
    CoderHero:
    SomeCoder:
    CoderHero:
    SomeCoder:
    Yes, I completely agree with what has been said here. Wilhelm was completely correct in doing what he did. Yes, he was maybe a bit hard nosed about things but I tend to agree with him on one point: programmers are lazy asses these days. ...

    It's not that we're lazy these days. As a developer who works for a living, suppose I have a choice when I implement something: a) an easy, reasonably fast solution that I know will work but requires an extra 2 MB of RAM (maybe temporarily, maybe not); b) a much more complicated algorithm of similar (possibly slower) speed that needs a negligible amount of RAM; c) a very complicated algorithm that is just as fast, maybe faster, and has no significant RAM requirement.

    Given that today computing power and RAM are both VERY cheap, I'll choose the solution that's the easiest. My time is expensive compared to computing costs, and a more complex solution will likely have more bugs in it as well.

    The thing to remember is that 20 or 30 years ago, cheap RAM and cheap computing were not available! A different set of criteria was placed on developers.

    One last point: whatever scale you think your product has to reach may not ever actually materialize. Just because there's the potential for 50 million simultaneous users doesn't mean it's going to happen.

    Ok, so let me pose a different question to you: What is the end user's time worth to you?

    My time is also expensive. If I have to sit and wait for an app because it requires 4 GB of RAM and I have 1 GB, that's wasting MY time. Sure, your time was saved but which time is more valuable? In this case, I'd say it was the end user's time because if your app thrashes and requires a lot of RAM, he's going to want to use the competing product that doesn't. Or he'll go to a competing website.

    My computer is also expensive. Hardware is relatively cheap but it's not THAT cheap for the average user. You think the soccer mom who uses your app likes to go out and buy sticks of RAM just because some app is bloated? You think she likes to wait while your server crashes because the app leaks memory?

    I don't mean this post as a flame at all, but I think that if more programmers would start caring about the end user's time more than their own time, we'd have a lot more quality apps out there.

    The problem there is that it's almost impossible to measure. I know how to measure my time and what computing requirements cost.

    For Joe Sixpack who's unemployed, I can say his time is worth almost $0. For Bill HighflyingCEO, his time might be worth $100/hour.

    I don't know about you, but I prefer to take measurements of things I can measure, or reasonably estimate, rather than vague and fuzzy numbers.

    It's pretty easy to measure. There are two ways:

    1. Is your product getting bashed in reviews? Do you know of one person on the planet who actually likes it? Ok, you failed. I'm looking at Vista for the specific example here. The only person I've ever heard of that likes it is Microsoft Fanboy Jeff Atwood. Every other review of it is that it's a bloated piece of crap.

    2. Try using your product on a low end machine. And by low end, I mean something that you might not be able to purchase anymore. Is it incredibly painful? You just measured the cost of your time as it relates to using the app. Not everyone has dual core 2.4 GHz processors and 4 GB of RAM. My machine at home (which I am replacing soon because it's painful) only has 512 MB of RAM.

    Another way is to gauge how well your product is selling. MySpace does incredibly well (somehow) despite the fact that it's the most poorly designed web app ever.

    There are exceptions but I for one, believe that we need to get back to making applications that are of high quality as opposed to just saying "Eh, in the future 32 TB of RAM will be the norm so who cares!"

  • SomeCoder (unregistered)

    I should also say that I am speaking "ideally". I understand that in the real world, bosses often just want the hack in place to get moving. However, I am of the firm belief that we need to start correcting IT practices now rather than later, and that working extra hours should be the exception, not the norm, so I am a little outside the mainstream :)

  • Matthew (unregistered) in reply to Hypothetical
    Hypothetical:
    You have only 1 KB of memory. Now what?

    Can't remember.
