• Debra (unregistered)

    My -> May

  • Magyar Monkey (unregistered) in reply to Debra

    There's another typo too, but I don't want to get agresive about it.

  • Aged .Net Guy (unregistered) in reply to Debra

    I disagree that it was a typo. I suspect Remy meant the programmer was thinking "It works on my personal machine. Or at least it works on this one test machine I'm using," and then the dev didn't think any deeper than that.

    The bottom line on all this stuff is that since I started in the 1970s, the minimum quality and quantity of knowledge required to be good at this job have gone up by orders of magnitude, while the quality and quantity of training and worker development have gone down by at least an order of magnitude.

    Crap like this is the inevitable result of the yawning gap between the two.

  • Not that typo (unregistered) in reply to Aged .Net Guy

    The typo is the first word of the text: "My the gods spare us ..." -> "May the gods spare us ..."

  • isthisunique (unregistered)

    Does C# not come with a memory limit option? It's common to have some high value you safely expect your program to never reach. You would only turn it off or bump it for things where you expect high memory usage. That might be a minor annoyance for languages where it's just a global that can't be changed for a single function that has high memory use.

    It's awkward, but if you know what you're doing it's better to detect memory leaks sooner rather than later, before they cause a problem for the system or something else (they also sometimes make things slower and slower).

    Some will argue it's a bad practice. On the other hand, the programmer writing the program has better knowledge of how many resources it really needs. The problem is that with some languages that's more hit and miss than others. However, if you set the memory limit through the language itself, that should play nice with the GC.

    Others depend on external monitoring to manage the problem. For a client application running on other people's computers, it's a bit more complex. I'm guessing this was a case of "never crash the program; instead, warn the user to save first and give them a hint to restart."
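
    A rough sketch of the kind of check I mean, in C# since that's what the article is about (the class name and the limit are invented; a real app would read the threshold from configuration instead of hard-coding it):

        // Hypothetical sketch: a configurable memory ceiling instead of a magic constant.
        using System.Diagnostics;

        static class MemoryGuard
        {
            // Would normally come from configuration; 1.5 GB is only an example value.
            public static long LimitBytes { get; set; } = 1500000000L;

            public static bool OverLimit()
            {
                using (var self = Process.GetCurrentProcess())
                {
                    // Private bytes for the whole process; GC.GetTotalMemory(false)
                    // would track only the managed heap instead.
                    return self.PrivateMemorySize64 > LimitBytes;
                }
            }
        }

    A timer could call MemoryGuard.OverLimit() now and then and warn (or log) once, rather than repeating the magic number at every call site.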

  • Andrew (unregistered)

    TRWTF expired certificate

  • TB (unregistered) in reply to Andrew

    Ah, you beat me to it; that's TRWTF today :)

  • 🤷 (unregistered)

    Expired certificates are scary.

  • Waldo (unregistered)

    Could be to enforce a license that only allows a specific amount of memory.

  • cheapie (unregistered)

    "A Very Private Memory" click "Your connection is not private"

    ...yeah, you might want to see about renewing that certificate.

  • Thomas (unregistered)

    Yeah, it's about time for an article about "all the things that went wrong with certificates at thedailywtf and why" :-P

  • K. (unregistered)

    TRWTF are most browsers. Browser with expired certificate: <<DANGER>> The certificate is unsafe!!! Using plain HTTP: everything is fine, carry on (and a tiny "Not Secure" in the upper right corner).

    ... Because no encryption is safer than bad encryption ...right?

  • K. (unregistered)

    <comment held for moderation>

  • Aaron (unregistered)

    The article says, "here the developer tuned the number 1550000000", but the actual value in the code is "1150000000", which is not the same thing; that just goes to prove how difficult and unmaintainable this code is.

  • Tekay37 (unregistered)

    "thedailywtf.com uses an invalid security certificate. The certificate expired on Monday, 5 March 2018, 14:00. The current time is Monday, 5 March 2018, 15:53."

    wtf?

  • (nodebb)

    If we're collecting typos, I'll point out the discrepancy between 1150000000 and 1550000000.

  • CPT Obvious (unregistered) in reply to Tekay37

    wtf?

    When the current date is after the expiration date, the certificate has expired.

  • Pedro (unregistered) in reply to jkshapiro

    I think it probably said 1550000000. One possible reason is that this is around the maximum memory a 32-bit C# .NET 4.0 app can allocate on Windows without starting to have issues and throwing OOM exceptions. Not 2GB or 4GB as expected, for some reason.

  • snoofle (unregistered)

    Personally, I'm a big fan of allocating all the storage (that you know will be needed) at startup. It's very fast and acts as a heap baseline. Then it's really easy to profile run-time memory under load and figure out how the high water mark compares to your expectations. It also eliminates those unexpected memory "surges" when lazily loading something from a remote system that pukes a whole lot of unexpected data your way.

    Interestingly, not many people I've worked with seem to like this approach, though nobody has ever been able to make a really good case for avoiding it.
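
    A toy C# illustration of what I mean (the names are invented; the point is only that the big buffers exist from startup, so the steady-state heap is visible immediately):

        // Hypothetical sketch: allocate the buffers you know you'll need once, at startup.
        sealed class FrameBufferPool
        {
            private readonly byte[][] _buffers;

            public FrameBufferPool(int count, int sizeBytes)
            {
                _buffers = new byte[count][];
                for (int i = 0; i < count; i++)
                {
                    // Paid for up front, reused for the life of the process.
                    _buffers[i] = new byte[sizeBytes];
                }
            }

            public byte[] Get(int index)
            {
                return _buffers[index];
            }
        }

        // At startup, something like: var pool = new FrameBufferPool(64, 1 << 20);  // ~64 MB baseline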

  • HK-47 (unregistered)

    The best part of this is that the function is probably called in a loop, causing a barrage of never-ending message boxes and making it impossible to close the program or save your data.

  • Brian (unregistered) in reply to snoofle

    I worked on a system once for which dynamic memory allocation (i.e. calls to new and malloc) was completely forbidden. Thus, every object had to be allocated either statically or on the stack. It was an embedded avionics project where one of the main keywords in the requirements was "deterministic", since this was the kind of system where a serious enough bug could literally kill someone. After a while, I did see the wisdom in such an approach, and even now I try to avoid dynamic allocation in my (C++) code unless it's absolutely necessary.

    Of course, one of the many WTFs of that project was that pointers themselves were forbidden, until I pointed out that maybe the performance problems they were having might have something to do with passing large data structures by value. Refactoring the system to use pass-by-reference semantics instead resulted in a huge speed boost.
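
    The pass-by-value trap is easy to sketch; the system in question was C++, but the same thing happens with large value types in C# (a purely illustrative example):

        // Toy example: a big struct copied on every call vs. passed by reference.
        static class PassingDemo
        {
            // Imagine dozens of fields; copying them on every call adds up fast.
            public struct Telemetry
            {
                public long A, B, C, D, E, F, G, H;
            }

            public static long SumByValue(Telemetry t)      // copies the whole struct each call
            {
                return t.A + t.H;
            }

            public static long SumByRef(ref Telemetry t)    // passes an alias; nothing is copied
            {
                return t.A + t.H;
            }
        }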

  • Sole Purpose of Visit (unregistered)

    Whilst it might be nice to have the hard-coded limit as a configurable variable (per program), I can't really see the WTF in this. And I really don't understand Remy's outrage.

    It's easy to imagine a program that consumes 1.15G (or 1.55G, depending upon the typo) and then something horrible happens ... and I'm prepared, without evidence either way, to assume that this is one of those desktop programs. I specify desktop, because there's a message box. Message boxes do not belong on servers, headless or otherwise.

    I like (no I don't: I loathe) the implication that garbage collection will take care of all memory issues, regardless. This is not so. Depending upon the chosen GC, and upon the mix of short-term/long-term small/large objects, you can get absolutely hammered by GC pressure. And there's no good reason to claim that hiking page faults up on a non-deterministic basis is acceptable in the general case.

    There's no context attached (it might be wrapped in an #if DEBUG block, or live somewhere in the bowels of NUnit; we are not specifically told), so I will just point out that in my last job we had something like 500 reported yet unsolved exceptions. Of those (C#) exceptions, about 200 were OOM. We never tracked down a single one, because ... how could you?

    I'd add a button to "send report to developers," but other than that, I don't see a huge issue here.

  • Sole Purpose of Visit (unregistered) in reply to HK-47

    "Probably" as in "I have no idea but I'm just guessing."

    If that's the best part, I'd hate to see the worst part.

  • (nodebb) in reply to CPT Obvious

    Seems Capt. Obvious missed the obvious... "thedailywtf.com uses an invalid security certificate. The certificate expired on Monday, 5 March 2018, 14:00. The current time is Monday, 5 March 2018, 15:53." It is currently Monday, 5 March 2018, 12:35...

    Addendum 2018-03-05 12:36: that is east coast USA

  • Loren Pechtel (google)

    I don't think this is a case of a programmer being clever, but of debug code that never got removed.

  • I Am A Robot (unregistered)

    I'm currently running a utility with something similar in the code - it's an old application which, if you leave it running too long, will take up all the available memory, but there wasn't the time/money/management buy-in to rewrite it, so it just checks now and again and tells the user to restart it if needed.

  • Dennis (unregistered) in reply to Loren Pechtel

    My thoughts exactly. It's one of those things one puts in to get a warning while testing a supposedly better build (leaks plugged, heavy allocating removed, whatever) to verify.

    Arguably, that's not the best way (proper snapshotting and profiling is), but it's a quick way.

  • CPT Obvious (unregistered) in reply to KattMan

    Nope, a valid X.509 cert is everything in the spec, plus all the dumb hacks that have been immortalized to avoid bringing down major websites.

  • siciac (unregistered) in reply to snoofle

    (that you know will be needed)

    That this is a moving target would be the major sticking point for me. If the build process could determine this by running a test load, I'd bless off on it.

    People are biased to underestimate complexity; so developers behave as you've mentioned, and on the user side you get the cottage industry of memory optimizers and disk cleaners. All those 1's make the platters wobbly and the RAMs go slower, you see.

  • Jeremy Hannon (google)

    .NET is also not great at avoiding memory leaks in Debug mode. It has a much harder time garbage collecting certain types of objects in debug. This makes me think that this code only ever saw the problem in Debug and was definitely a works-for-me scenario.

    I have only run into memory issues a few times with .Net.

  • Worf (unregistered) in reply to Brian

    Safety-critical systems often require use of safety-critical languages or programming guidelines. I did work for a safety-certified OS, and by default it was not certifiable. Instead, there was a huge manual associated with it that told you what (configuration, programming practices, etc.) you had to use in order to make it certifiable.

    One of the big ones is no heap - heap usage is too unpredictable and hard to defend (what happens if malloc() fails on a critical path?).

    The system must be kept in a deterministic state and malloc() does not allow this. The system basically cannot fail, and the whole document went into how to make sure it doesn't fail unexpectedly.

  • Free Bird (unregistered) in reply to Brian

    I can only see the wisdom in such an approach if there are also rules limiting usage of the stack.

  • (nodebb)

    All of this begs the question: Why does a process need 1.5G of memory space. Sounds a little large to me, but then I was brought up when memory was real "core" and VERY expansive. We used kB in those days.

    It was a different era, and we used things wisely. Of course then again we didn't have applications that were in the multi-megabyte range like the apps I see on my iPhone.

    (SIGH)

    Addendum 2018-03-05 20:39: Sorry, I meant expensive. And it was!

  • Gerry (unregistered) in reply to herby

    "...when memory was real "core" and VERY expansive..."

    And very expensive as well.

  • Wilson (unregistered)

    frist

  • (nodebb) in reply to herby

    The smallest machine I ever worked on had just 576 bits of internal memory for data, and a 2KB serial EEPROM for program storage, along with an RC clock generator, not a crystal. I suppose you'd have to say that it wasn't well suited to mining cryptocurrencies.

  • Zenith (unregistered) in reply to snoofle

    Because that requires that a programmer have access to and understand the big picture of how the program works and not just the narrow slice of an interchangeable cog churning out patch after patch after patch.

  • nasch (unregistered)

    "All of this begs the question: Why does a process need 1.5G of memory space."

    First, it doesn't beg the question. That's not what that means. Second, we have no idea what this program does. Maybe it's processing 4K video streams.

  • Loren Pechtel (google) in reply to herby

    "All of this begs the question: Why does a process need 1.5G of memory space. Sounds a little large to me, but then I was brought up when memory was real "core" and VERY expansive. We used kB in those days."

    Two things come to mind:

    1. Programs that simply have very large data sets they need to work with.

    2. Sometimes you can trade memory for time. I've used 1GB that way. I didn't actually need that much memory, but if it was available I could cut search time by more than 90%.

  • Barf4Eva (unregistered) in reply to snoofle

    I like it

  • WTFGuy (unregistered) in reply to Free Bird

    In avionics, "deterministic" applies both to memory space and to timing. Depending on the rest of the system details, calling something like malloc() may result in a big timing hiccup while the system does a heap compaction. You simply cannot allow that kind of timing surprise to occur.

    Stack allocations do run the risk of OOM errors. They are pretty risk-free as to timing.

  • Stephen Cleary (unregistered)

    Oh, that reminds me of some horrible code I wrote once - over my strenuous objections.

    It turned out that some of our clients would examine our process in Task Manager and flip out if we consumed "too much" - 150MB or so. The programmers' response of "educate the users" didn't go over with business, so I was forced to figure out which of the Process memory stats was reported in Task Manager, periodically monitor it, and then reduce our app's memory usage (details are thankfully fuzzy at this point; essentially a GC followed by some hey-really-free-the-memory-back-to-the-OS trick).
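
    One common shape of that trick (a reconstruction from memory, not the actual code) is to force a full collection and then ask Windows to trim the working set, which is roughly the number Task Manager reports:

        // Hypothetical sketch: shrink the number Task Manager shows, on demand.
        using System;
        using System.Diagnostics;
        using System.Runtime.InteropServices;

        static class TaskManagerAppeasement
        {
            [DllImport("kernel32.dll", SetLastError = true)]
            private static extern bool SetProcessWorkingSetSize(IntPtr process, IntPtr minSize, IntPtr maxSize);

            public static void TrimWorkingSet()
            {
                // Collect twice, with finalizers in between, so finalizable garbage really goes away.
                GC.Collect();
                GC.WaitForPendingFinalizers();
                GC.Collect();

                // (-1, -1) asks Windows to page out as much of the working set as it can.
                SetProcessWorkingSetSize(Process.GetCurrentProcess().Handle, (IntPtr)(-1), (IntPtr)(-1));
            }
        }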
