Admin
My -> May
Admin
There's another typo too, but I don't want to get aggressive about it.
Admin
I disagree that it was a typo. I suspect Remy meant the programmer was thinking "It works on my personal machine. Or at least it works on this one test machine I'm using." Then the dev didn't think any deeper than that.
Bottom line on all this stuff: since I started in the 1970s, the minimum quality and quantity of knowledge required to be good at this job has gone up by orders of magnitude, while the quality and quantity of training and worker development has gone down by at least an order of magnitude.
Crap like this is the inevitable result of the yawning gap between the two.
Admin
The typo is the first word of the text: "My the gods spare us ..." -> "May the gods spare us ..."
Admin
Does C# not come with a memory limit option? It's common to have some high value you safely expect your program to never reach. You would only turn it off or bump it for things where you expect high memory usage. That might be a minor annoyance for languages where it's just a global that can't be changed for a single function that has high memory use.
It's awkward, but if you know what you're doing it's better to detect memory leaks sooner rather than later, before they cause a problem for the system or for something else (they also sometimes make things slower and slower).
Some will argue it's a bad practice. On the other hand, the programmer writing the program has better knowledge of how many resources it really needs. The problem is that with some languages that's more hit and miss than with others. However, if you set the memory limit through the language itself, that should play nice with the GC.
Others depend on external monitoring to manage the problem. For an application that's a client running on other people's computers, it's a bit more complex. I'm guessing this was a case of "never crash the program; instead, warn the user to save first / give them a hint to restart."
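For what it's worth, a minimal sketch of that kind of watchdog in C# (the class name and threshold are hypothetical; the article's code hard-coded a similar magic number):

```csharp
using System;
using System.Diagnostics;

static class MemoryWatchdog
{
    // Hypothetical threshold, in the spirit of the article's magic number.
    private const long WarnThresholdBytes = 1150000000;

    // True when the process's private bytes exceed the threshold, so the
    // caller can warn the user to save their work and restart.
    public static bool IsMemoryHigh()
    {
        using (var self = Process.GetCurrentProcess())
        {
            return self.PrivateMemorySize64 > WarnThresholdBytes;
        }
    }
}
```

Called from a timer rather than a hot loop, this at least avoids the message-box barrage another commenter predicts elsewhere in the thread.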
Admin
TRWTF expired certificate
Admin
Ah, you beat me to that, that's TRWTF today :)
Admin
Expired certificates are scary.
Admin
Could be to enforce a license that only allows a specific amount of memory.
Admin
"A Very Private Memory" click "Your connection is not private"
...yeah, you might want to see about renewing that certificate.
Admin
Yeah, it's about time for an article about "all the things that went wrong with certificates at thedailywtf and why" :-P
Admin
TRWTF are most browsers. Browser with an expired certificate: <<DANGER>> The certificate is unsafe!!! Using plain HTTP: everything is fine, carry on (and a tiny "Not Secure" in the upper-right corner)... Because no encryption is safer than bad encryption ...right?
Admin
<moderation> held for <comment>
Admin
The article says, "here the developer tuned the number 1550000000", but the actual value in the code is "1150000000", which is not the same thing; that just goes to prove how difficult and unmaintainable this code is.
Admin
"thedailywtf.com uses an invalid security certificate. The certificate expired on Monday, 5 March 2018, 14:00. The current time is Monday, 5 March 2018, 15:53."
wtf?
Admin
If we're collecting typos, I'll point out the discrepancy between 1150000000 and 1550000000.
Admin
When the current date is after the expiration date, the certificate has expired.
Admin
I think it probably said 1550000000. One possible reason is that this is around the maximum memory a 32-bit C# .NET 4.0 app can allocate on Windows before it starts having issues and throwing OOM exceptions. Not the 2GB or 4GB you'd expect; the usual culprit is address-space fragmentation, since the GC needs large contiguous segments within the process's 2GB of user address space.
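A crude way to see where that wall actually is (a sketch; run as 32-bit, and results vary by OS and fragmentation):

```csharp
using System;
using System.Collections.Generic;

// Keep allocating 10 MB blocks until the runtime gives up, then report
// how far we got. A 32-bit .NET process typically fails well short of
// the nominal 2 GB of user address space.
class AllocationProbe
{
    static void Main()
    {
        var blocks = new List<byte[]>();
        try
        {
            while (true)
                blocks.Add(new byte[10 * 1024 * 1024]);
        }
        catch (OutOfMemoryException)
        {
            int allocatedMb = blocks.Count * 10;
            blocks.Clear();   // free the references so reporting itself can allocate
            GC.Collect();
            Console.WriteLine("OOM after ~" + allocatedMb + " MB");
        }
    }
}
```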
Admin
Personally, I'm a big fan of allocating all the storage (that you know will be needed) at startup. It's very fast and acts as a heap baseline. Then it's really easy to profile run-time memory under load and figure out how the high-water mark compares to your expectations. It also eliminates those unexpected memory "surges" when lazily loading something from a remote system that pukes a whole lot of unexpected data your way.
Interestingly, not many people I've worked with seem to like this approach, though nobody has ever been able to make a really good case for avoiding it.
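A bare-bones sketch of the idea (names hypothetical): grab everything you know you'll need up front, so failures happen at startup and the steady-state heap is established immediately.

```csharp
using System;

sealed class BufferPool
{
    private readonly byte[][] _buffers;
    private int _next;

    // Allocate the whole baseline at startup; if this fails, it fails
    // before any work has been done, not halfway through a job.
    public BufferPool(int count, int sizeBytes)
    {
        _buffers = new byte[count][];
        for (int i = 0; i < count; i++)
            _buffers[i] = new byte[sizeBytes];
    }

    public byte[] Rent()
    {
        if (_next >= _buffers.Length)
            throw new InvalidOperationException("Pool exhausted: the baseline estimate was wrong.");
        return _buffers[_next++];
    }
}
```

Any "surge" then shows up as a pool-exhausted error pointing at a wrong estimate, rather than as a mystery spike in a profiler.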
Admin
Best part of this is that the function is probably called in a loop, causing a barrage of never-ending message boxes and making it impossible to close the program or save your data.
Admin
I worked on a system once for which dynamic memory allocation (i.e. calls to new and malloc) was completely forbidden. Thus, every object had to be allocated either statically or on the stack. It was an embedded avionics project where one of the main keywords in the requirements was "deterministic", since this was the kind of system where a serious enough bug could literally kill someone. After a while, I did see the wisdom in such an approach, and even now I try to avoid dynamic allocation in my (C++) code unless it's absolutely necessary.
Of course, one of the many WTFs of that project was that pointers themselves were forbidden, until I pointed out that maybe the performance problems they were having might have something to do with passing large data structures by value. Refactoring the system to use pass-by-reference semantics instead resulted in a huge speed boost.
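The project in question was C++, but the same by-value trap exists with large structs in C# (the thread's language); a hypothetical sketch:

```csharp
// A struct with many fields gets copied in full on every by-value call.
struct TelemetryFrame
{
    public long Timestamp;
    public double Altitude, Airspeed, Heading; // imagine dozens more fields
}

static class Processor
{
    // Copies the whole struct per call.
    static double ByValue(TelemetryFrame frame) => frame.Altitude;

    // 'in' (C# 7.2+) passes a read-only reference: no copy, same safety.
    static double ByReference(in TelemetryFrame frame) => frame.Altitude;
}
```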
Admin
Whilst it might be nice to have the hard-coded limit as a configurable variable (per program), I can't really see the WTF in this. And I really don't understand Remy's outrage.
It's easy to imagine a program that consumes 1.1G (or 1.5G, depending upon the typo) and then something horrible happens ... and I'm prepared, without evidence either way, to assume that this is one of those desktop programs. I specify desktop, because there's a message box. Message boxes do not belong on servers, headless or otherwise.
I like (no I don't: I loathe) the implication that garbage collection will take care of all memory issues, regardless. This is not so. Depending upon the chosen GC, and upon the mix of short-term/long-term small/large objects, you can get absolutely hammered by GC pressure. And there's no good reason to claim that hiking page faults up on a non-deterministic basis is acceptable in the general case.
There's no context attached, so (ignoring the possibility that this is wrapped in an #if DEBUG block or buried somewhere in the bowels of NUnit; we are not specifically told) I will just point out that in my last job we had something like 500 reported but unsolved exceptions. Of those (C#) exceptions, about 200 were OOM. We never tracked down a single one, because ... how could you?
I'd add a button to "send report to developers," but other than that, I don't see a huge issue here.
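If GC pressure really is the pain point, .NET does expose a few knobs; one hedged example (whether it helps depends entirely on the allocation mix):

```csharp
using System;
using System.Runtime;

class LatencySensitivePhase
{
    // Ask the GC to avoid blocking, full collections for the duration of
    // a latency-sensitive section, then restore the previous mode.
    static void Run(Action criticalWork)
    {
        GCLatencyMode previous = GCSettings.LatencyMode;
        try
        {
            GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;
            criticalWork();
        }
        finally
        {
            GCSettings.LatencyMode = previous;
        }
    }
}
```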
Admin
"Probably" as in "I have no idea but I'm just guessing."
If that's the best part, I'd hate to see the worst part.
Admin
Seems Capt. Obvious missed the obvious... thedailywtf.com uses an invalid security certificate. The certificate expired on Monday, 5 March 2018, 14:00. The current time is Monday, 5 March 2018, 15:53 It is currently Monday, 5 March 2018, 12:35...
Addendum 2018-03-05 12:36: that is east coast USA
Admin
I don't think this is a case of a programmer being clever, but of debug code that never got removed.
Admin
I'm currently running a utility with something similar in the code. It's an old application which, if you leave it running too long, will take up all the available memory, but there wasn't the time/money/management buy-in to rewrite it, so it just checks now and again and tells the user to restart it if needed.
Admin
My thoughts exactly. It's one of those things one puts in to get a warning while testing a supposedly better build (leaks plugged, heavy allocation removed, whatever), to verify the fix.
Arguably, that's not the best way (proper snapshotting and profiling is), but it's a quick one.
Admin
Nope, a valid x509 cert is everything in the spec, plus all the dumb hacks that have been immortalized to avoid bringing down major websites.
Admin
That this is a moving target would be the major sticking point for me. If the build process could determine this by running a test load, I'd bless off on it.
People are biased to underestimate complexity; so developers behave as you've mentioned, and on the user side you get the cottage industry of memory optimizers and disk cleaners. All those 1's make the platters wobbly and the RAMs go slower, you see.
Admin
.NET is also not great at avoiding memory leaks in Debug mode. It has a much harder time garbage collecting certain types of objects in debug builds. This makes me think that this code only ever saw the problem in Debug, and was definitely a works-for-me scenario.
I have only run into memory issues a few times with .Net.
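A quick way to see the Debug-build effect being described (behavior varies by runtime and JIT, so treat this as illustrative):

```csharp
using System;

class DebugLifetimeDemo
{
    static void Main()
    {
        var obj = new object();
        var weak = new WeakReference(obj);
        obj = null;

        GC.Collect();
        GC.WaitForPendingFinalizers();

        // Release build: usually "collected". Debug build: often "still
        // alive", because the JIT extends local lifetimes to the end of
        // the method to keep them visible in the debugger.
        Console.WriteLine(weak.IsAlive ? "still alive" : "collected");
    }
}
```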
Admin
Safety-critical systems often require the use of safety-critical languages or programming guidelines. I did work on a safety-certified OS, and by default it was not certifiable. Instead, there was a huge manual associated with it that told you what you had to do (configuration, programming practices, etc.) in order to make it certifiable.
One of the big ones is no heap - heap usage is too unpredictable and hard to defend (what happens if malloc() fails on a critical path?).
The system must be kept in a deterministic state and malloc() does not allow this. The system basically cannot fail, and the whole document went into how to make sure it doesn't fail unexpectedly.
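That discipline is usually applied in C, but sketched in the thread's language it might look like this hypothetical fixed pool: every object exists before the critical path starts, so allocation can never fail mid-flight.

```csharp
using System;

sealed class Message { public int Code; public double Value; }

sealed class FixedMessagePool
{
    private readonly Message[] _free;
    private int _count;

    // All allocation happens here, at init, where failure is survivable.
    public FixedMessagePool(int capacity)
    {
        _free = new Message[capacity];
        for (int i = 0; i < capacity; i++) _free[i] = new Message();
        _count = capacity;
    }

    public Message Acquire()
    {
        if (_count == 0)
            throw new InvalidOperationException("Pool undersized: a design-time error, not a runtime surprise.");
        return _free[--_count];
    }

    public void Release(Message m) => _free[_count++] = m;
}
```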
Admin
I can only see the wisdom in such an approach if there are also rules limiting usage of the stack.
Admin
All of this begs the question: Why does a process need 1.5G of memory space. Sounds a little large to me, but then I was brought up when memory was real "core" and VERY expansive. We used kB in those days.
It was a different era, and we used things wisely. Of course, then again, we didn't have applications in the multi-megabyte range like the apps I see on my iPhone.
(SIGH)
Addendum 2018-03-05 20:39: Sorry, I meant expensive. And it was!
Admin
"...when memory was real "core" and VERY expansive..."
And very expensive as well.
Admin
frist
Admin
The smallest machine I ever worked on had just 576 bits of internal memory for data, and a 2KB serial EEPROM for program storage, along with an RC clock generator, not a crystal. I suppose you'd have to say that it wasn't well suited to mining cryptocurrencies.
Admin
Because that requires that a programmer have access to and understand the big picture of how the program works and not just the narrow slice of an interchangeable cog churning out patch after patch after patch.
Admin
"All of this begs the question: Why does a process need 1.5G of memory space."
First, it doesn't beg the question. That's not what that means. Second, we have no idea what this program does. Maybe it's processing 4K video streams.
Admin
"All of this begs the question: Why does a process need 1.5G of memory space. Sounds a little large to me, but then I was brought up when memory was real "core" and VERY expansive. We used kB in those days."
Two things come to mind:
Programs that simply have very large data sets they need to work with.
Sometimes you can trade memory for time. I've used 1GB that way. I didn't actually need that much memory, but if it was available I could cut search time by more than 90%.
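The classic shape of that trade, as a hypothetical sketch: burn memory on a precomputed table so each query is an array index instead of a search.

```csharp
using System;

class PrecomputedSearch
{
    private readonly int[] _table;

    // 268 million int entries is about 1 GB; a pure memory-for-time trade.
    public PrecomputedSearch(int size, Func<int, int> expensiveSearch)
    {
        _table = new int[size];
        for (int i = 0; i < size; i++)
            _table[i] = expensiveSearch(i);
    }

    public int Lookup(int key) => _table[key]; // O(1) instead of a search
}
```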
Admin
I like it
Admin
In avionics, "deterministic" applies both to memory space and to timing. Depending on the rest of the system details, calling something like malloc() may result in a big timing hiccup while the system does a heap compaction. You simply cannot allow that kind of timing surprise to occur.
Stack allocations do run the risk of OOM errors. They are pretty risk-free as to timing.
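In C# terms (the thread's language), stack allocation looks like this; a sketch, not avionics-grade code:

```csharp
using System;

class StackBufferDemo
{
    static int Sum(ReadOnlySpan<int> samples)
    {
        int total = 0;
        foreach (int s in samples) total += s;
        return total;
    }

    static void Main()
    {
        // stackalloc into Span<T> (C# 7.2+): no heap, no GC, deterministic
        // timing, freed on return. The trade-off is exactly the OOM (stack
        // overflow) risk mentioned above if sizes aren't bounded.
        Span<int> buffer = stackalloc int[64];
        for (int i = 0; i < buffer.Length; i++) buffer[i] = i;
        Console.WriteLine(Sum(buffer));
    }
}
```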
Admin
Oh, that reminds me of some horrible code I wrote once - over my strenuous objections.
It turned out that some of our clients would examine our process in Task Manager and flip out if we consumed "too much" (150MB or so). The programmers' response of "educate the users" didn't go over well with the business, so I was forced to figure out which of the Process memory stats was reported in Task Manager, periodically monitor it, and then reduce our app's memory usage (details are thankfully fuzzy at this point; essentially a GC followed by some hey-really-free-the-memory-back-to-the-OS trick).
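The trick was presumably something along these lines (an assumption on my part; SetProcessWorkingSetSize is the usual Win32 way to make Task Manager's number shrink, and it's largely cosmetic since pages simply fault back in later):

```csharp
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class TaskManagerAppeasement
{
    [DllImport("kernel32.dll")]
    private static extern bool SetProcessWorkingSetSize(
        IntPtr process, IntPtr minWorkingSetSize, IntPtr maxWorkingSetSize);

    public static void TrimWorkingSet()
    {
        // Full GC first so there are actually unused pages to give back.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        // (-1, -1) asks Windows to trim the working set as far as it can.
        SetProcessWorkingSetSize(
            Process.GetCurrentProcess().Handle, (IntPtr)(-1), (IntPtr)(-1));
    }
}
```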