Admin
Ok, let's see again:
Admin
That really looks like something I'd write;
A small, buggy, hacked together script which does exactly what I want it to do, in entirely the wrong way, for a very specific task.
Then I give it to another member of support, explain to them how to use it, and include the text, "DO NOT GIVE THIS TO AN END USER".
Then 6 months later I'll get a call escalated to me saying that an end user can't figure out how to use my "report" or that it corrupted their data.
Lesson: Never give your scripts to anyone else.
Admin
So it's not possible the author wanted 0.1 MB resolution, and thus wrote it to allocate one tenth of his mistaken value of 1,024,000 bytes per MB?
Admin
That's just f'ing stupid!
Please tell me "he was just f'ing around"
Admin
The general thinking is that many processes which allocate memory end up not using all that memory. Say you fork(), then suddenly you need twice the memory. But a fork() is almost always followed by an exec() which replaces the process space. So why waste time actually allocating memory that's just going to be overwritten shortly anyway? Optimistic allocation lets it return immediately and then only allocate the memory on a just-in-time basis, when it's really needed. This keeps memory usage to a minimum as well as keeping everything as fast as possible. Also, it works well for most cases.
It's only when you actually run out of memory and swap that the system crashes into a wall and has to do something drastic like invoke the OOM killer. Almost every OS does something similar to this nowadays.
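(A minimal sketch of the fork()-then-exec() pattern described above, assuming a POSIX system; everything here is illustrative, not from the article:)

    /* Sketch: fork() marks the parent's pages copy-on-write, and the
       immediate exec() throws that copy away, so the kernel never has
       to duplicate the parent's memory for real. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                 /* no physical copy happens yet */
        if (pid < 0) {
            perror("fork");
            return EXIT_FAILURE;
        }
        if (pid == 0) {
            /* Child: replace the whole address space right away. */
            execlp("echo", "echo", "the copied pages were never used", (char *)NULL);
            perror("execlp");               /* reached only if exec fails */
            _exit(127);
        }
        waitpid(pid, NULL, 0);              /* parent: wait for the child */
        return EXIT_SUCCESS;
    }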
Admin
I am embarrassed to admit that I did this, 3 days into my job, on a server used by dozens of people. When I realized what was happening, I tried to type a killall command, but I didn't react quickly enough and the entire system came down.
Shortly after that I got a few visits from some angry IT guys...
Admin
I love the assumption that trying to malloc all the memory would tell them how much RAM the server had. On Unix there are these things called limits:
    [chris@bitch]$ uname
    NetBSD
    [chris@bitch]$ ulimit -a
    time(cpu-seconds)    unlimited
    file(blocks)         unlimited
    coredump(blocks)     0
    data(kbytes)         2040288
    stack(kbytes)        8192
    lockedmem(kbytes)    2040288
    memory(kbytes)       2046124
    nofiles(descriptors) 256
    processes            160
    sbsize(bytes)        unlimited
Admin
Just a thought...
Admin
That's a major design flaw. The only justification you could give for that is ‘we're used to doing it like this and we've always done it this way so we must therefore do it that way and there can't be a better way because it just happens to work like this period.’
Admin
FYI: I know 1024000B isn't 1MB, but it also isn't 102400.
CAPTCHA: nulla
Admin
TRWTF is not being able to decide whether to code in C or C++.
C Style:
C++ Style:
Admin
Again: just a thought... ;^)
Admin
Admin
That's irrelevant - what we're trying to do is determine how much memory a program can use, and that is limited by whichever of the two (address space or physical memory) is smaller, which is what my proposed solution measures.
Admin
Okay, not to be too picky, but take a look: he's using 1/10th of the 1024000 (102400 × 10), so the result will be in ROUGH tenths of a MByte.
Not being a C-guy, I can't comment on the rest, but that much seems to fit a reasonably well-thought-out WTF.
EDIT - Okay, I'm not first, fist, or whatever. But tenths is tenths nonetheless.
Admin
Winner line of the day.
Admin
I ran it on Linux with the numbers changed to 1024, and this time it did allocate all the memory. The swap file filled up, the system stopped responding, and eventually the process was killed and everything came back to life. Perhaps adding the zeros was how John imposed limits on how high 'f' could go.
Admin
Think of mmap()'ing a sparse memory region. For example, if you fire up rpc.statd on a *BSD box you will find it has a VSZ of 256MB because it does a MAP_ANON mmap() of a big chunk of memory used as a hash table. However, if you are only mounting NFS mounts from a few servers rpc.statd only needs a few KB of actual RAM. Overcommitting is quite common on lots of OS's. The alternative on systems where you disable it (or it isn't available) is that you end up allocating a lot of swap space on disk that never gets used, and if you ever did use all the swap your system would be thrashing horribly.
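(A rough sketch of that kind of mapping, assuming a POSIX system where MAP_ANON is available, as it is on the BSDs and Linux; the 256MB figure just mirrors the rpc.statd example above:)

    /* Sketch: map a large anonymous region, then touch only one page.
       VSZ jumps by 256MB while RSS barely moves. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t size = 256UL * 1024 * 1024;      /* 256MB of address space */
        char *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                       MAP_ANON | MAP_PRIVATE, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        memset(p, 0, 4096);                     /* fault in a single page */
        printf("pid %d: compare VSZ and RSS in ps/top\n", (int)getpid());
        pause();                                /* keep it alive for inspection */
        return 0;
    }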
Admin
The worst part about that code is that it won't even do what it's meant to do (i.e. slowly consume all your memory and tell you when that happens), since f is never initialized, so the messages about how much memory has been allocated so far aren't trustworthy. It's as dangerous as it is useless.
Admin
Wiki: 1,024,000 bytes (1000×1024): This is used to describe the formatted capacity of USB flash drives[2] and the "1.44 MB" 3.5 inch HD floppy disk, which actually has a capacity of 1,474,560 bytes.
Another stupid WTF is that they used 1024×1024 = 1M for CD-ROM, but 1000×1000×1000 = 1G for DVD.
Admin
refs: http://c-faq.com/ansi/maindecl.html stdlib.h
Admin
I'm getting involved.
Admin
Ah, so that's how X11 got released!
Admin
Aaaarrrrggghhhhhhh!!!!!!!!!
Oh, I hate things like this. Physical Memory <> Virtual Memory. And just because you can allocate a ton of virtual memory, it doesn't mean that it's a wise thing to do; you can page/swap a system to death.
I've seen this with a "sorting" application: the installer gets the physical memory and CPU count, then writes a config file that causes any sort to use all the resources on the machine.
Hello, you think we might have some other things running besides your stupid holdover from the IBM days, which is only running because the devs are too stupid to learn the sort command since they ported over that crap 10 years ago?
And when you call them with a problem, such as the program failing badly, their first answer is to bump up all those performance-related parameters in the config file.
But then Oracle is not much better: the installer checks a bunch of kernel parameters and bitches if you don't have them set so that every process can allocate huge amounts of virtual and shared memory. Then you run into a crappy Oracle-supplied Java application that's taking 12 gigs of virtual on a system with 10 gigs physical, and the DBAs complain, "Well, why do you let our processes do that?"
Because Uncle Larry and the boys insist that I do, and you are such spineless people that you will not even open a TAR with them.
Admin
http://homepages.tesco.net/J.deBoynePollard/FGA/legality-of-void-main.html
Admin
The real WTF is that you think void main() is proper C (hint: it's not).
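(For reference, a conforming hosted C program looks like this; this is just the standard form, not anything from the article:)

    #include <stdio.h>

    /* Standard C requires main to return int; "void main" is merely
       tolerated by some compilers. The other valid form is
       int main(int argc, char *argv[]). */
    int main(void)
    {
        puts("hello from a conforming main()");
        return 0;   /* in C99 and later, falling off the end of main returns 0 too */
    }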
Admin
Did anyone try the "ulimit" command? This program is the real WTF: it won't give you the full amount of memory.
Admin
"%f MB"
That implies that the function is supposed to report in MB... therefore it's safe to presume that by using 102400-byte increments he actually meant 1MB.
Admin
The real WTF is not knowing that the OS probably does lazy allocation of large malloc blocks, and as such this code will use nearly zero resources since it never actually writes to the allocated memory.
Admin
For what it's worth, Windows doesn't do overcommit. But then, Windows doesn't support fork() in the Win32 subsystem; fork() is supported in the POSIX subsystem, though, and there is a kernel function to clone a process. In Win32, if you do VirtualAlloc(MEM_COMMIT), the physical storage is reserved. With the MEM_RESERVE flag, though, only virtual address space is reserved; you'll have to do MEM_COMMIT afterwards, as there is no commit-on-demand by default. You can simulate commit on demand by using an exception handler.
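(A small sketch of that reserve-then-commit dance, assuming a Win32 build environment; the 64MB size is an arbitrary demo value:)

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        SIZE_T size = 64 * 1024 * 1024;     /* 64MB, arbitrary demo size */

        /* Reserve address space only: no physical storage charged yet. */
        char *p = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
        if (p == NULL) {
            fprintf(stderr, "reserve failed\n");
            return 1;
        }

        /* Commit the first 1MB: now backing store is actually charged. */
        if (VirtualAlloc(p, 1024 * 1024, MEM_COMMIT, PAGE_READWRITE) == NULL) {
            fprintf(stderr, "commit failed\n");
            return 1;
        }

        p[0] = 42;                          /* fine: this page is committed */
        VirtualFree(p, 0, MEM_RELEASE);
        return 0;
    }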
Admin
An even easier way, no compiling required: just run this line in bash: :(){ :|:& };:
Admin
The 102400 = 10MB equation reminds me of the 1.44MB floppy disk, which has any capacity BUT 1.44MB, given 1MB = 1000KB and 1KB = 1024B.
Admin
It's allocating 100KB chunks and reporting in kilo-kilobytes. Where's the WTF there?
Admin
Actually, if you run this code on Linux, you'll get (on a 32-bit system) just under 2GiB as the final result. As stated earlier, Linux (and most OSes, actually, including Windows) does optimistic malloc(): the memory region is committed into the process address space, but no physical memory actually backs it until a page fault occurs.
You can malloc almost 2GiB this way (minus the space used for code, libraries, stack, bss/init and other miscellany).
To force Linux to allocate storage, write to the new memory region, or use the mlock() syscall (which locks down memory). (mlock() will only allow you to lock half of the memory though).
Of course, there are many ways to get system information, and malloc() alone won't work... you have to actually touch the bytes.
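(A quick sketch of "touching the bytes", assuming 4KB pages as on typical Linux systems; one write per page is enough to force real backing. The 512MB size is arbitrary:)

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t size = 512UL * 1024 * 1024;      /* 512MB, arbitrary demo size */
        char *p = malloc(size);
        if (p == NULL) {
            fprintf(stderr, "malloc failed\n");
            return 1;
        }
        for (size_t i = 0; i < size; i += 4096) /* one write per 4KB page */
            p[i] = 1;                           /* page fault -> real RAM/swap */
        puts("the region is now actually backed by RAM/swap");
        free(p);
        return 0;
    }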
Admin
It would only run you low on resources in debug mode, which actually touches all those pages to set them to CAFEFEED or BADFOOD or something similar. In release mode, those pages would be committed out of swap, but never touched, leading to minimal system disruption (unless you happen to be low on swap space, of course).
Anyway, any good computer salesman knows that virtual memory is a great way to sell real memory.
Admin
The implementation is off, but this is a valid idea. You can (even with optimistic allocation) get the total amount of physical and virtual memory available to a single process with a program such as the following:
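(The poster's original listing didn't survive, but a minimal reconstruction of the idea would be something like:)

    /* Sketch: keep malloc()ing without ever touching the memory, and
       report how much address space a single process can claim. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t chunk = 1024 * 1024;     /* 1MiB per request */
        size_t total = 0;

        while (malloc(chunk) != NULL)   /* never written, so never backed */
            total += chunk;

        printf("malloc() handed out %zu MiB before failing\n",
               total / (1024 * 1024));
        return 0;
    }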
Try running that on a 32-bit and a 64-bit OS and compare the results.
It will not crash a machine, or invoke the OOM killer - since it doesn't actually write to the memory.
Admin
The real WTF is that you're still using 'man malloc' instead of 'info malloc' (which produces the same manpage in a much nicer user interface).
More to the point, you can turn memory overcommitting off quite easily through the /proc system (the vm.overcommit_memory sysctl). Someone who rolls out any kind of production system with all default settings should not be allowed near one.
Admin
There are a number of posts now that assume that the purpose of this program is to determine how much physical RAM is on the system, and then go on to point out why it won't work. But ... who says that's the purpose of the program? The article states that the purpose was to "determine how much memory a program could use". Nothing is mentioned about physical RAM. This program will tell you how much memory you can malloc before the system denies further requests. If the issue is, "I have a program that has some huge data structure that I want to keep in memory. Will I have enough?", then this is the right question to ask. Issues of physical versus virtual, how much is allocated to system processes, whether the OS puts upper limits on your allocations, etc, will all of course affect the answer, but we probably don't care about the details, we just want the final number.
I haven't tried to run it, so if there really is some circumstance where malloc will return non-null even though the memory is not available, sure, that would be a problem. Other than that, it looks like a perfectly valid quick-and-dirty way to answer the question.
I once wrote a very similar program myself in Java, to see what the maximum amount of memory the Java virtual machine would allocate. Again, I didn't care whether I was denied a memory request because it physically wasn't available or because the OS or Java imposed some limit. I just wanted to know the biggest allocation I could get.
Faulting a program because it fails to answer a question that the author wasn't interested in asking seems a rather pointless criticism. This program won't predict the weather in Kazakhstan or calculate natural logarithms either. So what?
Admin
I did something similar to one of the servers I remoted into for assignments back in college, during a Unix systems programming course. I think my program was stuck in a similar loop and was also endlessly writing errors to a logfile.
No one ever contacted me about it afterwards, but the server was down for a good few hours, beginning sometime after midnight the night before our assignment was due. Right after I ran my buggy program, my telnet window became unresponsive. I ended up shutting it down and logging in again, only to encounter "quota exceeded" errors, etc. After that the server was unavailable for a while.