Admin
Yeah, but once it runs out of memory it quickly exits. Worst case, a few holiday shoppers get kicked off our site, but hey, we do that all the time anyway. Especially if they're not using the exact same browser as our developers.
Admin
So p is a nice, clean, never-used malloc?
Admin
Actually, my coworker was using something similar (minus the floating-point overkill, maybe) to check the amount of free memory on a Nintendo DS console. I would still look for a different way to do it, but at least the DS doesn't have a virtual memory management system to cripple the results :)
Admin
No, you see, as memory starts to overflow on itself, there's a 50% chance that the machine code for "while 1" will get overwritten with a 0, thereby cleanly exiting the loop.
Admin
I recall reading somewhere that the recommended way of checking how much memory a system had in DOS was to try to allocate a ridiculously large amount. The OS would then tell you how much memory you actually had.
Unfortunately, some fools continued to use this method in the era of protected mode and virtual memory. I believe the result was something similar to what would occur if you ran this program.
Admin
Hey, wait, I see it now. "int main" should be on one line.
Admin
I like how the math to compute MB matches neither the binary nor decimal interpretations of 'MB'.
Admin
The real WTF is believing 1024000 = 1MB :)
Admin
This is not a MegaByte, this is a MixByte.
Admin
I see at least three problems (without running it):
Linux's man page for malloc says: "By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer."
The OOM killer kills more-or-less random memory-heavy processes. Not good on a production system. OTOH, the author only said something about a generic UNIX system, not Linux... but system performance will probably be severely affected in most circumstances where no per-process limits are set.
Sebastian
Admin
Now, where did I leave my pile of W98 install floppies?
Admin
Nice. First, 1 MB contains 1,048,576 bytes, or perhaps 1,000,000 if you ask a hard drive manufacturer, but certainly not 102,400 bytes.
Then there is the problem that many operating systems will "lie" about the amount of memory actually available, not only in the sense of adding virtual memory (swap space) to the equation but also in the sense of overcommitting.
As long as you do not actually write anything to the allocated memory (this program doesn't), most Linux kernels with default settings will happily promise you much more memory than is actually available at that time. If you actually start using all of it, the Out Of Memory (OOM) killer will kill the misbehaving application.
Admin
Also, almost all operating systems do this, not just Linux.
(hope you enjoy the food, cute little troll)
Admin
TRWTF is that there's a much shorter way to bring a computer to its knees:
while(1) fork();
This "bomb" is quite famous by IT students, because the fork function belongs to the "Networking" course where you had to write a loop whose condition wasn't easy to write...
Admin
No, it's 102400 = 1MB. Except for when you're printing in human readable form. Then it's 1024000.
I've used similar code to attempt crashing machines. Difference is, that was on purpose...
Before someone posts a one-liner on how to check the memory, I have a zero-liner. Open up the computer and have a look.
captcha: genitus. One part genius and one part d*ck...
Admin
TRWTF is that the poster ran this on his machine.
Admin
Question to the pros: he never frees the malloc'd buffer; isn't that what's usually called a "memory leak"? From how I understand <a href="http://www.opengroup.org/onlinepubs/000095399/functions/exit.html">exit()</a>, it doesn't free allocated memory, or have I not read thoroughly enough?
Admin
I seem to remember that page allocation was a copy-on-write system in *nix -- thus this would give inaccurate results anyway, as the memory is never written to.
Admin
The operating system will reclaim all of the program's memory when the program terminates, even if the programmer never called free().
Admin
Oh, and this code is nothing but a horrible memory leak. Plus, this probably won't work, as the system will start using virtual memory to satisfy the memory allocation requests.
Admin
The OOM killer would not come into play in this case, and Linux's lazy allocation works the way it does for precisely this type of code: code which requests memory but never uses it.
The only bug here is in your understanding of the situation ;-)
Admin
Where in the code does it say 102400 = 1MB? I mean, the author probably did think that, but this code would be equally stupid regardless of the value. It only matters that the malloc size and the value added to f are the same. In fact, using smaller values will give you much more accurate crashes.
Admin
Now I'm waiting for someone to convert this to Java and run it as a background process.
Admin
No, exiting a program, no matter in what way (that's why it's not in the exit() documentation), will free all memory associated with that process. The real problem is what happens before.
It's most fun to run something like this on a 64-bit system...
Admin
Maybe I missed a joke here :-)
Linux / x86 assembly, as built by gcc -S for the posted code:
There is no literal 1 in the compiled machine code; the part that implements the while(1) is the very last line, "jmp .L2".
Admin
Here, not freeing the memory is the point.
You see, the program is trying to allocate blocks of 102400 bytes of memory until it cannot allocate any more. When this (supposedly) happens, you (supposedly) know how much memory is (supposedly) available to the program. Then the program (supposedly) exits, and all allocated memory is (supposedly) deallocated.
If you freed the memory, it would allocate the same block again and again until the end of the world. This would, however, be much more helpful than the posted snippet. Go figure.
Admin
In addition to that, this code just does not work on some UNIXes (and that behavior is correct). Doing a malloc() does not mean actually reserving physical memory. For this kind of test you need at least to do something like memset() to actually allocate the pages.
Admin
Actually, I see only 3 valid solutions
Admin
Correct solution (or at least, as close to correct as you can get): create a file, ftruncate() it to a ridiculously large size, do a binary search with mmap() to see how much address space you can access, then do a binary search with mlock() to see how much physical memory you can access. Requires root privileges, though.
Admin
Looking at the code... it leaks memory, but that doesn't really matter in this case.
As has been mentioned, the behaviour of memory allocation on many systems is such that it's meaningless. However, it's not a system crasher. I compiled and ran it, and got a series of lines, with no slowdown or issue at all. Also, the final value was far larger than my RAM + swap (Linux uses fixed-size swap).
Admin
Not to mention another critical design flaw (shared by some of the alternate solutions in these comments). Even if this worked and didn't have the paging problem, it doesn't take into account the memory currently in use on the system: run it with no services running, and then run it again while someone else is compiling code.
Hmmm: ulimit?
Admin
#include <malloc.h>?! How old is this code...
Admin
It also assumes that physical memory size <= user-space virtual memory size; in other words, that a single process can hold (almost) all the physical memory. In 32-bit environments this only works up to 3GB of RAM.
Admin
Of course, I meant to quote this before:
Admin
The second printf statement indicates that the author thought this.
Admin
"By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer." TRWTF
Seriously, I think that almost any strategy would be better than this. I don't know if this was inspired by the thought that almost no one actually checks their mallocs, but still... wouldn't it be better, for example, to pause the process until more memory becomes available, giving the user the choice to kill a process if necessary? Or to flatly refuse the allocation, like Windows does when the drive holding your swap file is full? Of course, then we would have to do the pesky malloc checking...
Admin
I managed to get:
1.86392e+11 186392346624.000000 182023.781250 MB
And if you think I have 182GB of RAM, you're living in a fantasy world. It would have run forever (Fedora 10, if you care) if I hadn't gotten bored and hit CTRL-C.
Admin
Yeah, but let's be fair... Vista is running low on resources without even running this program.
Admin
Let's see:
That's ignoring my personal quibble about using ! on a pointer to see whether it's NULL; I'd really rather people wrote "p != NULL", just because it makes it immediately obvious what you're doing. Of course, there are people who object to that and prefer "NULL != p".
In many ways I'm glad we don't work in C here; I'm sure my predecessors would have made those errors themselves. It's bad enough looking at what they've done with C#.
I miss working in C. *sniffs*
Admin
http://www.instantrimshot.com/
Admin
OMG, 102400, WTF? Base 10 or base 2, guys; choose one and stick with it. Either use 131072 or (if tenths of a MB were intended) 104856. malloc will also probably round allocations to powers of two, so it would be... oh wait, this code was wrong and evil in the first place.
Also, read /etc/security/limits.conf and think "gloves".
CAPTCHA: nobody cares, srsly
Admin
Errr, I can't type today, it seems. 104857 would be the value I was looking for... not that it would matter due to native word / VM page-size rounding, but I was wrong.
Admin
I'm pretty sure Linus, and all the other people who have been working on Linux's memory management for years, will thank you for this insightful comment; your clever point of view surely solves all of their problems.
Admin
Well, hopefully your page size would be such that a MB is cleanly divisible by it. Also, I think you're making a slight systematic error in your tenths-of-a-MB calculation: 1048576 / 10 = 104857.6, so no integer value is exact anyway.
Admin
""" Seriously, I think that almost any strategy would be better than this. I don't know if this was inspired by the thought that almost no one actually checks their mallocs, but still... wouldn't it be better for example, to pause the process until more memory becomes available, giving the user the choice to kill a process if necessary? Or to blandly refuse to do the allocation, like Windows does when your drive with the swapfile is full? Of course, then we would have to do the pesky malloc checking... """
There are two knobs in Linux to control overcommitting: /proc/sys/vm/overcommit_memory and /proc/sys/vm/overcommit_ratio. The former has a few options, letting you control whether overcommitting is allowed and which algorithm to follow. The latter contains a number indicating how much overcommitting is allowed; on my machine it is 50, meaning up to 50% beyond physical RAM + swap. So you could turn these knobs off, allow no overcommitting, and the "bug" would be fixed.
In reality, it's never really a problem. The whole rationale behind overcommitting is that programs often allocate more memory than they need, or than they need at any one time. So why deny valid allocations that will never cause a problem except in pathological cases?
BTW, mmap()-ing will count against virtual memory, but not physical RAM + swap because an mmap()ed segment will not be backed against swap, but against the file that is mmap()ed. So if those pages need to be evicted from physical RAM, they can be discarded outright and reloaded from the original file. As such, they effectively don't count against the system VM capacity. The same is true of any read-only segments.
Admin
Here's a bit of a different idea I had:
#!/bin/sh
# Filename: freemem.sh
echo "Memory Usage:"
free
Incidentally, I have about 1 gig of RAM free, I'm using about half a gig, and the other half is buffering / caching things that Linux must think are important.