• CaRL (unregistered)

    Yeah, but once it runs out of memory it quickly exits. Worst case a few holiday shoppers get kicked off our site, but hey we do that all the time anyway. Especially if they're not using the exact same browser as our developers.

  • Rompom (unregistered)

    So p is a nice, clean, never-used malloc?

  • qq (unregistered)

    Actually, my coworker was using something similar (maybe except the floating-point overkill) to check the amount of free memory on a Nintendo DS console. I would still look for a different way to do it, but at least the DS doesn't have a virtual memory management system to cripple the results :)

  • Gail (unregistered)

    No, you see, as memory starts to overflow on itself, there's a 50% chance that the machine code for "while 1" will get overwritten with a 0 thereby cleanly exiting the loop.

  • Sunil Joshi (unregistered)

    I recall reading somewhere that the recommended way of checking how much memory a system had in DOS was to try to allocate a ridiculously large amount. The OS would then tell you how much memory you actually had.

    Unfortunately, some fools continued to use this method in the era of protected mode and virtual memory. I believe the result was something similar to what would occur if you ran this program.
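
    If I remember right, the trick looked something like this under Borland/Turbo C (a sketch from memory, untested; treat the exact register conventions as an assumption):

        #include <dos.h>
        #include <stdio.h>

        int main(void)
        {
            union REGS r;
            r.h.ah = 0x48;      /* DOS INT 21h: allocate memory block */
            r.x.bx = 0xFFFF;    /* request 0xFFFF paragraphs: always fails */
            int86(0x21, &r, &r);
            /* on failure, DOS reports the size of the largest available
               block, in 16-byte paragraphs, in BX */
            printf("Largest free block: %lu bytes\n",
                   (unsigned long)r.x.bx * 16);
            return 0;
        }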

  • Mainline (unregistered)

    Hey, wait, I see it now. "int main" should be on one line.

  • (cs)

    I like how the math to compute MB matches neither the binary nor decimal interpretations of 'MB'.

  • augur (unregistered)

    The real WTF is believing 1024000 = 1MB :)

  • ha (unregistered)

    This is not a MegaByte, this is a MixByte.

  • Sebastian (unregistered)

    I see at least three problems (without running it):

    1. f is never initialized to zero
    2. 102400? 1024000? Huh?
    3. Quite possibly, in a modern virtual-memory-based operating system, malloc will not return NULL when running out of physical and/or swap memory. It might return NULL only when running out of virtual memory in the process address space.

    Linux's man page for malloc says: "By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer."

    OOM killer kills random memory-heavy processes. Not good on a production system. OTOH, the author only said something about a generic UNIX system, not Linux... but system performance will probably be severely affected in most circumstances where no per-process limits are set.

    Sebastian

  • Mainline (unregistered) in reply to Sebastian
    Sebastian:
    "By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer."
    See, I told you Linux can't be counted on for production systems!

    Now, where did I leave my pile of W98 install floppies?

  • NiceWTF (unregistered)

    Nice. First, 1 MB contains 1,048,576 bytes, or perhaps 1,000,000 if you ask a hard drive manufacturer, but certainly not 102,400 bytes.

    Then there is the problem that many operating systems will "lie" about the amount of memory actually available, not only in the sense of adding virtual memory (swap space) to the equation but also in the sense of overcommitting.

    As long as you do not actually write anything to the allocated memory (this program doesn't), most Linux kernels with default settings will happily promise you much more memory than is actually available at that time. If you actually start using all of it, an Out Of Memory killer will kill the misbehaving application.

  • NiceWTF (unregistered) in reply to Mainline
    Mainline:
    See, I told you Linux can't be counted on for production systems!

    Also, almost all operating systems do this, not just Linux.

    (hope you enjoy the food, cute little troll)

  • terepashii (unregistered)

    TRWTF is that there's a much shorter way to bring a computer to its knees:

    while(1) fork();

    This "bomb" is quite famous by IT students, because the fork function belongs to the "Networking" course where you had to write a loop whose condition wasn't easy to write...

  • ath (unregistered) in reply to augur
    augur:
    The real WTF is believing 1024000 = 1MB :)

    No, it's 102400 = 1MB. Except for when you're printing in human readable form. Then it's 1024000.

    I've used similar code to attempt crashing machines. Difference is, that was on purpose...

    Before someone posts a one-liner on how to check the memory, I have a zero-liner: open up the computer and have a look.

    captcha: genitus. One part genius and one part d*ck...

  • ohye (unregistered)

    TRWTF is that the poster ran this on his machine.

  • rnq (unregistered)

    Question to the pros: he never frees the malloc'd buffer; isn't that what's usually called a "memory leak"? From how I understand exit() (http://www.opengroup.org/onlinepubs/000095399/functions/exit.html), it doesn't free allocated memory, or have I not read thoroughly enough?

  • Dean (unregistered)

    I seem to remember that page allocation was a copy-on-write system in *nix -- thus this would give inaccurate results anyway as the memory is never written to.

  • christian (unregistered) in reply to rnq

    The operating system will reclaim all of the program's memory when the program terminates, even if the programmer never called free().

  • (cs)
    Sunil Joshi:
    I recall reading somewhere that the recommended way of checking how much memory a system had in DOS was to try to allocate a ridiculously large amount . The OS would then tell how much memory you actually had.
    Not quite. You wouldn't allocate the memory, you would attempt to access memory in upper memory locations. DOS being a simple, very non-restrictive OS, you could access memory anywhere at any time.

    Oh, and this code is nothing but a horrible memory leak. Plus, this probably won't work as the system will start using virtual memory to satisfy the memory allocation requests.

  • Craig Perry (unregistered) in reply to Sebastian
    Sebastian:

    OOM killer kills random memory-heavy processes. Not good on a production system.

    Sebastian

    The OOM killer would not come into play in this case, and Linux's lazy allocation works the way it does precisely for this type of code: code which requests memory but never uses it.

    The only bug here is in your understanding of the situation ;-)

  • cheers (unregistered) in reply to augur

    Where in the code does it say 102400 = 1MB? I mean, the author probably did think that but this code would be equally stupid regardless of the value. It only matters that the malloc and the value added to f are the same. In fact, using smaller values will give you much more accurate crashes.

  • some thing (unregistered)

    Now I'm waiting for someone to convert this to Java and run it as a background process.

  • Cochrane (unregistered) in reply to rnq
    rnq:
    Question to the pros: he never frees the malloc'd buffer; isn't that what's usually called a "memory leak"? From how I understand exit(), it doesn't free allocated memory, or have I not read thoroughly enough?

    No, exiting a program, no matter in what way (which is why it's not in the exit() documentation), will free all memory associated with that process. The real problem is what happens before.

    It's most fun to run something like this on a 64-bit system...

  • Craig Perry (unregistered) in reply to Gail
    Gail:
    No, you see, as memory starts to overflow on itself, there's a 50% chance that the machine code for "while 1" will get overwritten with a 0 thereby cleanly exiting the loop.

    Maybe I missed a joke here :-)

    Linux / x86 assembly, as built by gcc -S for the posted code:

    .L2:
    	movl	$102400, (%esp)
    	call	malloc
    	movl	%eax, -12(%ebp)
    	cmpl	$0, -12(%ebp)
    	jne	.L3
    	movl	$.LC0, (%esp)
    	call	puts
    	movl	$1, (%esp)
    	call	exit
    .L3:
    	flds	-8(%ebp)
    	flds	.LC1
    	faddp	%st, %st(1)
    	fstps	-8(%ebp)
    	flds	-8(%ebp)
    	fstpl	4(%esp)
    	movl	$.LC2, (%esp)
    	call	printf
    	flds	-8(%ebp)
    	flds	.LC3
    	fdivrp	%st, %st(1)
    	flds	-8(%ebp)
    	flds	-8(%ebp)
    	fxch	%st(2)
    	fstpl	20(%esp)
    	fstpl	12(%esp)
    	fstpl	4(%esp)
    	movl	$.LC4, (%esp)
    	call	printf
    	jmp	.L2
    

    There is no literal 1 when compiled down to machine code, the part that implements the while(1) is the very last line "jmp .L2".

  • Jay (unregistered) in reply to rnq
    rnq:
    Question to the pros: he never frees the malloc'd buffer; isn't that what's usually called a "memory leak"? From how I understand exit(), it doesn't free allocated memory, or have I not read thoroughly enough?

    Here, not freeing the memory is the point.

    You see, the program is trying to allocate blocks of 102400 bytes of memory until it cannot allocate any more. When this (supposedly) happens, you (supposedly) know how much memory is (supposedly) available to the program. Then the program (supposedly) exits, and all allocated memory is (supposedly) deallocated.

    If you freed the memory, it would allocate the same block again and again and again until the end of the world. This would, however, be much more helpful than the posted snippet. Go figure.

  • Nikolai (unregistered)

    In addition to that, this code simply does not work on some UNIXes (and correctly so): doing malloc() does not mean actually reserving physical memory. For this kind of test you need to at least do something like memset() to actually touch the allocated pages.
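
    Something like this untested sketch (careful: actually touching the pages can drag a real machine down or wake the OOM killer, so put a ulimit on it first):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define CHUNK (1024 * 1024)   /* probe in clean 1 MiB steps */

        int main(void)
        {
            size_t total = 0;
            for (;;) {
                char *p = malloc(CHUNK);
                if (p == NULL)
                    break;
                /* touch every page so the kernel must actually back it */
                memset(p, 0xAA, CHUNK);
                total += CHUNK;
                printf("%zu MiB\n", total / CHUNK);
            }
            puts("malloc returned NULL");
            return 0;
        }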

  • Niels (unregistered)
    This code was sent to John W. by the support staff of one of the larger software vendors with the stated purpose of determining how much memory a program could use on one of the corporation's Unix servers.
    Snipped code: while(true) { malloc; print }
    WTF 1: A 'major' vendor wrote this code?
    XP said I was running low on resources
    As expected.
    However, John was asked to run this program...in an environment that ran 24/7.
    WTF 2: Run _THIS_ code in a 24/7 environment and you're in a world of hurt.
    In the interest of appeasing vendor support, John did run it, but not until imposing some limits on how high "f" could go.
    WTF 3: So he ran software intended to answer a question after making sure the answer was known beforehand?! Then there was no need to run this code at all, since the result could be verified on any system.

    Actually, I see only three valid solutions:

    1. Simply refuse to run it on production and let the vendor fix his checking code.
    2. Do the test on a 'live-like' test/acceptance system.
    3. Simply lie, without running any testing software at all.
  • (cs)

    Correct solution (or at least, as close to correct as you can get): create a file, ftruncate() it to a ridiculously large size, do a binary search with mmap() to see how much address space you can access, then do a binary search with mlock() to see how much physical memory you can access. Requires root privileges, though.
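
    A rough, untested sketch of what I mean, using anonymous mappings instead of the file-backed ones (and assuming a 64-bit Linux where the success/failure boundary is monotonic; the 1 << 46 upper bound is arbitrary):

        #define _DEFAULT_SOURCE
        #include <stdio.h>
        #include <sys/mman.h>

        /* largest size in (lo, hi) for which try_size() still succeeds */
        static size_t search(size_t lo, size_t hi, int (*try_size)(size_t))
        {
            while (lo + 1 < hi) {
                size_t mid = lo + (hi - lo) / 2;
                if (try_size(mid))
                    lo = mid;
                else
                    hi = mid;
            }
            return lo;
        }

        static int try_map(size_t len)   /* probes address space only */
        {
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED)
                return 0;
            munmap(p, len);
            return 1;
        }

        static int try_lock(size_t len)  /* mlock() forces pages into RAM */
        {
            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED)
                return 0;
            int ok = (mlock(p, len) == 0);
            if (ok)
                munlock(p, len);
            munmap(p, len);
            return ok;
        }

        int main(void)
        {
            size_t addr = search(0, (size_t)1 << 46, try_map);
            size_t phys = search(0, addr, try_lock);
            printf("address space: ~%zu MB, lockable RAM: ~%zu MB\n",
                   addr >> 20, phys >> 20);
            return 0;
        }

    As said, the mlock() half normally needs root (or a generous RLIMIT_MEMLOCK).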

  • m0ffx (unregistered)

    Looking at the code... it leaks memory, but that doesn't really matter in this case.

    As has been mentioned, the behaviour of memory allocation on many systems is such that it's meaningless. However, it's not a system crasher. I compiled and ran it, and got a series of lines, the end of the output being

    3.20901e+09
    3.20901e+09 3209011200.000000 3133.800000 MB
    malloc returned NULL

    No slowdown or issue at all. Also, that value is far larger than my RAM+swap (Linux uses a fixed-size swap)

  • (cs) in reply to Sebastian
    Sebastian:
    I see at least three problems (without running it):
    1. f is never initialized to zero
    2. 102400? 1024000? Huh?
    3. Quite possibly, in a modern virtual-memory-based operating system, malloc will not return NULL when running out of physical and/or swap memory. It might return NULL only when running out of virtual memory in the process address space.

    Linux's man page for malloc says: "By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer."

    OOM killer kills random memory-heavy processes. Not good on a production system. OTOH, the author only said something about a generic UNIX system, not Linux... but system performance will probably be severely affected in most circumstances where no per-process limits are set.

    Sebastian

    I don't know about Linux but I ran into that once on good old stock SunOS - pre-solaris. Malloc would return a valid pointer no matter how much you tried to allocate, but would crap out if you tried to access more than virtual memory could provide.

  • jrwsampson (unregistered)

    Not to mention another critical design flaw (present in some of the commented alternate solutions, too): even if this worked and didn't have the paging problem, it doesn't take into account the memory currently in use on the system. Run it with no services running, then run it again while someone else is compiling code.

    Hmmm: ulimit ?
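
    The programmatic version of ulimit here would be setrlimit(); a minimal, untested sketch (the 256 MB cap is an arbitrary example value):

        #include <stdio.h>
        #include <sys/resource.h>

        int main(void)
        {
            struct rlimit cap;
            cap.rlim_cur = cap.rlim_max = 256UL << 20;  /* 256 MB, soft and hard */
            if (setrlimit(RLIMIT_AS, &cap) != 0) {      /* cap the address space */
                perror("setrlimit(RLIMIT_AS)");
                return 1;
            }
            /* ... run the probe here; allocations now fail at the cap
               instead of exhausting the machine ... */
            return 0;
        }

    From a shell, ulimit -v does the same thing before launching the program.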

  • (cs)

    #include <malloc.h> ?! How old is the code...

  • qq (unregistered) in reply to DES

    It also assumes that physical memory size <= user-space virtual memory size, in other words, that a single process can hold (almost) all the physical memory. In 32-bit environments this works only up to 3GB of RAM.

  • qq (unregistered) in reply to qq

    Of course, I meant to quote this before:

    DES:
    Correct solution (or at least, as close to correct as you can get): create a file, ftruncate() it to a ridiculously large size, do a binary search with mmap() to see how much address space you can access, then do a binary search with mlock() to see how much physical memory you can access. Requires root privileges, though.
    qq:
    It also assumes that physical memory size <= user-space virtual memory size, in other words, that a single process can hold (almost) all the physical memory. In 32-bit environments this works only up to 3GB of RAM.
  • (cs) in reply to cheers
    cheers:
    Where in the code does it say 102400 = 1MB? I mean, the author probably did think that but this code would be equally stupid regardless of the value. It only matters that the malloc and the value added to f are the same. In fact, using smaller values will give you much more accurate crashes.

    The second printf statement indicates that the author thought this.

  • (cs) in reply to some thing
    some thing:
    Now I'm waiting for someone to convert this to Java and run it as a background process.
    Not quite - Java will only run this up to the limit of the JVM's heap, not the system's VM.
  • Shinobu (unregistered)

    "By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer." TRWTF

    Seriously, I think that almost any strategy would be better than this. I don't know if this was inspired by the thought that almost no one actually checks their mallocs, but still... wouldn't it be better for example, to pause the process until more memory becomes available, giving the user the choice to kill a process if necessary? Or to blandly refuse to do the allocation, like Windows does when your drive with the swapfile is full? Of course, then we would have to do the pesky malloc checking...

  • (cs) in reply to kennytm
    kennytm:
    #include <malloc.h> ?! How old is the code...
    Old enough that the compiler doesn't complain that "f" is being used without being initialized.
  • Sam (unregistered) in reply to m0ffx
    m0ffx:
    Looking at the code... it leaks memory, but that doesn't really matter in this case.

    As has been mentioned, the behaviour of memory allocation on many systems is such that it's meaningless. However, it's not a system crasher. I compiled and ran it, and got a series of lines, the end of the output being

    3.20901e+09
    3.20901e+09 3209011200.000000 3133.800000 MB
    malloc returned NULL

    No slowdown or issue at all. Also, that value is far larger than my RAM+swap (Linux uses a fixed-size swap)

    I managed to get:

    1.86392e+11 186392346624.000000 182023.781250 MB

    And if you think I have 182GB of RAM, you're living in a fantasy world. It would have run forever (Fedora 10, if you care) if I hadn't gotten bored and hit CTRL-C.

  • Evo (unregistered)
    spoiler: XP said I was running low on resources

    Yeah, but let's be fair... Vista is running low on resources without even running this program.

  • (cs)

    Let's see:

    • Failure to initialize f to zero.
    • Not knowing that 1 megabyte is actually 1,048,576 bytes.
    • Never calling free() on the successfully allocated memory.

    That's ignoring my personal quibble about using ! on a pointer to see if it's NULL or not; I'd really rather people wrote "p != NULL", just because it makes it immediately obvious what you're doing. Of course there are people who object to that and prefer "NULL != p".

    In many ways I'm glad we don't work in C here; I'm sure my predecessors would have made those errors themselves. It's bad enough looking at what they've done with C#.

    I miss working in C *sniffs*

  • (cs) in reply to Evo
    Evo:
    spoiler: XP said I was running low on resources

    Yeah, but let's be fair... Vista is running low on resources without even running this program.

    http://www.instantrimshot.com/

  • (cs) in reply to cheers
    cheers:
    Where in the code does it say 102400 = 1MB?
    Rrrright here:
      f += 102400;
      printf ("%g\n", f);
      printf ("%g %f %f MB\n", f, f, f/1024000);
    
  • Ken (unregistered) in reply to some thing
    some thing:
    Now I'm waiting for someone to convert this to Java and run it as a background process.
    J2EE has that built in as a feature! </troll>

    OMG, 102400, WTF? Base 10 or base 2, guys: choose one and stick with it. Either use 131072 or (if tenths of a MB were intended) 104856. malloc will probably round allocations up to powers of two anyway, so it would be... oh wait, this code was wrong and evil in the first place.

    also, read /etc/security/limits.conf and think "gloves".

    CAPTCHA: nobody cares, srsly

  • Ken (unregistered) in reply to Ken

    errr, I can't type today it seems. 104857 would be the value I was looking for... not that it would matter due to native word/VM pagesize rounding, but I was wrong.

  • EPE (unregistered) in reply to Shinobu
    Shinobu:
    "By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer." TRWTF

    Seriously, I think that almost any strategy would be better than this. I don't know if this was inspired by the thought that almost no one actually checks their mallocs, but still... wouldn't it be better for example, to pause the process until more memory becomes available, giving the user the choice to kill a process if necessary? Or to blandly refuse to do the allocation, like Windows does when your drive with the swapfile is full? Of course, then we would have to do the pesky malloc checking...

    I'm pretty sure Linus, and all the other people who have been working on Linux's memory management for years, will thank this insightful comment; your clever point of view surely solves all of their problems.

  • Shinobu (unregistered) in reply to Ken

    Well, hopefully your pagesize would be such that a MB is cleanly divisible by it. Also, I think you're making a slight systematic error in your tenths of MBs calculation.

  • Joel (unregistered) in reply to Shinobu

    """ Seriously, I think that almost any strategy would be better than this. I don't know if this was inspired by the thought that almost no one actually checks their mallocs, but still... wouldn't it be better for example, to pause the process until more memory becomes available, giving the user the choice to kill a process if necessary? Or to blandly refuse to do the allocation, like Windows does when your drive with the swapfile is full? Of course, then we would have to do the pesky malloc checking... """

    There are two knobs in Linux to control overcommitting: /proc/sys/vm/overcommit_memory and /proc/sys/vm/overcommit_ratio. The former has a few options, letting you control whether overcommitting is allowed and what algorithm to follow. The latter controls the commit limit when strict accounting is enabled: on my machine it is 50, which means the limit is swap plus 50% of physical RAM. So you could turn these knobs to forbid overcommitting entirely and the bug would be "fixed".

    In reality, it's never really a problem. The whole rationale behind overcommitting is that programs often allocate more memory than they need, or than they need at any one time. So why deny valid allocations that will never cause a problem except in pathological cases?

    BTW, mmap()-ing a file will count against virtual memory, but not against physical RAM + swap, because a file-backed mmap()ed segment is not backed by swap but by the file that is mmap()ed. So if those pages need to be evicted from physical RAM, they can be discarded outright and reloaded from the original file. As such, they effectively don't count against the system's VM capacity. The same is true of any read-only segments.
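
    For what it's worth, here's a trivial (untested) way to check the first knob from C; the 0/1/2 meanings are as documented in proc(5):

        #include <stdio.h>

        int main(void)
        {
            /* 0 = heuristic overcommit, 1 = always overcommit, 2 = don't */
            FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
            int mode;
            if (f == NULL) {
                perror("fopen");
                return 1;
            }
            if (fscanf(f, "%d", &mode) == 1)
                printf("overcommit_memory = %d\n", mode);
            fclose(f);
            return 0;
        }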

  • Troy (unregistered)

    Here's a bit of a different idea I had:

    #!/bin/sh
    # Filename: freemem.sh
    echo "Memory Usage:"
    free

    Incidentally, I have about 1 gig of RAM free, I'm using about half a gig, and the other half is buffering / caching things that Linux must think are important.
