• Obloodyhell (unregistered)

    http://xkcd.com/394/

  • Doug (unregistered) in reply to Tino
    Your code still has 2 major BUGs:
    1. printf() might allocate memory and thus could fail
    2. It never completes on a Turing machine
    • And unsigned probably isn't big enough for a 64-bit machine (see the sketch below)
    • What about the memory overhead of malloc? (It allocates an extra 16 bytes per call on my machine.)
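    A quick way to check the width concern in the unsigned point above (a minimal sketch; the exact widths depend on your platform's data model, e.g. LP64 vs. ILP32):

    #include <stdio.h>

    int main(void) {
      /* On a typical 64-bit Linux (LP64), unsigned int is still 32 bits
         while size_t matches the pointer width. */
      printf("unsigned int: %zu bits\n", sizeof(unsigned int) * 8);
      printf("size_t:       %zu bits\n", sizeof(size_t) * 8);
      return 0;
    }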
  • KKK (unregistered) in reply to rnq

    It's a memory leak, but that's the point: the program leaks deliberately in order to test how much memory the process can consume ...

  • SuperKoko (unregistered) in reply to Alan
    From that statement I can tell that you've never written a program which has been designed to withstand OOM errors and has actually been proven (as much as is possible) to do so.
    I'm currently developing a streaming hex editor (which supports inserting bytes into the file) with an OOM- and crash-safe atomic save feature. To perform operations safely, I first write a journal to disk. Currently, I'm facing three problems:
    1. Even if my program has enough memory, some critical system calls may return ENOMEM because they internally call some memory allocation routine. I hope the Linux kernel reserves some memory for kernel space to make this unlikely. This isn't an issue for my program, since I write a journal to disk: the program may fail, but it can always be launched again when memory is freed.
    2. Even fsync()'ed, the journal isn't guaranteed to be written to disk because of the HD cache. This should at least be OK if the application, or even the OS, crashes but there's no power outage (a UPS should be used on any critical computer).

    3. With sparse files (or worse, compressed file systems, but I won't use such a file system), write(2), fsync(2) and fdatasync(2) may fail for lack of disk space even when writing within the bounds of a file. posix_fallocate(3) is supposed to fix this problem on file systems where that is possible (it should be on ext3), but I don't trust the implementation: glibc has a BUGGY implementation which doesn't write inside the "holes" of sparse files on kernels < 2.6.23. I think writing a generic posix_fallocate was a bad idea in the first place; IMO, this function should be implemented specifically for each file system. Fortunately, kernels >= 2.6.23 provide fallocate(), which has to be implemented by the file system driver. I've not yet tested it, but I hope it gets things right for ext3. (See the pre-allocation sketch below.)

    Thanks to the journal, under low-disk-space conditions the program will fail but can be resumed once disk space is freed. Hard disk physical failure is the ultimate issue; nothing can be done against that except backing up and using good disks or RAID1. RAID1 won't protect against failing/buggy disk controllers, though.
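    A minimal sketch of the pre-allocation approach described in point 3, assuming the journal lives in a single file; the file name and size here are illustrative, not from the actual editor:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define JOURNAL_SIZE (1 << 20)  /* illustrative: reserve 1 MiB up front */

    int main(void) {
      int fd = open("journal.bin", O_RDWR | O_CREAT, 0644);
      if (fd < 0) { perror("open"); return 1; }

      /* posix_fallocate returns an error number directly (it does not set errno).
         If it succeeds, later writes within [0, JOURNAL_SIZE) should not fail
         with ENOSPC, modulo the glibc/kernel caveats noted above. */
      int err = posix_fallocate(fd, 0, JOURNAL_SIZE);
      if (err != 0) {
        fprintf(stderr, "posix_fallocate: %s\n", strerror(err));
        close(fd);
        return 1;  /* fail early, before any journal record is written */
      }

      /* ... write journal records here, then make them durable ... */
      if (fsync(fd) != 0) { perror("fsync"); close(fd); return 1; }
      close(fd);
      return 0;
    }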

    Alan:
    he said his codebase bloated by >30% and still could not handle OOM in all scenarios.

    Nobody said it was going to be easy. But some critical applications (e.g. some database engines on critical servers) are designed to cope, as well as they can, with low-disk and low-memory conditions. Overcommitting only makes their task harder, for dubious benefits. A watchdog should restart the critical application, but an application crash is never good.
  • SuperKoko (unregistered) in reply to Anonymous Coward
    Anonymous Coward:
    now, the meaning of M, G etc. _depends on context_, which unnecessarily complicates things and causes inconsistency. that's what it's all about.
    Worse... K, M and G may mean different things in the *same* context. For example: disk sizes. Hey, my OS says that my 1TB disk is 931GB. Where're my 69GB? I've seen somebody who thought that the 128GB and 137GB limits on HD capacities were different! Actually, the old 28-bit ATA standard has a single limit: 128GiB ≈ 137GB. The funniest is the floppy disk's "1.44MB", which is really 1.44 kilokibibytes (1440 KiB). Fortunately, most GNU tools properly use SI units.
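    The arithmetic behind those examples, for anyone who wants to check it (a tiny worked sketch; the "1TB" label is taken at its SI face value of 10^12 bytes):

    #include <stdio.h>

    int main(void) {
      double tb_disk = 1e12;                /* the marketing "1TB" */
      double ata_limit = 512.0 * (1 << 28); /* 28-bit LBA x 512-byte sectors */

      printf("1 TB disk = %.0f GiB\n", tb_disk / (1 << 30));   /* ~931 */
      printf("ATA limit = %.1f GB\n", ata_limit / 1e9);        /* ~137.4 */
      printf("ATA limit = %.0f GiB\n", ata_limit / (1 << 30)); /* 128 */
      return 0;
    }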
  • lowell (unregistered)

    I had to change a couple of things to make it build in Xcode:

    #include <stdio.h>
    #include <stdlib.h>
    
    int main (void) {
      float f = 0;  /* initialize the byte counter; it was read uninitialized before */
      void *p;
      while (1) {
        p = malloc(102400);
        if (!p) {
          printf ("malloc returned NULL\n");
          exit(1);
        }
        f += 102400;
        printf ("%g bytes, %f MB\n", f, f / 1024000);
      }
    }
    

    it still won't crash my system though. it just counts to 3.6 billion. :(

  • borkborkbork (unregistered) in reply to terepashii

    I like this obfuscated version as a Unix shell script.

    :(){ :;:; } ;:

  • Random832 (unregistered) in reply to NiceWTF
    NiceWTF:
    As long as you do not actually write anything to the allocated memory (this program doesn't), most Linux kernels with default settings will happily promise you much more memory than is actually available at that time. If you actually start using all of it, an Out Of Memory killer will kill an application.

    Fixed your post.
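    A minimal sketch of the distinction this correction makes, assuming a Linux system with default overcommit settings (do not run it on a machine you care about): malloc() alone only reserves address space; writing to the pages is what actually commits memory and eventually attracts the OOM killer.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
      size_t total = 0;
      for (;;) {
        void *p = malloc(102400);
        if (!p) {  /* without the memset, this may never happen under overcommit */
          printf("malloc returned NULL after %zu bytes\n", total);
          return 1;
        }
        memset(p, 0xAA, 102400);  /* touch every page so it is really committed */
        total += 102400;
      }
    }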

  • Risto (unregistered) in reply to Shinobu
    Shinobu:
    On my Debian installation I've seen the kicker or the IDE blown away due to out of memory conditions ... And considering how I've seen the OOM killer's heuristics fail, I think it'd be more reliable to make the database server not crash on an OOM condition anyway.

    TRWTF is that you think that killing Kicker is a failure of the OOM killer's heuristics. I think Kicker is one of the least critical apps you can have running on your desktop, so it's always better to kill that before anything else (if we are talking about KDE's Kicker, that is).

  • Rob (unregistered)

    Haha I wrote an app like this whilst learning programming.

    It would continuously allocate memory whilst using Windows API calls to tell you how much you had left :) good times.

  • Nishant (unregistered) in reply to Cochrane

    I don't think that is entirely correct. Shared memory will not be released.

  • Steve H (unregistered) in reply to some thing
    some thing:
    Now I'm waiting for someone to convert this to Java and run it as a background process.

    I think this actually is the source for the JVM.

  • Tony (unregistered) in reply to Nishant

    Shared memory gets released when the last reference to it has gone away.

    In a modern OS it's pretty hard to leak real resources because they're very good at cleaning up after themselves.

    Also, these days the theoretical free memory for an app is bounded by ulimit, addressable space (2GB/3GB depending on OS and configuration; e.g. Windows needs a boot switch to use 3GB), and maximum virtual memory.

    Not that it's the whole story - poor memory allocation schemes can reduce that a heck of a lot (the Windows C library is so bad we routinely replace it in production code, because it'll fail an allocation of a few tens of MB even when there are hundreds of MB free).
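    An illustrative way to see that effect (a hypothetical probe, not the replacement allocator mentioned above): binary-search for the largest single malloc() that still succeeds. On a fragmented heap this can be far smaller than the total free memory; on an overcommitting Linux it mostly just measures address space.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
      size_t lo = 0, hi = SIZE_MAX / 2;
      while (hi - lo > 1) {
        size_t mid = lo + (hi - lo) / 2;
        void *p = malloc(mid);
        if (p) { free(p); lo = mid; }  /* mid bytes fit: search higher */
        else { hi = mid; }             /* mid bytes failed: search lower */
      }
      printf("largest single malloc: about %zu MB\n", lo / (1024 * 1024));
      return 0;
    }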

  • TopicSlayer (unregistered) in reply to Satanicpuppy
    Satanicpuppy:
    rnq:
    Question to the pros: he never frees the malloc'd buffer, isn't that what's usually called a "memory leak"? From how I understand exit(), it doesn't free allocated memory, or have I not read thoroughly enough?

    Everything frees memory when the program ends; the only issue is for programs that run for long periods of time. The practical upshot is, if you're writing a little script or a batch that runs at irregular intervals, you don't need to worry about your memory, because when it ends, it's reclaimed.

    The only exception to this rule is the occasional forked zombie process. Make sure your forked processes are self-terminating.

    Zombie processes are not exceptions to this rule; their memory is indeed cleaned up. The only exception is that their entry in the process table must remain, so they continue to "hog" a PID. This behavior is required so that the parent can call waitpid and retrieve the child's exit status.

    Until the parent either calls waitpid or dies (in which case the child is inherited by init, which will call waitpid), the process will remain a zombie.

    A parent who forks children and does not care about their exit status can ignore SIGCHLD, and never a zombie it shall make (a sketch follows).
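    A minimal sketch of that SIGCHLD trick (POSIX.1-2001 semantics; the sleep is only there to give you time to check "ps" for <defunct> entries):

    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void) {
      signal(SIGCHLD, SIG_IGN);  /* children are reaped automatically: no zombies */

      pid_t pid = fork();
      if (pid < 0) { perror("fork"); return 1; }
      if (pid == 0) {
        _exit(0);  /* child exits immediately and leaves no process-table entry */
      }
      sleep(5);  /* parent never calls waitpid, yet no zombie accumulates */
      return 0;
    }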

  • Guy with Real OS (unregistered)

    I don't get the WTF. I tried this (actually a simplified, less stupid version*) on my (Linux x86_64) machine. With -m32 it reported just under 4 GB, as expected. With -m64, the mouse got a bit slow around the 500 GB mark, so I Ctrl-C'd it. But seriously, if you need 500 GB of address space...

    The original submission did, after all, specify this was for use on UNIX platforms.

    (* I do, however, see the WTF in 1024000 == 1 MB. It should be 1<<20, of course.)
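    The footnote's arithmetic, spelled out (trivial, but it is the whole WTF): 1024000 is neither an SI megabyte nor a real mebibyte.

    #include <stdio.h>

    int main(void) {
      printf("1024000     (the code's 'MB') = %d bytes\n", 1024000);
      printf("1000 * 1000 (SI megabyte)     = %d bytes\n", 1000 * 1000);
      printf("1 << 20     (mebibyte)        = %d bytes\n", 1 << 20);
      return 0;
    }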

  • Alastair Lynn (unregistered)

    My compiler (clang) eliminated the malloc, and (correctly) treated the uninitialized float as undefined, so I just got an infinite loop of NaNs as output. Go figure.

  • LD (unregistered) in reply to rnq

    Assuming the OS isn't totally broken, memory will be completely freed once the program terminates (or is killed by the user/OS/whatever).

