• EPE (unregistered) in reply to hikari
    hikari:
    Let's see:
    • Failure to initialize f to NULL.
    • Not knowing that 1 megabyte is actually 1,048,576 bytes.
    • Never calling free() on the successfully allocated memory.

    That's ignoring my personal quibble about using ! on a pointer to see if it's NULL or not; I'd really rather people wrote "p != NULL", just because it makes it immediately obvious what you're doing. Of course there are people who object to that and prefer "NULL != p".

    In many ways I'm glad we don't work in C here; I'm sure my predecessors would have made those errors themselves. It's bad enough looking at what they've done with C#.

    I miss working in C sniffs

    Ok, let's see again:

    • f is a float, not a pointer: it must be initialized to 0, not to NULL.
    • The (intended) purpose of all that crap is to allocate as much memory as possible, by means of allocating one MB after another.
  • Shinobu (unregistered) in reply to EPE
    EPE:
    ... your clever point of view surely solves all of their problems.
    Come now, granted, if they were to implement something similar it would make one problem a lot more tolerable, but to say that it would solve all their problems is going a bit far.
  • (cs)

    That really looks like something I'd write;

    A small, buggy, hacked together script which does exactly what I want it to do, in entirely the wrong way, for a very specific task.

    Then I give it to another member of support, explain to them how to use it, and include the text, "DO NOT GIVE THIS TO AN END USER".

    Then 6 months later I'll get a call escalated to me saying that an end user can't figure out how to use my "report" or that it corrupted their data.

    Lesson: Never give your scripts to anyone else.

  • Spoe (unregistered) in reply to JayFin
    JayFin:
    cheers:
    Where in the code does it say 102400 = 1MB?
    Rrrright here:
  f += 102400;
  printf ("%g\n", f);
  printf ("%g %f %f MB\n", f, f, f/1024000);
    

    So, it's not possible the author wanted resolution to 0.1 MB and thus wrote it to allocate 0.1 of his mistaken value of 1024000 bytes for a MB?

  • (cs)

    That's just f'ing stupid!

    Please tell me "he was just f'ing around"

  • Otto (unregistered) in reply to Shinobu
    Shinobu:
    Seriously, I think that almost any strategy would be better than this. I don't know if this was inspired by the thought that almost no one actually checks their mallocs, but still... wouldn't it be better for example, to pause the process until more memory becomes available, giving the user the choice to kill a process if necessary? Or to blandly refuse to do the allocation, like Windows does when your drive with the swapfile is full? Of course, then we would have to do the pesky malloc checking...

    The general thinking is that many processes which allocate memory end up not using all that memory. Say you fork(), then suddenly you need twice the memory. But a fork() is almost always followed by an exec() which replaces the process space. So why waste time actually allocating memory that's just going to be overwritten shortly anyway? Optimistic allocation lets it return immediately and then only allocate the memory on a just-in-time basis, when it's really needed. This keeps memory usage to a minimum as well as keeping everything as fast as possible. Also, it works well for most cases.

    It's only when you actually run out of memory and swap that the system crashes into a wall and has to do something drastic like invoke the OOM killer. Almost every OS does something similar to this nowadays.
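
    Roughly, the fork-then-exec pattern being described looks like the sketch below (an illustrative example, not anything from the article; /bin/true is just a stand-in exec target). With overcommit, the child's copy-on-write view of the parent's large, mostly untouched heap never needs physical pages of its own, because exec() replaces the image before anything is written:

    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Pretend the parent holds a large, mostly untouched heap. */
        char *big = malloc((size_t)512 * 1024 * 1024);
        if (big == NULL)
            return 1;

        pid_t pid = fork();     /* child shares these pages copy-on-write */
        if (pid == 0) {
            /* Child: immediately replace the address space, so the
               "duplicated" memory is never actually written or backed. */
            execl("/bin/true", "true", (char *)NULL);
            _exit(127);         /* only reached if exec fails */
        }
        waitpid(pid, NULL, 0);
        free(big);
        return 0;
    }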

  • (cs) in reply to terepashii
    terepashii:
    TRWTF is that there's a much shorter way to bring a computer to its knees :

    while(1) fork();

    I am embarrassed to admit that I did this, 3 days into my job, on a server used by dozens of people. By the time I realized what was happening I tried to type in a killall command but I didn't react quickly enough, and the entire system came down.

    Shortly after that I got a few visits from some angry IT guys...

  • Shinobu (unregistered) in reply to Joel
    Joel:
    In reality, it's never really a problem. The whole rationale behind overcommitting is that programs often allocate more memory than they need, or than they need at any one time. So why deny valid allocations that will never cause a problem except in pathological cases?
    Ah, thanks for a reply that is actually useful. For what it's worth, I personally consider overcommitting a bug. I also don't think it's quite as common as you describe. I've seen quite a lot of (mostly open) source code, and the closest I've seen is several heap allocators that necessarily allocate in multiples of the page size, to facilitate suballocating. Which doesn't even fit the definition that well. On the Windows side, I know processes can ask the system to reserve memory without allocating it, but the purpose behind that is precisely to ensure that the memory will be there when you need it. I've never tried it because I didn't need it, but I think Windows will complain if you try to reserve more memory than could be made available. But it would be interesting to test.
  • (cs)

    I love the assumption that trying to malloc all the memory would tell them how much RAM the server had. On Unix there's these things called limits:

    [chris@bitch]$ uname
    NetBSD
    [chris@bitch]$ ulimit -a
    time(cpu-seconds)    unlimited
    file(blocks)         unlimited
    coredump(blocks)     0
    data(kbytes)         2040288
    stack(kbytes)        8192
    lockedmem(kbytes)    2040288
    memory(kbytes)       2046124
    nofiles(descriptors) 256
    processes            160
    sbsize(bytes)        unlimited
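
    For what it's worth, a program can query those same limits itself with getrlimit(); a minimal sketch, assuming a POSIX system (RLIMIT_AS isn't available everywhere, hence the #ifdef):

    #include <stdio.h>
    #include <sys/resource.h>

    static void show(const char *name, int which)
    {
        struct rlimit rl;
        if (getrlimit(which, &rl) != 0)
            return;
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("%-12s unlimited\n", name);
        else
            printf("%-12s %llu kB\n", name,
                   (unsigned long long)(rl.rlim_cur / 1024));
    }

    int main(void)
    {
        show("data", RLIMIT_DATA);     /* heap (brk/malloc) limit */
        show("stack", RLIMIT_STACK);   /* stack limit */
    #ifdef RLIMIT_AS
        show("memory", RLIMIT_AS);     /* total address-space limit, where supported */
    #endif
        return 0;
    }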

  • (cs) in reply to dpm
    dpm:
    kennytm:
    #include <malloc.h> ?! How old is the code...
    Old enough that the compiler doesn't complain that "f" is being used without being initialized.
    Does that mean that it's possible this code was written at a time when this approach would've worked on most OSes and progress means it's now a WTF when it used to be an industry standard?

    Just a thought...

  • Shinobu (unregistered) in reply to Otto

    That's a major design flaw. The only justification you could give for that is ‘we're used to doing it like this and we've always done it this way so we must therefore do it that way and there can't be a better way because it just happens to work like this period.’

  • pia (unregistered) in reply to JayFin
    JayFin:
    cheers:
    Where in the code does it say 102400 = 1MB?
    Rrrright here:
  f += 102400;
  printf ("%g\n", f);
  printf ("%g %f %f MB\n", f, f, f/1024000);
    
    It says 1MB = 1024000, not 102400. The loop just allocates memory in 100kB chunks (102400).

    FYI, I know 1024000B isn't 1MB, but it also isn't 102400.

    CAPTCHA: nulla

  • Stiflers.Mom (unregistered) in reply to dpm
    dpm:
    kennytm:
    #include <malloc.h> ?! How old is the code...
    Old enough that the compiler doesn't complain that "f" is being used without being initialized.

    TRWTF is not being able to decide whether to code in C or C++.

    C Style:

    • return value defined as 'void'
    • returning '0' on success

    C++ Style:

    • return value defined as 'int'
    • returning 'EXIT_SUCCESS' on success
  • (cs) in reply to WhiskeyJack
    terepashii:
    TRWTF is that there's a much shorter way to bring a computer to its knees :

    while(1) fork();

    I do prefer the more concise and exponential:
    while(fork() || !fork());
  • (cs) in reply to JimM
    JimM:
    dpm:
    kennytm:
    #include <malloc.h> ?! How old is the code...
    Old enough that the compiler doesn't complain that "f" is being used without being initialized.
    Does that mean that it's possible this code was written at a time when this approach would've worked on most OSes and progress means it's now a WTF when it used to be an industry standard?

    Just a thought...

    You know, the more I think about this the more sense it makes. The original code is written to count KBs of memory, and all the numbers are 1024. Then it lies dormant, until one day someone says "Hey, don't we have a widget around to check how much memory is available?". Someone finds this code, but by now machines have MBs of memory instead of KBs, and it performs far too many iterations. "Can't we make it count in 100KB and display in MB?" asks someone. "Sure." says the assigned developer; but he's too lazy to rewrite it properly so he just adds the appropriate number of 0s, thinking it'll be close enough and hey, it's not like it's going to be used for production code because it's a dumb way to do this.

    Again; Just a thought... ;^)

  • pia (unregistered) in reply to Shinobu
    Shinobu:
    Joel:
    In reality, it's never really a problem. The whole rationale behind overcommitting is that programs often allocate more memory than they need, or than they need at any one time. So why deny valid allocations that will never cause a problem except in pathological cases?
    Ah, thanks for a reply that is actually useful. For what it's worth, I personally consider overcommitting a bug. I also don't think it's quite as common as you describe. I've seen quite a lot of (mostly open) source code, and the closest I've seen is several heap allocators that necessarily allocate in multiples of the page size, to facilitate suballocating. Which doesn't even fit the definition that well. On the Windows side, I know processes can ask the system to reserve memory without allocating it, but the purpose behind that is precisely to ensure that the memory will be there when you need it. I've never tried it because I didn't need it, but I think Windows will complain if you try to reserve more memory than could be made available. But it would be interesting to test.
    Overcommitting goes a little bit further. It doesn't matter if you make 10 allocations of 1 MB or one allocation of 10 MB. If the process uses only 10 kilobytes, the system will allocate only 3 pages to that process (assuming a page size of 4 kB).
  • (cs) in reply to qq
    qq:
    DES:
    Correct solution (or at least, as close to correct as you can get): create a file, ftruncate() it to a ridiculously large size, do a binary search with mmap() to see how much address space you can access, then do a binary search with mlock() to see how much physical memory you can access. Requires root privileges, though.
    It also assumes that physical memory size <= user-space virtual memory size, in other words, that a single process can hold (almost) all the physical memory. In 32-bit environments this works only up to 3GB of ram.

    That's irrelevant - what we're trying to do is determine how much memory a program can use, and that is limited by whichever of the two (address space or physical memory) is smaller, which is what my proposed solution measures.
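
    A simplified sketch of the first half of that idea (the address-space binary search), using an anonymous mapping instead of a file-backed one for brevity; the mlock() pass would follow the same pattern, and as noted needs root:

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>

    /* Binary search for the largest single anonymous mapping the kernel
       will grant. This measures usable address space, not physical RAM;
       the mlock() pass would narrow that down to lockable memory. */
    int main(void)
    {
        size_t lo = 0;
        size_t hi = SIZE_MAX;

        while (hi - lo > 1024 * 1024) {        /* stop at ~1 MB resolution */
            size_t mid = lo + (hi - lo) / 2;
            void *p = mmap(NULL, mid, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) {
                hi = mid;                      /* too big: shrink */
            } else {
                munmap(p, mid);
                lo = mid;                      /* fits: grow */
            }
        }
        printf("Largest single mapping: about %zu MB\n", lo / (1024 * 1024));
        return 0;
    }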

  • (cs) in reply to JayFin
    JayFin:
    cheers:
    Where in the code does it say 102400 = 1MB?
    Rrrright here:
  f += 102400;
  printf ("%g\n", f);
  printf ("%g %f %f MB\n", f, f, f/1024000);
    

    Okay. Not to be too picky, but take a look: he's adding 102400 per iteration, which is 1/10th of his 1024000, so the result will be in ROUGH 10ths of a MByte.

    Not being a C-guy, I can't comment on the rest, but that much seems to fit a reasonably well-thought-out WTF.

    EDIT - Okay, I'm not first, fist, or whatever. But tenths is tenths nonetheless.

  • Shinobu (unregistered) in reply to pia
    pia:
    Overcommitting goes a little bit further. It doesn't matter if you make 10 allocations of 1 MB or one allocation of 10 MB. If the process uses only 10 kilobytes, the system will allocate only 3 pages to that process (assuming a page size of 4 kB).
    I got it the first time round, thank you. It doesn't address why you wouldn't simply ask for 10 kB, and let the system allocate 3 pages for you (one page of which can be used, for example, for another 2 kB object to be allocated later) - this is how allocators tend to work. The only real reasons I can see are reserving memory, in which case you want to know it will be there, or for example modifying a DLL (CoW scenarios), in which case I think it would be better to pause the offending process or let it fail with an access violation than to kill off a random process.
  • sewiv (unregistered) in reply to cheers
    cheers:
    In fact, using smaller values will give you much more accurate crashes.

    Winner line of the day.

  • Gerrit (unregistered) in reply to JimM
    JimM:
    You know, the more I think about this the more sense it makes. The original code is written to count KBs of memory, and all the numbers are 1024. Then it lies dormant, until one day someone says "Hey, don't we have a widget around to check how much memory is available?". Someone finds this code, but by now machines have MBs of memory instead of KBs, and it performs far too many iterations. "Can't we make it count in 100KB and display in MB?" asks someone. "Sure." says the assigned developer; but he's too lazy to rewrite it properly so he just adds the appropriate number of 0s, thinking it'll be close enough and hey, it's not like it's going to be used for production code because it's a dumb way to do this.

    Again; Just a thought... ;^)

    I ran it on Linux with the numbers changed to 1024, and now it did allocate all the memory. The swap file filled up, the system stopped responding, and eventually the process was killed and everything came back to life. Perhaps adding the zeros was how John imposed limits on how high 'f' could go.

  • jhb (unregistered) in reply to Shinobu

    Think of mmap()'ing a sparse memory region. For example, if you fire up rpc.statd on a *BSD box you will find it has a VSZ of 256MB because it does a MAP_ANON mmap() of a big chunk of memory used as a hash table. However, if you are only mounting NFS mounts from a few servers rpc.statd only needs a few KB of actual RAM. Overcommitting is quite common on lots of OS's. The alternative on systems where you disable it (or it isn't available) is that you end up allocating a lot of swap space on disk that never gets used, and if you ever did use all the swap your system would be thrashing horribly.
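
    In code, that rpc.statd-style sparse table looks roughly like the following sketch (illustrative only, not the actual daemon source; MAP_ANON is spelled MAP_ANONYMOUS on Linux):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Reserve a 256 MB anonymous region to use as a sparse table. */
        const size_t table_size = (size_t)256 * 1024 * 1024;
        char *table = mmap(NULL, table_size, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANON, -1, 0);
        if (table == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Only these writes fault physical pages in; VSZ shows ~256 MB
           while resident memory stays a handful of pages. */
        strcpy(table, "server-a");
        strcpy(table + 4096 * 100, "server-b");

        getchar();              /* pause so VSZ/RSS can be inspected with ps */
        munmap(table, table_size);
        return 0;
    }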

  • Luis (unregistered)

    The worst thing about that code is that it will not even do what it is meant to do (i.e. slowly consume all your memory and let you know when that happens), since f is never initialized and thus the messages you get about how much memory has been allocated so far are not trustworthy. It is as dangerous as it is useless.

  • cf18 (unregistered)

    Wiki: 1,024,000 bytes (1000×1024): This is used to describe the formatted capacity of USB flash drives[2] and the "1.44 MB" 3.5 inch HD floppy disk, which actually has a capacity of 1,474,560 bytes.

    Another stupid WTF is they used 1024*1024 = M for CD-ROM, but used 1000*1000*1000 = G for DVD.

  • MikeCD (unregistered) in reply to Stiflers.Mom
    Stiflers.Mom:
    TRWTF is not being able to decide whether to code in C or C++.

    C Style:

    • return value defined as 'void'
    • returning '0' on success

    C++ Style:

    • return value defined as 'int'
    • returning 'EXIT_SUCCESS' on success
    • void main is illegal in C, just like C++. That said, many compilers will allow it.
    • EXIT_SUCCESS is from C, not C++.

    refs: http://c-faq.com/ansi/maindecl.html stdlib.h

  • Anonymous (unregistered)

    I'm getting involved.

  • ballantine (unregistered) in reply to seamustheseagull
    seamustheseagull:
    That really looks like something I'd write;

    A small, buggy, hacked together script which does exactly what I want it to do, in entirely the wrong way, for a very specific task.

    Then I give it to another member of support, explain to them how to use it, and include the text, "DO NOT GIVE THIS TO AN END USER".

    Then 6 months later I'll get a call escalated to me saying that an end user can't figure out how to use my "report" or that it corrupted their data.

    Lesson: Never give your scripts to anyone else.

    Ah, so that's how X11 got released!

  • (cs)

    Aaaarrrrggghhhhhhh!!!!!!!!!

    Oh I hate things like this. Physical Memory <> Virtual Memory. And just because you can allocate a ton of virtual, it doesn't mean that it is a wise thing to do; you can page/swap a system to death.

    I've seen this with a "Sorting" application: the installer gets the physical memory and CPU count, then writes a config file that causes any sort to use all the resources on the machine.

    Hello, you think we might have some other things running besides your stupid holdover from the IBM days, which is only running because the devs were too stupid to learn the sort command when they ported crap over 10 years ago?

    And when you call them with a problem, such as the program failing badly, their first answer is to bump up all those performance-related parameters in the config file.

    But then Oracle is not much better: the installer checks a bunch of kernel parameters and bitches if you don't have them set so that every process can allocate huge amounts of virtual and shared memory. Then you run into a crappy Oracle-supplied Java application that is taking 12 Gig of virtual on a system with 10 Gig physical, and the DBAs complain, "Well, why do you let our processes do that?"

    Because uncle Larry and the boys insist that I do, and you are such spineless people that you will not even open a TAR with them.

  • Stiflers.Mom (unregistered) in reply to MikeCD
    MikeCD:
    void main is illegal in C, just like C++. That said, many compilers will allow it.

    http://homepages.tesco.net/J.deBoynePollard/FGA/legality-of-void-main.html

  • (cs) in reply to Stiflers.Mom
    Stiflers.Mom:
    TRWTF is not being able to decide whether to code in C or C++.

    C Style:

    • return value defined as 'void'

    The real WTF is that you think void main() is proper C (hint: it's not).

  • Stiflers.Mom (unregistered) in reply to MikeCD
    MikeCD:
    - EXIT_SUCCESS is from C, not C++
    Who said that "it is from C++" ? I was talking about style.
  • Vladimir (unregistered)

    Did anyone use the "ulimit" command? This program is the real WTF; it won't give you the full amount of memory.

  • Lewis (unregistered) in reply to cheers

    "%f MB"

    That implies that the function is supposed to return it in MB... therefore it's safe to presume that by using 102400 increments he actually meant 1MB.

  • Shinobu (unregistered) in reply to jhb
    jhb:
    Think of mmap()'ing a sparse memory region. For example, if you fire up rpc.statd on a *BSD box you will find it has a VSZ of 256MB because it does a MAP_ANON mmap() of a big chunk of memory used as a hash table. However, if you are only mounting NFS mounts from a few servers rpc.statd only needs a few KB of actual RAM. Overcommitting is quite common on lots of OS's. The alternative on systems where you disable it (or it isn't available) is that you end up allocating a lot of swap space on disk that never gets used, and if you ever did use all the swap your system would be thrashing horribly.
    Logical fallacy: false dichotomy. This could be implemented in any number of ways, and I'm not quite convinced this is the best one. (First reaction: ‘and this is supposed to be in defence of overcommitting?’) But even if you do it like this, it doesn't look much different from the CoW scenario.
  • Shinobu (unregistered) in reply to sewiv
    sewiv:
    cheers:
    In fact, using smaller values will give you much more accurate crashes.
    Winner line of the day.
    I second that.
  • Shinobu (unregistered) in reply to Stiflers.Mom
    Stiflers.Mom:
    If the return type is not compatible with int, the termination status returned to the host environment is unspecified.
    @_@
  • Otto (unregistered)

    The real WTF is not knowing that the OS probably does lazy allocation of large malloc blocks, and as such this code will use nearly zero resources since it never actually writes to the allocated memory.

  • (cs) in reply to Shinobu
    Shinobu:
    "By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer." TRWTF

    Seriously, I think that almost any strategy would be better than this. I don't know if this was inspired by the thought that almost no one actually checks their mallocs, but still... wouldn't it be better for example, to pause the process until more memory becomes available, giving the user the choice to kill a process if necessary? Or to blandly refuse to do the allocation, like Windows does when your drive with the swapfile is full? Of course, then we would have to do the pesky malloc checking...

    For what it's worth, Windows doesn't do overcommit. But then, Windows doesn't support fork() for the Win32 subsystem. fork() is supported for the POSIX subsystem, though, and there is a kernel function to clone a process. In Win32, if you do VirtualAlloc(MEM_COMMIT), the physical storage is reserved. With the MEM_RESERVE flag, though, only virtual space is reserved; you'll have to do MEM_COMMIT afterwards; there is no default commit. You can simulate commit on demand by using an exception handler.
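
    In code, the reserve-then-commit pattern described above looks roughly like this sketch (the 64 MB region and 64 kB commit are arbitrary example sizes):

    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        const SIZE_T region = 64 * 1024 * 1024;   /* 64 MB of address space */

        /* MEM_RESERVE only claims address space; no storage is charged yet. */
        char *base = VirtualAlloc(NULL, region, MEM_RESERVE, PAGE_NOACCESS);
        if (base == NULL) {
            printf("reserve failed: %lu\n", GetLastError());
            return 1;
        }

        /* Commit just the first 64 kB when it is actually needed;
           only now is physical storage (RAM or pagefile) charged. */
        if (VirtualAlloc(base, 64 * 1024, MEM_COMMIT, PAGE_READWRITE) == NULL) {
            printf("commit failed: %lu\n", GetLastError());
            return 1;
        }
        base[0] = 42;

        VirtualFree(base, 0, MEM_RELEASE);
        return 0;
    }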

  • Spoonman (unregistered) in reply to terepashii
    terepashii:
    TRWTF is that there's a much shorter way to bring a computer to its knees :

    while(1) fork();

    This "bomb" is quite famous among IT students, because the fork function belongs to the "Networking" course, where you had to write a loop whose condition wasn't easy to write...

    even easier way, no compiling required, just run this line in bash: :(){ :|:& };:

  • SuperKoko (unregistered) in reply to Mainline
    Mainline:
    Sebastian:
    "By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer."
    See, I told you Linux can't be counted on for production systems!

    Now, where did I leave my pile of W98 install floppies?

    Fortunately, this brain-damaged behavior can be disabled through /proc/sys/vm. Of course, any sensible person sets that in a startup script.

    Overcommitting really sucks badly. Either you have a limited amount of swap+RAM and know that you'll encounter OOM conditions; in that case, randomly killing applications is a huge reliability and security issue. Or you have huge amounts of swap (on x86, i686 can have up to 64GB of swap and x86-64 can have even more) and are unlikely to meet an OOM condition, unless one (e.g. on x86-64) or many (e.g. on i686) malicious or buggy programs (e.g. a program leaking chunks of memory after having accessed them) eat up huge amounts of memory. In that case, without overcommitting, the malicious or buggy programs crash or fail, while good programs at worst get a few temporary failures of resource allocation functions, which they handle well if they've been properly written. With overcommitting, random applications, good or evil, are killed. That's far worse behavior. Now comes the ONLY case where overcommitting "works":

    1. You've buggy non-malicious programs allocating too much memory, but not using it.
    2. The sum of all committed memory is smaller than RAM+SWAP.
    3. The sum of all committed+unused memory is larger than RAM+SWAP. This condition should be rare and I doubt it's stable (I fear that, two minutes later, committed memory will be larger than RAM+SWAP).

    Overcommitting is especially brain-damaged because programs might quickly become DEPENDENT on this feature, as leaking large chunks of unused memory doesn't show up, at first, in the system's performance. That's especially funny because I would expect the same type of program to leak large chunks of USED memory, or small chunks of memory (GLIBC mmaps only LARGE chunks of memory, so overcommitting doesn't apply to small chunks). Combining the two behaviors results in an (accidental) exploit!

    The 102400=10MB equation reminds me of the 1.44MB floppy disk, which has any capacity BUT 1.44MB, with 1MB=1000KB and 1KB=1024B.

  • CynicalTyler (unregistered) in reply to augur
    augur:
    The real WTF is believing 1024000 = 1MB :)
    They're so close to understanding base-2 math, yet so far away. This singular mistake is my favorite WTF in a while.
  • dhasenan (unregistered) in reply to ath
    ath:
    No, it's 102400 = 1MB. Except for when you're printing in human readable form. Then it's 1024000.

    It's allocating 100KB chunks and reporting in kilo-kilobytes. Where's the WTF there?

  • (cs) in reply to Stiflers.Mom
    Stiflers.Mom:
    - return value defined as 'void' - returning '0' on success
    Wouldn't attempting to return 0 on success in a function that's designed to return void cause a compiler error?
  • Someone Else (unregistered)

    Actually, if you run this code on Linux, you'll get (on a 32-bit system) just under 2GiB as the final result - as stated earlier, Linux (and most OSes, actually, including Windows) do optimistic malloc(). The memory region is committed into the process address space, but no physical memory actually backs it until a page fault occurs.

    You can malloc almost 2GiB this way (minus space used for code, libraries, stack, bss/init and other miscellany).

    To force Linux to allocate storage, write to the new memory region, or use the mlock() syscall (which locks down memory). (mlock() will only allow you to lock half of the memory though).

    Of course, there are many ways to get system information, and malloc() alone won't work... you have to actually touch the bytes.
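
    For illustration, here is the article's loop reworked to actually touch the bytes, assuming a 4 kB page size. Unlike the original, this version really will force the kernel to back every page, so it can drive a machine into swap or the OOM killer:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const size_t chunk = 1024 * 1024;   /* 1 MiB per allocation this time */
        const size_t page  = 4096;          /* assumed page size */
        size_t total_mb = 0;

        for (;;) {
            char *p = malloc(chunk);
            if (p == NULL)
                break;
            for (size_t off = 0; off < chunk; off += page)
                p[off] = 1;                 /* fault every page in for real */
            total_mb++;
            printf("%zu MB touched\n", total_mb);
        }
        return 0;
    }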

  • (cs)

    It would only run you low on resources in debug mode, which actually touches all those pages to set them to CAFEFEED or BADFOOD or something similar. In release mode, those pages would be committed out of swap, but never touched, leading to minimal system disruption (unless you happen to be low on swap space, of course).

    Anyway, any good computer salesman knows that virtual memory is a great way to sell real memory.

  • Shinobu (unregistered) in reply to Someone Else
    alegr:
    For what it's worth, Windows doesn't do overcommit.
    Someone Else:
    Actually, if you do this code on Linux, you'll get (on a 32-bit system) just under 2GiB as the final result - as stated earlier, Linux (and most OSes, actually, including Windows) do optimistic malloc().
    Have you never seen the message ‘Windows is expanding the swapfile. During this process requests for memory may be denied.’?
  • Jonathan (unregistered)

    The implementation is off, but this is a valid idea. You can (even with optimistic allocation) get the total amount of physical and virtual memory available to a single process with a program such as the following:

    #include <stdio.h>
    #include <stdlib.h>
    int main ( void )
    {
            size_t siz = 1024 * 1024 ;
            size_t idx = 1 ;
            void *ptr ;
            for (;;)
            {
                    ptr = malloc ( siz * idx );
                    if ( ! ptr )
                            break ;
                    free ( ptr );
                    idx ++ ;
            }
            printf ( "Max malloc %zu MB \n", idx - 1 );
            return ( 0 );
    }

    Try running that on a 32-bit and a 64-bit OS and compare the results.

    It will not crash a machine, or invoke the OOM killer - since it doesn't actually write to the memory.

  • Brompot (unregistered) in reply to Sebastian
    Sebastian:
    Linux's man page for malloc says: "By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer."

    The real WTF is that you're still using 'man malloc' instead of 'info malloc' (which produces the same manpage in a much nicer user interface)

    More to the point, you can turn memory overcommitting off quite easily through the /proc system. Someone who rolls out any kind of system for production with all default settings should not be allowed near them.
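
    Concretely, the Linux knob in question is /proc/sys/vm/overcommit_memory, normally flipped with sysctl or echo from a startup script; spelled out as a tiny (root-only) C sketch:

    #include <stdio.h>

    int main(void)
    {
        /* 0 = heuristic overcommit, 1 = always overcommit, 2 = never. */
        FILE *f = fopen("/proc/sys/vm/overcommit_memory", "w");
        if (f == NULL) {
            perror("fopen");    /* typically fails unless running as root */
            return 1;
        }
        fputs("2\n", f);
        fclose(f);
        return 0;
    }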

  • Jay (unregistered)

    There are a number of posts now that assume that the purpose of this program is to determine how much physical RAM is on the system, and then go on to point out why it won't work. But ... who says that's the purpose of the program? The article states that the purpose was to "determine how much memory a program could use". Nothing is mentioned about physical RAM. This program will tell you how much memory you can malloc before the system denies further requests. If the issue is, "I have a program that has some huge data structure that I want to keep in memory. Will I have enough?", then this is the right question to ask. Issues of physical versus virtual, how much is allocated to system processes, whether the OS puts upper limits on your allocations, etc, will all of course affect the answer, but we probably don't care about the details, we just want the final number.

    I haven't tried to run it, so if there really is some circumstance where malloc will return non-null even though the memory is not available, sure, that would be a problem. Other than that, it looks like a perfectly valid quick-and-dirty way to answer the question.

    I once wrote a very similar program myself in Java, to see what the maximum amount of memory the Java virtual machine would allocate. Again, I didn't care whether I was denied a memory request because it physically wasn't available or because the OS or Java imposed some limit. I just wanted to know the biggest allocation I could get.

    Faulting a program because it fails to answer a question that the author wasn't interested in asking seems a rather pointless criticism. This program won't predict the weather in Kazakhstan or calculate natural logarithms either. So what?

  • (cs) in reply to WhiskeyJack
    whiskeyjack:
    I am embarrassed to admit that I did this, 3 days into my job, on a server used by dozens of people. By the time I realized what was happening I tried to type in a killall command but I didn't react quickly enough, and the entire system came down.

    Shortly after that I got a few visits from some angry IT guys...

    I did something similar to one of the servers I was remoting into to run my assignments for a Unix system programming course in college. I think it was stuck in a similar loop and was also endlessly writing errors to a logfile.

    No one ever contacted me about it afterwards, but the server was down for a good few hours, beginning sometime after midnight the night before our assignment was due. At first, after I ran my buggy program, my telnet terminal window became unresponsive. I ended up shutting it down and logging in again, only to encounter "quota exceeded" errors, etc. After that the server was unavailable for a while.
