• awesome_rob (unregistered) in reply to JayFin
    JayFin:
    cheers:
    Where in the code does it say 102400 = 1MB?
    Rrrright here:

    f += 102400; printf ("%g\n", f); printf ("%g %f %f MB\n", f, f, f/1024000);

    Um, no - that says that f in MB is f/1024000, which is correct (for some definition of MB - he's mixing his Ks here, which is irritating, but maybe it was written by a major disk vendor?). Nowhere is the statement '102400 = 1MB' expressed or implied, so why do people keep saying it is?

    Also, this isn't really all that much of a WTF. I wrote something very similar to test that memory limits were working on our production servers.

    If you allow regular processes to eat all the virtual memory on a production server, that's TRWTF.

  • some thing (unregistered) in reply to snoofle

    Don't think that would happen; it would GC that unreferenced memory each round and allocate the same amount again. The point was that it would surely eat quite a lot of CPU time.

  • ysth (unregistered) in reply to Mithfindel
    Mithfindel:
    terepashii:
    TRWTF is that there's a much shorter way to bring a computer to its knees :

    while(1) fork();

    I do prefer the more concise and exponential:
    while(fork() || !fork());
    I think you mean less concise and exponential?
  • MG (unregistered) in reply to Joel
    Joel:
    BTW, mmap()-ing will count against virtual memory, but not physical RAM + swap because an mmap()ed segment will not be backed against swap, but against the file that is mmap()ed. So if those pages need to be evicted from physical RAM, they can be discarded outright and reloaded from the original file. As such, they effectively don't count against the system VM capacity. The same is true of any read-only segments.

    You are assuming that mmap() would be used to map a file. Where does the system reload pages from when mmap() is used with the MAP_ANONYMOUS flag?
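    (To make that concrete: an anonymous mapping has no file behind it, so once its pages have been written they can only be evicted to swap, and they do count against RAM + swap. A minimal sketch, not from the original comment, assuming a Linux-style mmap with MAP_ANONYMOUS:)

    /* Anonymous mapping: no backing file, so touched pages must be kept in RAM
       or written to swap. MAP_ANONYMOUS is spelled MAP_ANON on some systems. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 64 * 1024 * 1024;   /* 64 MiB of anonymous memory */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        memset(p, 0xAA, len);            /* touching the pages forces them to be backed */
        puts("anonymous mapping touched");
        munmap(p, len);
        return 0;
    }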

  • BJ Upton (unregistered) in reply to Ken
    Ken:
    errr, I can't type today it seems. 104857 would be the value I was looking for... not that it would matter due to native word/VM pagesize rounding, but I was wrong.

    Murphry's Law in effect...

  • (cs) in reply to Shinobu
    Shinobu:
    For what it's worth, I personally consider overcommitting a bug.

    I personally consider this opinion irrelevant.

    Shinobu:
    the closest I've seen is several heap allocators that necessarily allocate in multiples of the page size, to facilitate suballocating.

    Quite often you need a range of consecutive addresses in virtual memory, for instance for allocating large arrays or memory pools. The only thing that really works on all of *nix (and is not excruciatingly complicated) is using a single mmap with free base address and the corresponding size. Large arrays in turn are often underpopulated (for example std::vector will employ an exponential growth strategy to reduce copying).

    Shinobu:
    On the Windows side, I know processes can ask the system to reserve memory without allocating it, but the purpose behind that is precisely to ensure that the memory will be there when you need it.

    You are mistaken. Reserving memory using VirtualAlloc with MEM_RESERVE actually only guarantees that the virtual memory is available, in other words Windows only promises you a consecutive range of virtual addresses that will not be used otherwise (within the same address space). To actually use the memory you have to commit it (MEM_COMMIT), and only then will Windows try to back the virtual memory up with physical memory (be it RAM or swap). It's quite possible that Windows will allow you to MEM_RESERVE a ton of memory, but upon your attempt to MEM_COMMIT even only a portion of it, tell you that you are SOL.

    That said, although such two-step allocation requires more effort on the part of the application programmer, it also makes it easier to handle out-of-memory conditions gracefully, since with overcommit any write access to an anonymously mmap'ed and previously unused page can raise a signal. Writing a signal handler for this condition that doesn't just spit out some log information and abort() is a serious PITA.
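    (A minimal sketch of that two-step dance, not from the original comment; it assumes the usual Win32 VirtualAlloc/VirtualFree calls and that the commit step is where failure shows up:)

    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        SIZE_T reserve = (SIZE_T)1 << 30;   /* reserve 1 GiB of address space only */
        void *base = VirtualAlloc(NULL, reserve, MEM_RESERVE, PAGE_NOACCESS);
        if (base == NULL) {
            fprintf(stderr, "reserve failed: %lu\n", (unsigned long)GetLastError());
            return 1;
        }

        SIZE_T commit = 16 * 1024 * 1024;   /* back only 16 MiB with RAM/pagefile */
        void *usable = VirtualAlloc(base, commit, MEM_COMMIT, PAGE_READWRITE);
        if (usable == NULL) {               /* this is where "you are SOL" happens */
            fprintf(stderr, "commit failed: %lu\n", (unsigned long)GetLastError());
            VirtualFree(base, 0, MEM_RELEASE);
            return 1;
        }

        memset(usable, 0, commit);          /* safe: these pages are committed */
        VirtualFree(base, 0, MEM_RELEASE);
        return 0;
    }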

  • Stiflers.Mom (unregistered) in reply to Spectere
    Spectere:
    Stiflers.Mom:
    - return value defined as 'void' - returning '0' on success
    Wouldn't attempting to return 0 on success in a function that's designed to return void cause a compiler error?
    defining the function as returning 'void' is wrong, of course. the problem is that most compilers won't complain as 'main' is a pretty special function anyway:
    - if the return-statement is missing '0' will be assumed
    - it can't be declared as 'inline'
    - it can't be overloaded
    - its address can't be found out
    - it can't be called by the program internally
    - ...
  • Franz Kafka (unregistered) in reply to DES
    DES:
    qq:
    DES:
    Correct solution (or at least, as close to correct as you can get): create a file, ftruncate() it to a ridiculously large size, do a binary search with mmap() to see how much address space you can access, then do a binary search with mlock() to see how much physical memory you can access. Requires root privileges, though.
    It also assumes that physical memory size <= user-space virtual memory size, in other words, that a single process can hold (almost) all the physical memory. In 32-bit environments this works only up to 3GB of ram.

    That's irrelevant - what we're trying to do is determine how much memory a program can use, and that is limited by whichever of the two (address space or physical memory) is smaller, which is what my proposed solution measures.

    No it doesn't. The correct way to determine how much RAM is available is to check ulimit and ask the system how much memory you have.
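    (For illustration, a rough sketch of "check ulimit and ask the system", not anyone's posted code; it assumes a Linux/glibc-style sysconf() that knows _SC_PHYS_PAGES and _SC_AVPHYS_PAGES, plus getrlimit() for the per-process limit:)

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/resource.h>

    int main(void)
    {
        long page  = sysconf(_SC_PAGESIZE);
        long phys  = sysconf(_SC_PHYS_PAGES);    /* total physical pages */
        long avail = sysconf(_SC_AVPHYS_PAGES);  /* currently available pages */

        struct rlimit as;
        getrlimit(RLIMIT_AS, &as);               /* address-space limit, i.e. "ulimit -v" */

        printf("physical RAM : %lld MB\n", (long long)phys  * page / (1024 * 1024));
        printf("available RAM: %lld MB\n", (long long)avail * page / (1024 * 1024));
        if (as.rlim_cur == RLIM_INFINITY)
            printf("address-space limit: unlimited\n");
        else
            printf("address-space limit: %llu MB\n",
                   (unsigned long long)as.rlim_cur / (1024 * 1024));
        return 0;
    }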

  • (cs) in reply to Shinobu
    Shinobu:
    The only real reasons I can see are reserving memory, in which case you want to know it will be there, or for example modifying a DLL (CoW scenarios), in which case I think it would be better to pause the offending process or let it fail with an access violation than to kill off a random process.

    The problem here is the definition of "offending process". Think of Linux as some sort of bank. You have a credit limit, I have a credit limit, and millions of others have. However, if we all used our credit limit at the same time to the full extent, the bank wouldn't be able to service all of our requests. But who, to use the term "offending", would be the offending customer? The answer is that it's in general hard to tell. You could define that as the last customer who tries to use his credit limit when the bank no longer has sufficient liquid funds, but wouldn't that be sort of arbitrary?

    Now consider overcommit. Maybe the process that bites the bullet is actually a long-running database server process that just needs some temporary memory to process an important query while another process using far more resources is just some WTF memory tester the foolish apprentice launched half a minute ago. In this case it would typically be better to kill the memory test application than to even suspend the database server. As I take it, the OOM killer employs some heuristics to decide which is the most wasteful yet least valuable process, for example unprivileged processes are selected more eagerly than privileged ones, short-running processes are selected more eagerly than long-running ones etc. These heuristics can never be perfect but they appear to me to be no less sensible than your suggestion of just freezing the last guy to perform a write operation to a previously unused page. The bottom line is that overcommit and the OOM killer are evils, but in the current state of affairs they are necessary evils.

  • Shinobu (unregistered) in reply to Alexis de Torquemada
    Alexis de Torquemada:
    I personally consider this opinion irrelevant.
    After the above discussion, I must insist that it isn't an opinion, but a fact. If you consider facts irrelevant, then I consider you irrelevant.
    Alexis de Torquemada:
    Quite ... copying).
    None of these things require overcommitting and implementing them with overcommitting is fragile. See discussion above.
    Alexis de Torquemada:
    You are mistaken.
    True, but already pointed out above. Did you even read the discussion? Also, I pointed out my uncertainty about that already.
  • Paweł (unregistered) in reply to some thing
    Don't think that would happen; it would GC that unreferenced memory each round and allocate the same amount again. The point was that it would surely eat quite a lot of CPU time.
    There is no garbage collection in pure C/C++ (you can get one as an additional library, of course).

    Captcha: nulla. Nice one.

  • (cs) in reply to rnq
    rnq:
    Question to the pros: he never frees the malloc'd buffer; isn't that what's usually called a "memory leak"? From how I understand exit(), it doesn't free allocated memory, or have I not read thoroughly enough?

    All of a process's memory is freed when the program ends; the only issue is for programs that run for long periods of time. The practical upshot is that if you're writing a little script or a batch job that runs at irregular intervals, you don't need to worry about freeing your memory, because when the program ends, it's all reclaimed.

    The only exception to this rule is the occasional forked zombie process. Make sure your forked processes are self-terminating.

  • Shinobu (unregistered) in reply to Alexis de Torquemada
    Alexis de Torquemada:
    ... but in the current state of affairs they are necessary evils.
    I agree that they're evils, but they're obviously not necessary. Proof by counterexample: Windows works just fine without overcommit. Better, as a rule. On my Debian installation I've seen the kicker or the IDE blown away due to out of memory conditions (which were caused by a programming bug on my part, but that doesn't matter... what a one-char typo can do...) and today I learnt why. The same thing never happened to me on Windows. I suppose there's a possibility that just as the offending program has allocated its last block, the IDE tries to allocate something, and then even enough, before the buggy program fails, but in practice that isn't going to happen. Even if you ignore the swapfile expansion message. I think the same would apply to the database server. And considering how I've seen the OOM killer's heuristics fail, I think it'd be more reliable to make the database server not crash on an OOM condition anyway.
  • Anonymous Coward (unregistered) in reply to NiceWTF
    NiceWTF:
    Nice. First, 1 MB contains 1,048,576 bytes or perhaps 1,000,000 if you would ask a hard drive manufacturer, but certainly not 102,400 bytes.

    actually, the hard drive manufacturers are part of a minority: those who do it correctly. 1MB is exactly 10^6B = 1000^2B = 1000000B. M is a well-known SI prefix, used and accepted everywhere, which has one definite meaning. The fact that IT guys wrongly use M to mean 1024*1000, 1024^2 or anything else doesn't make it any better.

    1024^2 or 2^20 has its own SI prefix: Mi.

    if you choose to ignore international standards and help preserve inconsistency and confusion, please do so, but stop spreading misinformation.

  • Pseudonoise (unregistered)

    So ... how do y'all test whether malloc is working correctly, or whether an out-of-memory handler works correctly, if not something similar to this method?

    We've done the same thing in embedded systems where we were trying to use a custom "operator new" and needed to check failure conditions.

    The only WTFs in this posting:

    • f isn't initialized
    • it's used to detect system memory, whereas it may just indicate process memory and/or heap limitations, not accounting for already-used memory and heap fragmentation
    • it was to be executed on a system running critical tasks
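    (One hedged way to exercise that failure path without flattening a shared box, assuming a POSIX setrlimit() that honours RLIMIT_AS as Linux does: cap the process's own address space first, then allocate until malloc() reports failure.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Limit *this* process to ~64 MiB so the failure path is reached quickly. */
        struct rlimit lim = { 64UL * 1024 * 1024, 64UL * 1024 * 1024 };
        if (setrlimit(RLIMIT_AS, &lim) != 0) {
            perror("setrlimit");
            return 1;
        }

        size_t total = 0;
        for (;;) {
            void *p = malloc(1024 * 1024);
            if (p == NULL) {
                printf("malloc failed after %zu MB -- failure path reached\n",
                       total / (1024 * 1024));
                break;                  /* this is the branch we wanted to test */
            }
            memset(p, 0, 1024 * 1024);  /* touch the block so it is really backed */
            total += 1024 * 1024;
        }
        return 0;
    }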
  • Slash. (unregistered) in reply to Alexis de Torquemada
    Alexis de Torquemada:
    Shinobu:
    The only real reasons I can see are reserving memory, in which case you want to know it will be there, or for example modifying a DLL (CoW scenarios), in which case I think it would be better to pause the offending process or let it fail with an access violation than to kill off a random process.

    The problem here is the definition of "offending process". Think of Linux as some sort of bank. You have a credit limit, I have a credit limit, and millions of others have. However, if we all used our credit limit at the same time to the full extent, the bank wouldn't be able to service all of our requests. But who, to use the term "offending", would be the offending customer? The answer is that it's in general hard to tell. You could define that as the last customer who tries to use his credit limit when the bank no longer has sufficient liquid funds, but wouldn't that be sort of arbitrary?

    Now consider overcommit. Maybe the process that bites the bullet is actually a long-running database server process that just needs some temporary memory to process an important query while another process using far more resources is just some WTF memory tester the foolish apprentice launched half a minute ago. In this case it would typically be better to kill the memory test application than to even suspend the database server. As I take it, the OOM killer employs some heuristics to decide which is the most wasteful yet least valuable process, for example unprivileged processes are selected more eagerly than privileged ones, short-running processes are selected more eagerly than long-running ones etc. These heuristics can never be perfect but they appear to me to be no less sensible than your suggestion of just freezing the last guy to perform a write operation to a previously unused page. The bottom line is that overcommit and the OOM killer are evils, but in the current state of affairs they are necessary evils.

    I'm not following you. Could you somehow express this as a car analogy?

  • revenant (unregistered)

    TRWTF is using float to store memory size

  • Shinobu (unregistered) in reply to Anonymous Coward
    Anonymous Coward:
    actually, the hard drive manufacturers are part of a minority; those who do it correctly.
    Considering the new prefixes were thought up way after the HD manufacturers redefined the MB, you are historically speaking incorrect. That these later got to be redefined in a way that suits them nicely, while going against all other common usage of MB and kB, doesn't make them any more ‘right’. Considering the situation, and that I never had a say in this, it follows from the social contract that I'm perfectly free to ignore SI on this one, and be no more correct or incorrect than anyone else.
  • Franz Kafka (unregistered) in reply to revenant
    revenant:
    TRWTF is using float to store memory size

    TRWTF is assuming that a process can allocate all the available free memory. I've got XP at work, and it can allocate 2GB to a process while having 3GB of VM left over.

  • (cs) in reply to Sam
    Sam:
    m0ffx:
    Looking at the code...it memory leaks, but that doesn't really matter for this case.

    As has been mentioned, the behaviour of memory allocation on many systems is such that it's meaningless. However, it's not a system crasher. I compiled and ran it, and got a series of lines, the end of the output being

    3.20901e+09
    3.20901e+09 3209011200.000000 3133.800000 MB
    malloc returned NULL

    No slowdown or issue at all. Also, that value is far larger than my RAM+swap (Linux uses fixed-size swap).

    I managed to get:

    1.86392e+11 186392346624.000000 182023.781250 MB

    And if you think I have 182GB of RAM, you're living in a fantasy world. It would have run forever (Fedora 10, if you care) if I hadn't got bored and hit CTRL-C.

    I'll bet you're running a 64-bit system, though ;) It's getting the max amount of memory a process could theoretically allocate, not the actual amount available.

  • Anonymous Coward (unregistered) in reply to Shinobu

    you are free to do whatever you like, as long as legislation doesn't say otherwise. however, (international) standards define what is correct and what is not. of course, you can choose to ignore the whole standard system and play your own little game because you like your own subjective views better, but this would be more like a religious decision than a scientific one.

    the fact that something was different in the past (insert "science in history" analogy here) doesn't change anything either.

  • Alan (unregistered) in reply to SuperKoko
    SuperKoko:
    In that case, without overcommitting, the malicious or buggy programs crash or fail while good programs, at worst, get a few temporary failures of resource allocation functions, which they handle well if they've been properly written.
    From that statement I can tell that you've never written a program which has been designed to withstand OOM errors and has actually been proven (as much as is possible) to do so.

    I remember reading an article by a developer who tried to do this for his particular library. If I remember correctly, he said his codebase bloated by >30% and still could not handle OOM in all scenarios.

    If you honestly think that handling OOM errors with anything other than an abort is possible, you need to write a serious piece of software and make it OOM proof. Then I'll listen.

  • (cs) in reply to Shinobu
    Shinobu:
    Anonymous Coward:
    actually, the hard drive manufacturers are part of a minority; those who do it correctly.
    Considering the new prefixes were thought up way after the HD manufacturers redefined the MB, you are historically speaking incorrect.
    But if we go back further, some computer guy redefined "kilo-" and "mega-" etc.
    That these later got to be redefined in a way that suits them nicely, while going against all other common usage of MB and kB, doesn't make them any more ‘right’.
    That "kilo-" later got to be redefined in a way that suits CS nicely, while going against all other common usage of "kilo-", doesn't make them any more right.
  • Alan (unregistered) in reply to Alan

    Sorry, I forgot to mention that all dependencies must also handle OOMs 'properly' and you must be able to handle all error conditions for those dependencies, and they must also be bug-free.

    Only then can your app be considered OOM proof.

  • (cs) in reply to MikeCD
    MikeCD:
    - void main is illegal in C, just like C++. That said, many compilers will allow it.
    - EXIT_SUCCESS is from C, not C++.

    refs: http://c-faq.com/ansi/maindecl.html stdlib.h

    Sigh.

    void main is LEGAL in C.

    Reference: ISO/IEC 9899:1999 (Also known as the ISO C Standard). Specifically, refer to sections 5.1.2.2.1.1 (definition of "main", notice the "or in some other implementation-defined manner") and 5.1.2.2.3.1 (which implies that the return type of the main function does not have to be type compatible with int).

  • Shinobu (unregistered) in reply to Anonymous Coward
    Anonymous Coward:
    you are free to do whatever you like, as long as legislation doesn't say otherwise.
    Might doesn't make right.
    Anonymous Coward:
    however, (international) standards _define_ what is correct and what is not.
    You can't _define_ what is correct. Sometimes, you can test it, but that obviously doesn't apply here.
    Anonymous Coward:
    of course, you can choose to ignore the whole standard system and play your own little game because you like your own subjective views better, but this would be more like a religious decision than a scientific one.
    What the ‘standard’ says isn't inherently more or less scientific. Argument from authority.
    Anonymous Coward:
    the fact that something was different in the past (insert "science in history" analogy here) doesn't change anything either.
    In this case it obviously does. Your ‘science in history analogy’ is flawed.

    As for the ‘reserving memory’ thing - it turns out that this is default behaviour on Windows: ‘When pages are committed, storage is allocated in the paging file, but each page is initialized and loaded into physical memory only at the first attempt to read from or write to that page.’ So it turns out I was right after all, it's just that it's the default behaviour instead of something you have to specify a special flag for, as I initially tentatively assumed.

  • Shinobu (unregistered) in reply to EvanED

    I think that post just underscores my point.

    Anyway, time to go down for maintenance. Nighty-night.

  • Anonymous Coward (unregistered) in reply to Shinobu
    Shinobu:
    Anonymous Coward:
    you are free to do whatever you like, as long as legislation doesn't say otherwise.
    Might doesn't make right.
    Anonymous Coward:
    however, (international) standards _define_ what is correct and what is not.
    You can't _define_ what is correct. Sometimes, you can test it, but that obviously doesn't apply here.
    Anonymous Coward:
    of course, you can choose to ignore the whole standard system and play your own little game because you like your own subjective views better, but this would be more like a religious decision than a scientific one.
    What the ‘standard’ says isn't inherently more or less scientific. Argument from authority.
    Anonymous Coward:
    the fact that something was different in the past (insert "science in history" analogy here) doesn't change anything either.
    In this case it obviously does. Your ‘science in history analogy’ is flawed.

    As for the ‘reserving memory’ thing - it turns out that this is default behaviour on Windows: ‘When pages are committed, storage is allocated in the paging file, but each page is initialized and loaded into physical memory only at the first attempt to read from or write to that page.’ So it turns out I was right after all, it's just that it's the default behaviour instead of something you have to specify a special flag for, as I initially tentatively assumed.

    of course this whole thing is an issue of definition. do you think prefixes are "god"-given or part of nature?

    ignoring the meaning and definition of such important and universally accepted symbols as the SI prefixes is unscientific. that's like assuming one minute has 100 seconds, just because it happens to be easier for you to handle.

    regarding the "who misused the prefix first" argument: read EvanED's post, please.

  • Franz Kafka (unregistered) in reply to Anonymous Coward
    Anonymous Coward:
    you are free to do whatever you like, as long as legislation doesn't say otherwise. however, (international) standards _define_ what is correct and what is not.

    No, they standardize jargon. The jargon was already standardized, so GiB was plain unnecessary.

    of course, you can choose to ignore the whole standard system and play your own little game because you like your own subjective views better, but this would be more like a religious decision than a scientific one.

    That's fine - the standard itself is a bit religious - a fine example of a solution in search of a problem.

    the fact that something was different in the past (insert "science in history" analogy here) doesn't change anything either.

    It does - things worked fine in the past - why fix what isn't broken?

  • (cs) in reply to Alan
    Alan:
    From that statement I can tell that you've never written a program which has been designed to withstand OOM errors and has actually been proven (as much as is possible) to do so.

    I remember reading an article before written by a developer who tried to do this for his particular library. If I remember correctly, he said his codebase bloated by >30% and still could not handle OOM in all scenarios.

    If you honestly think that handling OOM errors with anything other than an abort is possible, you need to write a serious piece of software and make it OOM proof. Then I'll listen.

    From that statement I can tell you've never written (or tried to write) a program to deal with low/out-of-memory conditions.

    It doesn't require 30% bloat. The harder part is trying to figure out what to do.

    But I can tell you that handling OOM is possible. You can't really make an app OOM-proof (not sure what that means), but you can make sure you don't crash and that you handle things gracefully.

  • (cs) in reply to Shinobu
    Shinobu:
    As for the ‘reserving memory’ thing - it turns out that this is default behaviour on Windows: ‘When pages are committed, storage is allocated in the paging file, but each page is initialized and loaded into physical memory only at the first attempt to read from or write to that page.’ So it turns out I was right after all, it's just that it's the default behaviour instead of something you have to specify a special flag for, as I initially tentatively assumed.

    I don't know what you're trying to prove here, but VirtualAlloc doesn't have a "default" behavior. MEM_RESERVE and MEM_COMMIT are both non-zero flags. RESERVE only reserves virtual address space; COMMIT reserves physical backing for it ("commits" it). If you do reserve and commit in one call, you have to specify both flags. As long as a page is committed (physical backing is reserved), it can never cause an OOM fault. The memory manager doesn't bother to do anything with the actual page until it's first touched; only then is the page zero-initialized.

  • (cs)

    This WTF smells like:

    Calvin: How do they know the load limit on bridges, Dad?

    Dad: They drive bigger and bigger trucks over the bridge until it breaks. Then they weigh the last truck and rebuild the bridge.

  • (cs) in reply to terepashii
    terepashii:
    TRWTF is that there's a much shorter way to bring a computer to its knees :

    while(1) fork();

    This "bomb" is quite famous by IT students, because the fork function belongs to the "Networking" course where you had to write a loop whose condition wasn't easy to write...

    Ah, the "Sorcerer's Apprentice" loop. The CS student's favorite.

    Usually found in first attempts of using fork(), which end up with a zillion processes, right before the whole system crashes down. :)

  • (cs)

    TRWTF is that you can do that with 6 lines of code:

    int main()
    {
    	unsigned mem = 0;
    	while( malloc(1024 * 1024) ) mem += 1024 * 1024;
    	printf("%u MB\n", mem / 1024 / 1024);
    	return 0;
    }

    And you'll get the exact same failure, with a few fewer bugs.

  • Franz Kafka (unregistered) in reply to danixdefcon5
    danixdefcon5:
    terepashii:
    TRWTF is that there's a much shorter way to bring a computer to its knees :

    while(1) fork();

    This "bomb" is quite famous by IT students, because the fork function belongs to the "Networking" course where you had to write a loop whose condition wasn't easy to write...

    Ah, the "Sorcerer's Apprentice" loop. The CS student's favorite.

    Usually found in first attempts of using fork(), which end up with a zillion processes, right before the whole system crashes down. :)

    Thankfully, ulimit is easy to set up, so nowadays they just get an error message. They can still trash their own systems, of course.
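    (A rough sketch of the same protection applied from code rather than the shell, assuming POSIX setrlimit() with RLIMIT_NPROC as on Linux/BSD; with the limit in place, a runaway fork() just starts failing instead of taking the box down:)

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/resource.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        /* Cap this user at roughly 50 processes before trying anything silly. */
        struct rlimit lim = { 50, 50 };
        if (setrlimit(RLIMIT_NPROC, &lim) != 0)
            perror("setrlimit");

        for (int i = 0; i < 1000; i++) {
            pid_t pid = fork();
            if (pid == -1) {                     /* the limit kicked in */
                fprintf(stderr, "fork #%d refused: %s\n", i, strerror(errno));
                return 0;
            }
            if (pid == 0) {                      /* children linger briefly, then exit */
                sleep(30);
                _exit(0);
            }
        }
        return 0;
    }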

  • Franz Kafka (unregistered) in reply to Felix C.
    Felix C.:
    TRWTF is that you can do that with 6 lines of code:
    int main()
    {
    	unsigned mem = 0;
    	while( malloc(1024 * 1024) ) mem ++;
    	printf("%u MB\n", mem);
    	return 0;
    }

    And you'll get the exact same failure, with a few fewer bugs.

    ahh, I feel better.

  • Jeff Grigg (unregistered)

    The solution is to add a "sleep" for 18 months in the body of the loop. Then (if the customer keeps their technology up-to-date and running) Moore's law will keep it working. ;->

  • Anonymous (unregistered) in reply to Stiflers.Mom
    Stiflers.Mom:
    - its address can't be found out - it can't be called by the program internally
    int main(){printf("%p\n",&main);main();}
  • Alucard (unregistered)

    Something doesn't seem quite right with those constants...

    First 102400 and then later on 1024000.

    Guess it should be 1024000, but then still, why?

    A megabyte = 2^20 bytes = 1048576 bytes.

  • (cs) in reply to Paweł
    Paweł:
    Don't think that would happen; it would GC that unreferenced memory each round and allocate the same amount again. The point was that it would surely eat quite a lot of CPU time.
    There is no garbage collection in pure C/C++ (you can get one as an additional library, of course).

    Captcha: nulla. Nice one.

    Indeed. libGC is a nice garbage collector, I suppose. It should keep memory usage down to a steady 30MB or so :)

    As for storing the size in floats, I don't know how many languages this has been said in already, but:

    1. float is a rather tiny floating point type, especially for this application. double has a more reasonable range and resolution for most applications.
    2. unsigned long long should be used instead, as it will work well for 64-bit platforms
    3. One way to fix the problem of the blocks not being literally allocated is to fault them. E.g., you could set every 4096th (or whatever sysconf(_SC_PAGESIZE) is) byte to some number.

    Stating the obvious FTW!
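    (Pulling those suggestions together, a hedged sketch of what the probe might look like; the page-touching is what stops an overcommitting kernel from handing out make-believe memory. Still a terrible thing to run on a production box.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t block = 1024 * 1024;        /* allocate 1 MiB at a time */
        long page = sysconf(_SC_PAGESIZE);
        unsigned long long total = 0;            /* 64-bit counter instead of float */

        for (;;) {
            char *p = malloc(block);
            if (p == NULL)
                break;
            for (size_t i = 0; i < block; i += (size_t)page)
                p[i] = 1;                        /* fault every page in the block */
            total += block;
        }
        printf("malloc returned NULL after %llu MB\n", total / (1024 * 1024));
        return 0;
    }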

  • Anonymous Coward (unregistered) in reply to Franz Kafka
    Franz Kafka:
    Anonymous Coward:
    you are free to do whatever you like, as long as legislation doesn't say otherwise. however, (international) standards _define_ what is correct and what is not.

    No, they standardize jargon. The jargon was already standardized, so GiB was plain unnecessary.

    of course, you can choose to ignore the whole standard system and play your own little game because you like your own subjective views better, but this would be more like a religious decision than a scientific one.

    That's fine - the standard itself is a bit religious - a fine example of a solution in search of a problem.

    the fact that something was different in the past (insert "science in history" analogy here) doesn't change anything either.

    It does - things worked fine in the past - why fix what isn't broken.

    it was broken, and it still is because nobody seems to care about the standard. the issue is that the SI prefixes should have one and only one meaning, no matter where they are used. but because the CS folks like to pretend they are something special, they redefined those prefixes. now, the meaning of M, G etc. depends on context, which unnecessarily complicates things and causes inconsistency. that's what it's all about.

    the reason why there are binary prefixes now is that we IT people can have a prefix too, based on powers of 2 (or 1024) instead of 10.

    TRWTF is that almost everybody seems to knowingly ignore this.

  • Pax (unregistered) in reply to NiceWTF
    NiceWTF:
    Mainline:
    See, I told you Linux can't be counted on for production systems!

    Also, almost all operating systems do this, not just Linux.

    (hope you enjoy the food, cute little troll)

    OS/2 had a neat DosAlloc() which could (with one of its parameters) not only allocate the virtual memory but also ensure there was physical memory backing it (physical in terms of RAM or disk space).

  • Tino (unregistered) in reply to Felix C.
    Felix C.:
    int main()
    {
    	unsigned mem = 0;
    	while( malloc(1024 * 1024) ) mem += 1024 * 1024;
    	printf("%u MB\n", mem / 1024 / 1024);
    	return 0;
    }
    Your code still has 2 major BUGs:
    1. printf() might allocate memory and thus could fail
    2. It never completes on a Turing machine
  • Peas (unregistered) in reply to NiceWTF

    Not Solaris.

  • Roger Wolff (unregistered)

    I'm sorry, but I have enough confidence in Linux that I just cat the stuff into a file, compile and run it. It returns 3136.4 in some unknown, possibly buggy units. So? It returns 4185.4 on another server.

    I must say that I did initialize the f variable, as gcc warned me that the value of "f" was used uninitialized.

  • Gumpy Gus (unregistered)

    Code like this was mildly informative about 25 years ago when:

    (1) OS's did not have virtual memory.
    (2) OS's did not have critical system processes.
    (3) OS's did not have disk caches.
    (4) 2^sizeof(int) >> 2^sizeof(char *)

    But if there's virtual memory, the numbers will climb waaay up and mean nothing. A good VM system will cheerfully toss out pages you haven't used recently, which is all of them, since you're not saving "p" and periodically touching what it points to.

    And even if you do get that memory, it may be REALLY SLOW memory as most of it may be out on the paging disk file. Milliseconds instead of nanoseconds.

    And the OS and other programs may slow waay down or even crash if your requests cause critical stuff to get paged or even swapped out.

    And on a system with 64-bit pointers it could take years to allocate 2^64 bytes of memory one megabyte at a time.

    A better way is to do something like top | grep "memory free". Fast, and it does not crash things.
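    (Or, assuming a Linux-specific sysinfo() from <sys/sysinfo.h>, a small sketch that just asks the kernel for free RAM and swap without allocating anything:)

    #include <stdio.h>
    #include <sys/sysinfo.h>

    int main(void)
    {
        struct sysinfo si;
        if (sysinfo(&si) != 0) {
            perror("sysinfo");
            return 1;
        }
        printf("total RAM : %llu MB\n",
               (unsigned long long)si.totalram * si.mem_unit / (1024 * 1024));
        printf("free RAM  : %llu MB\n",
               (unsigned long long)si.freeram  * si.mem_unit / (1024 * 1024));
        printf("free swap : %llu MB\n",
               (unsigned long long)si.freeswap * si.mem_unit / (1024 * 1024));
        return 0;
    }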

  • Roger Wolff (unregistered) in reply to joeyadams

    [quote user="joeyadams"][quote user="Paweł"][quote]

    1. float is a rather tiny floating point type, especially for this application. double has a more reasonable range and resolution for most applications. [/quote] Excuse me? My floats have a mantissa of about 24 bits. This means that the loop will get to execute about 2^24 times before we lose any precision. That's over a million megabytes. Mwah I think that's rather enough. (because the original worked in 0.1 megabytes at a time, and there is a sign bit, we lose about 4 bits, resulting in about 2^20 megabytes, or still over a million megabytes).
  • Roger Wolff (unregistered) in reply to Gumpy Gus
    Gumpy Gus:
    But if there's virtual memory, the numbers will climb waaay up and mean nothing. A good VM system will cheerfully toss out pages you haven't used recently, which is all of them, since you're not saving "p" and periodically touching what it points to.
    A reasonable OS won't even have to toss them out, as you never touch them. My system cheerfully runs this in 3.92 seconds, allocating 3.2 GB of memory before returning NULL. It runs in 0.391 seconds if the output need not go to the terminal. It would run another 10 times faster if the printfs were eliminated up until the end, but then you'd need additional code for the then-very-likely case that printf would bomb because it cannot allocate any memory for its buffer.
  • /Arthur (unregistered)

    And then there is vmstat

    procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
     r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
     0  0     44  73456 170308 272036    0    0     1    16   12   30  4  0 96  0

    usage: vmstat [-V] [-n] [delay [count]]
      -V prints version.
      -n causes the headers not to be reprinted regularly.
      -a print inactive/active page stats.
      -d prints disk statistics
      -D prints disk table
      -p prints disk partition statistics
      -s prints vm table
      -m prints slabinfo
      -S unit size
      delay is the delay between updates in seconds.
      unit size k:1000 K:1024 m:1000000 M:1048576

  • Gumpy Gus (unregistered) in reply to Roger Wolff

    A reasonable OS won't even have to toss them out, as you never touch them.

    You don't touch the pages, but many implementations of malloc() keep pointers to the previous and next blocks just before and after the allocated area, so typically two pages will get touched and allocated. Without these pointers, malloc() and free() would have to keep a separate list of free and allocated blocks, which has its own issues.

    Worse yet, some implementations, when memory is low, will chase through the whole linked list of blocks, swapping in a whole bunch of pages with maybe 16 useful bytes of info in each.
