Admin
Also, this isn't really all that much of a WTF. I wrote something very similar to test that memory limits were working on our production servers.
If you allow regular processes to eat all the virtual memory on a production server, that's TRWTF.
Admin
I don't think that would happen; it would GC the unreferenced memory each round and allocate the same amount again. The point was that it would surely eat quite a lot of CPU time.
Admin
Admin
You are assuming that mmap() would be used to map a file. Where does the system reload pages from when mmap() was used with the MAP_ANONYMOUS flag?
Admin
Murphry's Law in effect...
Admin
I personally consider this opinion irrelevant.
Quite often you need a range of consecutive addresses in virtual memory, for instance for allocating large arrays or memory pools. The only thing that really works across all of *nix (and is not excruciatingly complicated) is a single mmap with a free base address and the corresponding size. Large arrays in turn are often underpopulated (for example, std::vector will employ an exponential growth strategy to reduce copying).
You are mistaken. Reserving memory using VirtualAlloc with MEM_RESERVE actually only guarantees that the virtual memory is available, in other words Windows only promises you a consecutive range of virtual addresses that will not be used otherwise (within the same address space). To actually use the memory you have to commit it (MEM_COMMIT), and only then will Windows try to back the virtual memory up with physical memory (be it RAM or swap). It's quite possible that Windows will allow you to MEM_RESERVE a ton of memory, but upon your attempt to MEM_COMMIT even only a portion of it, tell you that you are SOL.
That said, although such two-step allocation requires more effort on behalf of the application programmer, it also makes it easier to handle out of memory conditions gracefully as overcommit really means that any write access to an anonymously mmap'ed and previously unused page can raise a signal. Writing a signal handler for this condition that doesn't just spit out some log information and abort() is a serious PITA.
Admin
Admin
No, it doesn't. The correct way to determine how much RAM is available is to check ulimit and ask the system how much memory you have.
Admin
The problem here is the definition of "offending process". Think of Linux as a sort of bank. You have a credit limit, I have a credit limit, and so do millions of others. However, if we all used our credit limits to the full extent at the same time, the bank wouldn't be able to service all of our requests. But which customer, to use the term, would be the "offending" one? The answer is that it's in general hard to tell. You could define it as the last customer who tries to draw on his credit limit when the bank no longer has sufficient liquid funds, but wouldn't that be sort of arbitrary?
Now consider overcommit. Maybe the process that bites the bullet is actually a long-running database server process that just needs some temporary memory to process an important query while another process using far more resources is just some WTF memory tester the foolish apprentice launched half a minute ago. In this case it would typically be better to kill the memory test application than to even suspend the database server. As I take it, the OOM killer employs some heuristics to decide which is the most wasteful yet least valuable process, for example unprivileged processes are selected more eagerly than privileged ones, short-running processes are selected more eagerly than long-running ones etc. These heuristics can never be perfect but they appear to me to be no less sensible than your suggestion of just freezing the last guy to perform a write operation to a previously unused page. The bottom line is that overcommit and the OOM killer are evils, but in the current state of affairs they are necessary evils.
Admin
Admin
Captcha: nulla. Nice one.
Admin
Everything frees memory when the program ends; the only issue is for programs that run for long periods of time. The practical upshot is, if you're writing a little script or a batch that runs at irregular intervals, you don't need to worry about your memory, because when it ends, it's reclaimed.
The only exception to this rule is the occasional forked zombie process. Make sure your forked processes are self-terminating.
Admin
Admin
actually, the hard drive manufacturers are part of the minority who do it correctly. 1MB is exactly 10^6 B = 1000^2 B = 1000000 B. M is a well-known SI prefix, used and accepted everywhere, with one definite meaning. The fact that IT guys wrongly use M to mean 1024*1000, 1024^2 or anything else doesn't make it any better.
1024^2, or 2^20, has its own binary (IEC) prefix: Mi.
if you choose to ignore international standards and help preserve inconsistency and confusion, please do so, but stop spreading misinformation.
Admin
So ... how do y'all test whether malloc is working correctly, or whether an out-of-memory handler works correctly, if not something similar to this method?
We've done the same thing in embedded systems where we were trying to use a custom "operator new" and needed to check failure conditions.
The only WTFs in this posting:
Admin
Admin
TRWTF is using float to store memory size
Admin
Admin
TRWTF is assuming that a process can allocate all the available free memory. I've got XP at work, and it can allocate 2GB to a process while having 3GB of VM left over.
Admin
I'll bet you're running a 64-bit system, though ;) It's reporting the maximum amount of memory a process could theoretically allocate, not the actual amount available.
Admin
you are free to do whatever you like, as long as legislation doesn't say otherwise. however, (international) standards define what is correct and what is not. of course, you can choose to ignore the whole standard system and play your own little game because you like your own subjective views better, but this would be more like a religious decision than a scientific one.
the fact that something was different in the past (insert "science in history" analogy here) doesn't change anything either.
Admin
I remember reading an article a while back, written by a developer who tried to do this for his particular library. If I remember correctly, he said his codebase bloated by >30% and still could not handle OOM in all scenarios.
If you honestly think that handling OOM errors with anything other than an abort is possible, you need to write a serious piece of software and make it OOM proof. Then I'll listen.
Admin
Admin
Sorry, I forgot to mention that all dependencies must also handle OOM's 'properly' and you must be able to handle all error conditions for those dependencies, and they must also be bugfree.
Only then can your app be considered OOM proof.
Admin
Sigh.
void main is LEGAL in C.
Reference: ISO/IEC 9899:1999 (Also known as the ISO C Standard). Specifically, refer to sections 5.1.2.2.1.1 (definition of "main", notice the "or in some other implementation-defined manner") and 5.1.2.2.3.1 (which implies that the return type of the main function does not have to be type compatible with int).
Admin
As for the ‘reserving memory’ thing - it turns out that this is default behaviour on Windows: ‘When pages are committed, storage is allocated in the paging file, but each page is initialized and loaded into physical memory only at the first attempt to read from or write to that page.’ So it turns out I was right after all, it's just that it's the default behaviour instead of something you have to specify a special flag for, as I initially tentatively assumed.
Admin
I think that post just underscores my point.
Anyway, time to go down for maintenance. Nighty-night.
Admin
of course this whole thing is an issue of definition. do you think prefixes are "god"-given or part of nature?
ignoring the meaning and definition of such important and universally accepted symbols like the SI prefixes is unscientific. that's like if you assumed one minute had 100 seconds, just because it happens to be easier to handle for you.
regarding the "who misused the prefix first" argument: read EvanED's post, please.
Admin
No, they standardize jargon. The jargon was already standardized, so GiB was plain unnecessary.
That's fine - the standard itself is a bit religious - a fine example of a solution in search of a problem.
It does - things worked fine in the past - why fix what isn't broken.
Admin
From that statement I can tell you've never written (or tried to write) a program to deal with low/out-of-memory conditions.
It doesn't require 30% bloat. The harder part is trying to figure out what to do.
But I can tell you that handling OOM is possible. You can't really make an app OOM-proof (I'm not sure what that would mean), but you can make sure you don't crash and that you handle things gracefully.
Admin
I don't know what you're trying to prove here, but VirtualAlloc doesn't have a "default" behavior. MEM_RESERVE and MEM_COMMIT are both non-zero flags. RESERVE only reserves virtual address space; COMMIT reserves physical backing for it ("commits" it). If you reserve and commit in one call, you have to specify both flags. As long as a page is committed (physical backing is reserved), it can never cause an OOM fault. The memory manager doesn't bother to do anything with the actual page until it's first touched; only then is the page zero-initialized.
Admin
This WTF smells like:
Calvin: How do they know the load limit on bridges, Dad?
Dad: They drive bigger and bigger trucks over the bridge until it breaks. Then they weigh the last truck and rebuild the bridge.
Admin
Usually found in first attempts at using fork(), which end up with a zillion processes, right before the whole system crashes down. :)
Admin
TRWTF is that you can do that with 6 lines of code:
And you'll get the exact same failure, a few bugs in less.
Admin
thankfully, ulimit is easy to set up, so nowadays they just get an error message. They can still trash their own systems, of course.
Admin
ahh, I feel better.
Admin
The solution is to add a "sleep" for 18 months in the body of the loop. Then (if the customer keeps their technology up-to-date and running) Moore's law will keep it working. ;->
Admin
Admin
Something doesn't seem quite right with those constants...
First 102400 and then later on 1024000.
Guess it should be 1024000, but then still, why?
A megabyte = 2^20 bytes = 1048576 bytes.
Admin
Indeed. libGC is a nice garbage collector, I suppose. It should keep memory usage down to a steady 30MB or so :)
As for storing the size in floats, I don't know how many languages this has been said in already, but:
Stating the obvious FTW!
Admin
it was broken, and it still is because nobody seems to care about the standard. the issue is that the SI prefixes should have one and only one meaning, no matter where they are used. but because the CS folks like to pretend they are something special, they redefined those prefixes. now, the meaning of M, G etc. depends on context, which unnecessarily complicates things and causes inconsistency. that's what it's all about.
the reason why there are binary prefixes now is that we IT people can have a prefix too, based on powers of 2 (or 1024) instead of 10.
TRWTF is that almost everybody seems to knowingly ignore this.
Admin
OS/2 had a neat DosAlloc() which could (with one of its parameters) not only allocate the virtual memory but also ensure there was physical memory backing it (physical in terms of RAM or disk space).
Admin
Admin
Not Solaris.
Admin
I'm sorry, but I have enough confidence in Linux that I just cat the stuff into a file, compile and run it. It returns 3136.4 in some unknown, possibly buggy units. So? It returns 4185.4 on another server.
I must say that I did initialize the f variable, as gcc warned me that the value of "f" was used uninitialized.
Admin
Code like this was mildly informative about 25 years ago when:
(1) OS's did not have virtual memory.
(2) OS's did not have critical system processes.
(3) OS's did not have disk caches.
(4) 2^sizeof(int) >> 2^sizeof(char *)
But if there's virtual memory, the numbers will climb waaay up and mean nothing. A good VM system will cheerfully toss out pages you haven't used recently, which is all of them, since you're not saving "p" and periodically touching what it points to.
And even if you do get that memory, it may be REALLY SLOW memory as most of it may be out on the paging disk file. Milliseconds instead of nanoseconds.
And the OS and other programs may slow waay down or even crash if your requests cause critical stuff to get paged or even swapped out.
And on a system with 64 bit pointers it could take years to allocate 2^64 bytes of memory one megabyte at a time.
A better way is to do something like top | grep "memory free". Fast, and it does not crash things.
Admin
Admin
Admin
And then there is vmstat:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0     44  73456 170308 272036    0    0     1    16   12   30  4  0 96  0
usage: vmstat [-V] [-n] [delay [count]]
  -V prints version.
  -n causes the headers not to be reprinted regularly.
  -a print inactive/active page stats.
  -d prints disk statistics
  -D prints disk table
  -p prints disk partition statistics
  -s prints vm table
  -m prints slabinfo
  -S unit size
delay is the delay between updates in seconds.
unit size k:1000 K:1024 m:1000000 M:1048576
Admin
You don't touch the pages, but many implementations of malloc() keep pointers to the previous and next blocks just before and after the allocated area, so typically two pages will get touched and allocated. Without these pointers, malloc() and free() would have to keep a separate list of free and allocated blocks, which has its own issues.
Worse yet, some implementations, when memory is low, will chase through the whole linked list of blocks, swapping in a whole bunch of pages with maybe 16 useful bytes of info in each.