Admin
It's nice to see an actual WTF for a change.
That's what the O_CLOEXEC/FD_CLOEXEC flags are for. Though with a bunch of libraries outside your control opening files like crazy, it's indeed a problem. The hard limit is 65536 here; it used to be much less, like 8192. I don't think it's more anywhere.
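For illustration, a minimal sketch (assuming Linux; the paths are made up) of both ways to mark a descriptor close-on-exec:

```cpp
#include <fcntl.h>   // open, fcntl, O_CLOEXEC, FD_CLOEXEC
#include <unistd.h>  // close

int main() {
    // Atomic: the descriptor is never visible to a fork/exec without the flag.
    int fd = open("/tmp/example.log", O_WRONLY | O_CREAT | O_APPEND | O_CLOEXEC, 0644);
    if (fd < 0) return 1;

    // Fallback for descriptors obtained some other way:
    // set FD_CLOEXEC after the fact with fcntl (not atomic, see below).
    int other = open("/tmp/other.log", O_RDONLY);
    if (other >= 0) {
        int flags = fcntl(other, F_GETFD);
        if (flags != -1)
            fcntl(other, F_SETFD, flags | FD_CLOEXEC);
    }

    close(fd);
    if (other >= 0) close(other);
    return 0;
}
```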
The real problem is that this closed everything, including stdin, stdout, the output channel (whatever it was), the database connection, possibly the script itself, etc. Guaranteed to take the whole thing down. Presumably it got restarted (if it was a web app, the server would do just that), so it started working again after a delay, possibly having forgotten some state.
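For context, a hypothetical reconstruction of the kind of close-everything loop being described (not the actual code from the article); note that descriptors 0-2 (stdin/stdout/stderr) go down along with every socket and database connection the process holds:

```cpp
#include <unistd.h>  // close

// Hypothetical sketch of the pattern under discussion:
// walk every possible descriptor and close it, no exceptions.
void close_everything(int hard_limit = 65536) {
    for (int fd = 0; fd < hard_limit; ++fd)
        close(fd);  // fd 0/1/2 (stdin/stdout/stderr) go down with the rest
}
```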
That's what epoll is for. When you have that many file descriptors open, it should be more efficient.
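For example, a minimal epoll sketch (Linux-specific) showing one epoll instance watching an arbitrary set of descriptors:

```cpp
#include <sys/epoll.h>  // epoll_create1, epoll_ctl, epoll_wait
#include <unistd.h>     // close
#include <vector>

// Watch a (possibly very large) set of descriptors for readability.
void watch(const std::vector<int>& fds) {
    int ep = epoll_create1(0);
    if (ep < 0) return;

    for (int fd : fds) {
        epoll_event ev{};
        ev.events = EPOLLIN;
        ev.data.fd = fd;
        epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);  // register each fd once
    }

    epoll_event ready[64];
    int n = epoll_wait(ep, ready, 64, 1000);    // wake only for ready fds
    for (int i = 0; i < n; ++i) {
        // handle ready[i].data.fd ...
    }
    close(ep);
}
```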
I envy you; you are pretty much up to date. We haven't started catching up with C++11 yet because the last published compiler for Windows CE/Embedded is still MSC++ 15, that is, the fscking 2008 version. And the bug in Android gcc 4.7 and newer that they wontfix because it only affects Android < 2.3 is not helping either.
Proper RAII will change your life much more ;-), because it also works for resources owned by other objects, not just lexical scopes.
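Something like this sketch of an owning file-descriptor wrapper (C++11 move semantics assumed); whether the owner is a local variable or a member of some long-lived object, the descriptor is closed exactly once when the owner dies:

```cpp
#include <unistd.h>  // close

// Owns a descriptor; closing happens wherever the owner is destroyed.
class UniqueFd {
public:
    explicit UniqueFd(int fd = -1) : fd_(fd) {}
    ~UniqueFd() { if (fd_ >= 0) close(fd_); }

    UniqueFd(UniqueFd&& other) noexcept : fd_(other.fd_) { other.fd_ = -1; }
    UniqueFd& operator=(UniqueFd&& other) noexcept {
        if (this != &other) {
            if (fd_ >= 0) close(fd_);
            fd_ = other.fd_;
            other.fd_ = -1;
        }
        return *this;
    }
    UniqueFd(const UniqueFd&) = delete;             // no double-close
    UniqueFd& operator=(const UniqueFd&) = delete;

    int get() const { return fd_; }

private:
    int fd_;
};
```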
My biggest gripe with Java is that it is still a memory hog. And it's not the inherent overhead of the garbage collector, because other managed languages manage to be better.
256 is a discourseno¹ followed by your eyeno². See above for the actual limit. It is native int width in both kernel- and userland and always was. Technically that would have been 2 bytes back in the stone age, but in Linux it was always 4.
Admin
One step at a time. ;-)
Admin
Yes, but a year ago we were still using 2009 stuff, including Java 6. There was a lot of pain to get where we are now. So I feel your pain.
Admin
Sadly, 'in the POSIX spec' doesn't mean 'implemented on Solaris/AIX'. fcntl is useless, as you can open a file and another thread can fork between you opening the file and doing the fcntl, and the child process will inherit a handle you didn't want it to. :-(
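A sketch of the window in question, assuming some other thread may call fork() at any moment:

```cpp
#include <fcntl.h>   // open, fcntl
#include <unistd.h>  // fork (in the other thread), close

// Without O_CLOEXEC, the flag has to be set in a second step, and a fork()
// from another thread can land between the two calls.
int open_cloexec_racy(const char* path) {
    int fd = open(path, O_RDONLY);          // <-- another thread forks here:
    if (fd < 0) return -1;                  //     the child inherits fd with
                                            //     no close-on-exec flag set
    int flags = fcntl(fd, F_GETFD);
    if (flags != -1)
        fcntl(fd, F_SETFD, flags | FD_CLOEXEC);  // too late for that child
    return fd;
}
```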
Admin
Discourseno and Disconumber added to Discopedia.
Admin
Hm, reading it again I think it should have been “discourseo” (and “eyeo”), because the ‘n’ in “braino” comes from “brain”, not from “typo”. And it is not meant to be specific to numbers; “discourseo” is simply any mistake caused by discourse dropping semantically significant markup from quotes.
Edit: Ah, I see, it makes sense to make a misquoted number a “discourseno” and a general misquote a “discourseo”.
Admin
If that's a problem, then you should limit disk buffers, not open files. That's like saying "I limit the number of spoons in the drawer so my roommates can't eat all the ice cream in my freezer."
Admin
The things I've seen recently with a lot of files open were all big programs that used a lot of libraries; they had one FD per library file, used for bringing the library into memory. (The exact way they did that depended on the language.) Some asset-hungry GUI apps get very large that way, and closing all the FDs would be an “interesting” way to sabotage them.
Admin
No, it's the handles that are a problem. Creating handles allows a process to consume kernel memory, which is unswappable and not counted towards the process's memory consumption for the purposes of the OOM killer. Opening too many file handles could take the system down without generating any disk activity. So it is file handles, and not anything else, that need limiting.
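For reference, a small sketch of querying and lowering that per-process limit with the standard POSIX getrlimit/setrlimit calls:

```cpp
#include <sys/resource.h>  // getrlimit, setrlimit, RLIMIT_NOFILE
#include <cstdio>

int main() {
    rlimit rl{};
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) return 1;
    std::printf("soft limit: %llu, hard limit: %llu\n",
                (unsigned long long)rl.rlim_cur,
                (unsigned long long)rl.rlim_max);

    // Lower the soft limit for this process; raising the hard
    // limit requires privileges.
    rl.rlim_cur = 1024;
    setrlimit(RLIMIT_NOFILE, &rl);
    return 0;
}
```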