• (disco) in reply to VinDuv

    It's nice to see an actual WTF for a change.

    VinDuv:
    when trying to exec(3) another one in a clean environment

    That's what the O_CLOEXEC/FD_CLOEXEC flags are for. Though with a bunch of libraries outside your control opening files like crazy it's indeed a problem.
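    A minimal sketch of what that looks like in practice (assuming Linux or any POSIX.1-2008 system): passing `O_CLOEXEC` to `open` sets the close-on-exec flag atomically, so there is no window for another thread to `fork` between the `open` and a separate `fcntl` call.

    ```cpp
    // Sketch: set close-on-exec atomically at open() time.
    #include <fcntl.h>
    #include <unistd.h>
    #include <cassert>

    int main() {
        // O_CLOEXEC marks the descriptor in the same syscall that creates it,
        // avoiding the open()-then-fcntl() race with a concurrent fork().
        int fd = open("/dev/null", O_RDONLY | O_CLOEXEC);
        assert(fd >= 0);

        // Verify: FD_CLOEXEC is set, so exec() in any child closes this fd.
        int flags = fcntl(fd, F_GETFD);
        assert(flags != -1 && (flags & FD_CLOEXEC));

        close(fd);
        return 0;
    }
    ```

    The same flag exists for `pipe2`, `socket` (`SOCK_CLOEXEC`), and friends, which is what makes the "clean environment at exec" approach workable without sweeping every descriptor.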

    VinDuv:
    TRWTF is the 263 FD limit.

    The hard limit is 65536 here. It used to be much less, like 8192. I don't think it's more anywhere.
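    For reference, the limit being discussed is the per-process `RLIMIT_NOFILE`, which any process can query; a small sketch (the 65536 figure above is the hard ceiling on the poster's system, not a universal value):

    ```cpp
    // Sketch: query the per-process file-descriptor limit (RLIMIT_NOFILE).
    #include <sys/resource.h>
    #include <cstdio>

    int main() {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) == 0) {
            // rlim_cur is the soft limit (commonly 1024 by default);
            // rlim_max is the hard ceiling the soft limit can be raised to.
            std::printf("soft=%llu hard=%llu\n",
                        (unsigned long long)rl.rlim_cur,
                        (unsigned long long)rl.rlim_max);
        }
        return 0;
    }
    ```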

    The real problem is that this closed everything. Including stdin, stdout, the output channel (whatever it was), the database connection, possibly the script itself, etc. That's guaranteed to take the whole thing down. Presumably it got restarted (if it was a web app, the server would do just that), so it started working again after a delay, possibly having forgotten some state.

    Jerome_Grimbert:
    I use select with increased fd range

    That's what epoll is for. When you have that many file descriptors open, it should be more efficient.
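    A minimal `epoll` sketch (Linux-only; the BSDs use `kqueue` instead) watching one end of a pipe. Unlike `select`, which scans the whole fd set on every call and tops out at `FD_SETSIZE`, `epoll` registers descriptors once and reports only the ready ones:

    ```cpp
    // Sketch: epoll watching the read end of a pipe for readability.
    #include <sys/epoll.h>
    #include <unistd.h>
    #include <cassert>

    int main() {
        int fds[2];
        assert(pipe(fds) == 0);

        int ep = epoll_create1(0);
        assert(ep >= 0);

        // Register the read end once; epoll_wait() then costs O(ready fds),
        // not O(all registered fds) as with select()'s bitmask scan.
        struct epoll_event ev = {};
        ev.events = EPOLLIN;
        ev.data.fd = fds[0];
        assert(epoll_ctl(ep, EPOLL_CTL_ADD, fds[0], &ev) == 0);

        // Make the read end ready, then collect the event.
        assert(write(fds[1], "x", 1) == 1);
        struct epoll_event out;
        int n = epoll_wait(ep, &out, 1, 1000);
        assert(n == 1 && out.data.fd == fds[0]);

        close(fds[0]); close(fds[1]); close(ep);
        return 0;
    }
    ```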

    boomzilla:
    framework to catch up.

    I envy you. You are pretty much up to date. We haven't even started catching up with C++11 yet, because the last published compiler for Windows CE/Embedded is still MSC++ 15, that is, the fscking 2008 version. And the bug in Android gcc 4.7 and newer that they wontfix because it only affects Android < 2.3 isn't helping either.

    accalia:
    They'll change your life and it won't take very long to learn about them.

    Proper RAII will change your life much more ;-). Because it also works for resources owned by other objects, not just lexical scopes.
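    To illustrate the point about RAII working beyond lexical scopes, here is a small sketch (the `Fd` wrapper class is hypothetical, just for illustration): the descriptor is released whenever its owner is destroyed, whether that owner is a local variable, a member of another object, or an element of a container.

    ```cpp
    // Sketch: RAII for a non-memory resource — a self-closing file descriptor.
    #include <fcntl.h>
    #include <unistd.h>
    #include <cassert>
    #include <cerrno>

    class Fd {
        int fd_;
    public:
        explicit Fd(int fd) : fd_(fd) {}
        ~Fd() { if (fd_ >= 0) close(fd_); }
        Fd(const Fd&) = delete;             // forbid copies: no double-close
        Fd& operator=(const Fd&) = delete;
        int get() const { return fd_; }
    };

    int main() {
        int raw;
        {
            Fd f(open("/dev/null", O_RDONLY));
            raw = f.get();
            assert(raw >= 0);
        }   // f destroyed here; the fd is closed even if an exception unwinds

        // The descriptor is gone: fcntl() on it now fails with EBADF.
        assert(fcntl(raw, F_GETFD) == -1 && errno == EBADF);
        return 0;
    }
    ```

    Because the cleanup lives in the destructor, any object holding an `Fd` as a member gets correct release for free when it is itself destroyed, which is exactly what plain scope-exit idioms can't give you.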

    blakeyrat:
    Except the JVM, that's still just as broken as ever.

    My biggest grief with Java is that it is still a memory hog. And it's not the inherent overhead of the garbage collector, because other managed languages manage to do better.

    EatenByAGrue:
    256-limit

    256 is a discourseno¹ followed by your eyeno². See above for the actual limit.

    EatenByAGrue:
    bump it up to 2 bytes

    It is native int width in both kernel- and userland and always was. Technically that would have been 2 bytes back in the stone age, but on Linux it was always 4.

  • (disco) in reply to Bulb
    Bulb:
    Proper RAII will change your life much more . Because it also works for resources owned by other objects, not just lexical scopes.

    one step at a time. ;-)

  • (disco) in reply to Bulb
    Bulb:
    I envy you. You are pretty much up to date.

    Yes, but a year ago we were still using 2009 stuff, including Java 6. There was a lot of pain to get where we are now. So I feel your pain.

  • (disco) in reply to dkf

    Sadly, 'in the POSIX spec' doesn't mean 'implemented on Solaris/AIX'. fcntl is useless, as another thread can fork between you opening the file and doing the fcntl, and the child process will inherit a handle you didn't want it to :-(

  • (disco) in reply to Bulb
    Bulb:
    discourseno

    Discourseno and Disconumber added to Discopedia.

  • (disco) in reply to HardwareGeek

    Hm, reading it again I think it should have been “discourseo” (and “eyeo”), because the ‘n’ in “braino” comes from “brain”, not from “typo”. And it is not meant to be specific to numbers; “discourseo” is simply any mistake caused by discourse dropping semantically significant markup from quotes.

    Edit: Ah, I see, it makes sense to make misquoted number a “discourseno” and general misquote a “discourseo”.

  • (disco) in reply to EatenByAGrue
    EatenByAGrue:
    1. Let's say you have a process that opens up 400,000 file descriptors and creates so much disk buffer activity that an admin can't even log into the system to correct the problem.

    If that's a problem, then you should limit disk buffers, not open files. That's like saying "I limit the number of spoons in the drawer so my roommates can't eat all the ice cream in my freezer."

  • (disco) in reply to immibis_
    immibis_:
    If that's a problem, then you should limit disk buffers, not open files. That's like saying "I limit the number of spoons in the drawer so my roommates can't eat all the ice cream in my freezer."

    The things that I've seen have a lot of files open recently were all big programs that used a lot of libraries; they had one FD per library file, used for bringing the library into memory. (The exact way they did that depended on the language.) Some asset-hungry GUI apps get very large that way, and closing all the FDs would be an “interesting” way to sabotage them.

    :close_all_the_files.meme.jpg:

  • (disco) in reply to immibis_
    immibis_:
    EatenByAGrue:
    1. Let's say you have a process that opens up 400,000 file descriptors and creates so much disk buffer activity that an admin can't even log into the system to correct the problem.

    If that's a problem, then you should limit disk buffers, not open files. That's like saying "I limit the number of spoons in the drawer so my roommates can't eat all the ice cream in my freezer."

    No, it's the handles that are a problem. Creating handles allows a process to consume kernel memory, which is unswappable and not counted towards the process's memory consumption for the purposes of the OOM killer. Opening too many file handles could take the system down without generating any disk activity. So it is file handles, and not anything else, that need limiting.

Leave a comment on “A Small Closing”
