• Rob (unregistered)

    Just because the inner try-finally block doesn't have any except clauses doesn't mean that exceptions are swallowed. If an exception occurs when writing the message, the exception will escape the inner try-finally block and be caught by the outer except block. In other words - whether an exception is thrown when opening the file or when writing to the file, the exception is caught and printed. "no error is reported" is therefore not a correct statement. You just have to check the console output.
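
    A runnable sketch of that flow (OSError standing in for the failed write):

        try:
            try:
                raise OSError("disk full")     # stands in for a failing fh.write()
            finally:
                print("finally still runs")    # cleanup happens, but the error escapes
        except OSError as e:
            print("outer except caught:", e)   # ...and gets reported out here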

  • (nodebb)

    if we succeed in opening the file but fail to write to it - the logging message is lost and no error is reported.

    This is surely not true. Doesn't the uncaught exception from the try ... finally propagate to the outer try ... except?

    Addendum 2023-05-23 07:28: Well, ok the logged message is lost but an error will be reported.

  • (nodebb)

    I have to get back to implementing logging myself - using a set of well-tested and well-understood libraries.

    Sad to say this, but this is no defence. After all, most folks thought that log4j was well-tested and well-understood, and most of us know what happened to it.

  • Jaloopa (unregistered) in reply to Steve_The_Cynic

    The only answer is to not log anything then

  • LZ79LRU (unregistered) in reply to Steve_The_Cynic

    It's all about guilt distribution. You own the code you write and are responsible when, not if, it fails. With libraries it's always someone else's fault, so when, not if, they fail, you won't get fired. You'll just have to refactor to use another library.

  • Charles (unregistered)

    Guilty, guilty, guilty. Still paying the price for it months later. Sigh. I did eventually discover https://docs.python.org/3.8/library/logging.html?highlight=log#module-logging, but only after I thought to look for it. Sigh. Lesson painfully learned.

  • Scragar (unregistered)

    The requirement that the input in an absolute path is mostly a choice

    Surely that should be "the input IS an absolute path"?

  • MRAB (unregistered)

    It was written for Python 2 (you can tell from the print statements), and that reached end of life in January 2020. It won't run on Python 3, and, I hope, won't be ported.

  • (nodebb)

    sigh. You think it's rare for people to write code that doesn't behave usefully when the disk fills up? ahaha. ahah hahah ahahah.

    And don't get me started on people who expect 'do_command > file' to give them a usable file when the disk is full, just because it doesn't return an error.

  • Pauller (unregistered)

    I used to do a lot of C work in the DOS and early Windows days. There's a lot of value in opening / writing / closing all at once to ensure the log would get written on a program crash. I used to try opening at start and leaving it open, but on a crash, the log would lie about how far it got. Open / write / close was needed to get around that. What I worked on wasn't time critical, and it ran on an unattended box, so having dependable logging was a must. None of that applies in the present day though :)

  • Barry Margolin (github)

    Unless the message is really long, f.write() won't get an error due to the disk filling up. File output is buffered, so this will just write to the buffer. It won't be flushed until we call f.close(), which is in the finally block, not the inner try.

    So the inner try will only detect an error trying to open the file. If that happens, fh won't be set, so fh.close() will itself raise an UnboundLocalError for the never-assigned variable.

    So everything is actually caught by the outer try. The only point of the inner try is to ensure that the file is closed. But the right way to do that is a with statement: with open(self.LogFileName, "a") as fh:.

    Oh, this code has another one of my pet peeves: Using the + modifier of the opening mode when not needed.
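
    Putting both fixes together, roughly (a sketch; the class and method names are assumed, since the article's code isn't fully shown):

        import traceback

        class Logger:
            LogFileName = "app.log"   # assumed; the original presumably configures this

            def WriteLog(self, message):       # hypothetical method name
                try:
                    # "a" appends without the needless "+", and the with block
                    # guarantees close() runs even if write() raises
                    with open(self.LogFileName, "a") as fh:
                        fh.write(message + "\n")
                except Exception:
                    traceback.print_exc()      # report the failure rather than swallow it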

  • markm (unregistered)

    Perhaps the reason for closing and reopening the file is to flush the buffers. Otherwise when the system crashes, most likely the logged messages leading up to the crash were lost in the buffers. But isn't there a function to just flush the buffers without incurring the cost of closing and reopening?
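
    Addendum: there is - fh.flush() pushes Python's buffer to the OS, and os.fsync() pushes the OS cache to disk. A minimal sketch with the file held open (path is made up):

        import os

        fh = open("app.log", "a")    # opened once and kept open

        def log(message):
            fh.write(message + "\n")
            fh.flush()               # flush Python's userspace buffer to the OS
            os.fsync(fh.fileno())    # force the OS to commit its cache to disk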

  • Randal L. Schwartz (github) in reply to jeremypnet

    Well, ok the logged message is lost but an error will be reported.

    Uh, ok, where is this error going to be reported? Does it use the same function? :)

  • (nodebb)

    Seems the biggest wtf is not doing import logging...
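
    Something like this sketch replaces the whole hand-rolled class (the filename is an assumption):

        import logging

        logging.basicConfig(
            filename="app.log",      # assumed log path
            filemode="a",            # append, like the original code
            format="%(asctime)s %(levelname)s %(message)s",
            level=logging.INFO,
        )

        logging.info("backup started")   # hypothetical message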

  • (nodebb)

    It may not appear very efficient to open the file for appending each time and eventually closing it, but the latter is key to a complete log in case the application crashes. Closing a file flushes all buffered content into the file immediately, whereas keeping the file open collects write accesses until the buffer is full, and only then writes them out. What good is buffering log entries when, in case of a crash, the most significant information never makes it into the log file? And in environments like Windows you will have a hard time watching a log while it is still being written, because the owning application blocks any access to it by keeping the file open all the time. So the permanent opening and closing is a compromise, but often for a good reason.

    IMHO, in development it may save a lot of troubleshooting time to have an extensive log that follows everything that happens. It should be completely different in production, however, unless there is an error situation. In such a case it is certainly tolerable to sacrifice a bit of performance for a more complete log, but the application should allow configuring which messages are logged, and also the file access behaviour when anything is logged, so that in normal operation logging does not become a performance bottleneck.

    TRWTF is probably that the developer(s) didn't just pick a well-established logging framework and let it do the heavy lifting.
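
    For what it's worth, the stdlib already handles both halves of that compromise: logging.FileHandler flushes after every record, and the verbosity can be made configurable. A sketch, with the path and environment variable assumed:

        import logging
        import os

        logger = logging.getLogger("app")          # assumed logger name
        handler = logging.FileHandler("app.log")   # flushes after each record it emits
        handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
        # verbose in development, quiet in production, chosen at startup:
        logger.setLevel(os.environ.get("LOG_LEVEL", "WARNING"))   # hypothetical env var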

  • (nodebb) in reply to Steve_The_Cynic

    "After all, most folks thought that log4j was well-tested and well-understood"

    But log4j really was well-tested and well-understood. It just did a lot more than people expected and bargained for. Someone exploited a feature, not a bug, albeit in a "very creative" way.

  • (nodebb) in reply to jeremypnet

    the logged message is lost

    No, it isn't. Python exceptions are weird in that they're also written to variables in the sys module. The traceback printer fishes the info out of there. It's ugly, but the code doesn't commit the cardinal sin of terminal exception handling: the failure isn't lost. (Just writing it to stdout or stderr isn't great, but it beats total loss.)

    The code is just ugly, very very unsophisticated, and duplicates stuff Python already does.
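
    Concretely, while the exception is being handled the details are reachable through sys.exc_info() (a minimal sketch):

        import sys

        try:
            raise ValueError("boom")                   # any failure
        except ValueError:
            exc_type, exc_value, tb = sys.exc_info()   # also stashed in the sys module
            print(exc_type.__name__, exc_value)        # what the traceback printer shows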

  • Michael Campbell (unregistered) in reply to JoeMs2018

    Open-write-close is a very common paradigm, not only for the app-crash issue as you correctly note, but for the ability to move the log file while the process is running. On unixy systems this is possible when doing open-write-close. If it's just one open-write-write-write..., then moving the log file, whether for rotation (if logrotate is not configured to understand this) or for any other reason, leaves the same FD open and writing to the same file even after it has been moved.
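
    On a unixy system the FD behaviour is easy to demonstrate (a sketch; file names made up):

        import os

        fh = open("app.log", "a")             # the FD now refers to the inode, not the name
        os.rename("app.log", "app.log.1")     # "rotate" the file out from under it
        fh.write("this line lands in app.log.1\n")   # the held FD follows the moved file
        fh.close()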

  • IPGuru (unregistered)

    Fortunately I found the Python logging module before I started actively logging a real project to disk, so I never wrote my own. Always check the std library: Guido's time machine means that what you require has almost certainly been created.

  • (nodebb) in reply to Ralf

    “But log4j really was well-tested and well-understood. It just did a lot more than people expected and bargained for.”

    I am not entirely sure then that it classes as “well-understood”. I was shocked by the craziness of the functionality that was left open to abuse.

  • (nodebb)

    I came across (another, sigh) flaw in PowerShell from this exact behaviour. (I feel so alone in hating PowerShell and thinking that it is a complete disaster of a language, but am I really the crazy one?)

    See, if you issue a sequence of instructions in PowerShell to append lines to a log file, it internally uses open-write-close for each instruction. This leaves it wide open to hitting a sharing violation against itself: it can’t open the file back up again because (in a way that only Windows can) it hasn’t managed to release the file yet from the last write.

    PowerShell seems designed to force that behaviour, although no doubt you could mess around with .NET classes to achieve better file handling. It’s just that every time you do that, you’re mixing up models and syntax again and revisiting the question of what the point was of creating a language that can’t do anything by itself …

    (That Python code BTW is horrible, all full of leftover C bits, and rubbish like joining strings with “+” as though they never learnt anything from JavaScript’s mistakes. It’s as though they really took Monty Python to heart, not just borrowing the name …)

  • (nodebb)

    People recommending the open-write-close dance on anything but Windows here are blissfully unaware of the overhead such a simple operation can have in distributed systems, where the corresponding permission checks can easily surpass writing some log data in terms of network roundtrips. Also, the proper way (fflush and fsync on Unix) has already been alluded to and works just as well in distributed systems (unless deliberately configured to cheat). The preferred method for logrotate is to notify the original writer(s) of the change and archive later; every other method is inherently racy or costly.
