• Ginssuart (unregistered)

    I assume Files.deleteIfExists(filePath) returns true, false or FileNotFound

    btw, frist.

  • Michael R (unregistered)

    Comments.deleteThisIfFristExists();

  • Robin (unregistered) in reply to Ginssuart

    This made me chuckle way more than the code and article itself! Take a bow :)

  • Lothar (unregistered)

    Looks like some migration from java.io.File to java.nio.file.Files where you've had

    if (file.exists()) { file.delete(); }

    changed 1:1 first to

    if (Files.exists(filePath)) { Files.delete(filePath); }

    and then to the final version after the compiler complained that Files.delete throws an exception that must be caught or declared in the method signature.
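The migration Lothar describes ends up with a redundant guard: a minimal sketch (class and variable names are illustrative, only `filePath` comes from the article's code) showing why the exists() check adds nothing once deleteIfExists is in play:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DeleteMigration {
    public static void main(String[] args) throws IOException {
        Path filePath = Files.createTempFile("demo", ".txt");

        // The exists() guard is redundant: deleteIfExists() already
        // tolerates a missing file, and checking first is racy anyway.
        boolean deleted = Files.deleteIfExists(filePath);
        System.out.println(deleted);                        // true: the file was there

        // Second call: nothing to delete, so it returns false instead of throwing
        System.out.println(Files.deleteIfExists(filePath)); // false
    }
}
```

Note that deleteIfExists still declares IOException, so the catch/declare the compiler demanded is needed either way; it just no longer fires for the missing-file case.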

  • (nodebb)

    There should be a try/catch at the end of this block that checks for the file and throws a "File Not Deleted" exception which, of course, needs to be captured and swallowed with no log entry, no UI message, and no action taken.

    Or, maybe the "File Not Deleted" exception should create the file and then re-run the delete. Y'know - to make sure it really gets deleted!

  • (nodebb)

    "Or, maybe the "File Not Deleted" exception should create the file and then re-run the delete. Y'know - to make sure it really gets deleted!" The file equivalent to an "Upsert?"

  • 516052 (unregistered)

    What it should do is create the file silently in a different location. Preferably one that looks deceptively like the original one like an adjacent folder. Or alternatively in documents or appdata. That way it breeds both confusion and problems for future maintainers.

  • (nodebb)

    What it really needs is user confirmation -- "Do you really want to delete this file?", followed by "Are you really sure you want to delete this file?"

  • D Boone (unregistered)

    Way, way back in the early days of Microsoft Foundation Classes (MFC), trying to delete a file that didn't exist threw an exception which the delete code caught and then translated into the error code. But that exception handling was very expensive, it caused a long pause while waiting for MFC to do its thing, so we did that kind of test to see if the file existed before trying to delete it. (As a bonus, we didn't try to use the MFC File.Exists(), we asked for the file's attributes because File.Exists() did a FindFirst/FindNext even for just one file. The good old days.)

  • r (unregistered)

    if (Files.exists(filePath)) { Files.deleteIfExists(filePath); }
    if (Files.exists(filePath)) { System.err.println("Didn't work! File still exists!!!"); }

    ... THERE!!! Fixed it for you. After all, you can never be too careful. Or too redundant! :-D

  • (nodebb)

    On Linux, deleting a file doesn't actually delete the data if the file is still open by a process or has more than one link to it. Deleting files is more complicated than is obvious, but still...

  • (nodebb) in reply to Rick

    Careful. You have made a statement that is simultaneously too-restrictive and too-inclusive.

    Specifically:

    It's too restrictive because it isn't just Linux, but almost any UNIX-ish operating system, including all the BSDs, Solaris, AIX, etc. YMMV on Microware's OS9, which is very vaguely UNIXish.

    It's too inclusive, because you ignored the point that filesystems like FAT12, FAT16, FAT32 and exFAT don't allow that, even on UNIXishes. (They don't support hard links, among other things.)

    And it's also worth noting that NTFS also has something very, very like hard links...
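The hard links mentioned above (whether on a UNIXish filesystem or NTFS) can be created portably from Java via Files.createLink; a minimal sketch (file names are made up for illustration; this fails with an exception on filesystems like FAT that lack hard-link support):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class HardLinkDemo {
    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("links");
        Path original = dir.resolve("original.txt");
        Files.writeString(original, "shared data");

        Path second = dir.resolve("second-name.txt");
        Files.createLink(second, original);   // second directory entry, same inode

        Files.delete(original);               // one name gone; link count drops 2 -> 1
        System.out.println(Files.readString(second)); // data still reachable
    }
}
```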

  • Officer Johnny Holzkopf (unregistered) in reply to Llarry

    Schrödinger's Cat is asking: Are you really really, I mean REALLY sure you want to delete the file that might not even exist? - Yes, no, all.

  • (nodebb)

    UNIX systems employ a two-phase delete system. When you unlink a file, the directory entry to that file's inode is deleted and the inode's hard link count is reduced by one. If the count is zero, it means that file inode no longer has any references and it's a candidate for deletion. The kernel then checks the file's reference count (the number of processes that have that file open), stored in the file inode's in-memory kernel counterpart. If there are still references to that file, the file cannot be reclaimed. The instant the last reference is gone, then the kernel reclaims the disk blocks used by the file as well as the file inode.

    And systems like Linux which support filesystems like FAT do support two-phase delete as well. It's just that the hard link count is fixed at 1 because those filesystems treat multiple references to the same FAT chain as an error. But it is possible to create hard links on FAT filesystems, accidentally or on purpose. It's just that FAT doesn't really support it and lacks the infrastructure to handle it sanely.

    Windows doesn't implement two-phase deletes - it immediately deletes the file. This is semantics it inherits from MS-DOS where open files are locked. (Not quite MS-DOS, but more MS-DOS SHARE.EXE which handles opening files over a network and the sharing of such files across a network). For compatibility reasons Windows remains like this.
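The two-phase behaviour described above is observable from Java itself: a minimal sketch, assuming POSIX semantics (on Windows the delete may fail or behave differently because open files are locked, as noted above):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class UnlinkDemo {
    public static void main(String[] args) throws Exception {
        Path p = Files.createTempFile("unlink-demo", ".tmp");
        Files.writeString(p, "still readable");

        try (InputStream in = Files.newInputStream(p)) {
            Files.delete(p);                      // phase 1: directory entry removed
            System.out.println(Files.exists(p));  // false: the name is gone
            // ...but the inode survives until the last open reference closes,
            // so the already-open stream still sees the data.
            System.out.println(new String(in.readAllBytes()));
        }   // phase 2: last reference closed, blocks reclaimed
    }
}
```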

  • (nodebb)

    Incidentally, it's why the lost+found directory exists as well. This directory is created by fsck and contains directory entries to file inodes which were orphaned - that is, there are file inodes that have no directory references to them. This happens because the two-phase delete operation happens solely in system RAM - when you unlink a file, the directory entry to that file is removed, and the inode hard link count is reduced by 1. Both of those operations are then committed to disk. But the disk block reclamation only happens once the file reference count drops to 0. If between now and then the kernel panics or the computer loses power, then you have a bunch of file inodes that have 0 hard link counts so they were in the middle of being deleted, but the computer didn't have a chance to finish the operation. Since programs could use those files for various purposes, fsck then creates a directory entry for them (and increments the hardlink count) in case you might be able to do some data recovery with it.

    Remember, programs take advantage of two-phase by creating a file read-write and opening it, then they delete it. They then use the file as a buffer for various operations, and if the program crashes or it exits normally then the file blocks are reclaimed. It's entirely possible that if you lost data, it might have been in one of these temporary files so you can go through lost+found.

    The reference count is held in RAM as it's not needed when the OS isn't running, so the entirety of the two-phase delete system requires the system to still be running when the last file handle to the file is closed. (On shutdown, as processes terminate this happens so the kernel will reclaim disk blocks after the user space is torn down).
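The create-open-delete scratch-file trick mentioned above is common enough that java.nio exposes it directly as StandardOpenOption.DELETE_ON_CLOSE; a minimal sketch (names are illustrative):

```java
import java.nio.ByteBuffer;
import java.nio.channels.SeekableByteChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ScratchFile {
    public static void main(String[] args) throws Exception {
        Path p = Files.createTempFile("scratch", ".bin");
        try (SeekableByteChannel ch = Files.newByteChannel(p,
                StandardOpenOption.READ, StandardOpenOption.WRITE,
                StandardOpenOption.DELETE_ON_CLOSE)) {
            // The channel works as a private buffer even though the
            // name may already be unlinked (the JDK unlinks eagerly on Unix).
            ch.write(ByteBuffer.wrap("buffered work".getBytes(StandardCharsets.UTF_8)));
            ch.position(0);
            ByteBuffer buf = ByteBuffer.allocate(32);
            int n = ch.read(buf);
            System.out.println(new String(buf.array(), 0, n, StandardCharsets.UTF_8));
        }
        System.out.println(Files.exists(p)); // false: blocks reclaimed on close
    }
}
```

If the JVM dies mid-run, the kernel (not the program) reclaims the blocks, which is exactly why crashes can leave such files for fsck to sweep into lost+found.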

  • Kotarak (unregistered)

    The real WTF is that idempotent operations can fail.

  • (nodebb) in reply to Steve_The_Cynic

    Deleting files is more complicated than is obvious, but still... That line was correct. :-)

    I like the idea that on a unix like file system that I can continue to grow a file using up all space on a filesystem without that file having a name in any directory.

  • 516052 (unregistered) in reply to Rick

    Let he who has never done that as a prank on someone's system cast the first stone.

  • TechHound (unregistered) in reply to Steve_The_Cynic

    Steve_The_Cynic:

    And it's also worth noting that NTFS also has something very, very like hard links...

    NTFS does indeed have its own version of hard links, which have been part of that file system since at least Windows 2000, but there were no built-in tools until Vista introduced mklink with its /H option. Before that, Link Shell Extension (https://schinagl.priv.at/nt/hardlinkshellext/hardlinkshellext.html) was the main option for managing hard links (plus symbolic links and junctions) from Explorer, along with its command-line counterpart, ln.exe (https://schinagl.priv.at/nt/ln/ln.html), which is much more powerful than mklink; both still work well on modern Windows.

    Rick:

    I like the idea that on a unix like file system that I can continue to grow a file using up all space on a filesystem without that file having a name in any directory.

    It will still be in the lost+found in the root of the containing file system.

  • (nodebb) in reply to TechHound

    TechHound. The deleted file might end up in lost+found after a system crash, but even then the name is gone. Deleting files is more complicated than is obvious.

  • Officer Johnny Holzkopf (unregistered) in reply to Rick

    The entries in lost+found/ are named with the inode number, as the associated original file name would have been stored in the parent's inode information, and that information has already been gone. However, data recovery can still be performed, and in many cases, the "file" program and human memories of potential file content can be a great help, if needed.

  • (nodebb) in reply to Officer Johnny Holzkopf

    As far as I know, file names are never kept in inodes. It has been ages since I have found any files in lost+found. I use Ubuntu and ext4. What filesystems are you using where you see files in lost+found?

  • (nodebb) in reply to Rick

    It has been ages since I have found any files in lost+found. I use Ubuntu and ext4.

    The only time I can remember finding anything in there was on a Linux server in the mid 2000s, where someone had forced an fsck run on /dev/hda rather than /dev/hda2 and it ran partway before they realized the mistake.

    In the end there were a couple hundred orphaned files and directories in lost+found.

Leave a comment on “Repeating Your Existence”