• Capitalist (unregistered) in reply to jay
    jay:
    fjf:
    Haven't we had this discussion some weeks ago? AFAIR, the issues mentioned back then (like multiple files with the same (base) name) were waved away rather than solved by the proponents of your approach.

    Here are two more issues:

    • Searching speed. This in particular affects the (main) executables. When the user types "foo", the traditional Unix (and, as I gather, also Windows) approach only has to search a few directories (/usr/bin etc. on Unix). When each package installs its binaries in its own place, the system has to search all of them.

    • Heterogeneous networks: Though surely not as important anymore today, this is one of the classic reasons for the Unix file system design: Architecture-independent stuff (documentation, data) goes under /usr/share, so it can be shared on a network, whereas /usr/bin, /usr/lib etc. can only be shared across machines of the same architecture.

    Search speed: That depends on how you search for executables. In Windows and Linux GUIs, you don't type in an app name and the OS searches a couple of directories for it. You have icons or shortcuts for all your apps that include the full path to the executable. There is no search so it's a non-issue.

    Oh yeah, and all scripts include the full path to all programs used (and the path may include version numbers, may be translated, or may be arbitrarily renamed by the user). That would be fun.

    IMHO, each app should have its own directory. Then have one central, shared place where we list all the apps, that would basically have just an app name, the path to the executable, and a path to an icon. I think that would be it. Maybe some security-related info or some such. So yes, an install would have to update that central list. But that would be far simpler than the many places that an install updates today.
    Even the latter bit is doubtful. For an (un)installer it's generally easier to place/remove a file in a unique location (/usr/bin/foo) than to add/remove an entry in a central list, which at least involves some amount of locking and synchronization. In fact, most Linux distributions have broken up many files in /etc that would otherwise have to be written by several different packages into directories. E.g., instead of a single crontab that every package needing a cron job writes itself into (and removes itself from when uninstalled), there's now a directory where each package that needs it creates/removes a file with a unique name (usually the package name).
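    To make that concrete, here's a rough Python sketch of the drop-in idea (the /etc/cron.d path and the package names are purely illustrative, not any particular distribution's tooling): installing means writing one file named after the package, uninstalling means deleting it, and nothing ever has to edit a shared file.
        from pathlib import Path

        DROPIN_DIR = Path("/etc/cron.d")  # illustrative drop-in directory

        def install_cron_entry(package: str, entry: str) -> None:
            # Each package owns exactly one file, named after itself,
            # so an install never touches another package's data.
            (DROPIN_DIR / package).write_text(entry + "\n")

        def remove_cron_entry(package: str) -> None:
            # Uninstalling is just deleting that one file.
            (DROPIN_DIR / package).unlink(missing_ok=True)

        # e.g. install_cron_entry("foo", "0 3 * * * root /usr/bin/foo --cleanup")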
    RE networks: Sure, some files should be accessible to anyone and others only to certain users. But it's hard to see why some pieces of a single app need to be available to different users than other pieces. Why would someone need to access the documentation for an app if they can't run the app?
    That's not really what I said, read it again. E.g., take an application, perhaps a flight simulator, with a relatively small executable but huge amounts of data that you want to use on a heterogeneous network, i.e. a network of different architectures, let's say they're diskless machines with a central file server. You'd want to share the data for all the machines, but need different executables for the different architectures. With the "classical" directory layout, that's easy: have /usr/share shared, and a different /usr/bin and /usr/lib for each architecture.
    But I wouldn't separate main executable from libraries from help screens from configuration data from preferences etc etc.
    This wasn't about separating executables from libraries (the reason to separate them is search speed: you don't want to search all the libraries when looking for an executable), but about separating help texts like other data, as I described. Configuration data and preferences are generally per-user and therefore separate anyway.
  • Norman Diamond (unregistered) in reply to Meep
    Meep:
    Yeah, I found a site that quoted gold prices in kg, and that was 53201.77 USD, so 278.52 kg, or about 614 pounds. I like the fact that the editors highlighted the comment from the idiot who was pedantic and completely wrong.
    I like the fact that an idiot who complains about idiots who are pedantic and completely wrong is pedantic and completely wrong. In case you didn't previously know that gold ounces and pounds are troy, how many comments have already been posted to teach you? It's about 750 pounds.
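    For anyone who wants to check the arithmetic, here's a quick sketch taking the 278.52 kg quoted above at face value:
        mass_kg = 278.52                  # mass quoted above

        AVOIRDUPOIS_LB_KG = 0.45359237    # ordinary (avoirdupois) pound
        TROY_LB_KG = 0.3732417216         # troy pound = 12 troy ounces

        print(mass_kg / AVOIRDUPOIS_LB_KG)  # ~614 ordinary pounds
        print(mass_kg / TROY_LB_KG)         # ~746 troy pounds, i.e. "about 750"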
  • Norman Diamond (unregistered) in reply to jay
    jay:
    Sure. It's not just Windows that throws files everywhere, Unix/Linux do too. I wasn't bashing Windows per se. I'm an equal opportunity basher.
    What? You can be an equal opportunity basher on Linux, but until it gets ported to Windows you have to be an equal opportunity PowerSheller.
  • jay (unregistered) in reply to Capitalist
    Capitalist:
    Can you guarantee me 100% that programmers do not make other kinds of mistakes? Of course not. And programs can (and do) fail for many apparently strange reasons all the time. The lesson to be learnt is to fix the bugs. ... OK, anything can fail if you assume there can be arbitrary bugs. So in your world, nothing ever works. Thanks, but I prefer to stay in my world where things do work.

    My point is that there is a vast difference between these two scenarios:

    Scenario 1: I install app A. It works fine. Six months later I install app B. It fails.

    Scenario 2: I install app A. It works fine. Six months later I install app B. App B works fine, but now app A fails.

    If an app fails, the reasonable thing to do is to look for a bug IN THAT APP. To say that any time an app fails you have to look at all the apps on your system to see what damage one app might have done to an unrelated app creates a nightmare scenario. It's often tough enough to track down a bug in an app. Never mind trying to track down a bug when it could be caused by seemingly-unrelated apps anywhere on your computer.

    Suppose I install app A, which happens to be my year-end account reconciliation program. Then six months later I install app B. App B replaces a DLL used by app A. But I don't try to run app A again for another six months, until the next year-end cycle hits. And then it fails. How in the world can I figure out that the problem was that app B trashed one of its DLLs?

    I had a case once where an app trashed another app's DLL. And I was lucky that I happened to run the victim app the day before installing the new app and again the day after, so I could at least say, "Hey, what changed yesterday?" But if there had been a long gap in there? How would I know?

    I would think the goal should be to contain the damage that any error can cause.

  • jay (unregistered) in reply to Capitalist

    Capitalist:
    jay:
    Search speed: That depends on how you search for executables. In Windows and Linux GUIs, you don't type in an app name and the OS searches a couple of directories for it. You have icons or shortcuts for all your apps that include the full path to the executable. There is no search so it's a non-issue.
    Oh yeah, and all scripts include the full path to all programs used (and the path may include version numbers, may be translated, or may be arbitrarily renamed by the user). That would be fun.

    You do NOT specify a full path for every file you reference in a script, either at each reference, or by setting appropriate paths or shell variables? Wow, I always do. Otherwise you run the risk of other apps creating files that coincidentally have the same name as yours and suddenly your script stops working.
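    For what it's worth, you don't have to hard-code every reference to get that safety; resolving each program once against an explicit directory list, instead of whatever PATH the caller happened to inherit, does the same job. A rough sketch in Python, with the directory list purely as an example:

        import shutil
        import subprocess

        # Only these directories are trusted; nothing is inherited from the caller.
        TRUSTED_PATH = "/usr/local/bin:/usr/bin:/bin"

        def resolve(program: str) -> str:
            # shutil.which searches the given path string, not os.environ["PATH"].
            full = shutil.which(program, path=TRUSTED_PATH)
            if full is None:
                raise FileNotFoundError(f"{program} not found in {TRUSTED_PATH}")
            return full

        # e.g. subprocess.run([resolve("tar"), "-czf", "backup.tgz", "data/"])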

    Capitalist:
    jay:
    IMHO, each app should have its own directory. Then have one central, shared place where we list all the apps, that would basically have just an app name, the path to the executable, and a path to an icon. I think that would be it. Maybe some security-related info or some such. So yes, an install would have to update that central list. But that would be far simpler than the many places that an install updates today.
    Even the latter bit is doubtful. For an (un)installer it's generally easier to place/remove a file in a unique location (/usr/bin/foo) than to add/remove an entry in a central list, which at least involves some amount of locking and synchronization. In fact, most Linux distributions have broken up many files in /etc that would otherwise have to be written by several different packages into directories. E.g., instead of a single crontab that every package needing a cron job writes itself into (and removes itself from when uninstalled), there's now a directory where each package that needs it creates/removes a file with a unique name (usually the package name).

    That depends on how the central list is managed. Sure, if it's a flat file and each installer updates the flat file however it pleases you'd create the danger that two installs could collide or an install with a bug could trash the list. But I can think of numerous ways to handle it cleanly. It could be a directory into which each install drops a file with the metadata for that app. That would create no issues that do not exist now. If it's a single file there could be system calls to update it. Windows 3.1 did that with a flat file; current versions of Windows do that with the Registry. Either way, the OS is then responsible for handling locking, queuing, whatever, and there's one place to manage to make sure that is done right.
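    A rough sketch of that directory-of-metadata-files variant (the /var/lib/app-registry path and the field names are made up for illustration): each install drops one small file, a launcher just enumerates the directory, and uninstall deletes that one file.

        import json
        from pathlib import Path

        REGISTRY = Path("/var/lib/app-registry")  # hypothetical central directory

        def register_app(name: str, executable: str, icon: str) -> None:
            # One file per app; no shared file to lock or corrupt.
            REGISTRY.mkdir(parents=True, exist_ok=True)
            (REGISTRY / f"{name}.json").write_text(
                json.dumps({"name": name, "exec": executable, "icon": icon})
            )

        def unregister_app(name: str) -> None:
            (REGISTRY / f"{name}.json").unlink(missing_ok=True)

        def list_apps() -> list:
            # What a launcher or application menu would read.
            return [json.loads(p.read_text()) for p in REGISTRY.glob("*.json")]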

    Capitalist:
    jay:
    RE networks: Sure, some files should be accessible to anyone and others only to certain users. But it's hard to see why some pieces of a single app need to be available to different users than other pieces. Why would someone need to access the documentation for an app if they can't run the app?
    That's not really what I said, read it again. E.g., take an application, perhaps a flight simulator, with a relatively small executable but huge amounts of data that you want to use on a heterogeneous network, i.e. a network of different architectures, let's say they're diskless machines with a central file server. You'd want to share the data for all the machines, but need different executables for the different architectures. With the "classical" directory layout, that's easy: have /usr/share shared, and a different /usr/bin and /usr/lib for each architecture.

    Oh, sorry, yes, I missed your main point. But in any case, I agree that we routinely need to separate code from data for a variety of reasons: The same app will often run against different data files, and we might want to run the same data through different apps. Today I want to process my source code with a text editor, tomorrow I want to run it through the compiler, the next day I want to run a search program against it. We can't assume that there is a one-to-one relationship between code and data. So yes, apps from different architectures might want to access the same data.

    I guess I can imagine cases where you could have a Linux version of an app and a Windows version of the same app, and you want them to share help screens or configuration files or some such. But I think that would be a fairly rare case. I'd want to allow for it, but I don't see designing your structure around that odd case when it makes the normal case complicated. Let the odd case be complicated.

  • jay (unregistered)

    Bummer, I apparently messed up matching the quote tags in my previous post. Sorry.

  • Norman Diamond (unregistered) in reply to jay
    jay:
    Bummer, I apparently messed up matching the quote tags in my previous post. Sorry.
    Now you know why there's a Preview button next to the Submit button. However, I don't know why there's a Submit button next to the Preview button.
  • Bill C. (unregistered)

    Hey, me too. I wouldn't dream of asking someone to submit without getting a preview.

    Captcha: inhibeo. Wrong, TDWTF, wrong. Not me.

  • (cs)

    There's no surer sign of a feeble mind than that "Lorem Ipsum" crap.

  • (cs) in reply to Evan
    Evan:
    Ted:
    (At the risk of being mocked for a sincere question...) and why not? Why don't packages install under a single directory? It would make it a lot easier to copy them, share them on a network drive, or delete them when you're done. (Why should you need an "uninstalller" to do what should be just a delete?)
    Not always. There is sometimes system stuff that has to go elsewhere, for instance (for some stupid reason or other).

    I'm not convinced. You admitted the stupidity of such a scheme in your post. I already knew that just randomly making registry keys / folders all over the place, maybe registering some COM objects, etc. was a stupid application install strategy. Don't tell me it makes sense. Yeah, some shit would have to change to do things correctly, i.e. within a single folder in the file system... these things should change, then.

    There are general-purpose OSes that work this way. I think Apple uses this installation strategy for its desktop computers.

    Evan:
    But even more to the point: that would give an inaccurate count. Installers don't just say "I installed to c:\blah, time to deltree c:\blah"; they maintain an actual list of files which are installed, and only remove those files. If you or the program put other files into the installation directory, they will be left.

    You're assuming way too much. I wrote an uninstaller once. It was for a commercial product. It didn't do any of the crap you're describing. It did a deltree... maybe. I'm not sure my uninstaller did anything but just spin the progress bar in the user's face for a few minutes. Why would I have wasted my time writing what you describe? The only reason I would ever have wanted to facilitate uninstallation would have been to reinstall our product... some IT types seem to think that uninstall/reinstall is a good approach. So, I just made the uninstall process do nothing, and made the installation process repeatable ad infinitum. This way, if the tech forgets his precious uninstall, it's no issue. This was the best design from my employer's standpoint.

    Evan:
    I'm not that familiar with NTFS, but there have been operating systems that maintain a per-directory count of space used in real time, plus you can set quotas so the print queue doesn't take down the production database. In such a system, reporting the actual space used would be a single file system query, as in, nanoseconds.
    Even single queries take far more than "nanoseconds" if they have to go to disk; usually more like "milliseconds" (6 orders of magnitude more than "nanoseconds"). Maybe many microseconds if you're on an SSD.

    But remember, that has a tradeoff: you're increasing the amount that you have to write when you change the size of a file. (Effectively, I think this will usually be true for every file write, as I think most programs don't do in-place changes.) My guess is it's not worth it.

    I don't think there's a great answer to this design problem. So, the best design is to add shit up as needed.
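    "Add it up as needed" really is just a tree walk that pays the milliseconds only when somebody asks. A minimal sketch:

        import os

        def directory_size(root: str) -> int:
            # Sum file sizes on demand instead of maintaining a running
            # per-directory total on every write.
            total = 0
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    try:
                        total += os.path.getsize(os.path.join(dirpath, name))
                    except OSError:
                        pass  # file vanished or unreadable; skip it
            return total

        # e.g. print(directory_size("/var/log"))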

  • Neil (unregistered) in reply to Norman Diamond
    Norman Diamond:
    jay:
    Sure. It's not just Windows that throws files everywhere, Unix/Linux do too. I wasn't bashing Windows per se. I'm an equal opportunity basher.
    What? You can be an equal opportunity basher on Linux, but until it gets ported to Windows you have to be an equal opportunity PowerSheller.
    Just how many ports do you need? I only know of four: WinBash, MSYS, CygWin and Interix.
  • (cs) in reply to Norman Diamond
    Norman Diamond:
    jay:
    Sure. It's not just Windows that throws files everywhere, Unix/Linux do too. I wasn't bashing Windows per se. I'm an equal opportunity basher.
    What? You can be an equal opportunity basher on Linux, but until it gets ported to Windows you have to be an equal opportunity PowerSheller.

    Is this some puny example?

  • (cs)
    Josip Medved:
    It's important to have minimum of zero characters for State field. We don't want our database to store negative character counts, do we?
    The requirement is not about the State field. The box states clearly:
    Seeed:
    Your state must contain a minimum of 0 characters.
    So it appears you are coming from a very deserted place!
