• (cs) in reply to Rob
    Rob:
    Confused...:
    This story can't be real. Is anything on this site factual?

    I doubt it

    Ahem http://thedailywtf.com/Comments/Diseased.aspx?pg=3#326981.

  • (cs) in reply to Zomby
    Zomby:
    I want the next person who defends MUMPS to do it in relation to a modern language. Because endless "Well Common Lisp crashing into COBOL while being soddomised by SQL was good enough for the 70s," apologetics make me weep for humanity.

    I have bad news for you: you have Stockholm syndrome.

    Really, the language is not much different from any other procedural programming language. Every language has its own idiosyncrasies, but whether you're doing procedural programming in MUMPS or C, the basic design and flow are going to be the same. There are always a few language-specific peculiarities to deal with, but those are details. A good C programmer could pick up MUMPS in a week or two.

    The reason to choose it over another language is the rest of the platform -- Caché has very good performance when dealing with very large sets of sparse data. When you're dealing with databases ranging from tens of gigabytes into the terabyte range, you need a system that scales well, and it's no coincidence that the vendors doing the best in the large medical practice market are those with MUMPS-based solutions. Since Allscripts merged with Eclipsys, I think the only major vendor that doesn't use MUMPS for at least some of its products is Cerner.

  • Psygonis (unregistered)

    That reminds me of OCaml, which is also a hell of a language to figure out when you've been raised on C++ and Java...

  • brian t (unregistered) in reply to akatherder
    akatherder:
    Death:
    Code does not get like that by accident or even incompetence. Darren was covering his ass, in case something went wrong. Apparently it worked.

    I suppose we can all agree that something has gone horribly wrong when your best option is retaining a fugitive pedophile for long-term maintenance.

    That has to be the quote of the century. I'm literally crying with laughter.

  • Real Programmer (unregistered) in reply to boog
    boog:
    Wow, what a scumbag. Sounds like Darren deserves to be beaten within an inch of his life. With a 2x4. And a shovel. While he's on fire.

    Seriously, writing code like that is just unforgivable.

    Wait--what? You say he's a pedophile too?

    No, no. Whenever you can write something important like this, you should always do so. Clearly you missed the point of this story: Despite being a pedophile fugitive on the run from the law, he was still hired back.

  • Overand (unregistered)

    So. Aware that you're going to have to leave the country, you run a massive FIND/REPLACE (sed, whatever) against your entire codebase, and keep the index of what's-really-what. You then hope that you can be re-hired as a telecommuter, from outside of the country?

    Sounds plausible, actually.
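    In case anyone wants to try it at home, here's a minimal sketch of the scheme Overand describes -- obfuscate every identifier with a mass sed rename while keeping a private mapping file to translate back. All file and variable names here are made up, and it assumes GNU sed's `-i` and `\b` word boundaries:

```shell
# Mapping file the author keeps to himself: "real_name obfuscated_name".
cat > mapping.txt <<'EOF'
customer_name V1
zip_code V2
EOF

# A stand-in for the "entire codebase".
cat > module.src <<'EOF'
SET customer_name = input
SET zip_code = lookup(customer_name)
EOF

# Apply every rename in the mapping, whole words only (GNU sed).
while read -r real obfuscated; do
    sed -i "s/\b${real}\b/${obfuscated}/g" module.src
done < mapping.txt

cat module.src
```

    Reversing it is the same loop with the mapping columns swapped -- which is exactly why only the person holding mapping.txt can maintain the result.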

  • LHM (unregistered)

    When I worked for a [subsidiary of a large telecom branching out into health care] company back in '88 I had the misfortune of writing MUMPS. Now MUMPS was pretty much rotten, but it was possible to write workable, if not pretty, applications. Once you started to get into "pretty", however, you were in a world of hurt. I remember writing some mouse control code once - at the request of a client - which was something like 4 times the actual application code. I believe the application and my mouse code are still in use...

  • Darren (unregistered) in reply to Death
    Death:
    Code does not get like that by accident or even incompetence. Darren was covering his ass, in case something went wrong. Apparently it worked.
    This, my friend, is what we call "Job Security".
  • (cs) in reply to Rob
    Rob:
    anonymous coward:
    you have GOT to be kidding. computing performance for the stuff is below that of a javascript engine (i have seen benchmarks by highly skilled engineers working with mumps, showing that javascript is faster) and the IDE/compiler makes being productive a huge pain in the ass

    Do please let us see these benchmarks by these "highly skilled engineers". Here are a few real ones that seem to tell a somewhat different story:

    http://www.redhat.com/pdf/Profile_Benchmark_Results_11_15_2007.pdf

    http://www.intersystems.com/cache/whitepapers/Cache_benchmark.html

    http://tinco.pair.com/bhaskar/gtm/doc/misc/101005-1dthreeen1fFilesystemBenchmark.pdf

    Normally, if you are making benchmarks for technical people, you benchmark your product against at least one other product. If you're a technical person looking for benchmarks, you toss any that don't.

  • (cs) in reply to Jay
    Jay:
    Okay, I've never had to deal with MUMPS, but I did once have a job working on a system where the original author named all his integer variables N1, N2, N3, etc, and all his string variables S1, S2, S3, etc. And just in case you figured out what a variable actually contained, he would re-use the same variables for different things elsewhere in the same module. So at the top S1 might be customer name but halfway through it would suddenly become zip code.

    I finally figured out that the only way to work on these programs was to first take the time to figure out what all the variables contained and then do a mass search-and-replace to rename them. This was tough when a variable was recycled, because it wasn't always easy to break it into two variables -- you had to figure out where it was used for what. And a mistake on a rename might be harmless, but a mistake on breaking a variable in two would break a working program.

    FTFY.

    This works better if you use an editor that allows you to do a regional search and replace: you identify the scope of the variable you're fixing, and you change it only in that scope. Proceed from the smallest scope to the largest, and you're done without needing to revert anything.

    Vim's an example of an editor that can do this. I'd guess Emacs can do it also, but I don't know for certain.
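    For the command-line crowd, the same scoped rename works with sed's line ranges standing in for an editor region (this is the sed equivalent of Vim's `:1,2s/\<S1\>/.../g`). The file contents and new names below are invented for illustration, and it assumes GNU sed:

```shell
# A recycled variable: S1 holds a customer name in the first half of
# the module and a zip code in the second half.
cat > billing.src <<'EOF'
S1 = read_customer()
print(S1)
S1 = read_zip()
ship(S1)
EOF

# Rename S1 only within each scope, smallest scope first, by limiting
# the substitution to a line range.
sed -i '1,2s/\bS1\b/cust_name/g' billing.src
sed -i '3,4s/\bS1\b/zip_code/g' billing.src

cat billing.src
```

    Because each substitution is confined to its range, splitting a recycled variable in two can't clobber the other scope's uses.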

  • (cs) in reply to Mike
    Mike:
    Huh?:
    Mike:
    Wouldn't it be easier to find out what the code was SUPPOSED to do - what function did it perform? Then write code in a decent language that would do the same thing, but faster and less opaquely.

    Like in BASH. With awk/grep/sed/etc

    :)

    Perl.

    shame

    I don't know perl... I actually wrote a huge log-analysis app in bash script. I'll eventually learn perl - and then I'll get to recode the app in Perl. :)

    I used to have a coworker who insisted on writing his huge log-analysis scripts in bash. I'd come in with perl and get a factor-of-100 performance improvement or better. Eventually, due to increasing log file sizes and increasing script complexity, it got to the point where his scripts were taking most of a week (more precisely, about 143 hours) to process a single day's log file. Just doing a straightforward conversion to pure perl brought the runtime down to under an hour. Tweaking the code so that it didn't perform unnecessary comparisons brought it down to about half an hour for a normal day's log file.

    Now, you can write perl to fork-exec all over the place like a sh/ksh/bash/zsh script does, and a lot of new perl programmers do that. But that doesn't give you much of a speed boost.
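    The fork-exec overhead is easy to see even without perl. The sketch below (with a made-up log file) counts matching lines two ways: the anti-pattern forks one grep per line, while a single grep pass over the whole file does the same work in one process -- which is essentially the win a pure-perl rewrite gets for free:

```shell
# Build a small fake access log: 100 GET lines and one POST line.
seq 1 100 | sed 's/^/GET \/page?id=/' > access.log
echo 'POST /login' >> access.log

# Anti-pattern: one grep process per line (101 forks for 101 lines).
slow_count=0
while read -r line; do
    if echo "$line" | grep -q '^GET'; then
        slow_count=$((slow_count + 1))
    fi
done < access.log

# One process scans the whole file: same answer, no per-line forks.
fast_count=$(grep -c '^GET' access.log)

echo "$slow_count $fast_count"
# → 100 100
```

    On a 101-line file the difference is invisible; on a day's worth of production logs, the per-line forks are where "most of a week" goes.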

    I'll also not deny that there are times when writing performance-critical Perl sections in C can give another factor-of-100 performance boost (for example, some string processing where one needs to track nesting levels -- something even perl 5.10.1's "regex"es aren't very good at. They can do it, which is why "regex" is in quotes back there. But it's a dancing bear. Of course, perl 5.12.2 is out, and that may do it better.)

  • MyName (unregistered)

    The code sample looks more like something that would be produced by a decompiler.

    Maybe the whole thing was just a cover story to get someone to clean up a decompiled piece of software.

  • Tez (unregistered) in reply to tgape

    Performance of InterSystems Caché is poor for any querying. I recently copied a 38-million-row table off our 16-core dedicated server and put it into a MySQL server on my laptop, because

    • It was taking up to an hour to complete a query
    • Queries had to be written as native global traversals rather than SQL, to avoid timing out and waiting several hours for completion. Writing a global-iterating query, storing results in arrays, and parsing them takes minutes or hours, whereas writing a few-line SQL query takes seconds.

    My queries on MySQL took between 0 and 15 seconds to execute, and between 0 and 180 seconds to write.

    This is not to mention that cross table joining is a complete nightmare in Cache as the performance is SO POOR it makes you want to cry.

    Here are some simple benchmarks.

    SELECT count(*) FROM table

    • Caché (16-core Red Hat server, SCSI, no GUI): ~5 minutes
    • MySQL (laptop, 5400rpm HDD, Windows 7, GUI): 8.7 seconds

    SELECT columnA, min(columnB), max(columnB), count(*) FROM table GROUP BY columnA

    • Caché: was forced to abandon the query, as I was consuming too much resource on a live system
    • MySQL: 141 seconds

    Note that columnA and B were not indexed in the MySQL instance.

    Caché does not do query caching; the query is just as slow the second time it is executed.

Leave a comment on “Diseased”
