Admin
Ahem http://thedailywtf.com/Comments/Diseased.aspx?pg=3#326981.
Admin
Really, the language is not much different from any other procedural programming language. Every language has its own idiosyncrasies, but whether you're doing procedural programming in MUMPS or C, the basic design and flow are going to be the same. There are always a few language-specific peculiarities to deal with, but those are details. A good C programmer could pick up MUMPS in a week or two.
The reason to choose it over another language is the rest of the platform: Caché has very good performance when dealing with very large sets of sparse data. When you're dealing with databases ranging from tens of gigabytes into the terabyte range, you need a system that scales well, and it's no coincidence that the vendors doing best in the large medical practice market are those with MUMPS-based solutions. Since Allscripts merged with Eclipsys, I think the only major vendor that doesn't use MUMPS for at least some of their products is Cerner.
Admin
That reminds me of OCaml, which is also a hell of a language to figure out when you've been raised on C++ and Java...
Admin
That has to be the quote of the century. I'm literally crying with laughter.
Admin
So. Aware that you're going to have to leave the country, you run a massive FIND/REPLACE (sed, whatever) against your entire codebase, and keep the index of what's-really-what. You then hope that you can be re-hired as a telecommuter, from outside of the country?
Sounds plausible, actually.
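A sketch of the kind of bulk rename described above (not anyone's actual script): obfuscate identifiers across a codebase while keeping an index of what's-really-what so it can be reversed later. All file names, identifiers, and replacement tokens here are invented.

```shell
# Work in a throwaway directory with one made-up source file.
workdir=$(mktemp -d)
cd "$workdir"
echo 'total = price * quantity' > calc.src

# The reversible index: real name -> obfuscated name.
printf 'price=v001\nquantity=v002\n' > mapping.txt

# Apply every replacement across all source files.
while IFS='=' read -r real obf; do
    grep -rl "$real" . --include='*.src' | xargs -r sed -i "s/\b$real\b/$obf/g"
done < mapping.txt

cat calc.src
```

Reversing it later is the same loop with the mapping columns swapped.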
Admin
When I worked for a [subsidiary of a large telecom branching out into health care] company back in '88 I had the misfortune of writing MUMPS. Now MUMPS was pretty much rotten, but it was possible to write workable, if not pretty, applications. Once you started to get into "pretty", however, you were in a world of hurt. I remember writing some mouse control code once - at the request of a client - which was something like 4 times the actual application code. I believe the application and my mouse code are still in use...
Admin
Normally, if you are publishing benchmarks for technical people, you benchmark your product against at least one other product. And if you're a technical person looking at benchmarks, you toss out any that don't.
Admin
FTFY.
This works better if you use an editor that allows you to do a regional search and replace: you identify the scope of the variable you're fixing, and you change it only in that scope. Proceed from the smallest scope to the largest, and you're done without needing to revert anything.
Vim's an example of an editor that can do this. I'd guess emacs can do it also, but I don't know for certain.
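For instance, in Vim a substitution can be restricted to a line range or a visual selection (this is just a sketch; the identifier names are made up):

```vim
" Replace only within lines 120-180, say the body of one function:
:120,180s/\<count\>/itemCount/g

" Or visually select the scope with V first, then substitute in it:
:'<,'>s/\<count\>/itemCount/g
```

The `\<` and `\>` word boundaries keep you from clobbering longer identifiers that merely contain the name.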
Admin
I used to have a coworker who insisted on writing his huge log analysis scripts in bash. I'd come in with Perl and get a factor-of-100 performance improvement or better. Eventually, due to increasing log file sizes and increasing script complexity, it got to the point where his scripts were taking most of a week (more precisely, about 143 hours) to process a single day's log file. Just doing a straightforward conversion to pure Perl brought the runtime down to under an hour. After tweaking the code so that it didn't perform unnecessary comparisons, it was down to about half an hour for a normal day's log file.
Now, you can write Perl that fork-execs all over the place the way a sh/ksh/bash/zsh script does, and a lot of new Perl programmers do exactly that. But that doesn't give you much of a speed boost.
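To illustrate the point about fork-exec overhead (in Python standing in for Perl, with made-up log lines): spawning an external tool per line of input, shell-script style, versus doing the same match in-process.

```python
import subprocess
import time

# Synthetic "log file": 200 lines, all with status code 200.
lines = [f"host{i} GET /index.html 200\n" for i in range(200)]

# Shell-script style: spawn one grep process per line.
start = time.perf_counter()
hits_forked = sum(
    int(subprocess.run(["grep", "-c", "200"], input=line,
                       capture_output=True, text=True).stdout.strip())
    for line in lines
)
forked = time.perf_counter() - start

# In-process style: plain substring matching, no fork/exec at all.
start = time.perf_counter()
hits_inproc = sum(1 for line in lines if "200" in line)
inproc = time.perf_counter() - start

print(f"fork-per-line: {forked:.3f}s  in-process: {inproc:.6f}s")
```

Both loops find the same 200 matches; the per-line process spawning dominates the runtime, which is exactly what killed the bash scripts above.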
I'll also not deny that there are times when writing performance-critical Perl sections in C can give another factor-of-100 boost - for example, some string processing where you need to track nesting levels, something even Perl 5.10.1's "regex"es aren't very good at. They can do it, which is why "regex" is in quotes back there. But it's a dancing bear. Of course, Perl 5.12.2 is out now, and that may do it better.
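The nesting-level tracking mentioned above is the classic example of a job that a plain counter handles trivially but a classical regular expression cannot express at all (Perl 5.10+ recursive patterns can, awkwardly). A minimal sketch, again in Python:

```python
def max_nesting_depth(s: str) -> int:
    """Return the deepest level of balanced parentheses in s."""
    depth = max_depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == ")":
            depth -= 1
            if depth < 0:
                raise ValueError("unbalanced parentheses")
    if depth != 0:
        raise ValueError("unbalanced parentheses")
    return max_depth

print(max_nesting_depth("f(g(x), h(i(j(y))))"))  # 4
```

One linear pass, constant memory - which is why dropping to C (or staying in any procedural language) beats torturing a regex engine into doing it.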
Admin
The code sample looks more like something that would be produced by a decompiler.
Maybe the whole thing was just a cover story to get someone to clean up a decompiled piece of software.
Admin
Performance of InterSystems Caché is poor for any querying. I recently copied a 38-million-row table off our 16-core dedicated server and put it into a MySQL server on my laptop, because my queries on MySQL took between 0 and 15 seconds to execute, and between 0 and 180 seconds to write their results.
And that's not to mention that cross-table joins are a complete nightmare in Caché, where the performance is SO POOR it makes you want to cry.
Here are some simple benchmarks.

SELECT count(*) FROM table

- Caché (16-core Red Hat server, SCSI disks, no GUI): ~5 minutes
- MySQL (laptop, 5400 rpm HDD, Windows 7 with GUI): 8.7 seconds

SELECT columnA, min(columnB), max(columnB), count(*) FROM table GROUP BY columnA

- Caché: forced to abandon the query, as it was consuming too much resource on a live system
- MySQL: 141 seconds

Note that columnA and columnB were not indexed in the MySQL instance.
Caché does not do query caching; the query is just as slow the second time it is executed.