• Cpt (unregistered)

    Very well put, I agree totally.

  • DSPC (unregistered)

    My algorithm disagrees completely.

    SyntaxError: Unexpected number
        at Object.exports.createScript (vm.js:24:10)
        at REPLServer.defaultEval (repl.js:236:25)
        at bound (domain.js:280:14)
        at REPLServer.runBound [as eval] (domain.js:293:12)
        at REPLServer.<anonymous> (repl.js:441:10)
        at emitOne (events.js:101:20)
        at REPLServer.emit (events.js:188:7)
        at REPLServer.Interface._onLine (readline.js:219:10)
        at REPLServer.Interface._line (readline.js:561:8)
        at REPLServer.Interface._ttyWrite (readline.js:838:14)
    
  • Yazeran (unregistered)

    Well put.

    And yes, those 'red/yellow/green' things for overview are NOT the right way to do things.

    I remember, some 15 years ago, when SETI@Home had been running for a year or so, I found one of our (quad-CPU) servers doing absolutely nothing (ps showed 99% idle), so I set up 4 instances of SETI to use all the CPUs, as no one else was using them. I set the clients to nice 19 so anything else would get higher priority and they wouldn't interfere with anything.
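
    For the curious, a minimal sketch in Python of the same trick. The setiathome binary name is an assumption (it was the old command-line client); any CPU-bound worker would do:

        # Launch one lowest-priority instance per CPU, so any other process
        # on the box always wins the scheduler. "setiathome" is the assumed
        # name of the old command-line client.
        import subprocess

        NUM_CPUS = 4
        for _ in range(NUM_CPUS):
            subprocess.Popen(["nice", "-n", "19", "setiathome"])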

    All was well for a few months (and my score quickly improved), but then my 'SETI cluster' was terminated.

    The reason, you ask?

    Someone in administration did not like the usage for that server peaking in the red on their 'dashboard', so they killed my programs, even though any other program would have had higher priority (and no one was using it), and no explanation or argument from my side could change that, as 'the lamp must not be red!' Sigh.

    Yazeran

    Plan: To go to Mars one day with a hammer

    PS And I still can't log in: Illegal arguments: string, object

  • Mike McCarthy (unregistered)

    Great stuff.

  • Jon (unregistered)

    HA! Good one.

    I once tried to explain our reports and what they meant to management... The topic was "changes causing incidents". Later we hired a consulting firm to do some analytics on the same data... the conclusion they came to was that we made changes on Fridays and Saturdays, and we had a lot of incidents on Mondays, so therefore most of our changes took two days before they caused an incident. Never mind that when you actually correlated the incidents with the changes and excluded weekends, the reality was we broke shit, constantly, because our change management team approved things they didn't really understand and the people implementing the change lied on the change record to make it look like their change was low risk.

    It's actually a fitting example to go along with your criminal recidivism notion. Did you know that the class of sex offenders deemed least likely to reoffend is statistically most likely to reoffend?

    The key to getting decisions made in a meaningful way is to play into the oracle effect. Being an expert and having rational explanations isn't enough; unless you can manage to play off of something that management already believes to be true, you're bound to fail.

  • Church (unregistered) in reply to Jon

    @Jon Playing into the oracle effect can be dangerous. Management may decide that the expensive consultant is better at divining than you.

  • Church (unregistered)

    Well put, Remy. I appreciate the work you put into your soapbox posts and videos. I find them to be insightful, honest, and thought provoking. Keep up the good work.

  • foxyshadis (unregistered)

    I can't be the only one who thought this would be about a certain California-based database company, and even after the real namesake was explained, still felt that the article applied to the House of Ellison just as much. This kind of handwaving is their bread and butter.

  • (nodebb) in reply to foxyshadis

    Remy's comments are relevant:

    Also work against the Oracle effect by avoiding products made by Oracle Corporation, because honestly, they're the absolute worst.

  • GrooveNeedle (unregistered)

    While I agree in full, when writing business processes or a new dashboard, my clients rarely like it when I ask "Why?" or anything along the lines of "How will this information help you?" I do it for their benefit, but they take it as me questioning their credentials/authority.

    They've already decided how best to do their job and they just need me to bang on the keyboard to make it happen since they can't be bothered.

  • Cynicus Maximus (unregistered)

    Following Remy's admonition to ask "Why?", I'll ask the following: Why is the "Oracle Effect" so commonplace?

    Clearly the "Oracle Effect" is directly caused by non-technical people putting faith in technical processes that appear sufficiently complex to them. "I don't really understand what you're talking about, but it sounds complicated, so you must know what you're talking about [and therefore be right]" is at once an argument from authority and an argument from ignorance. In fact, I'd say that the argument from ignorance is more fundamental here, as it's what leads to the argument from authority.

    Such ignorance is an enormous opportunity for exploitation. I firmly believe that the field (sub-field?) of decision-support software systems is full of people who don't really believe (at least not fully) in what they're selling. Oftentimes, the algorithms involved are originally developed as R&D programs, whether in academia or elsewhere. It's all too easy to inflate the performance of an algorithm by using a carefully constructed dataset. If algorithm A achieves 98% accuracy using dataset D, one can report that without making any false statement. Non-technical decision-makers then believe - out of ignorance - that algorithm A achieves 98% accuracy in general. Due to their non-technical backgrounds, they have no idea that the algorithm is that accurate only with that dataset. As a result, the wool is successfully pulled over their eyes, and the developers of these algorithms and decision-support systems laugh all the way to the proverbial bank.
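
    To make that concrete, here's a toy sketch in plain Python, with made-up numbers: the same "algorithm" scores very differently depending on which dataset it's evaluated on.

        # A trivial "threat detector": flag anything scoring above 0.5.
        def algorithm_a(x):
            return x > 0.5

        # Dataset D: carefully constructed so easy cases dominate (98 easy,
        # 2 hard). A representative sample has plenty of ambiguous cases.
        curated = [(0.9, True), (0.1, False)] * 49 + [(0.4, True), (0.6, False)]
        realistic = [(0.9, True), (0.1, False), (0.45, True), (0.55, False)] * 25

        def accuracy(data):
            return sum(algorithm_a(x) == label for x, label in data) / len(data)

        print(f"accuracy on curated dataset D:   {accuracy(curated):.0%}")    # 98%
        print(f"accuracy on representative data: {accuracy(realistic):.0%}")  # 50%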

    One may notice that decision-support algorithms that turn out to be of dubious quality tend to cluster around "hot-button" industries. That's no coincidence. It directly supports my contention above. For example, over the past 15 years, threat-detection/assessment algorithms for anti-terrorism and other law-enforcement purposes have been a big market. Clearly that's because, in the aftermath of the September 11, 2001 terrorist attacks, a lot of money started being allocated by the US and allied governments for anti-terrorism. So as with many other situations, "follow the money" is key here. Given the proliferation of threat-detection/assessment algorithms over the past 15 years, clearly there've been a lot of people who've wanted to get a piece of the government-funding gravy train.

    At this point, I think the reason for the development of "wool-puller" decision-support algorithms should be simple to understand. It's really all about the money. This covers a spectrum from the up-and-coming entrepreneur looking to "make a killing", through the corporate bigwig looking to maximize his bonus, to the software R&D guy looking to get more funding or just keep his job. For these people, software development can become a "game" where the point is not to actually help people solve problems, but simply to make (more) money.

    "If you can't dazzle them with brilliance, baffle them with bullshit." Sadly, that's all too often the case these days, and all too many baffled people fork over their money anyways. The software industry and its customers are no exception to this.

  • Albert den Haan (unregistered)

    Thank you.

    You restarted my thinker for the day.

  • Unbeliever (unregistered)

    Forgive me if I run on, but what you're saying is that if we are asked to make important policy decisions that will result in drastic changes to our everyday lives, based on multiple closed-source systems, all of which predict massive harm to come in the long term, yet none of which match short-term observations, yet are still considered valid by promoters of these policy decisions, who denigrate opposition daily through appeals to the authority of these models and those who created them, then we should not accept their pronouncements at face value but treat their claims with a healthy dose of ... skepticism?

  • Carl Witthoft (google)

    Hey, I remember one Oracle that was absolutely trustworthy. Any other old folks remember the Usenet Oracle, from the ReadNews (rn) days?

    As to red/green/yellow: I had a manager who swore by that crud. We had to update a PPT slide every week or two; this slide had about 20 "issues", each of which had 5 or 6 "status" types (risk assessment, schedule assessment, etc.), and we were supposed to fill in every cell with the right color. I named it the "Crayons Table", but that didn't stop it from being (mis)used in every tech review.

  • Ex-lurker (unregistered)

    It's an interesting coincidence that just two days ago one of my favorite blogs published a post about a closed-source machine that compares two DNA samples and is used to determine whether someone should go to jail: https://www.schneier.com/blog/archives/2016/05/the_fallibility.html . While not precisely the same problem, the two are intimately related. The defendant asked to be allowed to review how the machine reached the conclusion that his DNA was a match, but the judge denied the appeal, because he/she was certainly a victim of the Oracle Effect.

    @Cynicus Maximus: "If you can't dazzle them with brilliance, baffle them with bullshit." I just fell in love with this quote :-)

  • Mr. AHoleDBA (unregistered) in reply to Yazeran

    Wait, you're actually upset that an operations person shut down your non-sanctioned and non-standard app on a PRODUCTION server because it was, in effect, redlining the CPU? One that you installed without informing anyone, hoping no one was paying attention to the servers the company needs for business purposes, to pay you and its employees?

    This is part of the reason there is a divide between developers and operations. I'm sorry to break it to you, but your actions, no matter how minute, increase the surface area of many issues:

    - If the operations teams manage a lot of different servers for different purposes, how are they to know the server isn't actually going to be under legitimate load?

    - Do you care that you're making their baseline much less effective to use?

    - Do we care that, no matter how minute it is, a non-sanctioned app increases security and stability risks? How many times has a seemingly benign little app actually turned out to carry a massive security risk due to the underlying technology it used?

    - Are you familiar with the company's availability requirements, and those of all the other departments, along with everything else that might all of a sudden be run on that server?

    You do realize you are setting yourself up for a WTF story, right? Cue: We had servers properly provisioned, on which we did endless performance testing to justify the budget, since all the money was going to the development team so we could keep up with competitors and customer demands. We were going to load up a second app on them and cluster them to handle nightly batch processes, but spent two days troubleshooting what was suddenly causing the operations team to get paged at 3am every night as the servers redlined. Security operations got involved in case the machines had been hijacked, but it turned out a junior dev had installed his own number-crunching app to talk to space aliens and increase his score. After that, SecOps enforced the industry practice that developers get no access to production servers, which pissed off the senior developers who needed it to support the app long term. Later it was found that this version of the space-alien-talking app also had a major security vulnerability. The end.

  • Yazeran (unregistered) in reply to Mr. AHoleDBA

    Whoa :-)

    I guess I stepped on someone's toes.

    First of all, it was an internal server for ad-hoc use (not dedicated to any specific task). Secondly, it is a university (aka research) environment. Thirdly, although it was also for my personal amusement, it was primarily to help another international research group. Fourthly, the ops people I talked to fully understood my reasons for running SETI, but they were overruled by higher-ups.

    Yazeran

  • Some dude (unregistered)

    And thus the bait was swallowed whole.

  • DontLookWeKnowWhatWeAreDoing (unregistered)

    On a related note - the FBI wants its biometrics databases (plus any other stuff they wish to throw in) exempted from the law that says you get to review and correct what information the government has on you (https://www.eff.org/deeplinks/2016/05/fbi-ngi-privacyact). Since we as technical professionals know that a database covering the whole country will inevitably have errors, and that a correct database has a better chance of stopping the right people and leaving the rest alone, I would urge readers to head on over to the Federal Register page (https://www.federalregister.gov/articles/2016/05/05/2016-10119/privacy-act-of-1974-implementation), read the discussion, and respond on the linked web form with your insights about why 'trust us' isn't good enough.

  • Dude (unregistered) in reply to Carl Witthoft

    In a previous life, management implemented a 'goal' system company-wide. The bigwigs set up a series of very high-level goals, and everybody in the organization (including those in manufacturing) had to create goals based on those goals (or based on goals based on those top-level goals, etc.). Every week, everybody had to report how confident they were in each of their goals, and how likely they felt each was to be accomplished within whatever time-frame. Both these attributes were on a scale of 1-8, and the results of the lower goals were rolled up (eventually) into the top-level goals.
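
    Roughly how such a roll-up works, as a toy Python sketch; the real system's aggregation rule was never explained, so a plain average is assumed for illustration:

        # Each goal holds a 1-8 score; a parent goal's score is just the
        # average of its children's rolled-up scores. (Averaging is an
        # assumption; the real weighting, if any, was opaque.)
        from statistics import mean

        def roll_up(goal):
            children = goal.get("children", [])
            if not children:
                return goal["score"]
            return mean(roll_up(child) for child in children)

        top_goal = {
            "name": "Be excellent",
            "children": [
                {"score": 7},                                # manufacturing
                {"score": 3},                                # IT, unrelated
                {"children": [{"score": 8}, {"score": 2}]},  # a sub-tree
            ],
        }
        print(roll_up(top_goal))  # 5.0 -- and nobody is the wiser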

    It was the stupidest exercise I had to deal with on a regular basis. Plus, trying to align an internal development-related goal with a top-level goal that had nothing to do with IT, development, or even internal objectives in the first place was... fun. I was very happy when a new opportunity elsewhere became available shortly after participation in the system became mandatory.

  • Mr. AHoleDBA (unregistered) in reply to Yazeran

    I am Mr. AHoleDBA. Nothing is safe from my toes. NOTHING

  • _that_guy_ (unregistered) in reply to Yazeran

    You could have specified all that in your initial post; I had the same red flags going off in my mind.

  • Herby (unregistered)

    One must ponder where resources are going. Someone will look at all the processes in a system and attempt to "optimize" the one that takes the most time, hoping that it will take a smaller percentage of the machine's effort. This can lead to humorous instances (this one related to me by a friend, so the story goes).

    On one system that had special hardware, they looked at where the most time was being spent and decided to "optimize" it. They devoted lots of time and effort to the task, examining the instructions that this process was executing and deciding what to implement in said special hardware. They then fixed up the instructions and measured again, to no avail. It turns out their optimization had made the idle loop run more efficiently.

    One must learn that relying on gross indicators is a poor way of doing things. Then there is the risk of doing nothing, which NEVER seems to get factored in. Life goes on.
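
    A contrived Python sketch of the same trap: profile "where the time goes" and a busy-wait idle loop wins, even though speeding it up helps nobody. Everything here is invented for illustration.

        import cProfile

        def real_work():
            return sum(i * i for i in range(100_000))

        def idle_loop(iterations):
            for _ in range(iterations):
                pass  # burn cycles while "idle", as old systems did

        def run_system():
            real_work()
            idle_loop(10_000_000)

        # The report shows idle_loop dominating cumulative time -- the
        # obvious "optimization" target, and a completely useless one.
        cProfile.run("run_system()")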

  • Appalled (unregistered)

    Back in the late '70s, as an entry-level programmer, one of my duties was to prepare transparency slides for the weekly IT staff meetings. We could print the Project Status List, with descriptions and comments (2-5 pages), to a non-color Xerox device loaded with transparency paper. Then the IT Director would deliver his take on how the projects were doing that week. I would then sit down with green, yellow, and red clear plastic stock and an X-Acto knife. Next I'd cut to size and carefully rubber-cement strips of the appropriate color on top of each project. At least the Oracle algorithm was simple: hey boss, waddya think about this one?

  • isthisunique (unregistered) in reply to Yazeran

    Great, put SETI on a quiet server. How clever of you.

    Sysadmins measure resource usage. You made it impossible for them to measure requirements for the machine's role, software, load, etc. You also made it hard to detect problems: 100% CPU usage for a long time is usually an indication of something being up, even if it's just increased load, which is important to know about. It also makes it hard for sysadmins to measure whether hardware is fast enough for its role.

    Also:

    1. Wasting power.
    2. Slight increased risk of system instability and security vulnerability.
    3. Stresses hardware.
    4. Puts bloat/noise/pollution on the system.

    I found one of our (quad CPU) servers doing absolutely nothing >>> (PS showed 99% idle) <<<

    This is just dumb. I strongly suspect that you are actually pulling our legs.

  • (nodebb) in reply to Yazeran

    Solution: Determine where the Red, Yellow, and Green indicator boundaries lie, then set SETI's task to consume only just under the green threshold (under by about 4%, just so the nice can kick in when something actually starts happening).

  • John Pettitt (google)

    In another life I spent a number of years designing credit card fraud prevention software. One of the key things to understand about any sort of automated decision is that both false positive and false negative errors have a cost. You can eliminate all fraud (or terrorist transactions, or whatever) by eliminating all transactions. In the real world, systems should be optimized to reduce the overall error rate, taking into account the 'cost' of a false positive.
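
    A back-of-the-envelope sketch in Python, with made-up costs, of the optimization described above: act only when the expected fraud loss outweighs the expected cost of a false positive.

        COST_FRAUD = 500.0           # average loss if a fraudulent txn clears
        COST_FALSE_POSITIVE = 25.0   # goodwill/revenue lost blocking a good txn

        def should_block(p_fraud: float) -> bool:
            """Block iff expected fraud loss exceeds expected FP cost."""
            return p_fraud * COST_FRAUD > (1.0 - p_fraud) * COST_FALSE_POSITIVE

        # Break-even here is 25 / 525, about 4.8%. Push COST_FALSE_POSITIVE
        # toward zero (the anti-terrorism incentive structure) and the
        # threshold collapses: you end up blocking nearly everything.
        print(should_block(0.03), should_block(0.10))  # False True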

    Unfortunately, when it comes to terrorism, all the incentives are on reducing false negatives, that is, catching all the terrorists. The political cost of false positives is minimal; politicians don't get slammed because PayPal can't process transactions for people on Isis St. They do get slammed if there is another attack that the magic oracle didn't catch. This is how we ended up with security theatre at the airport, confiscating pocket knives and water bottles (trivial to bypass if you spend more than 10 minutes thinking about it).

    I don't see an easy solution to this while the public demands perfection and the salespeople pander to them.

  • I dunno LOL ¯\(°_o)/¯ (unregistered)

    I found it quite suspicious that Root didn't die "properly", in the cinematic sense. It was like the exact opposite of Chekhov's Gun in its lack of showing proof to the viewer. Even Elias was more convincing last season.

  • Publius (unregistered) in reply to Unbeliever

    "we should not accept their pronouncements at face value but treat their claims with a healthy dose of ... skepticism?"

    I think the word you are looking for is "denier". I wondered if I was the only one linking this to perhaps the greatest Oracle Effect in history.

  • Ulysses (unregistered)

    RE: HTML comments. Some of us haven't drunk the Kool-Aid that holds certain attributes (race) sacred over others, such that inherent statistical differences are verboten and blasphemous. As if the world is a nice flatline in ANY dimension, smh.

    I know, because... I built it.

  • Microsoft (unregistered)

    We do whatever it takes to reduce the number of bug reports. Our analysis showed that testing causes bug reports.

  • Bill C. (unregistered)

    "The Isis is the name given to the part of the River Thames above Iffley Lock which flows through the university city of Oxford".

    Unfortunately for those who live in an Isis Close, Street, Road or other equivalent, the label Isis has been appropriated by or conferred upon (by the media) a bunch of murderous extremists in the Middle East. So the word "Isis" has become somewhat toxic in the West.

    Now PayPal has decided that they are not prepared to facilitate payments for goods to be delivered to an address which includes the word "Isis".

    That's old news. Since the murderous extremists shortened their name to Islamic State, now everything depends on what your definition of is is.

  • Derf Skren (unregistered) in reply to isthisunique

    You should learn how task priorities work. Whether doing scientific research with idle CPUs is "wasting" power is a more philosophical question, but not as open-and-shut as you're making out. And stressing hardware? If it's not capable of supporting full load when required, what's it doing in your production server rack? Not to mention that if this was 15 years ago, there was no such thing as idle clock/power reduction anyway.

    Besides, a sysadmin who couldn't work out what was going on in this situation would hardly be trustworthy when it came to an actual problem, would they?

  • Foo AKA Fooo (unregistered) in reply to Unbeliever

    You must be talking about organized religion (drastic changes to our everyday lives: commandments etc.; closed-source systems: the Bible; massive harm to come: apocalypse; none match short-term observations: failed doomsday predictions; denigrate opposition: obvious; appeals to authority: the very essence of organized religion).

    As opposed to, say, climate science (which has nothing to do with your comment, I know), which is published openly (to those who bother reading it), is constantly reviewed and rechecked, results in a wide consensus among those who actually do the science, and does match observations (warmest year in history, ...).

  • (nodebb) in reply to Carl Witthoft

    @Carl Witthoft: Indeed, the Usenet Oracle is still active, though it's been rebranded as the Internet Oracle (https://internetoracle.org/). I don't often participate these days, but I still enjoy reading the digests.

  • Norman Diamond (unregistered)

    Not to mention that if this was 15 years ago, there was no such thing as idle clock/power reduction anyway.

    50 years ago there was. But there was no SETI@Home. Also, IBM charged more for the amount of time not spent idle.

  • Norman Diamond (unregistered)

    https://groups.google.com/forum/#!forum/rec.humor.oracle

    I vaguely remember it not being under the rec.humor hierarchy though.

  • (nodebb)

    A lot of the reason behind the Oracle Effect is the truth of reality.

    I suppose the best way is to start with sort of an example. So let's say there is a process called "flimming the blivets" (I think that was what Mad Magazine called it).

    Management has become aware that sometimes the blivets aren't flimmed, and they want a dashboard to track this so they can exhort people (read: flog the slaves) to flim more blivets. Flim all the **** blivets!

    So now they have a dashboard and they are horrified to discover that there are 1,000 blivets not flimmed, every single day. Clearly not acceptable, so the exhortations are loud and firm.

    Pretty soon, the process has improved; now there are only 200 blivets not being flimmed every day. But the infernal dashboard light is still red, and upper management demands to know why there are still so many not being flimmed. Lower management flogs and flogs, and discovers that, sometimes, the blivets get recycled into the "blort." When that happens, the blivets physically can't be flimmed because they're stuck in the blort.

    Lower management appeals to upper management; a political struggle ensues, and the outcome is that, yes, upper management agrees blivets in the blort don't have to be flimmed. But the light is still red, and that must be fixed...

    ...and so the fix is to change the dashboard process to look to see if an unflimmed blivet is in the blort, which then means it won't be counted toward the red light on the dashboard. Depending on the nature of the political outcome, these may show as hot pink, orange or yellow, but no more red.
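
    In code terms, the "fix" is just a new filter in the dashboard query. A toy Python sketch, names from the story above, thresholds invented:

        def dashboard_color(blivets):
            # Unflimmed blivets stuck in the blort no longer count toward red.
            unflimmed = [b for b in blivets
                         if not b["flimmed"] and not b["in_blort"]]
            if len(unflimmed) > 500:
                return "red"
            if unflimmed:
                return "yellow"  # or hot pink, or orange, per the negotiation
            return "green"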

    And that is the source of the Oracle Effect. To a point, dashboards are constructed to allow management to recognize bad results and exhort good results. But the dashboard itself becomes a goal. The dashboard must be green. If there's anything intractable that makes the dashboard red, then that must be discounted for the purposes of the dashboard, so the dashboard is green.

    Now here's the part that is hard for technical people to understand: This is perfectly acceptable to management. The dashboard has served its purpose. It has promoted good behavior to the extent possible; and to the extent not possible, has been negotiated to a mutual consensus of acceptability.

    That is politics. Every dashboard, corporate or government, is about politics.

    This is an endless complaint of mine with business: that management wants a dashboard, but does not want to understand the business. They just want the dashboard to look good, whatever that takes.

    Take the Gallup survey given annually to every employee of my organization. It's supposed to detect management-employee communication problems. One question involves whether or not the employee feels informed by management.

    But there's an ugly reality for management, that we could express in terms of news: pretty much everything management tells employees is good/bad news for the company and/or good/bad news for the employee. Every employee wants to know bad news that affects them--as early as possible. Every manager is expressly prohibited from reporting bad news that affects the employees...until it is a fait accompli.

    Perhaps...team transfers. Employees must not be told about team transfers...until the day they show up and are told they work for a different office. (If they're told in advance...well, we don't know what might happen, so they just can't be told in advance.) This is best demonstrated by a real and interesting case a while back of one team at my organization being told on Thursday there were no plans to move them to a different office (don't worry) and then on Monday (yes, 4 days later) the same team being called into an emergency meeting to be informed they now work (past tense) at a different office. Seriously.

    At Gallup poll time, the management didn't score well on that communications question--and that is bad because scoring good on Gallup IS NOW THE GOAL. So now we have weekly corporate TV news, and weekly exhortation missives, and a monthly department newsletter, and a department blog, and ... and ... and...

    A flood--a deluge--of "pablum"/"feel-good" communications so "the employees cannot say they are not informed." But of course, as an employee, I still am not told the things most important to me...because management doesn't know what kind of disloyalty I might show if I am told in advance. The result is that the communication question in Gallup stubbornly scores bad...and management doesn't understand why, even though they continue to not tell bad news affecting the employees (which some people would call outright lying).

    This dashboard is out of reach of the management, so the fix in this case is simple: cross-team, department, division, and organization comparisons. "If you score low, we look bad." Hint, hint...get it? We are now making our TEAM LOOK BAD if we don't score high (so mark those scores high, flog, flog).

    So how is this like the other? Dashboard...unfixable problem...political/negotiated solution. The Oracle Effect.

  • dhasenan (unregistered)

    The third thing an expert system must do is allow humans to override its output.

  • Kathy (unregistered) in reply to Ex-lurker

    Or as I say: "if you can't convince them, confuse them".

  • Jeff Dege (unregistered)

    Phrenology?

    Maybe we should give retrophrenology a try - it's based on the premise that by changing the bumps on someone's skull, you can change his personality.

    Think "Maxwell's Silver Hammer"

  • (nodebb)

    Good article. By the way, it applies much more widely, for example to infrastructure planning (e.g. roads). The question of the role of expert consultant reports and decision-support systems was covered multiple times in my town & country planning course at university.

  • (nodebb) in reply to Derf Skren

    "Not to mention if this was 15 years ago there was not such thing as idle clock/power reduction anyway."

    I'll expand on what was said earlier about this piece of nonsense.

    15 years ago would be 2001. I had been running the SETI client on my machine at home for a while, and the UPS monitoring software showed the power consumed by the load (i.e. by the PC). The difference between the power consumption levels when SETI was working and when it was waiting for a work unit? 16 watts. (AMD Duron 950.)

  • Mr. AHoleDBA (unregistered) in reply to Steve_The_Cynic

    AMD Durons, though, were extremely well designed for power management, if I remember right. I recall overclocking my 750 MHz to 1 GHz with just a good heatsink and upping the voltage, but it had room for a lot more. The 750 MHz CPU's price: $35. The 750 MHz P3? $180.

    The look on my face when I turned off my computer before it could even POST, after turning it on without the heatsink attached, looking at a fried $35 CPU: priceless.

  • (nodebb) in reply to dkf

    I wish I could. But our software supplier only supports Oracle, so we need Oracle. And it's no different for competitors.

  • eric bloedow (unregistered)

    Reminds me of a scene in one of Piers Anthony's Xanth novels: a young woman is lowered into a pit full of odd fumes, then brought out. She would then babble completely meaningless nonsense for several minutes, and the "priests" would PRETEND to "interpret" that nonsense into a "prophecy"!

  • eric bloedow (unregistered)

    Oh, another quote comes to mind; I think it was in the novel "Bug Park": "any legal case that makes it to the Highest Court MUST be completely impossible to solve by logic or reason, so they might as well flip a coin. The legal system's obsession with precedents is simply to cover up the fact that those decisions were completely arbitrary."
