• Hanzito (unregistered)

    You're right, of course. Only the most starry-eyed techno bro could disagree.

    And to those who reply, "that's what they said about the steam engine/the automobile/etc.": the negative effects of the introduction of all kinds of technology were real. The naysayers absolutely had a point. But economic growth outweighed those effects -- growth driven by population expansion, not by stock market indices. And that growth is over.

    Some of that tech also had serious environmental issues. Living next to any factory isn't healthy, but right after the introduction of the steam engine, it was hell. It took a long time to overcome those effects.

    And the steam engine and the automobile were introduced relatively slowly. AI tech, on the other hand, is ubiquitous. There is no time to adapt to its use. You may laugh at students cheating, but remember that those students may become engineers and surgeons, and your life may depend on them.

    So yeah, don't cooperate with introducing AI everywhere, and make your kids aware of the harm to self and others.

  • Hans (unregistered)

    "wherever AI must be used for the time being, ensure that one or more humans review the results." - very true. Do not treat everything the computer says as gospel, but just as a pre-filter.

  • Jay (unregistered)

    Couldn't agree more. My organization is pushing it hard, even suggesting we use it to write our yearly goals. I find I have to turn Copilot off in VSCode because the suggestions are horrible. The world needs to take a step back, look at what AI could actually be useful for, and focus on that.

  • (nodebb)

    First of all, I wish we didn't use the term "AI" for things like large language models. They are no more AI than Eliza was. Unfortunately, I think the ship has sailed, so I will use it in this comment to avoid confusion.

    I think the biggest problem with AI is that the people who are most likely to use it in any context are those who cannot distinguish between a convincing answer and a correct answer. For example, the code completion in Apple's Xcode is pretty extraordinary now. It will often suggest whole functions as a code completion. The suggestions fall into three categories: those that are correct and take my breath away (how did it divine my intent?); those that are completely wrong, as if the "AI" misunderstood my intent; and those that are nearly right but contain subtle errors, like an off by zero error. The last category can be really dangerous if you are feeling a bit lazy and don't check.
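
    To illustrate that third, "nearly right" category, here's the flavor of it -- a made-up Python example for brevity, not an actual Xcode suggestion:

        # Hypothetical "nearly right" completion: it looks plausible and runs,
        # but silently drops the final window of data.
        def moving_average(xs, window):
            # Subtle bug: the range stops one window early; the correct
            # bound is len(xs) - window + 1.
            return [sum(xs[i:i + window]) / window
                    for i in range(len(xs) - window)]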

    In the past I've read CVs from people applying to my company, and I know it can often be quite a tedious and difficult task -- more so for an HR person, because it's unlikely they will understand the technical jargon. The temptation to use AI to cut down the workload must be irresistible. It's even more problematic because you can't really check whether the AI is throwing away the wrong CVs. Filtering job applications has always been more of an art than a science.

    The only thing to do, I think, is to wait for the fad to blow over. Management types can understand "this AI lets you shed x salaries", and the promised incentives make it difficult for them to take the "it's producing crap results" talk from the domain experts seriously. There will have to be some high-profile failures -- not of companies supplying AI, but of companies that use it. They will happen eventually, and when they do, the fad will be over.

  • sokobahn (unregistered)

    "only imagine how many more young people have been harmed" -- well, millions! maybe billions!! maybe even millions of billions!!!

    Behold the oldest clickbait of all repressive governments since the Ancient Greeks: children and youth must be protected from The Evil! And protect they do, while the public is also being "protected" from dissenting voices that harm the government narrative. Cheap trick, eh?

  • Robin (unregistered)

    I 100% agree with this (with the same caution as a previous poster that AI and LLM don't actually mean the same thing, despite what the insane hype wants you to think), and was wondering if I was just a miserable old fart (well, I am a few years the wrong side of 40!). I'm a bit surprised the bubble hasn't burst yet, but it is surely only a matter of time.

    Meanwhile, as a software developer, I'm assured that either my job is about to disappear altogether, or that at least if I don't use "AI tools" for everything then I'm a laughable relic who will never get another job - despite not seeing any use for it other than as a marginally-more-efficient-but-frequently-unreliable autocomplete. Thankfully, despite some frustrations I am currently fairly securely employed (at a large-ish software company - no, not one of the behemoths mentioned in the article - that ironically is currently pushing "AI" features in their products), but it frustrates me nonetheless.

  • (nodebb)

    Remember that FTX Super Bowl commercial with Larry David saying how it's never going to take off? Turns out Larry was right about cryptocurrency. It's going to be the same thing with AI. Don't buy into the hype.

  • (nodebb)

    AI is one of those subjects that seem to have only extremes of views, and very little moderate middle. Either it's completely going to reshape the world, put millions of people out of work, and/or destroy humanity, or it's a blip on the roadmap and won't do anything for anyone.

    Whenever I read a slavering, overhyped AI piece, I always scroll down to the byline to see who wrote it, and in essentially every single case, the person writing it owns a company building or consulting around AI. I think in the last six months I've read exactly one article about AI that seemed to have any kind of brain behind it -- one that explored the things AI was and was not good at, and how you could thread the needle to get actual productivity out of it.

    LLMs are a tool, and tools are good at some things and not others. The sooner the world realizes that, the better.

  • (nodebb)

    ChatGPT is useful to me and I'm paying for the Plus plan. But by no means is it ready to be trusted blindly; I agree the hype is cringeworthy; the energy usage is ridiculous. I've heard a number of stories of lawyers getting in trouble for submitting AI-generated briefs without reviewing them -- as in, the judge "yells" at them for filing trash documents. (Somebody was sanctioned, too, beyond the verbal reprimand.)

  • (nodebb)

    those that are nearly right but contain subtle errors like an off by zero error

    Wow, there is actually a real software company named "Off By Zero". Maybe I should apply for a job there -- commuting to Melbourne, Australia won't be a problem.

    Addendum 2025-05-07 09:31: (I was going to make a snide remark on the order of "being off by zero is an extremely subtle error" but finding that company distracted me.)

  • Frodo B. (unregistered)

    While I am completely on the side of "AI is overhyped and the environmental impact does not justify what little use we get out of it", I feel like this article is very disingenuous in its claim that there is nothing good coming from it:

    • Whatever you might think about the quality of images generated by AI, if you just want to whip up a cool little image, it can be used for that. Need a random NPC in your D&D campaign? With a little bit of prompting you can generate a decent-looking image of a random elf woman. Yes, you could also request one from an artist, but no one is gonna pay a few dozen bucks for some throwaway NPC that shows up in just one campaign.
    • Using Google is great when you roughly know what you're looking for and what words to use, but ChatGPT and the like can be a bit more useful if you can only vaguely describe your problem and want it to propose a solution. This extends to suggesting software libraries. Sure, you still need to do a bit of legwork yourself (or at the very least you SHOULD), but as a starting point it can be very decent.
    • Chat bots (text or voice) can be used to pre-sort customers into the kind of support they need. Again, not perfect and probably doable without LLMs, but it IS a use case that companies are going for, because it makes things easy and customers are somewhat more accepting of it than of a completely robotic voice.

    This is a bubble, sure, but it is far less of a "solution looking for its problem" situation than blockchain. There IS genuine use you can get out of both the chat bots and the music/video/image generation. Not enough to move the needle away from "Guys, maybe we should stop burning up the whole biosphere for this crap", but it's not "all bad and only the cryptobros like it".

  • LionKor (unregistered)

    Lest we think the problem is contained to OpenAPI or LLMs

    Typo: not OpenAPI; the author likely means OpenAI.

  • (nodebb)

    Being retired from programming, but still doing it every day (for art), I am glad I don't have to deal with looking for jobs or interviewing candidates. I find AI mostly unimpressive, except for the novelty factor ("ooh look, cool art!" -- too bad it's commercially useless) and Xcode's AI, which is randomly useful and often insanely stupid. I do believe that eventually there will be true AI that could replace people, but that is not the current AI (neural net + transformer ~ LLM). The whole "we are almost to AGI" mantra is a fantasy. Today's AI is just super scaled up because computing power is so great now, but other than the addition of transformers since 2017, the underlying system has not really changed much. Training algorithms are about the only real advancement for the neural net, and transformers, while transformative, are mostly token prediction schemes with longer and longer streams. It's as if the perceptron from November 1958 was just scaled to insane proportions. Radically new architectures that might actually make real A"I" don't exist yet. They will eventually, but I don't expect to live long enough to see one.
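
    To make the "token prediction scheme" point concrete, generation is conceptually just the loop below (a minimal sketch with a made-up model interface; real systems add sampling, batching, and caching):

        # Autoregressive generation in a nutshell: score every candidate for
        # the next token, append the winner, repeat. `model` is hypothetical.
        def generate(model, prompt_tokens, max_new_tokens):
            tokens = list(prompt_tokens)
            for _ in range(max_new_tokens):
                scores = model.next_token_scores(tokens)  # one score per vocabulary entry
                best = max(range(len(scores)), key=scores.__getitem__)
                tokens.append(best)  # greedy decoding: always take the top token
            return tokens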

  • (nodebb)

    When my wife needs to write a difficult or sensitive letter and doesn't know how to begin, she'll use ChatGPT to write the framework of the letter, then alter it to fit the facts and rewrite it in her voice. That said, neither of us considers AI to be trustworthy.

  • TechHound (unregistered)

    Since even the President of the United States is using ChatGPT to cheat on his homework

    Whether or not you like him, this has been repeatedly debunked, and it is also quite laughable considering all the accounts of how negatively biased ChatGPT has been shown to be against him. This is yet another case showing how mixing politics into a tech site just does not work, especially when you evidently cannot be bothered to do any research of your own beforehand but will still present it as fact, as in the line I quoted above.

  • (nodebb) in reply to n9ds

    Does she find that it's faster to do it that way than to just search one of those sites full of "model" letters for this or that occasion (and then modify the model letter)?

  • (nodebb)

    I almost think "the worst of AI" is that the marketers have finally found something that can support that label in the minds of the public. Michael Flynn's 1996 novel Firestar introduced the term "artificial stupid" for control systems based on some hand-waving technobabble, and that sounds like a better description of what we are dealing with.

    Or, you know, "Grand Theft Autocorrect".

    Meanwhile, much of the public thinks we are going to get Commander Data and Terminators next month.

  • PedanticRobot (unregistered)

    Would have been nice if the article included some of the cool stuff AI has been used for, like the recent breakthroughs in protein folding, but whatever.

    Personally, I've found Perplexity very helpful for rubber-ducking ideas, as well as for solving issues in languages I'm not fully familiar with. I don't use it to write code, but for things that would otherwise require finding another coder to talk to, it is absolutely good enough.

    Beyond that, I find AI even more useful. LM Studio (basically a local chat bot program) has been great for my hobby writing -- helping to flesh out ideas, finding alternative wordings -- and I've found it to be an almost perfect solution to the blank-page problem. When I don't know what I want, even a bad answer from the AI is helpful, because in explaining why I dislike what it gave me, I get an idea of what I do want.

    Locally hosted image-gen AIs have seriously upped my game as a DM, allowing me to show my group NPC headshots, setting background images, flavor images of things they find, etc. Image gen has also let me create icons for the applications I make -- and not just the good-enough icons I would have to settle for when hunting premade icons with a suitable license, but icons that look exactly the way I want.

    Then there is text-to-speech generation with BARK, MIMIC, or the one I'm currently using, PIPER, which are great for creating audiobook versions of my writing or dialog for games and roleplay. It's obviously still not perfect, but the progress over just the past couple of years has been astounding.

    I could go on, but the point is that LLMs/AI are tools and if you can't find some use for them, then that's not a problem with the tech.

    As for the moral/ethical talk, I support Free and Open Source because sharing is caring. If Free as in Beer and Free as in Freedom is a good enough principle for coders to build the modern world, then it should be good enough for artists and writers to help build the AI future.

  • Argle (unregistered)

    The timing of this article is great. My recent experiences completely back up what was said.

    jeremypnet says:

    First of all, I wish we didn't use the term "AI" for things like large language models. They are no more AI than Eliza was. Unfortunately, I think the ship has sailed, so I will use it in this comment to avoid confusion.

    A couple of years back I had a comment of the day where I pointed out that it should be more accurately called "SI" (Simulated Intelligence), in part because we don't really understand real intelligence.

    Mr. TA adds:

    ChatGPT is useful to me and I'm paying for the Plus plan. But by no means is it ready to be trusted blindly;

    I agree and have done so myself. It's great for pattern matching and things you could turn loose on entry-level coders. "Here's an enum with 40 elements. Make me a CheckBox on a form for each of them. Make the variable name be 'chk' followed by the enum member, and make the Text the correct English form with spaces in the right places." Then, behold! I get checkboxes that I just have to tweak for position. I've saved myself time.
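
    For the curious, the name-mangling half of that prompt is mechanical enough to sketch in a few lines of Python (hypothetical enum members, not my real project):

        import re

        # Hypothetical enum members like the ones I fed it.
        members = ["EnableLogging", "RetryOnFailure", "ShowAdvancedOptions"]

        for name in members:
            var_name = "chk" + name  # "chk" + enum member for the variable name
            # Space before each interior capital: "RetryOnFailure" -> "Retry On Failure"
            label = re.sub(r"(?<!^)(?=[A-Z])", " ", name)
            print(var_name, "->", label)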

    Sometimes it gives me ideas I would have found more slowly. On a personal project, I was looking for a better way to do some graphics and it suggested Skia. I had never heard of it, and it seemed like a fine idea. However, when it got to the details, it started hallucinating and suggested NuGet packages that didn't exist. Oh, but it sure seems certain of itself! I remarked to it, "Sometimes you seem like an intern whose work I must constantly check and correct." Its only response was to produce a nominally corrected version of its last suggestion.

    dorner observes:

    Either it's completely going to reshape the world, put millions of people out of work, and/or destroy humanity...

    My 87-year-old father told me this morning on my drive to work that AI frightens him. I spelled out the many failures I have found and the nature of how they work. I said, "I'm no more afraid of AI than a mob of toddlers."

  • (nodebb) in reply to jeremypnet

    How do you distinguish between an "off by zero" error and dead accuracy? Would it help if I asked an AI?

  • (nodebb)

    I actually think that the anti-AI backlash (e.g., this very article) is starting to drown out the pro-AI messaging. I'm getting tired of being side-eyed because I use LLMs as code completion machines or to get a quick answer that Google won't give me.

  • BWill (unregistered)

    You sound like a coach builder after cars were invented. Does AI work all the time? No. Will it make people dumber? Yes, those who don't think. Does AI allow me to work faster? Yes.

  • A nunny moose (unregistered)

    AI is the latest in a line of fads loudly championed by those who used to fawn over cryptocurrencies and, subsequently, NFTs. The only problem is they managed to convince others that AI is the next big thing (tm), getting them to go all in on it.

    By far the worst offender in this AI slop arms race has to be Microsoft, which clearly thinks that shoving AI into their OS will make people love it. Why would I want AI in motherlovin' Notepad? The very Notepad that can't even correctly figure out what text encoding I'm using?

    Hopefully that bubble will burst before governments start funnelling money into it hand-over-fist.

  • NaN (unregistered)

    LLMs are heuristics, so every time you add an extra 'fact', the error margin grows. Basic math, really.

    They're pretty useful for convergent problems, and wildly unpredictable for divergent problems. For example, using one to summarize a text has a better chance* of giving the correct result than asking it to write a complex piece based on a short question.

    *chance, so you /always/ need a human to check.
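
    To spell out the "basic math", under the simplifying assumption that each generated 'fact' is independently correct with probability p (real outputs aren't truly independent):

        # The chance that an n-fact answer is entirely correct decays as p ** n.
        p = 0.95
        for n in (1, 5, 10, 20):
            print(n, round(p ** n, 3))  # 0.95, 0.774, 0.599, 0.358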

  • GW (unregistered)

    I've been looking for a job since I was laid off in November 2024. Coming from warehouse automation, an industry that really doesn't need to use AI, it is very hard to even get an application in the door. I've had 2 AI interviews where I sat at my computer and talked to it. I was impressed with the responses the AI gave to my answers, until I realized it was just picking out action words and reflecting them back as positive feedback. I wonder what it would have come back with if I had cited something criminal as an accomplishment.

  • Wayne (unregistered)

    Semi-retired IT guy here; I am now a librarian at a university. I'm not sure how much you can call it cheating for students to be using LLMs at universities when emails are going out calling for it to be incorporated into the curriculum. There's a big fear about schools being 'left behind'. I was perusing our catalog database structures and, interestingly, found several fields in various tables for AI-related data. They're unpopulated for now; I'm curious what will be filling them in the future -- if anything. Room for future feature sets.

  • (nodebb) in reply to Hans

    Do not treat everything the computer says as gospel, but just as a pre-filter.

    You are aware that thirty, even forty, years ago already, people treated whatever results their computers produced as The Truth™ with no appeal possible, right?

  • COBOL Dilettante (unregistered) in reply to Ross_Presser

    I'm definitely finding that ChatGPT gives me better answers than Google, but I'm not sure if that's because ChatGPT is amazing, or because Google has gone crap.

    If I search for something on Google and my search terms are even tangentially related to something someone might want to sell me, the first 100 results are SEO-baiting slop, and then the Gemini AI summary is a distillation of that slop. ChatGPT, for the moment, seems to have less of a vested interest in selling me things.

    In a happy world, ChatGPT forces Google to up its game. In a miserable world, OpenAI realises it's losing money and starts offering sponsored targeted training data.

  • sudgy (unregistered)

    I wholeheartedly agree that most uses of AI are utter trash, but I think there are a few cases where it's useful. The one case I've used personally is removing vocals from songs. And this highlights what is the important point to me: AI should only be used in situations where you already know exactly what you want out of it, it's just that you don't know how to get that yourself.

  • Duke of New York (unregistered) in reply to BWill

    "Another thing was transformative" is the refuge of tech fad scoundrels.

  • (nodebb)

    Well, it's not like it's revolutionary new tech. It's actually 80s tech. When I learned about it in school, it was already considered a dead end because it did not scale well enough to ever reach or exceed human intelligence (spoiler: that was the definition of AI before US universities rebranded it when funding dried up due to lack of progress). So yeah, "generative AI" will fail -- in fact there has been no real progress at all, and all the money spent so far has gone into post-processing to make the results look smarter.

    However, there is obviously a risk in the damage done. It is similar to how, for example, autocorrect tools have made people's spelling worse (studies to that effect were actually quoted in Microsoft's own research paper on AI's effects on society) -- so there is a chance that before it crashes, it will leave some damage to society.

    That said, generative AI is super well placed to replace jobs that are not required to be 100% accurate, are generally not based on real, sensitive data, and produce output that nobody really cares about and that ends up somewhere in a drawer. Yes, that's right: it's a perfect tool to replace middle managers. So if you really want to kill AI, just inform them what it can do and how to use it, and suddenly the most excited managers will get super, super indecisive overnight about whether it's actually a good idea to get the shiny Microsoft "Data Analyst" powered by AI :-)

  • Bart (unregistered)

    This battle was lost years ago already.
    No one was interested, and we left it all to big tech.

  • X User (unregistered)

    @Grok, summarize this for me.

    lol /s

  • (nodebb)

    The only time ChatGPT was useful to me was when it helped me reword one of the bullet points on my resume while I was job hunting. It was poorly worded and I just couldn't come up with a better phrasing. ChatGPT did, but only for that one line.

    The rest of the output was a horrendous pile of hallucinations. I just cribbed the one point and ignored the rest.

    Don't let AI work on your original document.

  • xtal256 (unregistered)

    "I simply haven't found a single positive generative AI use-case that justifies all the harm taking place"

    https://en.wikipedia.org/wiki/AlphaFold

  • (nodebb) in reply to xtal256

    Ha, you don't want to go down the AlphaFold rabbit hole. You should read the recent studies that are way more critical, after scientists found out that its predictions are more often wrong than right. In other words, a coin toss gives better results. And that has only improved in some areas with the latest version, and it is all thanks to selective post-processing.

  • Duke of New York (unregistered) in reply to COBOL Dilettante

    The default results page from Google today is a combination of several modules. Looking at it now: two web results, videos, a shopping link, some images, short videos, some social site links. Google is designing its results page to be "fun" in a vacuous marketing sense, not to find answers.

    If you send a query directly to Google web search, while that is still possible, you get results that are more answer-oriented. Yes, Google has gotten worse, and it's not a secret to anyone, and Sundar dgaf.

  • (nodebb) in reply to MaxiTB

    Technically, predictions that are more wrong than right are still more useful than a coin toss, because you can listen to the prediction and then do the opposite of what it says.

  • (nodebb) in reply to Balor64

    No, they are actually not. Because you don't only have false negatives; you also have false positives with these untestable algorithms. So you can't trust the result at all and have to verify everything. And that's pretty much the major criticism coming from the scientists in the field, especially when tons of time is wasted yet again because a previously published scientific paper is put into question due to some "AI"'s random results (and yes, I refuse to use the rebranded term without quotation marks, because every algorithm with at least two execution branches is now "AI").

  • Tapio Peltonen (unregistered)

    Thank you thank you thank you.

    I hate the AI hype that's being pushed absolutely everywhere. Why would I consume something no one bothered to create?

  • Robin (unregistered) in reply to Argle

    "I'm no more afraid of AI than a mob of toddlers."

    I agree with your general sentiment, but my reaction to this (fair imo) comparison is likely the opposite of what you intended.

    I'm pretty sure that everyone here who has been a parent of toddlers, or worked with them, can think of reasons to be at least badly perturbed by, if not actually scared of, a whole "mob" of them.

    Now imagine such a mob put in charge of some critical safety system somewhere. Including heavy-duty weaponry - while obviously I don't know, I think you're being naive if you think LLM-based "AI" isn't being heavily used in various military applications already. Hopefully not yet actually in charge of nukes, but would you bet against it?

    Not to mention all kinds of other safety-critical systems (air traffic, medical applications...) where there is already a sordid enough history of buggy software costing countless lives. And more and more of this is in time going to heavily involve systems with the "reasoning" ability of, well, well below a human of any age, yet able to sound completely confident of all its actions...

    Not sure I really want to think about all the implications here, I'd far rather be happily ignorant...

  • (nodebb) in reply to MaxiTB

    That just means those LLMs aren't different enough from a coin toss to be useful. In principle, if you had one that was wrong 99.999% of the time, you could do the opposite of what it said, and you'd be doing the right thing 99.999% of the time.
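
    In code, the point looks like this (a toy Python sketch, nothing to do with actual protein-folding models):

        import random

        # A binary predictor that is almost always wrong becomes almost always
        # right once you negate its output. The catch: this only works when
        # there are exactly two possible outcomes.
        def mostly_wrong(truth):
            # Hypothetical predictor, wrong 99.999% of the time.
            return truth if random.random() < 0.00001 else not truth

        trials = 100_000
        hits = sum((not mostly_wrong(t)) == t  # invert the prediction
                   for t in (random.random() < 0.5 for _ in range(trials)))
        print(hits / trials)  # ~0.99999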

  • (nodebb) in reply to Balor64

    Well, they have their uses:

    https://greekcitytimes.com/2025/04/26/greek-woman-files-for-divorce-after-chatgpt-reads-husbands-affair-in-coffee-cup/

  • (nodebb) in reply to MaxiTB

    You should read the recent studies that are way more critical, after scientists found out that its predictions are more often wrong than right. In other words, a coin toss gives better results.

    There are more than two possible candidates for how a protein could be folded, so being wrong more than right can definitely still be useful. That being said, do you have a link to a credible study?

  • (nodebb) in reply to Robin

    I'm pretty sure that everyone here who has been a parent of toddlers, or worked with them, can think of reasons to be at least badly perturbed by, if not actually scared of, a whole "mob" of them.

    Indeed; my first thought when I read that was actually of Arthur C. Clarke's short story "Dial F for Frankenstein".

  • Neveranull (unregistered)

    My personal experience is that the Google AI summaries of search results are often the opposite of what is true and cannot be trusted at all. The Amazon summaries of customer reviews are pretty accurate and easy to verify. I tried and tried to get Microsoft's Edge AI to draw a picture of a man with three arms, but it always drew six arms no matter how many different ways I worded my request, even when I insisted that I wanted exactly three arms.

  • (nodebb)

    I don't really see the parallel to the dot-com era just yet. We'd have to have millions of "prompt programmers" in offices all over the world for there to be that kind of "double whammy" when the company keels over and all those "prompt programmers" go from highly paid to needing help.

    Having studied CS, AI, and psychology, it always cracks me up when people talk about superintelligence being around the corner. So what, we're supposed to believe you can create something much better than the human brain when WE DON'T KNOW how the human brain works? Yeah, maybe in another 70 years or so. (Yeah, yeah, I know, this talk of superintelligence is for the VCs...)

    That being said, if you listen closely to people warning about AI, you'll realize they aren't really warning about super-intelligent AI but super dumb AI (trying to turn everything into paperclips or what have you). The big problem with AI right now is that we risk using it in the wrong way, putting the wrong kind of power or responsibilities in its hands and causing havoc by doing so (ahem, social media, US justice system, etc.)

    The fact that OpenAI is cooperating with the DoD on nukes -- that would keep me awake at night. But I'm still naive enough to think that generals will piss on spark plugs before any AI is allowed to launch anything nuclear...

    I'd say, think of AI as text generators, picture generators, music generators...

    It is not intelligent. It is not conscious. It learns, but that's just complex statistics. You cannot claim it's consuming anything the same way a human does. That's just utter BS and an attempt to avoid having to pay for the data scraping/data theft going on.

  • (nodebb) in reply to Robin

    well, I am a few years the wrong side of 40!

    So you're still in your late 30s, right?
