Admin
You're right, of course. Only the most starry-eyed techno bro could disagree.
And to those who reply, "that's what they said about the steam engine/the automobile/etc.": the negative effects of the introduction of all kinds of technology were real. The naysayers absolutely had a point. But economic growth outweighed those effects, and that growth was driven by population expansion, not by stock market indices. That growth is over.
Some of that tech also had serious environmental issues. Living next to any factory isn't healthy, but right after the introduction of the steam engine, it was hell. It took a long time to overcome those effects.
And the steam engine and the automobile were introduced relatively slowly. AI tech, on the other hand, is ubiquitous. There is no time to adapt to its use. You may laugh at students cheating, but remember that those students may become engineers and surgeons, and your life may depend on them.
So yeah, don't cooperate with introducing AI everywhere, and make your kids aware of the harm to self and others.
Admin
"wherever AI must be used for the time being, ensure that one or more humans review the results." - very true. Do not treat everything the computer says as gospel, but just as a pre-filter.
Admin
Couldn't agree more. My organization is pushing it hard, even suggesting we use it to write our yearly goals. I find I have to turn Copilot off in VSCode because the suggestions are horrible. The world needs to take a step back, look at what AI could actually be useful for, and focus on that.
Edit Admin
First of all, I wish we didn't use the term "AI" for things like large language models. They are no more AI than Eliza was. Unfortunately, I think the ship has sailed, so I will use it in this comment to avoid confusion.
I think the biggest problem with AI is that the people who are most likely to use it in any context are those who cannot distinguish between a convincing answer and a correct answer. For example, the code completion in Apple's Xcode is pretty extraordinary now. It will often suggest whole functions as a code completion. The suggestions can be divided into three categories: those that are correct and take my breath away (how did it divine my intent?); those that are completely wrong, as if the "AI" misunderstood my intent; and those that are nearly right but contain subtle errors, like an off by zero error. The last category can be really dangerous if you are feeling a bit lazy and don't check.
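That third, "nearly right" category can be illustrated with a hypothetical example (the function and scenario are invented, not from the comment): a plausible-looking completion that silently drops the last element.

```python
# Hypothetical "nearly right" AI completion: summing the first n elements
# of a list. The buggy version stops one element early -- exactly the
# kind of subtle error that's dangerous if you don't review it.

def sum_first_n_buggy(values, n):
    # Looks plausible, but range(n - 1) misses the last element.
    return sum(values[i] for i in range(n - 1))

def sum_first_n(values, n):
    return sum(values[i] for i in range(n))

print(sum_first_n_buggy([1, 2, 3, 4], 4))  # 6, not 10
print(sum_first_n([1, 2, 3, 4], 4))        # 10
```

Both versions compile, run, and look reasonable at a glance, which is why this category is the one that bites the lazy reviewer.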
I've read a few CVs of people applying to my company in the past, and I know it can often be quite a tedious and difficult task, more so for an HR person, because it's unlikely they will understand the technical jargon. The temptation to use AI to cut down the workload must be irresistible. It's even more problematic because you can't really check whether the AI is throwing away the wrong CVs. Filtering job applications has always been more of an art than a science.
The only thing to do, I think, is to wait for the fad to blow over. Management types can understand "this AI lets you shed x salaries", and the promised incentives make it difficult for them to take the "it's producing crap results" talk from the domain experts seriously. There have got to be some high-profile failures, not of companies supplying AI but of companies that use it. They will happen eventually, and when they do, the fad will be over.
Admin
"only imagine how many more young people have been harmed" -- well, millions! maybe billions!! maybe even millions of billions!!!
Behold the oldest clickbait of all repressive governments since the Ancient Greeks: children and youth must be protected from The Evil! And protect they do, even as the public is also being "protected" from dissenting voices that harm the government narrative. Cheap trick, eh?
Admin
I 100% agree with this (with the same caution as a previous poster that AI and LLM don't actually mean the same thing, despite what the insane hype wants you to think), and was wondering if I was just a miserable old fart (well, I am a few years the wrong side of 40!). I'm a bit surprised the bubble hasn't burst yet, but it is surely only a matter of time.
Meanwhile, as a software developer, I'm assured that either my job is about to disappear altogether, or that at least if I don't use "AI tools" for everything then I'm a laughable relic who will never get another job - despite not seeing any use for it other than as a marginally-more-efficient-but-frequently-unreliable autocomplete. Thankfully, despite some frustrations I am currently fairly securely employed (at a large-ish software company - no, not one of the behemoths mentioned in the article - that ironically is currently pushing "AI" features in their products), but it frustrates me nonetheless.
Edit Admin
Remember that FTX Super Bowl commercial with Larry David saying how it's never going to take off? Turns out Larry was right about cryptocurrency. It's going to be the same thing with AI. Don't buy into the hype.
Edit Admin
AI is one of those subjects that seem to have only extremes of views, and very little moderate middle. Either it's completely going to reshape the world, put millions of people out of work, and/or destroy humanity, or it's a blip on the roadmap and won't do anything for anyone.
Whenever I read a slavering, overhyped AI piece, I always scroll down to the byline to see who wrote it, and in essentially every single case, the person writing it owns a company building or consulting around AI. I think in the last six months I've read exactly one article about AI that seemed like it had any kind of brain, where it explored the things that AI was and was not good at, and how you could thread the needle to get actual productivity out of it.
LLMs are a tool, and tools are good at some things and not others. The sooner the world realizes that, the better.
Edit Admin
ChatGPT is useful to me and I'm paying for the Plus plan. But by no means is it ready to be trusted blindly; I agree the hype is cringeworthy; the energy usage is ridiculous. I've heard a number of stories of lawyers getting in trouble for submitting AI-generated briefs without reviewing them - as in, the judge "yells" at them for filing trash documents. (Somebody was sanctioned, too, beyond the verbal reprimand.)
Edit Admin
Wow, there is actually a real software company named "Off By Zero". Maybe I should apply for a job there --- commuting to Melbourne, Australia won't be a problem.
Addendum 2025-05-07 09:31: (I was going to make a snide remark on the order of "being off by zero is an extremely subtle error" but finding that company distracted me.)
Admin
While I am completely on the "AI is overhyped and the environmental impact does not justify what little use we get out of it" side, I feel like this article is very disingenuous in its claim that there is nothing good coming from it:
This is a bubble, sure, but this is far less of a "solution looking for its problem" situation than blockchain. There IS genuine use you can get out of both the chat bots and the music/video/image generation. Not enough to move the needle away from "Guys, maybe we should stop burning up the whole biosphere for this crap", but it's not "all bad and only the cryptobros like it".
Admin
Typo: not OpenAPI; the author likely means OpenAI.
Edit Admin
Being retired from programming, but still doing it every day (for art), I am glad I don't have to deal with looking for jobs or interviewing candidates. I find AI mostly unimpressive, except for the novelty factor (ooh look, cool art! too bad it's commercially useless) and Xcode's AI, which is randomly useful and often insanely stupid. I do believe that eventually there will be true AI that could replace people, but that is not the current AI (neural net + transformer ≈ LLM). The whole "we are almost to AGI" mantra is a fantasy.

All today's AI is is old ideas super-scaled-up because computing power is so great today; other than the addition of transformers in 2017, the underlying system has not really changed much. Training algorithms are about the only real advancement for the neural net, and transformers, while transformative, are mostly token prediction schemes with longer and longer streams. It's as if the perceptron from November 1958 was just scaled to insane proportions. Radically new architectures that might actually make real A"I" don't exist yet. They will eventually, but I don't expect to live long enough to see one.
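For readers who have never seen one: the 1958 perceptron the commenter mentions really is just a weighted sum, a hard threshold, and an error-driven update rule. A minimal sketch, trained on the AND function (a classic linearly separable toy problem):

```python
# Minimal Rosenblatt-style perceptron (1958): a weighted sum, a hard
# threshold, and an error-driven weight update. Trained here on AND.

def predict(weights, bias, x):
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if activation > 0 else 0

def train(samples, epochs=20):
    weights, bias = [0, 0], 0
    for _ in range(epochs):
        for x, target in samples:
            # Error is -1, 0, or +1; weights move toward the target.
            error = target - predict(weights, bias, x)
            weights = [w + error * xi for w, xi in zip(weights, x)]
            bias += error
    return weights, bias

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(AND)
print([predict(weights, bias, x) for x, _ in AND])  # [0, 0, 0, 1]
```

Scaling this idea up by many orders of magnitude, plus transformers for the token prediction, is roughly the lineage the commenter is describing.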
Edit Admin
When my wife needs to write a difficult or sensitive letter and doesn't know how to begin, she'll use ChatGPT to write the framework of the letter and then alter that to fit the facts and rewrite it in her voice. That said, neither of us consider AI to be trustworthy.
Admin
Whether or not you like him, this has been repeatedly debunked, and it is also quite laughable considering all the accounts of how negatively biased ChatGPT has been shown to be against him. This is yet another case showing how mixing politics into a tech site just does not work, especially when you evidently cannot do any research of your own beforehand but will still write it as if it were fact, as I quoted above.
Edit Admin
Does she find that it's faster to do it that way than to just search on one of those sites full of "model" letters for this or that thing (and then modify the model letter)?
Edit Admin
I almost think "the worst of AI" is that the marketers have finally found something that can support that label in the minds of the public. Michael Flynn's 1996 novel Firestar introduced the term "artificial stupid" for control systems based on some hand-waving technobabble, and that sounds like a better description of what we are dealing with.
Or, you know, "Grand Theft Autocorrect".
Meanwhile, much of the public thinks we are going to get Commander Data and Terminators next month.
Admin
Would have been nice if the article included some of the cool stuff AI has been used for, like the recent breakthroughs in protein folding, but whatever.
Personally, I've found Perplexity very helpful in rubber ducking ideas, as well as solving issues in languages I'm not fully familiar with. I don't use it to write code, but for things that would require finding another coder to talk to, they are absolutely good enough.
More widely I find AI even more useful. LM Studio (basically a local chat bot program) has been great for helping with my hobby writing, helping to flesh out ideas, finding alternative wordings, and I've found it to be an almost perfect solution to the blank page problem. When I don't know what I want, even a bad answer from the AI is helpful because in explaining why I dislike what it gave me I can get an idea of what I do want.
Locally hosted image gen AIs have seriously upped my game as far as being a DM, allowing me to show my group NPC headshots, setting background images, flavor images of things they find, etc. Image gen has also allowed me to create icons for the applications I make - and not just good-enough icons like I would have to settle for when hunting for premade icons with a suitable license, but actual icons that look exactly the way I want.
Then there is text to speech generation with BARK, MIMIC, or the current one I'm using, PIPER, which are great for creating audio book versions of my writing or dialog for games and roleplay. It's obviously still not perfect, but the progress over just the past couple years has been astounding.
I could go on, but the point is that LLMs/AI are tools and if you can't find some use for them, then that's not a problem with the tech.
As for the moral/ethical talk, I support Free and Open Source because sharing is caring. If Free as in Beer and Free as in Freedom is a good enough principle for coders to build the modern world, then it should be good enough for artists and writers to help build the AI future.
Admin
The timing of this article is great. My experiences lately back what was said completely.
jeremypnet says:
A couple of years back I had a comment of the day where I pointed out that it should be more accurately called "SI" (Simulated Intelligence), in part because we don't really understand real intelligence.
Mr. TA adds:
I agree and have done so myself. It's great for pattern matching and things you can turn loose on entry-level coders. "Here's an enum with 40 elements. Make me a CheckBox on a form for each of them. Make the variable name be "chk" followed by the enum and make the Text the correct English form with spaces in the right place." Then, behold! I get checkboxes that I just have to tweak for position. I've saved myself time.
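The enum-to-checkbox chore described above can be sketched hypothetically (in Python rather than the commenter's actual form code; the function name and sample members are invented): derive a "chk"-prefixed control name and a spaced English label from each CamelCase enum member.

```python
import re

# Hypothetical sketch of the boilerplate described: for each enum
# member, derive a control name ("chk" + member) and a human-readable
# label by splitting the CamelCase name into words.

def checkbox_specs(members):
    specs = []
    for member in members:
        words = re.findall(r"[A-Z][a-z0-9]*|[a-z0-9]+", member)
        specs.append(("chk" + member, " ".join(words)))
    return specs

print(checkbox_specs(["EnableLogging", "RetryOnError", "Verbose"]))
# [('chkEnableLogging', 'Enable Logging'),
#  ('chkRetryOnError', 'Retry On Error'),
#  ('chkVerbose', 'Verbose')]
```

It's exactly this kind of mechanical, pattern-matching transformation - trivially checkable by eye - where the LLM suggestion genuinely saves time.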
Sometimes it gives me ideas I would have found more slowly. On a personal project, I was looking for a better way to do some graphics and it suggested Skia. I had never heard of it, and it seemed to be a fine idea. However, when it got to details, it started hallucinating and suggested NuGet packages that didn't exist. Oh, but it sure seems certain of itself! I remarked to it, "sometimes you seem like an intern whose work I must constantly check and correct." Its only response was to produce a nominally corrected version of its last suggestion.
dorner observes:
My 87-year-old father told me this morning on my drive to work that AI frightens him. I spelled out the many failures I have found and the nature of how they work. I said, "I'm no more afraid of AI than a mob of toddlers."
Edit Admin
How do you distinguish between an "off by zero" error and dead accuracy? Would it help if I asked an AI?
Edit Admin
I actually think that the anti-AI backlash (e.g., this very article) is starting to drown out the pro-AI messaging. I'm getting tired of being side-eyed because I use LLMs as code completion machines or to get a quick answer that Google won't give me.
Admin
You sound like a coach builder after cars were invented. Does AI work all the time? No. Will it make people dumber? Yes, those who don't think. Does AI allow me to work faster? Yes.
Admin
AI is the latest in a line of fads loudly championed by those who used to fawn over cryptocurrencies and subsequently NFTs. The only problem is that they managed to convince others that AI is the next big thing (tm), getting them to go all in on AI.
By far the worst offender in this AI slop arms race has to be Microsoft who clearly thinks that shoving AI into their OS will make people love it. Why would I want AI in motherlovin' notepad? The very notepad that can't even correctly figure out what text encoding I'm using?
Hopefully that bubble will burst before governments start funnelling money into it hand-over-fist.
Admin
LLMs are heuristics, so every time you add an extra 'fact', the error margin grows. Basic math, really.
It is pretty useful for convergent problems, and wildly unpredictable for divergent problems. For example, using it to make a summary of a text has a better chance* of giving the correct result than asking it to write a complex piece based on a short question.
*chance, so you /always/ need a human to check.
Admin
I've been looking for a job since I was laid off in November 2024. Coming from warehouse automation, an industry that really doesn't need to use AI, it is very hard to get an application in the door. I've had two AI interviews where I sat at my computer and talked to it. I was impressed with the responses the AI gave to my answers, until I realized that it was just picking out actions and turning them into positive feedback. I wonder what it would have come back with if I had cited something criminal as an accomplishment.
Admin
Semi-retired IT guy, I am now a librarian at a university. I'm not sure how much you can say it's cheating for students to be using LLMs at universities when emails are going out for it to be incorporated into curriculum. There's a big fear about schools being 'left behind'. I was perusing our catalog database structures, and interestingly found several fields in various tables for AI-related data. They're unpopulated for now, I'm curious what will be filling them in the future - if anything. Room for future feature sets.
Edit Admin
You are aware that thirty, even forty, years ago already, people treated whatever results their computers produced as The Truth™ with no appeal possible, right?
Admin
I'm definitely finding that ChatGPT gives me better answers than Google, but I'm not sure if that's because ChatGPT is amazing, or because Google has gone crap.
If I search for something on Google, and my search terms are even tangentially related to something someone might want to sell me, the first 100 results are SEO-baiting slop, and then the Gemini AI summary is a distillation of that slop. ChatGPT, for the moment, seems to have less of a vested interest in selling me things.
In a happy world, ChatGPT forces Google to up its game. In a miserable world, OpenAI realises it's losing money and starts offering sponsored targeted training data.
Admin
I wholeheartedly agree that most uses of AI are utter trash, but I think there are a few cases where it's useful. The one case I've used personally is removing vocals from songs. And this highlights what is the important point to me: AI should only be used in situations where you already know exactly what you want out of it, it's just that you don't know how to get that yourself.
Admin
"Another thing was transformative" is the refuge of tech fad scoundrels.
Edit Admin
Well, it's not like it's revolutionary new tech. It's actually 80s tech; when I learned about it in school, it was already considered a dead end because it did not scale well enough to ever become something that matches or exceeds human intelligence (spoiler: that was the definition of AI before US universities rebranded it when funding dried up due to lack of progress). So yeah, "generative AI" will fail - in fact there has been no progress at all, and all the money spent so far has gone into post-processing to make the results look smarter.
However, there is obviously a risk of damage being done. It is similar to how, for example, autocorrect tools have made people's spelling worse (studies to that effect were actually quoted in Microsoft's own research paper on AI's effects on society) - so there is a chance that, before it crashes, it will leave some damage to society.
That said, generative AI is perfectly placed to replace jobs that are not required to be 100% accurate, are generally not based on real, sensible data, and produce output that nobody really cares about and that ends up somewhere in a drawer. Yes, that's right: it's a perfect tool to replace middle managers. So if you really want to kill AI, just inform them what it can do and how to use it, and suddenly the most excited managers get super, super indecisive overnight about whether it's actually a good idea to get the shiny Microsoft "Data Analyst" powered by AI :-)
Admin
This battle was lost years ago.
No one was interested, and we left it all to big tech.
Admin
@Grok, summarize this for me.
lol /s
Edit Admin
The only time ChatGPT was useful was when it helped me reword one of the bullet points on my resume when I was job hunting. It was poorly worded and I just couldn't come up with a good way of wording it. ChatGPT did, but only for that one line.
The rest of the output was a horrendous amount of hallucinations. I just cribbed the one point and ignored the rest.
Don't let AI work on your original document.
Admin
"I simply haven't found a single positive generative AI use-case that justifies all the harm taking place"
https://en.wikipedia.org/wiki/AlphaFold
Edit Admin
Ha, you don't want to go down the AlphaFold rabbit hole. You should read the recent studies that are far more critical, after scientists found out that its predictions are more often wrong than right. In other words, a coin toss gives better results. And that has only improved in some areas with the latest version, and it is all thanks to selective post-processing.