Admin
secnod
Admin
Perhaps Jimbo trusts Argle to produce solid code and takes it as a given that the code submitted will be perfectly fine.
Me? No, I wouldn't say I was naïve...
Admin
With most of the code in my organisation being spat out by LLMs now, my code review time has massively increased. When I use an LLM for code, as per the company mandate, I do a thorough review before I let it push or create a PR (except when that instruction is ignored and it does it anyway), which usually means the whole process takes longer than it would have done to write the code by hand. But who am I to question my instructions on how to waste my time at work?
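For anyone in the same boat: one way to stop an agent pushing behind your back is a pre-push hook that refuses the push until a human has signed off. A minimal sketch, assuming a plain git workflow; the .reviewed marker-file convention is my own invention, not any real tool's:

```python
#!/usr/bin/env python3
# Minimal sketch of a .git/hooks/pre-push guardrail: block the push unless
# a review marker file has been touched since the last commit.
# The .reviewed convention is illustrative, not any real tool's.
import os
import subprocess
import sys

MARKER = ".reviewed"

def last_commit_time() -> int:
    # Unix timestamp of the HEAD commit.
    out = subprocess.run(
        ["git", "log", "-1", "--format=%ct"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

def main() -> int:
    if not os.path.exists(MARKER) or os.path.getmtime(MARKER) < last_commit_time():
        sys.stderr.write(
            "pre-push: no review sign-off since the last commit; "
            f"review the changes, then `touch {MARKER}` to allow the push.\n"
        )
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```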
Admin
😱
Admin
Well, if you're trying to develop things well, so as to promote the company's interests, you absolutely should question instructions that require you to waste time producing stuff by "AI" and then spend more time cleaning it up than you would have spent writing the code correctly yourself in the first place.
And you should be strongly insulted that the company thinks you're incompetent(1) and, in particular, more incompetent than an "AI" that is well-known to be more incompetent than the depths of incompetence exhibited on this august journal.
(1) They insist that you are, by implication, not competent enough to do better than a known incompetent.
Admin
Addendum to the above: if my company insisted that I use an "AI" to produce the code, I'd completely ignore the insistence, and the request, and develop it myself anyway.
Admin
I've found very specific tasks that I can give AI to do (mainly writing tests and documentation) while I'm off doing actual important things. Any time I give in to the temptation to have AI do something fairly important that I can do but don't want to do, it ends up just bad enough that I definitely should have bitten the bullet and done it myself.
Admin
Does the company mandate refer to using AI for coding or doing a thorough review of it?
Admin
If my company insisted that I use an "AI" to produce the code, I'd write it myself first, then use the AI, then check the output to see if there's any part where it did a better job than I did. It could happen; I ain't perfect.
Admin
I'm always flattered to see one of my stories turn up on this site. Since AI code generation is being discussed as a consequence of it, I have to put in my $0.02.
You people relying on vibe-coding, ARE YOU FREAKING INSANE?
Calm down, Argle.
I'm working on a new book. I've published a handful of puzzle books on Amazon, and I'm currently tackling a book of cryptic crossword puzzles. I've been using an AI to validate the clues. Currently it handles most anagrams properly, but it's got a total blind spot when it comes to spotting words hidden within other words. But the damnedest thing is that, on occasion, AIs can't count. Too many times I've been told things like "ratio" is a 6-letter word. And people want to let AIs write mission-critical code? Good luck with that.
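For what it's worth, the checks I actually need are deterministic, and a few lines of Python handle them without hallucinating. A minimal sketch, with function names that are my own illustration rather than any real tool's:

```python
# Minimal sketch of deterministic cryptic-crossword checks: anagrams,
# hidden words, and letter counts. Function names are illustrative.

def letters(s: str) -> str:
    """Keep only the letters, lowercased, so phrases compare cleanly."""
    return "".join(ch.lower() for ch in s if ch.isalpha())

def is_anagram(fodder: str, answer: str) -> bool:
    """True if the fodder's letters rearrange exactly into the answer."""
    return sorted(letters(fodder)) == sorted(letters(answer))

def is_hidden_word(phrase: str, answer: str) -> bool:
    """True if the answer sits as a consecutive run of letters in the phrase."""
    return letters(answer) in letters(phrase)

def letter_count(word: str) -> int:
    """Count letters only; len() can actually count."""
    return len(letters(word))

assert is_anagram("silent", "listen")
assert is_hidden_word("evil era", "vile")   # "vile" hides in "eVIL Era"
assert letter_count("ratio") == 5           # five letters, not six
```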
Admin
Well really. If you write all the code yourself, how are Sales supposed to market your product as "AI-powered"?
Admin
Going back to the article's point: it's my opinion that people don't like to read "walls of text" in a code review. They're more likely to nitpick the actual code and ignore a huge comment section.
I'm going to assume the document was a markdown file added to the project.
And honestly, code documentation is best done in the form of a design doc in a Google Doc that can be shared, commented on collaboratively, archived, and amended.
In both cases we all know the doc will never get updated when the code actually gets modified, and thus will drift out of sync as time goes on.
Admin
TBF the article said that Argle wrote pages of documentation, not specifically comments. Detailed documentation for API consumers is fine and the world needs more of it. Block comments explaining code that isn't sufficiently clear on its own, not so.
Admin
If someone tells me to jerk off a chatbot or be fired, I have no financial dependents. Go ahead. Make my day.
Admin
One of the events that led me to leave the IT business was the occasion where I had been given 2 hours to do a code review of a complete (offshored) module of work. It was still shoddy (the points raised previously had not been addressed, for a start) and did not pass review. I was bluntly told no, that was unacceptable, and that I would be changing my assessment to "passed", as the code was to be shipped that evening.
Admin
At a previous client I learned that the probability of a code review being accepted is proportional to the number of lines changed.
If you change one or two lines, chances are that you end up in an endless discussion about whether it is the proper approach. Change the whole program, add two extra modules, and rename all the variables, and your code review is accepted before you can blink twice.
Admin
This sounds like the nuclear-plant side of Parkinson's law of triviality: the reactor gets waved through while the bike shed gets debated. I would greatly favour the two-line change and pass it. The other review would get a whole avalanche of questions like "why new modules?".
The low-change code is most likely a sign of good modularization and separation of concerns. The "rewrite the whole thing" happens in pasta projects (spaghetti or ravioli, depending on the language).
Of course, the assumption here is that the problem was really understood and correctly fixed. A pretty strong assumption in my experience.
Admin
Depends how high up the management chain you ask. People who understand that typing code is a minimal part of a developer's time definitely want well-tested, well-reviewed output. Senior management, though, have been fully sold on the hype and want to see the metrics for lines committed, AI usage, and bug fixes shipped all go up. That last one works out well, since we have a lot more bugs to fix since going "AI" first, so the number is very high. One small issue can result in a series of 3 or 4 bugs, as the last fix turns out not to have considered some edge case, or to have been put in a place that's far too generic.