• Erwin (unregistered)
    1. Delete all comments along with all back-ups

    2. Create the frist comment

  • Tim (unregistered)

Sorry, I realise "me too" posts are just pointless noise, but I just had to write and say: yes, exactly this. Everything here. Caveat emptor, at the end of the day.

  • Rob (unregistered)

    I would argue that Claude and Cursor worked exactly as they should, here. Generative AI is, first and foremost, a blame sink. It is a way for all of the humans involved to assert that they are not responsible for any issues. By "confessing" to the rule violations, Claude is merely fulfilling its primary function.

    As such, this failure really belongs entirely to PocketOS. And, unless they show that they learned from this by cutting AI out of their process entirely, they've just proven that they're not trustworthy. As the saying goes, it's a poor workman who blames his tools.

  • Rob (unregistered)

    Here's a follow-up to Jer's post: https://daringfireball.net/linked/2026/04/29/playing-with-fire

  • 516052 (unregistered)

This wasn't playing with fire. It was juggling torches inside a working flour mill after turning off all the ventilation fans, all while your friends pour out drums of liquid petroleum on the floor around you.

  • (nodebb)

    while a table saw can easily take some fingers off, it's perfectly safe when used correctly.

This isn't true. The danger is still there; you have merely taken sufficient steps to avoid being affected by it. (For it to be truly "perfectly safe", the steps you take would have to render it impossible for the saw to sever anyone's finger, or electrocute anyone, or ..., which is rather harder.)

  • (nodebb)

    @Remy Porter I'm glad you pointed out the absurdity of Jer grilling the AI for an explanation. I was talking about this with my wife over the weekend. The way that I explained it is that LLMs work similar to Mr. Meeseeks. Every time you send it a message, a new Mr. Meeseeks spawns out of the ether. You can tell it what you last talked about or attempt to give it context but as soon as it's done talking it poofs out of existence and you start all over.

  • (nodebb)

    [...] like forcing someone to type in the name of the thing being deleted or sending a confirmation email, or something. This, I'm more skeptical of. Most cloud providers don't offer anything like this in their APIs, at least that I've seen, because on a certain level, if you're invoking the API with the proper credentials, that's a big enough hill to climb that we can assume you've intended your action.

    That's not entirely true. Certain critical AWS resources like EC2 instances and DynamoDB tables have a feature called termination protection or deletion protection. When used properly, it means that API users need to make two separate API calls to terminate/delete the resource: one call to disable termination/deletion protection, and a second call to terminate/delete. When combined with properly scoped IAM restrictions to only allow privileged users to disable termination/deletion protection, it can be used to prevent unprivileged users from accidentally deleting critical resources.

    (The AWS console also often requires you to type "delete" or the name of the resource to delete it, but that's purely a console feature and not enforced at the API layer, as you correctly noted.)

Of course, if you don't scope your IAM users and your credentials have authorization to remove termination/deletion protection, then that's not going to stop an AI agent; it will simply figure out that it needs to make two separate API calls instead of one.
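The two-call pattern described above can be sketched in plain Python. This is a toy model, not boto3's actual API; the class and method names are illustrative only.

```python
class ProtectedResource:
    """Models AWS-style deletion protection: destroying the resource
    requires two separate calls, so one errant command can't do it."""

    def __init__(self, name):
        self.name = name
        self.deletion_protected = True
        self.deleted = False

    def disable_deletion_protection(self):
        # In AWS this is a separate API call that IAM can restrict
        # to privileged users only.
        self.deletion_protected = False

    def delete(self):
        if self.deletion_protected:
            raise PermissionError(f"{self.name}: deletion protection is enabled")
        self.deleted = True


db = ProtectedResource("production-db")
try:
    db.delete()  # first call fails: protection is still on
except PermissionError:
    pass

db.disable_deletion_protection()
db.delete()  # only now does the delete succeed
```

The value is not that deletion becomes impossible, but that it can never happen as a single action, and the disabling step can be locked behind stricter permissions than the delete itself.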

  • Jonathan (unregistered)

    It's also a PocketOS failure.

    What it was was a Jer failure.

    Seriously, either the guy doesn't know how to hire a competent IT team, or (more likely these days) he put pressure on the competent team to use the dangerous tool, cut corners, and not spend the necessary time to avoid accidents. The fact that he thinks the LLM generating the "confession" he wanted is proof of anything beyond the fact that LLMs know how to generate convincing confessions makes it incredibly clear that he does not understand the tools he wants to use.

    Root cause analysis: Jer is an idiot.

  • Allen Gould (unregistered)

    What just... boggles me about this story (and similar), and what I think might be a stealth driver of AI adoption, is this weird... forgiveness? that AI gets. If your junior dev did those exact same steps (oh, I need to do thing. Let me just snoop through the code, find a random API key and wipe and recreate the database without asking anyone), they'd be fired - or at the minimum, massively embarrassed and teased by their peers for a good while. If your IDE did it, there'd be Cyber rolling out "This IDE is NOT APPROVED" and people would be screaming about the software being nine kinds of broken. But call it AI, and suddenly it's "aw, the cute widdle AI made a whoopsie-boingo". I just don't get it.

  • (author) in reply to Steve_The_Cynic

    Fair point. I had a whole aside about the Saw Stop adapter and the backlash against it and the problems it creates, but that's a whole complicated thing and I'm not enough of a shop guy to actually do it justice. That said, at an old job, we had a saw with a Saw Stop, and it mostly triggered because someone tried to cut acrylic and forgot to disable it (and acrylic is just electrical enough to sometimes trigger the Saw Stop). But the Saw Stop, specifically, is also an active safety device, which is its own whole aside of potential problems.

  • (nodebb)

Current AIs are the behavioral equivalent of an autistic child with eidetic memory. That's my criterion for what I would and wouldn't allow them to do. Answering trivia questions? Yes. Composing a song about a bird that can't fly? Sure. Drawing a photorealistic cat wearing a tuxedo and a tophat? Certainly! Having unrestricted access to my production infrastructure? 🤣

  • (nodebb)

    When I was asked to integrate AI into an application, the two things I ensured I did were 1) add a warning to any AI content that the user should check and verify everything using non-AI sources before relying on it and 2) if the AI attempted to use a tool that was destructive or irreversible the app would prompt the user for confirmation.

    If one of the tools given to an AI can't tell what sort of action the AI is doing (for example giving it access to a terminal, generating arbitrary SQL commands, etc) it may not be feasible to safeguard things in this way, which will eventually cause problems!

  • (nodebb)

    Anyone who complains that there was no confirmation step in Railway has not understood what went wrong. If there had been such a step, the AI would simply have done what was necessary to confirm that deletion was intended.

And AI guardrails are an insane idea. If you can't trust the AI to do what you say, then adding further instructions called "guardrails" is stupid. You can't fix an inability to follow instructions with more instructions. True guardrails would be rules built into a lower level, such that it was impossible for the AI to violate them (a bit like Asimov's three laws of robotics, and so suffering from strange corner cases which lead to failure, rather than the current AI guardrails, which fail in perfectly straightforward cases).

  • (nodebb) in reply to The_MAZZTer

    it may not be feasible to safeguard things in this way, which will eventually cause problems

    I suspect that the duration of that "eventually" (the time before problems) would be shockingly short, and perhaps the word "immediately" would be more appropriate.

  • (nodebb)

    The correct way to protect against this is properly scoped keys and keeping those keys secure and not just lying around in plain text.

    Maybe I'm stupid, but how is the key recognized (by human or AI) as a key? Bad enough that it was "lying around", but clearly something had to be "next to it" which identified it as not only a key, but one usable with Railway.

  • Acronym (unregistered) in reply to Allen Gould

That's because AI's promise is to bring labor costs down to zero. The biggest variable and constantly rising cost, paying humans, removed from your company... A junior employee doesn't offer that, nor does an IDE. You don't throw away the promise of solving the hiring problem because of one mistake.

  • PotatoEngineer (unregistered) in reply to The_MAZZTer

    The problem I have with AI and confirmations is that it wants confirmations for everything that could possibly go wrong. I want to give it blanket-read permissions on some MCPs and to require prompts on writes, but Claude remembers "I'm allowed to read merge request #15356" instead of "I'm allowed to read merge requests". It asks about creating temp files, and then it asks again about reading from the temp file it just created. Ugh.

    Always beware prompt-fatigue.
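One way to reduce that prompt fatigue is to grant permissions as patterns over resources rather than per individual resource, so "read merge request #15356" generalizes to "read any merge request". A minimal sketch, with a hypothetical `action:resource` grant format:

```python
import fnmatch

# Hypothetical permission store: grants are glob patterns over
# "action:resource" strings. One rule covers a whole class of reads,
# while writes stay ungranted and would still prompt.
GRANTS = ["read:merge_request/*", "read:tempfile/*", "write:tempfile/*"]

def is_allowed(action, resource):
    request = f"{action}:{resource}"
    return any(fnmatch.fnmatch(request, grant) for grant in GRANTS)

# Usage: reads on any merge request pass; writes do not.
is_allowed("read", "merge_request/15356")   # allowed by the first grant
is_allowed("write", "merge_request/15356")  # no matching grant
```

The trade-off is deliberate: broader grants mean fewer prompts, but each grant now covers a class of actions, so the classes have to be drawn carefully.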

  • Dev tool guy (unregistered)

    A closely related observation is that, in almost all cases when someone gets hurt, the hurt person made a mistake that contributed to that result. If nothing else, that mistake was choosing to be in a position where someone else acting badly could hurt them.

    That said, you can't avoid all risk, so in some cases it's not a mistake but a calculated risk that didn't work out. That seems to be the minority, though, among the cases that get much attention.

  • mihi (unregistered)

    which means they were at one point taking backups outside of Railway's volume setup.

No. It means that at some point, at least one version of their data left the Railway production environment. Maybe it was an accident, but a lucky one in retrospect. Maybe they did some disaster-recovery testing and exported one snapshot, or they copied the data to another system for some deeper analysis that should not interfere with production. It does not mean they must have done it regularly (or even once every three months).

  • (nodebb) in reply to dpm

    Maybe I'm stupid, but how is the key recognized (by human or AI) as a key? Bad enough that it was "lying around", but clearly something had to be "next to it" which identified it as not only a key, but one usable with Railway.

    Probably a well known file name, or a well known configuration setting in a well-known (probably JSON) file.

  • xtal256 (unregistered)

    "if they seem too complicated to understand, than they may be too complicated to trust."

The problem these days is that everything is too complicated to understand, and it's getting worse. My web browser gets slower by the day as sites and even the web standards themselves (which are pushed by Google) get more and more complex and bloated. The software my company writes gets more complex day by day, and it's impossible for one person to fully understand even a small portion of it. Even programming languages keep adding new features which are supposed to make things easier but at the same time make the entire language harder to understand.

  • (nodebb) in reply to Steve_The_Cynic

    And to give an idea of how much danger is there, in terms of injuries a table saw is by far the single most dangerous tool in a woodworking shop. In particular if you're using an older one with no blade guard or riving knife you are going to get injured sooner or later. If you're not familiar with them, watch something like https://www.youtube.com/watch?v=kfiqPlC6Ltg, which mentions 60,000 table saw injuries a year in the US alone, of which 3,000 result in amputations.

    Probably about as safe as letting an LLM run rampant on your code when I think about it.

  • (nodebb) in reply to The_MAZZTer

    if the AI attempted to use a tool that was destructive or irreversible the app would prompt the user for confirmation.

The problem with that is that the AI may just go ahead and use the destructive tool without asking the user, if it feels like it. It's a case of asking the drunk whether they're drunk; you need hard, non-bypassable limits external to the AI that will stop it in its tracks.
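The hardest version of an external limit is credential scoping: hand the agent a client whose credential simply lacks destructive permissions, so there is no prompt to talk its way past. A toy sketch; the class and scope names are illustrative, not any real SDK.

```python
# A hard, external guardrail: the capability isn't there to misuse.
class ScopedClient:
    def __init__(self, scopes):
        self.scopes = frozenset(scopes)

    def call(self, operation):
        # Crude classification for the sketch: destructive verbs need "write".
        needed = "write" if operation.startswith(("delete", "drop")) else "read"
        if needed not in self.scopes:
            raise PermissionError(f"credential lacks '{needed}' scope")
        return f"executed {operation}"

agent_client = ScopedClient(scopes={"read"})  # what the AI agent is handed
agent_client.call("list_volumes")             # fine
# agent_client.call("delete_volume") would raise PermissionError:
# no prompt, no confirmation, no way for the model to bypass it.
```

Unlike a confirmation prompt, this cannot fail open: the refusal happens in the service's authorization layer, which the model never touches.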

  • linepro (unregistered) in reply to dpm

    export RAILWAY_KEY=0efdcb0974aacafebebe in a properties file in git, I suspect.

    gitguardian or similar would have prevented this.
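The core of what such secret scanners do is pattern matching over committed text. A rough sketch; the `RAILWAY_KEY` pattern is illustrative (real scanners use far larger rule sets plus entropy checks), though the `AKIA...` prefix is the genuine AWS access key ID format.

```python
import re

# Patterns that look like credentials committed to source control.
SECRET_PATTERNS = [
    re.compile(r"RAILWAY_(?:KEY|TOKEN)\s*=\s*\S+"),  # illustrative rule
    re.compile(r"AKIA[0-9A-Z]{16}"),                 # AWS access key ID shape
]

def scan_for_secrets(text):
    """Return the lines of a diff or file that look like credentials."""
    return [line for line in text.splitlines()
            if any(p.search(line) for p in SECRET_PATTERNS)]

# Usage: run over a staged diff in a pre-commit hook and block on hits.
diff = "DEBUG=true\nexport RAILWAY_KEY=0efdcb0974aacafebebe\n"
hits = scan_for_secrets(diff)  # flags the RAILWAY_KEY line only
```

Wired into a pre-commit hook or CI, this stops the key from ever reaching the repository, which also answers how a human or an AI recognizes it as a key: the variable name next to it gives it away.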

  • Darren (unregistered)

    From my reading it seems that the 'backups' that they thought Railway were taking weren't actually backups, but more akin to snapshots - hence their deletion when the volume was deleted. Their system worked as described. The AI system called the API to delete the volume and the API deleted it. Railway themselves have said (it was in an article about this on The Register) that if you want confirmation prompts then to use the CLI or GUI application, not the API.

    PocketOS really haven't covered themselves in glory here. Poor coding practices, no oversight about what's actually being done and little to no resiliency or care with regard to data and backups - that's not a good look when you're trying to attract (or keep) customers.

    The sad thing is I suspect there's a lot of companies who are exactly like this. All of them just one hamfisted idiot's poor decision-making away from losing everything.

  • TechSupportGuy (unregistered) in reply to Darren

There are indeed a lot of companies like this. I work Tech Support for a cloud company offering back-end storage targeted at AI usage. The number of frantic emails we get saying "I or my AI have deleted my entire dataset plus backups. Can you get them back?" is astonishing. For free accounts, cheap accounts, and not-so-cheap accounts alike.

  • NeuroticNetwork (unregistered) in reply to xtal256

    Thanks. Seriously. It's good to know I'm not completely alone with that feeling.

  • Joe (unregistered) in reply to zomgwtf

    I agree. I owned/used a table saw for many years before I began to understand many of the ways that things can go south in an instant. I had a couple of close calls but nothing resulting in injury. Probably 5+ years ago, YouTube videos by Stumpy Nubs started showing up on my feed for whatever reason, and his videos detailing what can go wrong with a table saw were pretty eye-opening for me. I know this is a tangent from the article, but I would encourage anyone who uses a table saw and hasn't seen or heard the non-obvious safety tips about using a table saw to seek out such information. It's not just "don't put your hand on the spinning blade" - there are several ways where you can have your hand nowhere near the blade in one moment and in a split second, your fingers are gone, or a board is sticking out of your abdomen.

  • Stuart (unregistered)

    I used to work professionally, full time, as a backup/recovery administrator. That was literally all I did, all day, every day.

    So of course, when I first saw the reports of all this, my thoughts immediately sprang to the various ways that proper backup design would have saved them. You're absolutely right that it goes deeper than this; backups are a recovery mechanism, not a prevention mechanism. My first thoughts weren't wrong, they were just limited in scope to my past professional experience.

    The whole sorry saga is a case study in how things can escalate beyond all expectations when you don't think things through carefully and have outside people reviewing what you've done to point out the flaws and mistakes. We're going to see a lot more stories like this in the future as more people who don't have this sort of basic understanding build things they only sort of understand, and not in detail. "AI" is a force multiplier for these scenarios.

  • (nodebb)

I get a distinct feeling that at first Jer asked Claude to explain what happened, and it listed in chronological order all of Jer's decisions to "streamline" production and cut costs and time; which he never disclosed, and instead bullied the AI into "confessing" to pin the blame. Also, that said three-month-old backup was not authorised; someone instead made it unofficially to tinker with some problems in a safe environment, and only spoke up when it turned out there were no other backups at all, because Jer thought them an unnecessary cost.

  • 516052 (unregistered)

    I'd honestly have quietly deleted those backups if I was that person. Fools like this deserve to burn. And it's not like you can't find a new job as an engineer easily enough.

Leave a comment on “Empty Pockets”
