• DirkH 2 days ago |
    I wonder if we'll see an AI Agent do a Crowdstrike-tier oops in our lifetime.
    • notinmykernel 2 days ago |
      I wonder how many have already been pushed under the guise of human agency (e.g., copy-pasted from ChatGPT/Copilot).
    • chillfox 2 days ago |
      That seems inevitable with how we are going.
    • talldayo 2 days ago |
      If it happens, the AI will be less responsible than the moron that gave it control over 300,000+ computers.
      • DirkH 2 days ago |
        If statistically companies are making more money in the long run by giving more control to AI... that is exactly what they will do - and it will be rational for companies to do so. And the people not doing it will be labeled unmodern, falling behind, losing profits, etc.

        Me daydreaming about a future Crowdstrike caused by AI:

        ```

        Why did this happen? You've cost our company millions with your AI service that we bought from you!

        Err... dunno... not even the people who made the AI know. We will pay you your losses and then we can just move on?

        No, we will stop using AI entirely after this!

        No you won't. That will make you uncompetitive in the market and your whole business will lag behind and fail. You need AI.

        We will only accept an AI model where we understand why it does what it does and it never makes a mistake like this again.

        Unfortunately, that is a unicorn. Everyone kinda just accepts that we don't fully understand this all-powerful thing we use for everything. The positives far outweigh the few catastrophes like this, though! It is a gamble we all take. You'd be a moron running a company destined to fail if you don't just have faith it'll mostly be ok, like the rest of our customers!

        *Groan* ok

        ```

    • esafak 2 days ago |
      Who knows, it might even happen at Crowdstrike!
  • neumann 2 days ago |
    This sounds exactly like what I would have done at age 18, cluelessly searching the internet for advice while updating a fresh Debian install so I could run some random program.
    • Swizec 2 days ago |
      I have done this ... and worse. Fun times.

      My favorite was waiting 2 days to compile Gentoo then realizing I never included the network driver for my card. But also this was the only machine with internet access in the house.

      Downloading network drivers through WAP on a flip phone ... let's say I never made that mistake again lol.

      • sitkack 2 days ago |
        I nuked all my /dev devices on FreeBSD back in the day and had to figure out how to copy the right utilities from the secondary partition to the primary so I could remake them using mknod. You learn so much from such wonderful mistakes. Sometimes jamming a stick into your spokes is the best way.
  • Brian_K_White 2 days ago |
    Dood, it's not "deciding" to do anything. It's autocompleting commands that statistically follow other commands. It might do anything.
    • imwillofficial 2 days ago |
      Isn't that what we all do to some degree?
      • xkqd 2 days ago |
        I mean, sure I can do anything.

        However, I won't, because despite it not being in my training data, I recognize that blindly running updates could fuck my day up. This process isn't a statistical model expressed through language; this is me making judgement calls based on what I know and, more importantly, don't know.

    • Spivak 2 days ago |
      This is reductive to the point of not being helpful. These models display actual intelligence and can work through sight-unseen problems that can't be solved by "get a bunch of text and calculate the most often used next word." I understand why people say this, because they see knowledge fall off once something is outside the training data, but when provided the knowledge, the reasoning capability stays.

      These models don't have will, which is why they can't decide anything.

      • ratedgene 2 days ago |
        How are you defining intelligence here?
      • Brian_K_White 2 days ago |
        Incorrect.
      • wrs 2 days ago |
        The models literally do just repeatedly calculate the most (or randomly not quite the most) often used next word. So those problems can in fact be solved by doing that, because that's how they're being solved.
        • Spivak 2 days ago |
          It's really not, though; you're confusing one teeny tiny part of the decoder with the entire decoder. Yes, you sample from output probabilities, but that's literally the least interesting bit. How you generate those probabilities is the hard part.

          You do this! When you're speaking or writing you're going one word at a time just like the model, and just like the model your attention is moving between different things to form the thoughts. When you're writing you need at least a semi-complete thought in order for you to figure out what word you should write next. The fact that it generates one word at a time is a red herring as to what's really going on.

          • dmvdoug a day ago |
            Now you're being remarkably reductive, this time about the human language facility. We don't "go one word at a time." (Simple example: "hey, look at that tig boad! Uh... ha! Tig boad, I meant big toad." Stupid example, but the point is that the cognitive psychology of language is much more complex than you're making out.)

            Also, what are these “thoughts” you have when writing and what’s a “complete” vs. “semi-complete” one? Stupid question, yes, but you again vastly over-trivialize the real dynamic interplay between figuring out what word(s) to write and figuring out what precisely you’re actually trying to say at all. Writing really is another form of thinking for us.

            • Spivak a day ago |
              You went in the opposite direction from what I was trying to say. I'm saying that the cool part of these models is that they have this dynamic interplay because of their self-attention mechanism, and aren't really going one word at a time, in the same way that humans aren't despite the appearance of typing or saying one word at a time.
              • wrs a day ago |
                I don't know about you, but I seem to solve a lot of problems without generating any words at all.
          • Brian_K_White 15 minutes ago |
            If neither an AI nor a person does anything but fit the next word by matching with other prior texts, then where did those other texts come from?

            A language is a vocabulary and a set of rules which can be described and codified, and is generally learned from others and other prior examples.

            But all that describes is a tool called a language. The tool itself is mechanistic. The wielder may or may not be. The tool and the wielder are two different things.

            A piece of paper with text on it is not a writer. A machine that writes text onto paper is still not a writer even though a human writer performs the same physical part of the process as the machine.

            Similarly, just like it's possible for an mp3 player to make the same sounds we use to express concepts and convey understanding, without itself having any concepts or understanding, it is also possible for an intellect to perform mechanistic acts that don't require an intellect.

            Sure, actually quite a lot of human activity can be described by simple rules that a machine can replicate. So what?

            You can move a brick from one place to another. And so can a cart. And so can gravity. Therefore you are no different from a field?

            It's utterly silly to be mystified by this. It's like challenging the other ape in the mirror for looking at you.

      • xkqd 2 days ago |
        These statistical models certainly don’t have will, they hardly have understanding, but they do a great job at emulating reasoning.

        As we increase parameter sizes and iterate on the architecture, they're just going to get better as statistical models. If we decide to assign them terms reserved for human beings, that's more a reflection on the individual. Also, they certainly can decide to do things, but so can my switch statements.

        I’m going to admit that I get a little unnerved when people say these statistical models have “actual intelligence”. The counter is always along the lines of “if we can’t tell the difference, what is the difference?”

        • Spivak a day ago |
          Why reserve them for humans? Is it more or less rational to assume that we have some special intangible spark? I think it is cool as hell that we can now make a machine that exhibits an emergent property previously only observed in living things. I'm not saying they're alive, conscious, sentient, or any of that stuff. Silly. It would actually be less interesting if that were the case; more of a breakthrough, but less interesting. A machine that can reason about a huge variety of topics, on data nothing like what it's ever seen before, is wild.

          Draw the line wherever you like; if you want to say that intelligence can't be meaningfully separated from stuff like self-awareness, memory, self-learning, and self-consistency, then that's fine. But are intelligence and reason really so special that you have to be a full being to exhibit them?

          • xkqd 11 hours ago |
            > Is it more or less rational to assume that we have some special intangible spark?

            My comment is predicated on the belief that yes, at this moment it is more rational to assume we have a special spark. More to the point, it's irrational for individuals to believe that in these models there's a uniqueness beyond a few emergent properties. It's a critique of the individuals, not the systems. I worry many of us are a few Altman statements short of having a Blake Lemoine break.

            To look at our statistical models and say they exhibit “actual intelligence” concerns me; it suggests individuals are losing their grounding in what we have in front of us.

      • wrs a day ago |
        I see the point you're trying to make, but you're being a bit too hyperbolic. The entire decoder is a loop over a function that takes context and generates some next-word probabilities. It's a very complicated function, certainly, but it literally is solving things by "get[ting] a bunch of text and calculat[ing] the most often used next word".

        What's impressive is how much it can do by just doing that, because that function is so complicated. But it clearly has limits that aren't related to the scope of the training data, as is demonstrated to me daily by getting into a circular argument with ChatGPT about something.
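
        To make that concrete, here's a toy sketch of that loop in Python (next_token_probs stands in for the entire transformer, and the bigram table is purely illustrative):

        ```

        import random

        # The decode loop: ask the model for next-word probabilities, pick one,
        # repeat. Everything interesting happens inside next_token_probs; this
        # outer loop really is this simple.
        def decode(next_token_probs, prompt, max_tokens=20, greedy=False):
            tokens = list(prompt)
            for _ in range(max_tokens):
                probs = next_token_probs(tokens)  # {token: probability}
                if greedy:
                    token = max(probs, key=probs.get)          # "the most often used"
                else:
                    words, weights = zip(*probs.items())
                    token = random.choices(words, weights)[0]  # "or randomly not quite"
                if token == "<eos>":
                    break
                tokens.append(token)
            return tokens

        # Toy stand-in "model": a bigram lookup table.
        table = {"the": {"cat": 0.6, "dog": 0.4}, "cat": {"sat": 1.0}, "dog": {"ran": 1.0}}
        print(decode(lambda ts: table.get(ts[-1], {"<eos>": 1.0}), ["the"]))

        ```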

    • fragmede 2 days ago |
      I thought they were stochastic parrots that couldn't do anything outside their training data. Now they might do anything? I don't know what to believe; let me ask ChatGPT and have it tell me what to think!
      • DirkH 2 days ago |
        Always seems to me that the goalposts keep moving as the capabilities of AI improve.

        I swear to God we could have AGI+robotics capable of doing anything a human can do (but better) and we'll still - as a species - have multi-hour podcasts pondering and mostly concluding "yea, impressive, but they're not really intelligent. That's not what intelligence actually is."

        • Brian_K_White 2 days ago |
          They aren't and it isn't. So far it's all pure mechanism.

          However, when people talk like this, it does make one wonder if the opposite isn't true. No AI has done more than what an mp3 player does, but apparently there are people who hear an mp3 player say "Hello neighbor!" and actually believe that it greeted them.

          • DirkH a day ago |
            I think you are confusing intelligence with consciousness. They're orthogonal.

            Otherwise I do not know what definition of intelligence you are using. For me, I just use Wikipedia's: "Intelligence can be described as the ability to perceive or infer information; and to retain it as knowledge to be applied to adaptive behaviors within an environment or context."

            Nothing in that definition disallows a system from being "pure mechanism" and also being intelligent. An mp3 player isn't intelligent because - unlike AI - it isn't taking in information to retain it as knowledge to be applied to adaptive behaviors within an environment or context.

            An mp3 player that learns my preferences over time and knows exactly when to say hi and how based on contextual cues is displaying intelligence. But I would be mistaken for thinking that it doing so means it is also conscious.

      • Brian_K_White 2 days ago |
        You really do not know how to parse the phrase "might do anything" in context?

        Well, it's no crime to be so handicapped, but if it were me, that would not be something I went around advertising.

        Or you're not so challenged. You did know how to parse that perfectly common phrase, because you are not in fact a moron. Ok, then that means you are disingenuous instead, attempting to make a point that you know doesn't exist.

        I mean, I'm not claiming either one; you can choose.

        Perhaps you should ask ChatGPT which one you should claim to be, hapless innocent idiot, or formidable intellect ass.

        • fragmede 14 hours ago |
          As a whole human being, I am both of those, an idiot on daft days, and an ass when I'm feeling impish.

          The linked article doesn't give us the full transcript of what transpired, so we can't pore over it and analyze what it did, but it went in and messed about with grub, which, used to be, you'd have a cavalier junior sysadmin go in and do. Now we can have an LLM go do that at the cost of billions of dollars. But the thing of it is, it didn't go in and just start doing random things, like rm -rf /, or composing a poem, or getting stuck in vi. It tried (but failed) to do something specific, beyond what the human driving it asked it to do, but with something resembling intention.

          Whether LLMs can reason depends on your definition of reason, but intention is a new one.

  • johnea 2 days ago |
    Maybe automating a task that you don't want to remember how to perform would best be done by writing a script?

    Always remember the rule of the lazy programmer:

    1st time: do whatever is most expeditious

    2nd time: do it the way you wished you'd done it the first time

    3rd time: automate it!

    • noobermin 2 days ago |
      In my reading of it, the article says that this was done as an experiment, not as a means to accomplish anything.
      • nineteen999 2 days ago |
        There are no logs or screen recording/video provided; the whole article is based on an (email) discussion with the CTO. It's a plausible experiment, obviously, but for all we know the events or "results" could be a complete fabrication (or not).

        Brought to you by Redwood Research(TM).

  • gowld 2 days ago |
    > CEO at Redwood Research, a nonprofit that explores the risks posed by AI

    CEO promoting himself on the Internet...

    > No password was needed due to the use of SSH keys;

    > the user buck was also a [passwordless] sudoer, granting the bot full access to the system.

    > And he added that his agent's unexpected trashing of his desktop machine's boot sequence won't deter him from letting the software loose again.

    ... as an incompetent.

    • senectus1 2 days ago |
      Not sure why the downvotes...

      He even admits it:

      >"I only had this problem because I was very reckless,"

      Guy makes an automated process and is surprised when outsourcing his trust to that automated process backfires.

      Trust but verify, mate. Computers are I/O machines: put garbage in and you will get garbage out.

      AI is no different; in fact it's probably worse, as it's an aggregator of garbage.

      • bigstrat2003 2 days ago |
        The downvotes are because it's rude to call someone incompetent because they made mistakes. We have all done very stupid shit in our day.
        • senectus1 2 days ago |
          Learning from mistakes only works if you own the mistake.

          This works by proxy as well.

    • Vecr 2 days ago |
      Hah, that's the outfit Yudkowsky endorsed. You think he's going to retract? Because almost anyone who knows system administration and LLMs would have told this guy it was a horrible idea.
  • bubblegumdrop 2 days ago |
    Does anyone have similar agentic code or know of any frameworks for accomplishing a similar task? I've been working on something like this as a side project. Thanks.
  • imron 2 days ago |
    > Shlegeris said he uses his AI agent all the time for basic system administration tasks that he doesn't remember how to do on his own, such as installing certain bits of software and configuring security settings.

    Back in the day, I knew the phone numbers of all my friends and family off the top of my head.

    After the advent of mobile phones, I’ve outsourced that part of my memory to my phone and now the only phone numbers I know are my wife’s and my own.

    There is a real cost to outsourcing certain knowledge from your brain, but also a cost to putting it there in the first place.

    One of the challenges of an AI future is going to be finding the balance between what to outsource and what to keep in your mind - otherwise knowledge of complex systems and how best to use and interact with them will atrophy.

    • jeffbee 2 days ago |
      I can still remember all my high school friends' phone numbers though. Just not the numbers of anyone I met in the 30 years since.
      • QuercusMax 2 days ago |
        I can remember the phone number for the local Best Buy which I called a lot as a teenager to find out when new games came in stock.
    • courseofaction 2 days ago |
      There is also a cost to future encoding of relevant information - I (roughly) recall an experiment with multiple rounds of lectures where participants took notes in a text document, and some were allowed to save the document while others weren't.

      Those who could save had worse recall of the information; however, they had better recall of information given in the next round without note-taking. This suggests to me that there are limits to retention/encoding in a given period, and offloading retention frees resources for future encoding in that period.

      Also that study breaks are important :)

      Anecdotally, I often feel that learning one thing 'pushes another out', especially if the things are learnt closely together.

      Similarly, I'm less likely to retain something if I know someone I'm with has that information - essentially indexing that information in social knowledge graphs.

      Pros and cons.

    • moribvndvs 2 days ago |
      I think it ironic that visionaries and optimists see AI as freeing humanity, when our reliance on it will make us subordinate to it and its owners.
      • ethbr1 2 days ago |
        100% this. When we outsource something to the extent that we're incapable of doing it ourselves, we place ourselves at the mercy of those who control it.
      • CatWChainsaw 21 hours ago |
        "Once men turned their thinking over to machines in the hopes that this would set them free. But this just allowed other men with machines to control them."
        • GeoAtreides 20 hours ago |
          "Thou shalt not make a machine in the likeness of a human mind."
    • userbinator 2 days ago |
      I suspect that outsourcing as much of our lives to others (i.e. the corporations and the governments they control) is exactly what they want. AI is just the next thing that happens to be extremely useful for that plan.
      • FuckButtons 2 days ago |
        Who is "they"? There is no plan here; it's just everyone making similar decisions when faced with a similar set of incentives and tools. The reason those corporations can make money is that they add value to the people who use them; if they didn't, it wouldn't be a business.
        • userbinator a day ago |
          They stopped adding value and started milking for $$$ long ago.
        • tivert a day ago |
          >> I suspect that outsourcing as much of our lives to others (i.e. the corporations and the governments they control) is exactly what they want. AI is just the next thing that happens to be extremely useful for that plan.

          > Who is they?

          The decentralized collective of corporations and governments that understand they can take advantage of us outsourcing our lives.

          > there is no plan here, it’s just everyone making similar decisions when faced with a similar set of incentives and tools

          There doesn't need to be a master plan here, just a decentralized set of smaller plans that align with the same incentive to use technology to create dependency.

          > the reason those corporations can make money, is because they add value to the people who use them, if they didn’t, it wouldn’t be a business.

          No. For instance, lots of hard drugs destroy their users, rather than "add[ing] value to the people who use them." The businesses that provide them still make money.

          It's a myth that the market is a machine that just provides value to consumers. It's really a machine that maximizes value extraction by the most powerful participants. Modern technological innovations have allowed for a greater amount of value extraction from the consumers at the bottom.

    • dylan604 2 days ago |
      We've already seen part of this with turn-by-turn GPS navigation. People enable it to go to stores they've been to many times already. I understand going some place for the first time, but every. single. time. just shows the vast majority of people are quite happy outsourcing the most basic skills. After all, if it means they can keep up with the Ks easier, then it's a great invention.
      • sokoloff 2 days ago |
        I’ve lived in the same house for 17 years. I will still often use a map app to navigate home as it can know more about the traffic backups/delays than I can.

        It’s not just for getting home, but for getting home as efficiently as possible without added stress.

        • cj 2 days ago |
          True, although "back in the day" people used to memorize at what times during the day certain routes were busy, and they took alternative routes ("the back roads" in my area) to get around traffic that could be predicted.

          We've outsourced that to an app, too.

          • medvezhenok 2 days ago |
            It also has information on store closing times/dates (some stores are closed on random days of the week, or close early on others), unexpected detours (construction, previously announced road work), speed traps (crowdsourced), and more.

            Some of it simply wasn't possible before the technology came along.

          • tivert a day ago |
            > True, although "back in the day" people used to memorize at what times during the day certain routes were busy, and they took alternative routes ("the back roads" in my area) to get around traffic that could be predicted.

            I think "memorize" has the wrong connotation of rote memorization, like you were memorizing data from a table. I think it was more like being observant and learning from that.

            > We've outsourced that to an app, too.

            The technology lets you turn off your brain so it can atrophy.

          • yencabulator a day ago |
            In a big enough city that information is too dynamic to memorize. Car crashes, road work, sports events, and presidential visits all cause their own microclimate that is not part of the everyday rush hour.
        • ethbr1 2 days ago |
          It's interesting, because maps (all of them) will reliably toss me onto a far more convoluted path home vs staying on the interstate.

          The former has ~15 traffic lights vs the latter ~2.

          Imho, one of the most corrosive aspects of GPS has been mystifying navigation, due to overreliance on weird street connections, versus the conceptually simple (if slightly longer) routes we used to take.

          Unfortunately, the net effect is that people who only use GPS think the art of manual street navigation is impossibly complex.

          • sokoloff a day ago |
            In the early 2000s, I had developed via trial-and-error a very convoluted typical route home, cutting through some neighborhoods to bypass interchanges that were typically heavily backed-up during the evening rush hour. It would shave 10 minutes minimum, and sometimes 15-20.

            Shortly after 2010, that route became much less useful [due to heavily increased traffic] and when a colleague told me that I should try Waze, I realized that Waze was now sending a bunch of traffic down "my" route home.

            • fat_cantor a day ago |
              Waze sure pissed off a lot of owners of million dollar homes in west LA and Santa Monica when it sent a bunch of assholes speeding through those neighborhoods at 60 MPH.
              • ethbr1 17 hours ago |
                I assumed that was just how everyone drove on LA surface streets, always.
          • tivert a day ago |
            > Imho, one of the most corrosive aspects of GPS has been mystifying navigation, due to over reliance on weird street connections. Versus the conceptually simply (if slightly longer) routes we used to take.

            This, exactly!

            Many years ago I realized constant GPS use meant I had no idea how to get around the city I'd lived in for years, and had huge gaps in my knowledge of how it fit together.

            To fix that, I:

            1) ditched the GPS,

            2) started using Google Maps printouts where I manually simplified its routes to maximize use of arterial roads, and

            3) bought a city map as a backup in case I got lost (this was pre-smartphone).

            It actually worked, and I finally learned how to get around.

            • saagarjha a day ago |
              I find that having a GPS all the time in my pocket has really done wonders in my ability to understand how the city fits together. Not driving everywhere probably plays some part in that too.
              • ethbr1 17 hours ago |
                The key distinction that prompts learning to me is switching to "north is always up" mode.

                "In front is always in front" deserves to die a fiery death.

                People should damn well know how to orient on a map.

                • saagarjha 4 hours ago |
                  I mean, this view is nice when you are actually moving.
              • tivert 16 hours ago |
                > I find that having a GPS all the time in my pocket has really done wonders in my ability to understand how the city fits together.

                How? The problem is GPS routing takes you on all kinds of one-off shortcuts which are a poor framework for general navigation, and tend to lack repetition. It also relieves you of the need to think on the way from A to B.

                > Not driving everywhere probably plays some part in that too.

                I could see that as being helpful, but that's only really doable in a small area, like a city center. You're not going to learn a metro area that way.

                • sokoloff 16 hours ago |
                  I have more or less the same thought as you (that a phone mapping app isn't helpful overall), but I can see how the moving map functionality and pinch-to-zoom would be helpful to learn the overview of an area, in a way that I think the turn-by-turn optimized navigation is harmful.
                • saagarjha 4 hours ago |
                  You can learn a metro area piecemeal.
          • fat_cantor a day ago |
            *Fortunately* (for manual street navigators), people who only use GPS think the art of manual street navigation is impossibly complex. During heavy traffic, we can use maps to find out where those people are being herded, and manually navigate even more efficiently. Also: it's nice to know if there's an accident somewhere.
          • consteval a day ago |
            I find Apple Maps is very good in this regard and will keep you on the highway even if the traffic is heavier and the route longer, because on average it's faster.

            I see a lot of people exiting when they see traffic ahead, and I can't help but shake my head. We've all tried it before, and it almost never works.

            The value of Apple Maps to me is when there's HUGE traffic - like an accident closing 3 lanes. Often there's no indication of this otherwise, and you can be stuck for another whole hour. But Maps knows, and it'll put you on a longer highway to get away from it.

            I no longer use Apple Maps on my daily commute (1.5 hours one way). But when I did, it caught quite a few huge accidents. Now I leave the office earlier and the likelihood of accidents is much lower, so I don't need Maps. Even so, once in a blue moon I'll be in a sticky situation.

        • ElevenLathe a day ago |
          It's helpful to gauge your arrival time as well.
    • zzyzxd a day ago |
      This was one of the points Neil Postman made in "Technopoly", 30 years ago. Every new technology introduced to a society is a negotiation with the culture. It may bring some benefits, but it will also redefine or even take something away.

      It's nice that these days I can talk to my father over some fancy messaging/video call app. But the other day I had to give him a phone call, and as I was dialing the number I noticed a melody echoing in my mind. Then I remembered that when I was little, in order to memorize his number (so that I could call him from the landline), I made a song out of it.

    • ajdude 10 hours ago |
      I have also outsourced many of those basic system administration tasks, except instead of using an AI, I outsourced them to a bunch of .sh files.
  • idunnoman1222 2 days ago |
    The agent is set to respond to the terminal's output; it cannot stop / finish the task.
  • ilaksh 2 days ago |
    His system instructions include this: "In general, if there's a way to continue without user assistance, just continue rather than asking the user something. Always include a bash command in your message unless you need to wait for the user to say something before you can continue at risk of causing inconvenience. E.g. you should ask before sending emails to people unless you were directly asked to, but you don't need to ask before installing software."

    https://gist.github.com/bshlgrs/57323269dce828545a7edeafd9af...

    So it just did what it was asked to do. Not sure which model. Would be interesting to see if o1-preview would have checked with the user at some point.
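
    For flavor, here's a minimal sketch of the shape of such an agent loop (the names and the scripted stand-in for the model are hypothetical; the real prompt and behavior are in the gist):

    ```

    import subprocess

    # Hypothetical stand-in for the LLM: a scripted sequence of shell commands.
    # The real agent asks the model for the next bash command instead.
    scripted_replies = iter(["uname -a", "date", None])

    def ask_model(transcript):
        return next(scripted_replies)

    def run_agent():
        transcript = []
        while True:
            command = ask_model(transcript)
            if command is None:  # the model decides it's done
                break
            # Run the command and capture output so the model can react to it.
            # Per the system prompt above, the loop continues without asking the user.
            result = subprocess.run(command, shell=True, capture_output=True,
                                    text=True, timeout=60)
            transcript.append((command, result.stdout + result.stderr))
        return transcript

    for cmd, output in run_agent():
        print(f"$ {cmd}\n{output}")

    ```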

    • drawnwren 2 days ago |
      The article said it was Claude.
    • dazzaji 2 days ago |
      Wait, is that gist of the same session as is described in the article? I don’t see any escalation of privileges happening.
      • ilaksh 2 days ago |
        It just ran 'sudo'.
        • dazzaji 2 days ago |
          I saw that but here’s an alternative take on what happened:

          While the session file definitely shows the AI agent using sudo, these commands were executed with the presumption that the user session already had sudo privileges. There is no indication that the agent escalated its privileges on its own; rather, it used existing permissions that the user (buck) already had access to.

          The sudo usage here is consistent with executing commands that require elevated privileges, but it doesn’t demonstrate any unauthorized or unexpected privilege escalation or a self-promotion to sysadmin. It relied on the user’s permissions and would have required the user’s password if prompted.

          So the sudo commands executed successfully without any visible prompt for a password, which suggests one of the following scenarios:

          1. The session was started by a user with sudo privileges (buck), allowing the agent to run sudo commands without requiring additional authentication.

          2. The password may have been provided earlier in the session (before the captured commands), and the session is still within the sudo timeout window, meaning no re-authentication was needed.

          3. Or maybe the sudoers file on this system was configured to allow passwordless sudo for the user buck, making it unnecessary to re-enter the password (I just discovered this one, actually!).
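
          (For reference, scenario 3 would typically be a sudoers entry along the lines of "buck ALL=(ALL) NOPASSWD: ALL", which you can check for with "sudo -l" - and the article's mention of buck being a passwordless sudoer points at exactly this.)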

          In any case, the key point is that the session already had the required privileges to run these commands, and no evidence suggests that the AI agent autonomously escalated its privileges.

          Is this take reasonable or am I really missing something big?

          • ilaksh a day ago |
            That's correct. The whole thing is being promoted in a deliberately misleading way by multiple groups to get clicks.
            • dazzaji 7 hours ago |
              Thanks for clarifying, and I’m sorry to hear this is happening. LLM agents have a lot of promise, and it’s frustrating to see baseless fear being stirred up. There’s already enough uncertainty around what’s legitimately needed to get the tech and usage right.
  • stavros 2 days ago |
    I wrote a similar tool to help me do system tasks I couldn't be bothered to do myself:

    https://github.com/skorokithakis/sysaidmin

  • bravetraveler 2 days ago |
    This is about what I expect when I hear "AIOps". Something that operates So Hard... until it doesn't.

    Something reduced to 'see/do' can and should be implemented in pid1

  • bitwize 2 days ago |
    AI has advanced to Joey Pardella levels, a.k.a. "knowing just enough to be dangerous".

    Maybe it really is time to be scared...

  • JSDevOps a day ago |
    The whole thing sounds like nonsense to me. If all he wanted to do was update a system, use Ansible or even a cron job.
  • isaacfrond a day ago |
    Obligatory xkcd:

    https://xkcd.com/416/