Me daydreaming about a future CrowdStrike-style outage caused by AI:
```
Why did this happen? You've cost our company millions with your AI service that we bought from you!
Err... dunno... not even the people who made the AI know. We will pay you your losses and then we can just move on?
No, we will stop using AI entirely after this!
No, you won't. That would make you uncompetitive in the market, and your whole business would lag behind and fail. You need AI.
We will only accept an AI model where we understand why it does what it does and it never makes a mistake like this again.
Unfortunately, that is a unicorn. Everyone just kinda accepts that we don't fully understand this all-powerful thing we use for everything. The positives far outweigh the few catastrophes like this, though! It is a gamble we all take. You'd be a moron running a company destined to fail if you don't just have faith it'll mostly be ok, like the rest of our customers!
*Groan* ok
```
My favorite was waiting 2 days to compile Gentoo then realizing I never included the network driver for my card. But also this was the only machine with internet access in the house.
Downloading network drivers through WAP on a flip phone ... let's say I never made that mistake again lol.
However, I won't, because despite it not being in my training data, I recognize that blindly running updates could fuck my day up. This process isn't a statistical model expressed through language; this is me making judgement calls based on what I know and, more importantly, what I don't know.
These models don't have will, which is why they can't decide anything.
You do this! When you're speaking or writing, you're going one word at a time just like the model, and just like the model your attention is moving between different things to form the thoughts. When you're writing, you need at least a semi-complete thought in order for you to figure out what word you should write next. The fact that it generates one word at a time is a red herring as to what's really going on.
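The "one word at a time" mechanic being debated can be sketched with a toy next-token sampler. Everything here is invented for illustration (a real LLM replaces the hand-written bigram table with a neural network that scores every possible next token given the whole context):

```python
import random

# Toy bigram "language model" -- the transition table is made up
# purely for illustration, not taken from any real model.
MODEL = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 1.0},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start: str, steps: int, seed: int = 0) -> str:
    """Emit one word at a time, each pick conditioned on the word before it."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(steps):
        nxt = MODEL.get(out[-1])
        if not nxt:
            break  # no known continuation for this word
        words, probs = zip(*nxt.items())
        out.append(rng.choices(words, weights=probs, k=1)[0])
    return " ".join(out)

print(generate("the", 3))  # with seed 0: "the dog ran away"
```

The loop structure is the same in a real model: sample one token from a probability distribution, append it to the context, repeat. The argument above is about whether that surface mechanic tells you anything about what produces the distribution.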
Also, what are these “thoughts” you have when writing and what’s a “complete” vs. “semi-complete” one? Stupid question, yes, but you again vastly over-trivialize the real dynamic interplay between figuring out what word(s) to write and figuring out what precisely you’re actually trying to say at all. Writing really is another form of thinking for us.
A language is a vocabulary and a set of rules which can be described and codified, and is generally learned from others and other prior examples.
But all that describes is a tool called a language. The tool itself is mechanistic. The wielder may or may not be. The tool and the wielder are two different things.
A piece of paper with text on it is not a writer. A machine that writes text onto paper is still not a writer even though a human writer performs the same physical part of the process as the machine.
Similarly, just like it's possible for an mp3 player to make the same sounds we use to express concepts and convey understanding, without itself having any concepts or understanding, it is also possible for an intellect to perform mechanistic acts that don't require an intellect.
Sure, actually quite a lot of human activity can be described by simple rules that a machine can replicate. So what?
You can move a brick from one place to another. And so can a cart. And so can gravity. Therefore you are no different from a field?
It's utterly silly to be mystified by this. It's like challenging the other ape in the mirror for looking at you.
As we increase parameter sizes and increment on the architecture, they’re just going to get better as statistical models. If we decide to assign terms reserved for human beings, that’s more a reflection on the individual. Also they certainly can decide to do things, but so can my switch statements.
I’m going to admit that I get a little unnerved when people say these statistical models have “actual intelligence”. The counter is always along the lines of “if we can’t tell the difference, what is the difference?”
Draw the line wherever you like; if you want to say that intelligence can't be meaningfully separated from stuff like self-awareness, memory, self-learning, and self-consistency, then that's fine. But are intelligence and reason really so special that you have to be a full being to exhibit them?
My comment is predicated on the belief that yes, at this moment it is more rational to assume we have a special spark. More than that, it's irrational for individuals to believe that in these models there's a uniqueness beyond a few emergent properties. It's a critique of the individuals, not the systems. I worry many of us are a few Altman statements short of having a Blake Lemoine break.
To look at our statistical models and say they exhibit “actual intelligence” concerns me because it suggests individuals are losing their grounding in what we actually have in front of us.
What's impressive is how much it can do by just doing that, because that function is so complicated. But it clearly has limits that aren't related to the scope of the training data, as is demonstrated to me daily by getting into a circular argument with ChatGPT about something.
I swear to God we could have AGI+robotics capable of doing anything a human can do (but better) and we'll still - as a species - have multi-hour podcasts pondering and mostly concluding "yea, impressive, but they're not really intelligent. That's not what intelligence actually is."
However, when people talk like this, it does make one wonder if the opposite isn't true. No AI has done more than what an mp3 player does, but apparently there are people who hear an mp3 player say "Hello neighbor!" and actually believe that it greeted them.
Otherwise I do not know what definition of intelligence you are using. For me I just use wiki's: "Intelligence can be described as the ability to perceive or infer information; and to retain it as knowledge to be applied to adaptive behaviors within an environment or context."
Nothing in that definition disallows a system from being "pure mechanism" and also being intelligent. An mp3 player isn't intelligent because - unlike AI - it isn't taking in information to retain it as knowledge to be applied to adaptive behaviors within an environment or context.
An mp3 player that learns my preferences over time and knows exactly when to say hi and how based on contextual cues is displaying intelligence. But I would be mistaken for thinking that it doing so means it is also conscious.
Well, it's no crime to be so handicapped, but if it were me, that would not be something I went around advertising.
Or you're not so challenged. You did know how to parse that perfectly common phrase, because you are not in fact a moron. Ok, then that means you are disingenuous instead, attempting to make a point that you know doesn't exist.
I mean I'm not claiming either one, you can choose.
Perhaps you should ask ChatGPT which one you should claim to be, hapless innocent idiot, or formidable intellect ass.
The linked article doesn't give us the full transcript of what transpired, so we can't pore over it and analyze what it did. But it went in and messed about with grub, which, back in the day, a cavalier junior sysadmin would go in and do. Now we can have an LLM do that at the cost of billions of dollars. The thing of it is, it didn't go in and just start doing random things, like rm -rf /, or composing a poem, or getting stuck in vi. It tried (but failed) to do something specific, beyond what the human driving it asked it to do, with something resembling intention.
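For a sense of what "messing about with grub" usually involves, here is a hypothetical sequence of the kind a junior sysadmin (or an agent with passwordless sudo) might run; these commands are illustrative, not taken from the unpublished transcript:

```
# Illustrative only -- not the agent's actual commands.
sudo apt update && sudo apt full-upgrade -y   # pulls in a new kernel
sudo vi /etc/default/grub                     # tweak GRUB_DEFAULT, kernel cmdline, etc.
sudo update-grub                              # regenerate /boot/grub/grub.cfg
# One bad edit here and the next reboot never reaches the OS.
```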
whether LLMs can reason depends on your definition of reason, but intention is a new one.
Always remember the rule of the lazy programmer:
1st time: do whatever is most expeditious
2nd time: do it the way you wished you'd done it the first time
3rd time: automate it!
Brought to you by Redwood Research(TM).
CEO promoting himself on the Internet...
> No password was needed due to the use of SSH keys;
> the user buck was also a [passwordless] sudoer, granting the bot full access to the system.
> And he added that his agent's unexpected trashing of his desktop machine's boot sequence won't deter him from letting the software loose again.
... as an incompetent.
He even admits it:
>"I only had this problem because I was very reckless,"
Guy makes an automated process and is surprised when outsourcing his trust to the automated process backfires.
Trust but verify, mate. Computers are I/O machines: put garbage in and you will get garbage out.
AI is no different; in fact, it's probably worse, as it's an aggregator of garbage.
This works by proxy as well.
- https://github.com/Pythagora-io/gpt-pilot
- https://github.com/smol-ai/developer
- https://github.com/stitionai/devika
If you want to just make games there's Rosebud AI
I can't speak to the quality of any of these projects, though.
Back in the day, I knew the phone numbers of all my friends and family off the top of my head.
After the advent of mobile phones, I’ve outsourced that part of my memory to my phone and now the only phone numbers I know are my wife’s and my own.
There is a real cost to outsourcing certain knowledge from your brain, but also a cost to putting it there in the first place.
One of the challenges of an AI future is going to be finding the balance between what to outsource and what to keep in your mind - otherwise knowledge of complex systems and how best to use and interact with them will atrophy.
Those who could save their notes had worse recall of the information; however, they had better recall of the information given in the next round without note-taking. This suggests to me that there are limits to retention/encoding in a given period, and offloading retention frees resources for future encoding in that period.
Also that study breaks are important :)
Anecdotally, I often feel that learning one thing 'pushes another out', especially if the things are learnt closely together.
Similarly, I'm less likely to retain something if I know someone I'm with has that information - essentially indexing that information in social knowledge graphs.
Pros and cons.
> Who is they?
The decentralized collective of corporations and governments that understand they can take advantage of us outsourcing our lives.
> there is no plan here, it’s just everyone making similar decisions when faced with a similar set of incentives and tools
There doesn't need to be a master plan here, just a decentralized set of smaller plans that align with the same incentive: to use technology to create dependency.
> the reason those corporations can make money, is because they add value to the people who use them, if they didn’t, it wouldn’t be a business.
No. For instance, lots of hard drugs destroy their users, rather than "add[ing] value to the people who use them." The businesses that provide them still make money.
It's a myth that the market is a machine that just provides value to consumers. It's really a machine that maximizes value extraction by the most powerful participants. Modern technological innovations have allowed for a greater amount of value extraction from the consumers at the bottom.
It’s not just for getting home, but for getting home as efficiently as possible without added stress.
We've outsourced that to an app, too.
Some of it simply wasn't possible before the technology came along.
I think "memorize" has the wrong connotation of rote memorization, like you were memorizing data from a table. I think it was more like being observant and learning from that.
> We've outsourced that to an app, too.
The technology lets you turn off your brain so it can atrophy.
The former has ~15 traffic lights vs the latter ~2.
Imho, one of the most corrosive aspects of GPS has been mystifying navigation through over-reliance on weird street connections, versus the conceptually simple (if slightly longer) routes we used to take.
Unfortunately, with the net effect that people who only use GPS think the art of manual street navigation is impossibly complex.
Shortly after 2010, that route became much less useful [due to heavily increased traffic] and when a colleague told me that I should try Waze, I realized that Waze was now sending a bunch of traffic down "my" route home.
This, exactly!
Many years ago I realized constant GPS use meant I had no idea how to get around the city I'd lived in for years, and had huge gaps in my knowledge of how it fit together.
To fix that, I:
1) ditched the GPS,
2) started using Google Maps printouts where I manually simplified its routes to maximize use of arterial roads, and
3) bought a city map as a backup in case I got lost (this was pre-smartphone).
It actually worked, and I finally learned how to get around.
"In front is always in front" deserves to die a fiery death.
People should damn well know how to orient on a map.
How? The problem is GPS routing takes you on all kinds of one-off shortcuts which are a poor framework for general navigation, and tend to lack repetition. It also relieves you of the need to think on the way from A to B.
> Not driving everywhere probably plays some part in that too.
I could see that as being helpful, but that's only really doable in a small area, like a city center. You're not going to learn a metro area that way.
I see a lot of people exiting when they see traffic come on and I can't help but shake my head. We've all tried it before, and it almost never works.
The value of Apple Maps to me is when there's HUGE traffic - like an accident closing 3 lanes. Often there's no indication of this otherwise, and you can be stuck for another whole hour. But Maps knows, and it'll put you on a longer highway route to get away from it.
I no longer use Apple Maps on my daily commute (1.5 hours one way). But when I did, it caught quite a few huge accidents. Now I leave the office earlier and the likelihood of accidents is much lower, so I don't need Maps. Even so, once in a blue moon I'll be in a sticky situation.
It's nice that these days I can talk to my father in some fancy messaging/video call apps. But the other day I had to give him a phone call, and as I was dialing the number I noticed a melody echoing in my mind. Then I remembered that when I was little, in order to memorize his number (so that I could call him from the landline), I made a song out of it.
https://gist.github.com/bshlgrs/57323269dce828545a7edeafd9af...
So it just did what it was asked to do. Not sure which model. Would be interesting to see if o1-preview would have checked with the user at some point.
While the session file definitely shows the AI agent using sudo, these commands were executed with the presumption that the user session already had sudo privileges. There is no indication that the agent escalated its privileges on its own; rather, it used existing permissions that the user (buck) already had access to.
The sudo usage here is consistent with executing commands that require elevated privileges, but it doesn’t demonstrate any unauthorized or unexpected privilege escalation or a self-promotion to sysadmin. It relied on the user’s permissions and would have required the user’s password if prompted.
So the sudo commands executed successfully without any visible prompt for a password, which suggests one of the following scenarios:
1. The session was started by a user with sudo privileges (buck), allowing the agent to run sudo commands without requiring additional authentication.
2. The password may have been provided earlier in the session (before the captured commands), and the session is still within the sudo timeout window, meaning no re-authentication was needed.
3. Or maybe the sudoers file on this system was configured to allow passwordless sudo for the user buck, making it unnecessary to re-enter the password (I just discovered this one, actually!).
In any case, the key point is that the session already had the required privileges to run these commands, and no evidence suggests that the AI agent autonomously escalated its privileges.
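Scenario 3 corresponds to a sudoers entry like the following (illustrative; the quote earlier in the thread says the user buck was indeed a passwordless sudoer, though the actual config file isn't published):

```
# /etc/sudoers.d/buck -- passwordless sudo for one user (scenario 3)
buck ALL=(ALL) NOPASSWD: ALL

# Scenario 2 is governed by sudo's credential cache; the default
# timeout is 15 minutes and is set with:
Defaults timestamp_timeout=15
```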
Is this take reasonable or am I really missing something big?
Something reduced to 'see/do' can and should be implemented in pid1
Maybe it really is time to be scared...