Are you wishing that he had tighter confidence intervals?
For example, saying that flying cars will be in widespread use NET 2025 is not much of a prediction. We can all agree that if flying cars ever reach widespread use, it will happen No Earlier Than 2025. It could happen in 2060 and that NET 2025 prediction would still be true. He could mark it green in 2026 and say he was right, that, yes, there are no flying cars, and chalk up another point in the correct column. But is that really a prediction?
A bolder prediction would be, say "Within 1-2 yrs of XX".
So what is Rodney Brooks really trying to predict and say? I'd rather read about the necessary gating conditions for something significant and prediction-worthy to occur, or about the intractable problems that would make something impossible within a predicted time, than read him complain about how much overhype and media sensation there is in the AI and robotics (and space) fields. Yes, there is, but that's not much of a prediction or statement either, as it's fairly obvious.
There's also a bit of an undercurrent of complaint in this long article about how the not-as-sexy or hyped work he has done for all those years has gone relatively unrewarded and "undeserving types" are getting all the attention (and money). And as such, many of the predictions and commentary on them read more as rant than as prediction.
In that context, I’d say his predictions are neither obvious nor lacking boldness when we have influential people running around claiming that AGI is here today, AI agents will enter the workforce this year, and we should be prepared for AI-enabled layoffs.
Rodney Brooks Predictions Scorecard - https://news.ycombinator.com/item?id=34477124 - Jan 2023 (41 comments)
Predictions Scorecard, 2021 January 01 - https://news.ycombinator.com/item?id=25706436 - Jan 2021 (12 comments)
Predictions Scorecard - https://news.ycombinator.com/item?id=18889719 - Jan 2019 (4 comments)
Then I bet Rodney can just fiddle with the goalposts and say that 3.26 trillion miles were driven in the US in 2024, so having a human intervene every 1,000 miles would mean 3.26 billion interventions, and that this is clearly not self-driving. In fact, until Waymo disables the Internet on all its cars and proves it never needs any intervention ever, Rodney can claim he's right; and even then, a car not stopping exactly where Rodney wanted it to might be "proof" that self-driving doesn't work.
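For what it's worth, the arithmetic in that hypothetical does check out; a minimal sketch using the figures above (the per-1,000-mile intervention rate is the comment's hypothetical, not a real number):

```python
# Back-of-envelope check of the intervention arithmetic above.
us_miles_2024 = 3.26e12        # ~3.26 trillion vehicle-miles (figure from the comment)
miles_per_intervention = 1_000  # hypothetical: one human intervention per 1,000 miles

interventions = us_miles_2024 / miles_per_intervention
print(f"{interventions:.2e} interventions")  # ~3.26e9, i.e. 3.26 billion
```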
The "next big thing after deep learning" prediction is clearly false. LLMs are deep learning, scaled up; we are not in any sense looking past deep learning. I bet Rodney wanted it to be symbolic AI, but that is most likely a dead end, and the bitter lesson actually holds. In fact, we have been riding this deep learning wave since AlexNet in 2012. OpenAI talked about scaling since 2016, and during that time the naysayers could be very confident and claim we needed something more, but OpenAI went ahead, proved out the scaling hypothesis, and passed the language Turing test. We haven't needed anything more than scale, and reasoning has turned out to be similar: just an LLM trained to reason, with no symbolic merger, and seemingly not even a search step.
Human driving isn't a solved problem either; the difference is that when a human driver needs an intervention, the car just crashes.
The remote operation seems to be more about navigational issues and reading the road conditions. Things like accidentally looping, or not knowing how to proceed with an unexpected obstacle. Things that don't really happen to human drivers, even the greenest of new drivers.
"Nothing ever happens"... until it does, and it seems Brooks's prediction roundups could now be conveniently replaced with a little rock with "nothing in AI ever works" written on it, without anything of value being lost.
They don't run to SFO because SF hasn't approved them for airport service.
Waymo's app only shows the areas accessible to you. Different users can have different accessible areas, though in the Bay Area it's currently just the two divisions I'm aware of.
I just want to highlight that the only mechanism by which this eventually produces cheaper rates is by eliminating the need to pay a human driver.
I’m not one to forestall technological progress, but there are a huge number of people already living on the margins who will lose one of their few remaining options for income as this expands. AI will inevitably create jobs, but it’s hard to see how it will—in the short term at least—do anything to help the enormous numbers of people who are going to be put out of work.
I’m not saying we should stop the inevitable forward march of technology. But at the same time it’s hard for me to “very much look forward to” the flip side of being able to take robocabs everywhere.
Let's say AV development stops tomorrow though. Is continuing to grind workers down under the boot of the gig economy really a preferred solution here or just a way to avoid the difficult political discussion we need to have either way?
All I'm asking is that we take a moment to reflect on the people who won't be winners. Which is going to be a hell of a lot of people. And right now there is absolutely zero plan for what to do when these folks have one of the few remaining opportunities taken away from them.
As awful as the gig economy has been it's better than the "no economy" we're about to drive them to.
You haven’t paid attention to how VC companies work.
The promise has been that self-driving would replace driving in general because it’d be safer, more economical, etc. The promise has been that you’d be able to send your autonomous car from city to city without a driver present, possibly to pick up your child from school, and bring them back home.
In that sense, yes, Waymo is nonexistent. As the article's author points out, lifetime miles for "self-driving" vehicles (70M) account for less than 1% of daily driving miles in the US (9B).
Even if we suspend that perspective, and look at the ride-hailing market, in 2018 Uber/Lyft accounted for ~1-2% of miles driven in the top 10 US metros. [1] So, Waymo is a tiny part of a tiny market in a single nation in the world.
Self-driving isn’t “here” in any meaningful sense and it won’t be in the near-term. If it were, we’d see Alphabet pouring much more of its war chest into Waymo to capture what stands to be a multi-trillion dollar market. But they’re not, so clearly they see the same risks that Brooks is highlighting.
[1]: https://drive.google.com/file/d/1FIUskVkj9lsAnWJQ6kLhAhNoVLj...
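A quick sketch of the percentages cited above, using the comment's own figures (not independently verified):

```python
# Lifetime autonomous miles vs. one day of US driving, per the figures above.
lifetime_av_miles = 70e6   # ~70M lifetime "self-driving" miles
daily_us_miles = 9e9       # ~9B miles driven per day in the US

share_of_one_day = lifetime_av_miles / daily_us_miles * 100
print(f"{share_of_one_day:.2f}% of a single day's US driving")  # ~0.78%
```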
I think that's a bit of a silly standard to set for hopefully obvious reasons.
Secondly, if we throw a dart on a map: 1) what are the chances Waymo can deploy there, 2) how much money would they have to invest to deploy, and 3) how long would it take?
Waymo is nowhere near a turn-key system where they can set up in any city without investing in the infrastructure underlying Waymo's system. See [1], which details the amount of manual work and coordination with local officials that Waymo has to do per city.
And that’s just to deploy an operator-assisted semi-autonomous vehicle in the US. EU, China, and India aren’t even on the roadmap yet. These locations will take many more billions worth of investment.
Not to mention Waymo hasn’t even addressed long-haul trucking, an industry ripe for automation that makes cold, calculated, rational business decisions based on economics. Waymo had a brief foray in the industry and then gave up. Because they haven’t solved autonomous driving yet and it’s not even on the horizon.
Whereas we can drop most humans in any of these locations and they’ll mostly figure it out within the week.
Far more than lowering the cost, there are fundamental technological problems that remain unsolved.
[1]: https://waymo.com/blog/2020/09/the-waymo-driver-handbook-map...
> with no evidence that the system can generalize, profitably, outside the limited areas it’s currently in
That argument doesn't seem horribly compelling given the regular expansions to new areas.
It’s safe to assume that a company’s ownership makes the decisions they believe will maximize the value of the company. Therefore, we can look at Alphabet’s capital allocation decisions with respect to Waymo to see what they think about Waymo’s opportunity.
In the past five years, Alphabet has spent >$100B to buy back their stock and retained ~$100B in cash. In 2024, they issued their first dividend to investors and authorized up to $70B more in stock buybacks.
Over that same time period they’ve invested <$5B in Waymo, and committed to investing $5B more over the next few years (no timeline was given).
This tells us that Alphabet believes their money is better spent buying back their stock, paying back their investors, or sitting in the bank, when compared to investing more in Waymo.
Either they believe Waymo’s opportunity is too small (unlikely) to warrant further investment, or when adjusted for the remaining risk/uncertainty (research, technology, product, market, execution, etc) they feel the venture needs to be de-risked further before investing more.
Alphabet has to buy back their stock because of the massive amount of stock comp they award.
DeepMind RL/MCTS can succeed in fairly open-ended settings like StarCraft and shit.
Brain/DeepMind still knocks hard. They under-invested in LLMs and remain kind of half-hearted around it because they think it’s a dumbass sideshow because it is a dumbass sideshow.
They train on TPUs, which cost less than chips made of rhodium like a rapper’s sunglasses, and they fixed the structural limits of TF2 and PyTorch via the JAX ecosystem.
If I ever get interested in making some money again Google is the only FAANG outfit I’d look at.
It’s like being in the back seat of Niki Lauda’s car.
Just to make sure we're applying our rubric fairly and universally: Has anyone else been in an Uber where you wished you were able to intervene in the driving a few times, or at least apply RLHF to the driver?
(In other words: Waymo may be imperfect to the point where corrections are sometimes warranted; that doesn't mean it isn't already driving at a superhuman level, for most humans. Just because there is no way for remote advisors to provide better decisions for human drivers doesn't mean that human-driven cars wouldn't benefit from that, if it were available.)
You'd also have to believe that when you wished to change how your Uber driver drove, you'd actually have improved things rather than worsened them.
However, if the real number is something like an intervention every 20 or 100 miles, so that an operator passively monitors dozens of cars and the cars themselves ask for operator assistance rather than being actively monitored, then I would agree with you that Waymo has really achieved full self-driving and his predictions on its basic viability have turned out wrong.
I have no idea though which is the case. I would be very interested if there are any reliable resources pointing one way or the other.
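Absent reliable numbers, here is a purely hypothetical sketch of what an operator-to-car ratio might look like; every figure below is an assumption for illustration, none is a published Waymo number:

```python
# Hypothetical: how many cars one remote operator could monitor,
# under assumed (not published) figures.
avg_speed_mph = 20              # assumed average urban speed
miles_per_intervention = 50     # assumed: one assist every 20-100 miles
minutes_per_assist = 2          # assumed handling time per assist

assists_per_car_hour = avg_speed_mph / miles_per_intervention
operator_minutes_per_car_hour = assists_per_car_hour * minutes_per_assist
cars_per_operator = 60 / operator_minutes_per_car_hour
print(f"~{cars_per_operator:.0f} cars per operator")  # ~75 under these assumptions
```

Even with much more pessimistic assumptions, the ratio stays well above one car per operator, which is the crux of the "is it really self-driving" question.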
No one would have equated the phrase "we'll have self-driving cars" with "some taxis in a few US cities".
Not really wanting to have this argument a second time in a week (seriously, just look at my past comments instead of replying here, as I said all I care to say: https://news.ycombinator.com/item?id=42588699), but he is totally wrong about LLMs just looking up answers in their weights: they can correctly answer questions about totally fabricated new scenarios, such as solving simple physics questions that require tracking the locations of objects and reporting where they will likely end up based on modeling the interactions involved. If you absolutely must reply that I am wrong, at least try it yourself first in a recent model like GPT-4o and post the prompt you tried.
I was curious so I looked up how much you can buy the cheapest new helicopters for, and they are cheaper than an eVTOL right now- the XE composite is $68k new, and things like that can be ~25k used. I'm shocked one can in principle own a working helicopter for less than the price of a 7 year old Toyota Camry.
Skill level needed for "driving" would increase by a lot, noise levels would be abysmal, security implications would be severe (be they intentional or mechanical in nature), privacy implications would result in nobody wanting to have windows.
This is all more-or-less true for drones as well, but their weight is comparable to a toddler's, not to a polar bear's. I firmly believe they'll never reach mass usage, but not because they're impossible to make.
Tell that to someone laid off when replaced by some "AI" system.
> Waymo not autonomous enough
It's not clear how often Waymo cars need remote attention, but it's not every 1-2 miles. Customers would notice the vehicle being stopped and stuck during the wait for customer service. There are many videos of people driving in Waymos for hours without any sign of a situation that required remote intervention.
Tesla and Baidu do use remote drivers.
The situations where Waymo cars get stuck are now somewhat obscure cases. Yesterday, the new mayor of SF had two limos double-parked, and a Waymo got stuck behind that. A Waymo got stuck in a parade that hadn't been listed on Muni's street closure list.
> Flying cars
Probably at the 2028 Olympics in Los Angeles. They won't be cost-effective, but it will be a cool demo. EHang recently put solid-state batteries into their flying car and got 48 minutes of flight time, up from their previous 25 minutes. [1] The EHang is basically a scaled-up quadrotor drone, with 16 motors and props. EHang has been flying for years, but not for very long per recharge. Better batteries will help a lot.
[1] https://aerospaceamerica.aiaa.org/electric-air-taxi-flights-...
Some companies may claim they are replacing devs with AI. I take it with a grain of salt. I believe some devs were probably replaced by AI, but not a large amount.
I think there may be a lot more layoffs in the future, but AI will probably account for a very small fraction of those.
I'm not even sold on the idea that there were any. The media likes to blame AI for the developer layoffs because it makes a much more exciting story than interest rates and arcane tax code changes.
But the fact is that we don't need more than the Section 174 changes and the end of ZIRP to explain what's happened in tech. Federal economic policy was set up to direct massive amounts of investment into software development. Now it's not. That's a real, quantifiable impact that can readily explain what we've seen in a way that the current productivity gains from these tools simply can't.
Now, I'll definitely accept that many companies are attributing their layoffs to AI, but that's for much the same reason that the media laps the story up: it's a far better line to feed investors than that the financial environment has changed for the worse.
>In 2024: At least 95,667 workers at U.S.-based tech companies have lost their jobs so far in the year, according to a Crunchbase News tally.
The problem is different in the meantime: nobody wants to pay for training those new devs. Juniors don’t have the experience to call an LLM’s bullshit, and seniors don’t get paid to teach them since LLMs replaced interns churning out boilerplate.
No, they are saying that the reason for the layoffs is not AI, it is financial changes making devs too expensive.
> If that is true then you need way less devs.
This does not follow. First of all, companies take a long time to measure dev output, it's not like you can look at a burn down chart over two sprints and decide to fire half the team because it seems they're working twice as fast. So any productivity gains will show up as layoffs only after a long time.
Secondly, dev productivity is very rarely significantly bounded by how long boilerplate takes to write. Having a more efficient way to write boilerplate, even massively more efficient, say 8h down to 1h, will only marginally improve your overall throughput, at least at the senior level: all that does is free you to think more about the complex issues you needed to solve. So if the task would have previously taken you 10 days, of which one day was spent on boilerplate, it may now take you, say, 8-9 days, because you've saved one day on boilerplate, plus some more minor gains here and there. So far from firing 7 out of every 8 devs, the 8h-to-1h boilerplate solution might allow you to fire 1 dev in a team of 10.
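The numbers in that argument can be made concrete; this is just the comment's own example worked out (it is essentially Amdahl's law applied to dev time):

```python
# Speeding up only the boilerplate portion of a task barely moves the total.
task_days = 10
boilerplate_days = 1   # one day of the 10 is boilerplate
speedup = 8            # boilerplate now takes 1/8 the time (8h -> 1h)

new_days = (task_days - boilerplate_days) + boilerplate_days / speedup
print(f"{new_days:.3f} days")  # 9.125 days: under 10% faster overall
```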
Sure, in the same sense that editors and compilers mean you need way less devs.
1. Leaders notice they were wrong and start to increase human headcount again.

2. Human work is seen as boutique and premium, used for marketing and market placement.

3. We just accept the sub-par quality of AI and go with it (quite likely with copywriting, I guess).
I'd like to compare it with cinema and Netflix. There was a time when lots of stuff was mindless shit, but there was still a place for A24, and it took the world by storm. What's gonna happen? No one knows.
But anyway, I figure that 90% of "laid off because of AI" is just regular layoffs with a nice-sounding reason. You don't lose anything by saying that and only gain in stakeholder trust.
Maybe a 2x improvement in kWh/kg. Much less risk of fire or thermal runaway. Charging in < 10 mins.
It would be unfortunate if we get solid-state batteries that have the great features you describe but are limited to 2x or so energy density. Twice the energy density opens a lot of doors for technology improvements and innovation, but it's still limiting for really cool things like humanoid robotics and large-scale battery-powered aircraft.
There are, of course, small startups promising usable lithium-air batteries Real Soon Now.[2]
[1] https://en.wikipedia.org/wiki/Lithium%E2%80%93air_battery
1. Solid state batteries. Likely to be expensive, but promise better energy density.
2. Some really good grid storage battery. Likely made with iron or molten salt or something like that. Dirt cheap, but horrible energy density.
3. Continued Lithium ion battery improvements, e.g. cheaper, more durable etc.
[1] https://newatlas.com/energy/worlds-largest-flow-battery-grid...
1. A life spent making mistakes is not only more honorable, but more useful than a life spent doing nothing.
2. The reasonable person adapts themselves to the world: the unreasonable one persists in trying to adapt the world to themself. Therefore all progress depends on the unreasonable person. (Changed man to person as I feel it should be gender neutral)
In hindsight, when we look back, everything looks like we anticipated it, so predictions are no different: some pan out, some don't. My feeling after reading the prediction scorecard is that you need the right balance between the risk-averse (who are either doubtful or don't have faith things will happen quickly enough) and the risk-takers (who are extremely positive) for anything good to happen. Both help humanity move forward and are a necessary part of nature. It is possible AGI might replace humans in the short term and then new kinds of work emerge and humans again find something different. There is always disruption with new changes, and some survive and some can't; even if nothing much happens, it's worth trying, as said in quote 1.
It really doesn't matter what prestigious lab you ran, as that apparently didn't impart the ability to think critically about engineering problems.
[Hint: Flying takes 10x the energy of driving, and the cost/weight/volume of 1 MJ hasn't changed in close to a hundred years. Flying cars require a 10x energy breakthrough.]
The author also expands on this:
> Don’t hold your breath. They are not here. They are not coming soon.
> Nothing has changed. Billions of dollars have been spent on this fantasy of personal flying cars. It is just that, a fantasy, largely fueled by spending by billionaires.
It’s worth actually reading the article before trashing someone’s career and engineering skills!
Not to mention, since we do have helicopters, the engineering challenge of flying cars is almost entirely unrelated to energy costs (at least for the super rich; the equivalent of, say, a Rolls-Royce, not of a Toyota). The thing stopping flying cars from existing is that it is extremely hard to make an easy-to-pilot flying vehicle, given the numerous degrees of freedom (and potential catastrophic failure modes) and the significantly higher unpredictability and variance of the medium (air vs. road surface).
Plus, there is the major problem of noise pollution, which reaches extreme levels for somewhat fundamental reasons (you have to displace a whole lot of air to fly, which is very close to having to create sound waves).
So, overall, the energy problem is already fixed, we already have point-to-point flying vehicles usable, and occasionally used, in urban areas, helicopters. Making them safe when operated by a very lightly trained pilot, and silent enough to not wake up a neighborhood, are the real issues that will persist even if we had mini fusion reactors.
A modern car might easily have 130 kW or more, and that's what a Cessna 172 has (around 180 hp). (Sure, a plane cruises at the higher end of that, while a car only uses that much to accelerate and cruises at the lower end of the range - still not a factor of 10x.)
As another datapoint, a Diamond DA40 does around 28 miles per gallon (< 9 litres per 100 km) at 60% power cruise.
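Putting the two datapoints above side by side (the car figure is an assumed typical highway economy, not from the thread) shows the gap is nowhere near the claimed 10x:

```python
# Fuel economy of a light aircraft vs. an assumed typical sedan.
da40_mpg = 28   # Diamond DA40 at 60% power cruise (figure from the comment)
car_mpg = 35    # assumed typical highway economy for a sedan

ratio = car_mpg / da40_mpg
print(f"flying uses ~{ratio:.2f}x the fuel per mile")  # ~1.25x, not 10x
```

Of course mpg at cruise isn't the whole story (takeoff, hover for VTOL designs, and battery weight change the math for electric aircraft), but it undercuts a blanket "10x energy" claim for fixed-wing flight.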
But this is the whole point of VC investing. It is not normal distribution investing.
It seems to me we’re at the very least close to this, unless you hold unproven beliefs about grey matter vs silicon.
Technically true but I'm not convinced it matters that much. The reason autonomation took over in manufacturing was not that they could fire the operator entirely, but that one operator could man 8 machines simultaneously instead of just one.
If you took a transcript of a conversation with Claude 3.6 Sonnet, and sent it back in time even five years ago (just before the GPT-3 paper was published), nobody would believe it was real. They would say that it was fake, or that it was witchcraft. And whoever believed it was real would instantly acknowledge that the Turing test had been passed. This refusal to update beliefs on new evidence is very tiresome.
The NotebookLM “podcasters” would have been equally convincing to me.
Glad Polymarket (and other related markets) exist so they can put actual goal posts in place with mechanisms that require certain outcomes in order to finalize on a prediction result.
Wait, What now?
I have never heard this, but from the founder of CSAIL I am going to take it as a statement of fact and proof that basically every AI company is flat out lying.
I mean the difference between remote piloting a drone that has some autonomous flying features (which they do to handle lag etc) and remote driving a car is … semantics?
But yeah it’s just moving jobs from one location to another.
This is more like telling your units to go somewhere in a video game, and they mostly do it right, but occasionally you have to take a look and help them because they got stuck in a particularly narrow corridor or something.
Predict the future, Mr. Brooks!
I'm curious where this idea even came from, not sure who the customer would be, it's a little disappointing he doesn't mention mag-lev trains in a discussion about future rapid transit. I'd much rather ride a smooth mag-lev across town than an underground pallet system.
You are not predicting, just daydreaming.
> Let’s Continue a Noble Tradition!
> The billionaire founders of both Virgin Galactic and Blue Origin had faith in the systems they had created. They both personally flew on the first operational flights of their sub-orbital launch systems. They went way beyond simply talking about how great their technology was, they believed in it, and flew in it.
> Let’s hope this tradition continues. Let’s hope the billionaire founder/CEO of SpaceX will be onboard the first crewed flight of Starship to Mars, and that it happens sooner than I expect. We can all cheer for that.