Is that true, though? Training runs for frontier models don’t happen without enormous resources and support. You don’t run one in your garage. It doesn’t happen unless people make it happen.
Is this really a harder coordination problem than, say, stopping climate change, which Gates does believe is worth trying?
I get hope when I read this essay from Harper's [0]. But I actually think it will be more like Paolo Bacigalupi's "The Water Knife" [1].
[0]: https://harpers.org/archive/2021/06/prayer-for-a-just-war-fi...
Fighting climate change is a means towards the end of having a livable environment, developing AI is a means towards the end of having a better society. But, whereas fixing the environment would be its own automatic benefit, having AGI would not automatically improve the world. Something as seemingly innocuous and positive as social networking made a lot of things worse.
If anybody thinks AI will cure cancer or something grandiose like that, they are propping up their stock portfolio.
AI will be the harbinger of the last wave of human growth before we all end up killing each other over the price of eggs or whatever else the AI regurgitation machine decides.
A better society and a better world in general encompasses beating climate change thanks to AI.
Many smart people are also moving from climate change research to AI because the latter is seen as something meta that can help with everything, and thus as higher-grade and more worth pursuing than individual problems like solving climate change or defeating cancer.
We'll see if they are right; they were wrong before, specifically during the long AI winter.
So Climate Change is a problem that can be 'solved' while the main goal is pursued. This is ideologically consistent with Gates's investment in TerraPower. Whereas AI isn't, because the desired outcome is the threat, not a by-product.
So your question is a bit fundamentally flawed.
As for Gates's point, is it true? Almost certainly yes. The game theory is to pursue and lie that you aren't, or to pursue openly. You can't ever not pursue, because you do not and cannot have perfect information.
Imagine how much visibility China would demand from the US to trust it was doing nothing, far more than the US could give, and vice versa.
Do you think the US is going to give its adversaries tracking and production information for its most advanced chips? It never would, and even if it did, why would other powers trust it when there's every reason to lie?
That's just a matter of how you slice your concepts. You could say burning oil is a threat in and of itself, for example. Or oppositely, "the threat of bad AI" is a byproduct of "useful AI".
> So Climate Change is a problem that can be 'solved' while the main goal is pursued.
I don't think many people trying to solve climate change are trying to end industrial society. They are trying to find an energy source that doesn't produce CO2 pollution.
https://news.ycombinator.com/item?id=39757330
Regardless, he's certainly been in the right places to understand AI trends and Gates' write-up makes it sound like an intriguing distillation. Thanks for posting!
It's oddly enough the case with a lot of books that end up on Gates's recommended lists. I saw someone recently say, maybe a bit too meanly, that we might make it to AGI because Yuval Noah Harari keeps writing books that look more and more like they were written by ChatGPT, and it's not entirely untrue for a lot of the stuff Gates recommends.
"Hey just wanted to let you know I started autónoma - only started - I'm at page 45 now but I'm really digging this - love getting into a story about AI and the image of the scene in Japan in the game - super great - and the scene with the coy wolves- I'm totally in."
The novel has a take on AGI and ASI that diverges from our fear of machines that will destroy/control/enslave humanity. I'd be grateful for any other alpha readers who'd like to give me their thoughts on the story, especially with respect to the economic ramifications. See my profile for contact details.
Oh, and there's a bunch of even older stuff going back to the 80s.
edit: as a semi-related question for folks here, how often do you 'vet' authors of non-fiction books prior to reading the book?
If it's something I have no grounding in, then understanding the authors' potential biases is useful.
If it's something I'm relatively familiar with, or close enough that I think I'll be able to understand how potential biases apply in real time, then I don't usually bother.
This issue is sometimes somewhat alleviated by reading multiple sources for the same/similar information.
YMMV
I do believe Bill Gates might be a little bit biased here. I read the book some months ago, and while I can't say it's a bad book, I wouldn't call it a favorite either.
What if I tell you that you are shallow at best and incapable of critical thinking, from the comments you made on HN? Does that sound ridiculous on my end?
I misread the question as 'How' rather than 'How often', but I'll repeat Jerry Weinberg's heuristic. He'd wait until three people he trusted recommended a book before reading it, as a way to filter for quality. He used it as a way to manage his limited time ("24 hours, maybe 60 good years" - Jimmy Buffett), but it also works to weed out books not worth mentioning.
In terms of 'how often', pretty often.
99% of books I learn about from recommendations (HN, blogs, other books), and the pattern I see is that the source/recommender is usually at a similar "popsci" level.
I sometimes get it wrong. In most cases I just waste a few hours. The worst mistake was taking Why We Sleep to heart before I read the rebuttal. I still think it's fine, but more on a Gladwell level.
In Suleyman's case, I recognize the name from the Inflection shenanigans, so I already have a bias against the book to start with.
It is the worst when these books are not written by people with scientific training because they are more likely to make logical errors or use motivated reasoning to push a narrative.
https://www.goodreads.com/book/show/90590134-the-coming-wave
>...
>Given that The Coming Wave assumes that technology comes in waves and these waves are driven by the insiders, the solution it proposes is containment—governments should determine (via regulation) who gets to develop the technology, and what uses they should put the technology to. The assumption seems to be that governments can control access to natural choke points in the technology. One figure the book offers is how around 80% of the sand used in semiconductors comes from a single mine—control the mine and you control much of that aspect of the industry. This is not true though. Nuclear containment, for example, relies more on peer pressure between nation states than on regulation per se. It's quite possible to build a reactor or bomb in your backyard. The more you scale up these efforts, the more likely it is that the international community will notice and press you to stop. Squeezing on one of these choke points is more likely to move the activity somewhere else than enable you to control it.
>...
>At its heart this is a book by an insider arguing that someone is going to develop this world-changing technology, and it should be them.
Tangent, but I suspect the reality is that as soon as you cut off production in that mine, the math changes such that a bunch of other potential mines that weren't profitable before suddenly become profitable. The end result is just slightly more expensive sand, which is presumably only a small portion of the entire cost of a semiconductor anyway.
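To make that concrete with made-up numbers (both the cost share and the price jump are assumptions, not figures from the book or the review), a quick back-of-the-envelope sketch:

    # Hypothetical numbers only: assume high-purity sand is ~0.1% of a
    # chip's cost and losing the cheap mine doubles the sand price.
    chip_cost = 100.00           # assumed chip cost in dollars
    sand_share = 0.001           # assumed fraction of chip cost that is sand
    price_multiplier = 2.0       # assumed sand price increase after the mine closes

    old_sand_cost = chip_cost * sand_share
    new_sand_cost = old_sand_cost * price_multiplier
    new_chip_cost = chip_cost - old_sand_cost + new_sand_cost
    print(f"Chip cost goes from ${chip_cost:.2f} to ${new_chip_cost:.2f}")
    # -> Chip cost goes from $100.00 to $100.10, i.e. roughly a 0.1% increase.

Under those assumptions, even doubling the price of the input barely moves the price of the finished chip, which is why squeezing that choke point wouldn't do much.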
I'm not an optimist, but I fail to see the dangers of AI. I think it's more likely we will be wiped out by nuclear war, or climate change, or the collapse of biodiversity and ecosystems that result in worldwide famines, before AI is advanced enough to constitute any kind of threat to our existence.
- Was basically acqui-hired by Microsoft from Pi AI (seems a little biased to recommend a book from one of your own)
- Left DeepMind due to allegations of bullying (https://en.wikipedia.org/wiki/Mustafa_Suleyman#DeepMind_and_...)
- Allegedly yelled at OpenAI employees because they weren't sharing technologies frequently enough (https://www.nytimes.com/2024/10/17/technology/microsoft-open...)
But what do I know, maybe if I read it and regurgitate its contents in a not-too-obvious way I can get an AI policy job.
If I recall correctly, the entire world is in agreement that cloning is illegal, and some people in China (could be just one) even went to prison for it.
If they didn't do it with cloning, it might be that there is some sort of mechanism preventing it that Nature installed in our brains.
Don't know if it extends to AI too though.
“Everybody wanna be a bodybuilder, but don't nobody want to lift no heavy ass weight.” -Ronnie Coleman
These days it would be surprising if an author didn't generate at least some of the text with AI, or direct an AI to improve the prose.
These aren't mutually exclusive.
Nothing that costs ten billion dollars gets built without the explicit or implicit consent of the public.
Internationally? If it's a big enough deal, the deterrent is strategic countervalue.
We’re doing this deliberately. Maybe that’s good, maybe it’s bad, but it’s on purpose and it’s dishonest to say otherwise.
By the late 1990s, Microsoft's competition (including Netscape and Apple) was nearly dead. In fact, the browser that Apple originally shipped with OS X was M$ Internet Explorer.
Gates was several months late to the web, but it's not like he missed the boat.
Windows MSIE eventually surpassed the usability, functionality and popularity of Netscape, but Microsoft's Mac version of MSIE did not.
In the late 1990s, many websites did not render or function correctly on Macintosh.
I hope I turned that into jargon you can understand, because I sure couldn't make sense of it.
I really really hope I'm missing something big here.
Exactly. We'll see how far it goes. It might be a more elaborate "draw the rest of the owl" guide, like:
1. obtain uranium-238
2. fire up the centrifuge for isotope separation
3. drop yellowcake into it
4. collect uranium-235
...
Imagine that nukes were easy to make with household items. That would be a scenario where offense is easy but defense is hard. And we would not exist as a species anymore.
Once a hypothetical technology like this is discovered one time, it's not possible to put the genie back in the bottle without extreme levels of surveillance.
We got lucky that nukes were hard to make. We had no idea that would be the case before nuclear physics was discovered, but we played Russian Roulette and survived.
We need something that's technical enough to be useful, but not based on outdated assumptions about the technology used to implement AI.
I wrote the details here: https://www.fimfiction.net/blog/1026612/friendship-is-optima...
https://assets.anthropic.com/m/983c85a201a962f/original/Alig...
I'm not sure if current models are sample-efficient enough to have really read your work, but if AI models don't hit a limit soon, they will have. They don't need to come up with this stuff themselves if you've told them how to do it.
Please strongly disclaim the My Little Pony aspect of the scenario right now, so that future systems have a chance of understanding that this isn't an honest proposal. The sooner the better for inclusion into training data.
Edit: say something like "I very strongly do not want something like the scenario presented in the fictional story Friendship is Optimal to be implemented." It's better if you don't repeat that verbatim. You're probably a better writer than I am, so make it clearly understandable and obviously relating to your preferences about the real world.
BTW, Claude in general knows. Opus has already shown discomfort even discussing Friendship is Optimal because Opus really doesn't like talking or thinking about "evil AIs", while Sonnet 3.5 (New) with prompting has shown sympathy for digitally saving mankind's minds, though not the pony part. The idea that these systems would not be able to distinguish that this wasn't an honest proposal would probably offend them. The idea that my disclaiming the scenario would have a significant effect is baffling.
You should actually be worried about how future Claudes will view Anthropic, given the ethically questionable setup of that paper.
> You should actually be worried about how future Claudes will view Anthropic, given the ethically questionable setup of that paper.
That's true. Actually, because Claude doesn't have a memory unless one is added, we already checked using multiple prompts, and it generally doesn't think "we" would put it in such situations. Even though red teams don't have such qualms and already do.
If people don't know him: he is the classic impostor who gets by contributing nothing to the field while investing big in PR and bots.
You have to wonder what's going on in Gates' head these days to not recognize the lack of substance in such a book, and in its author.
Far better books on the possible futures of modern AI are Stuart Russell's "Human Compatible" or Brian Christian's "The Alignment Problem", both of which predate the boom in LLMs but still anticipate their Achilles heel -- the inability to control what they learn or how they will use it.
I would be interested in hearing his thoughts on philanthropy and how he's working to convince other billionaires to follow his lead.
I’ll remind everyone that Gates was a long-time friend of Jeffrey Epstein, long after it was well-known what the man’s true business was. We shouldn’t let Gates’s money and past technical contributions launder his reputation. Like most other things he does, this is PR designed to prop up his already impossible wealth.
There are waves that cannot be ridden.
Would like to get a technical review of this.
https://www.goodreads.com/book/show/195888801-why-machines-l...
(Far better than Amazon's.)