I don't have any suggestions on how to solve this. Everything I can think of has immediate large flaws.
Is it even possible? Like, don't you know the political inclination of any website/journal you read? I feel like this search for "The Objective Truth" is just a chimera. I'd rather articles combine the pros and cons of everything they discuss tbh
You can easily find examples of each. Both NYT and Slate are considered left-leaning, and at the same time both have been the professional stomping grounds of right-leaning writers who started their own media companies that are not left-leaning. Everyone has a bias, and they don't have to work somewhere with that same bias, especially if you just stick to the paper's style guide. Given the same substance, the two media outlets present the same topic very differently. Sometimes I appreciate the Slate format for the author's candor and injected point of view (like being pointed about Malcolm Gladwell). Sometimes I just want to know the facts as clearly stated as possible (I don't care if the author doesn't believe in climate change, tell me what happened when North Carolina flooded).
Because articles that actually do that are few and far between.
A journalist doing anything other than journaling is not a journalist.
So people getting quoted verbatim is perfectly fine. If the person quoted turns out to be a liar, that's just part of the journal.
The journalist’s job is to describe what actually is happening, and to provide enough context for readers to understand it. Some bias will inevitably creep in, because they can’t possibly describe every event that has ever happened to their subject. But for example if they are interviewing somebody who usually lies, it would be more accurate to at least include a small note about that.
The former is a journalist's job; the latter is the reader's concern and not the journalist's.
One of the reasons I consider journalism a cancer upon humanity is that journalists can't just write down "it is 35 degrees Celsius today at 2pm", but rather "you won't believe how hot it is".
Just journal down what the hell happens literally and plainly, we as readers can and should figure out the rest. NTFS doesn't interject opinions and clickbait into its journal, and neither should proper journalists.
"Typhoon 14 located 500km south of Tokyo, Japan with a pressure of 960hPa and moving north-northeast at a speed of 30km/h is expected to traverse so-and-so estimated course of travel at 6pm tomorrow."
"Let's go over to Arizona. It's currently 105F in Tuscon, 102F in Yuma, ..."
Brutally to the point, the readers are left to process that information as appropriate.
Journalists do not do this, and they should if they claim to be journalists.
In America, just about every meteorologist editorializes the weather to a degree. There's nothing scientific about telling me "it's a great night for baseball" (great for the fans? Pitchers? Hitters?) or "don't wash your car just yet", but I will never stop hearing those. I don't think that infringes on journalistic standards, and the public doesn't seem to either, because the information is still presented. Maybe this is different from what you mean -- if you're talking about a situation where journalists intentionally omitted the full context and pushed the information to the side, obviously that is undesirable.
I will add that weather as a "news product" actually gains quite a fair bit from presenter opinion, and news is a product above all.
But the first example is not very useful either. That journalist could be replaced by a fully automated thermometer. Or weather stations with an API. Context is useful: “It is 35 degrees Celsius, and we’re predicting that it will stay sunny all day” will help you plan your day. “It is 35 degrees Celsius today, finishing off an unseasonably warm September” could provide a little info about the overall trend in the weather this year.
I don’t see any particular reason that journalists should follow your definition, which you seem to have just… made up?
See: https://www.merriam-webster.com/dictionary/journal
Specifically noun, senses 2B through 2F.
I expect journalists to record journals and nothing more and nothing less, not editorials or opinion pieces, which are written by authors or columnists or whatever.
Or, from your definition, apparently:
> the part of a rotating shaft, axle, roll, or spindle that turns in a bearing
I don’t think these journalists rotate much at all!
A better definition is one of… journalism.
https://www.britannica.com/topic/journalism
> journalism, the collection, preparation, and distribution of news and related commentary and feature materials through such print and electronic media as […]
That said, I don’t think an argument from definition is all that good anyway. These definitions are descriptive, not prescriptive. Journalism is a profession, they do what they do for the public good. If you think that it would be better for the field of journalism to produce a contextless log of events, defend that idea in and of itself, rather than leaning on some definition.
There are of course places you can go to get raw weather data, but a journalist might put it in context of what else is going on, interview farmers or climatologists about the situation, etc.
There are lots of kinds of journalism, but maybe the most important is investigative journalism. They are literally doing an investigation - reading source material, actively seeking out the right people to interview and asking them the right questions, following the leads to more information.
They’re describing collating and you’re describing evaluating.
If you're also tasking "journalists" to evaluate for you, you aren't a reader and they aren't journalists. You're just a dumb terminal getting programs (others' opinions) installed and they are influencers.
Your choice of metaphor points out problems with your definition. Avid Linux users will be immediately biased against what you wrote, true though it may be, because you assumed that NTFS is the predominant, or even a good, example of journaling file systems.
For example you could say:
Joey JoeJoe, billionaire CEO, who notably said horrible things, was convicted of some crimes, and ate three babies, was quoted as saying “machine learning is just so awesome”.
There, you didn’t inject a judgement. You accurately quoted the subject. You gave the reader enough contextual information about the person so they know how much to trust or not-trust the quote.
A major problem, though, is headlines don't and can't carry this context. And those are the things most people read.
The best you'll get is "Joey JoeJoe says machine learning is just so awesome" or at best "Joey JoeJoe comments on ML. The 3rd word will blow you away!".
How do you objectively decide which statements are horrible and which aren't?
The other things you listed are facts, but this one would be subjective. That isn't just providing contextual information; it's adding personal bias to the reporting.
See same with Elon Musk.
Money turns geniuses into smooth-brained, egomaniacal idiots. See the same with Steve Jobs.
In reality they've been vampire sucked dry by close family / friends / salesmen for years and didn't know it.
"Being a billionaire must be insane. You can buy new teeth, new skin. All your chairs cost 20,000 dollars and weigh 2,000 pounds. Your life is just a series of your own preferences. In terms of cognitive impairment it's probably like being kicked in the head by a horse every day"
Solitary confinement is a great comparison. But not existing in the same reality as 99.99% of the population must really warp you too.
He was an opportunistic, amoral sociopath before he was rich, and the system he reaps advantage from strongly selects for hucksters of that particular ilk more than anything else.
He's just another Kalanick, Neumann, Holmes or Bankman-Fried.
"It's too late to stop conflating wealth with intelligence"
For themselves? Absolutely.
For humanity? Perhaps we have wildly different ideas of what is good for humanity.
Even the stories I heard about him from one of his indirect reports back in the pre-iCEO "Apple is still fucked, NeXT is a distracted mess" era were just like stories told about him from the dawn of Apple and in the iPhone era.
Musk and Altman are opportunists. Musk appears to be a malignant narcissist. Neither seems in a rush to be a better human.
Billionaires are a shame upon the collective; they should be shameful to every one of us. They are fundamentally the most unfit for leadership. They are evidence of civilizational failure; the least we can do is not idolize them.
I think the job of a CEO is not to tell you the truth, and, more often than not, the truth is probably the opposite.
What if GPT-5 is vaporware, and there's no equivalent of the 3-to-4 leap to be realized with current deep learning architectures? What is OpenAI worth then?
What you need a CEO for is to sell you (and your investors) a vision.
It saddens me how easily someone with money and influence can elevate themselves to a quasi religious figure.
In reality, this vision you speak of is more like the blind leading the blind.
If so many people didn't fall for claims without any proof, religions themselves would not exist.
If you want to call it suckering suckers into parting with their money that works too.
It’s not all that different from crypto.
Progress depends on irrational people though. Even if 50 companies fail for every one that succeeds, that’s progress.
Given Altman seems to be extremely vague about exact timelines and mainly gives vibes, he's probably doing fine. Especially as half the stuff he says is, essentially, to lower expectations rather than to raise them.
I remember all the articles praising the facade of a super-genius. It's a stark contrast to today.
People write about his troubles or his latest outburst like they would a neighbor's troubled kid. There's very little decency in watching people sink like that.
What's left after reality reasserts itself, after the distortion field is gone? Mostly slow decline. Never to reach that high again.
I'm sure the defence is always, "but if we just had a bit more money, we would've got it done"
You can make a case that partial self-driving is a route to FSD, that the ISS is a step en route to Mars, and (you can make a potentially slightly less compelling case) that LLMs are on the way to AGI.
No one can make a case that that lady was en route to the tech she promised.
I think the more abstract and less defined the end goal is, the easier it is to make everything look like progress.
The blood testing lady was a pass/fail really. FSD/AGI are things where you can make anything look like a milestone. Same with SpaceX going to Mars.
[0] https://norcalrecord.com/stories/664710402-judge-tosses-clas...
Perhaps if you have a selective memory. There's plenty of collections of straight-up set-in-stone falsehoods on the internet to find, if you're interested.
It's not trivial. "Mere puffery" has netted Tesla about $1B in FSD revenue.
CEOs more often come from marketing backgrounds than other disciplines for the very reason that they have to sell stakeholders, employees, and investors on the possibilities. If a CEO's myth-making turns out to be a lie 50 to 80 percent of the time, he's still a success, as with Edison, Musk, Jobs, and now Altman.
But I think AI CEOs seem to be imagining and peddling wilder, fancier myths than the average. If AI technology pans out then I don't feel they're unwarranted. I think there's enough justification, but I'm biased and have been doing AI for 10 years.
To your question: if a CEO's lies don't accidentally turn true eventually, as in the case of Holmes, then yes, it's a big problem.
OpenAI decides what they call GPT-5. They are waiting for a breakthrough that would make people go "wow!". That's not even very difficult, and there are multiple paths. One is a much smarter GPT-4, which is what most people expect, but another is a really good voice-to-voice or video-to-video feature that works seamlessly, the same way ChatGPT was the first chatbot that made people interested.
Otherwise people might get the impression that we’re already at a point of diminishing returns on transformer architectures. With half a dozen other companies on their heels and suspiciously nobody significantly ahead anymore, it’s substantially harder to justify their recent valuation.
Which model? Sonnet 3.5? I subscribed to Claude for a while to test Sonnet/Opus, but never got them to work as well as GPT-4o or o1-preview. Mostly tried it out for coding help (Rust and Python mainly).
Definitely didn't see any "leap" compared to what OpenAI/ChatGPT offers today.
Sam Altman himself doesn't know whether it's the case. Nobody knows. It's the nature of R&D. If you can tell whether an architecture works or not with 100% confidence, it's not cutting edge.
I think that was before 4o? I know 4o-mini and o1 for sure have come out since he said that
You say this unironically on an article stating that Sam Altman cannot be taken at his word, in a string of comments about him hyping up the next thing so he can exit strongly on the back of the next greater fool. But seriously, I'm sure GPT-5 will be the greatest leap in history (OpenAI equity holder here).
I suspect it's a little different. AI models are still made of math and geometric structures. Like mathematicians, researchers are developing intuitions about where the future opportunities and constraints might be. It's just highly abstract, and until someone writes the beautiful Nautilus Mag article that helps a normie see the landscape they're navigating, we outsiders see it as total magic and unknowable.
But Altman has direct access to the folks intuiting through it (likely not validated intuitions, but still insight)
That's not to say I believe him. Motivations are very tangled and meta here
That’s dead. OpenAI knows that much. There will be more, but they aren’t going to report that we’re doing incremental advances until there’s a significant breakthrough. They need to stay afloat and say what it takes to try and bridge the gap.
But it is always about the narrative.
But you are right, we live in a post truth influencer driven world. It's all about the narrative.
I keep hearing from people who find these enormous benefits from LLMs. I've been liking them as a search engine (especially finding things buried in bad documentation), but can't seem to find the life-changing part.
2. To break procrastination loops. For example, I often can't name a particular variable, because I can see a few alternatives and I don't like any of them. Nowadays I just ask ChatGPT and often proceed with its opinion.
3. Navigating less-known technologies. For example, my Python knowledge is limited and I don't really use it often, so I don't want to spend time learning it better. ChatGPT is just perfect for that kind of task, because I know what I want to get, I just miss some syntax nuances, and I can quickly check the result. Another example is jq: it's a very useful tool, but its syntax is arcane and I can't remember it even after years of occasional tinkering with it. ChatGPT builds jq programs like a super-human; I just show it example JSON and what I want to get (there's a sketch of what I mean after this list).
4. Not ChatGPT, but I think Copilot is based on GPT-4, and I use Copilot very often as a smart autocomplete. I didn't really adopt it as a code-writing tool, I'm very strict about the code I produce, but it still helps a lot with repetitive fragments. Things that used to take me 10-20 minutes, constructing regexps or using editor macros, I can now do with Copilot in 10-20 seconds. For languages like Golang, where I must write `if err != nil` after every line, it also helps me not go crazy.
Maybe I didn't formulate my thoughts properly. It's not anything irreplaceable and I didn't become a 10x programmer. But those tools are very nice and absolutely worth every penny I paid for them. It's like IntelliJ IDEA. I can write Java in notepad.exe, but I'm happy to pay $100/year to JetBrains and write Java in IDEA.
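For the curious, a minimal sketch of the kind of task I mean (the JSON shape and names are made up). The jq one-liner ChatGPT hands me would be something like `.users[] | {name, ids: [.projects[] | select(.active) | .id]}`; here is the same thing in Python:

```python
# Sketch of the "show example JSON, get a program" workflow (made-up data).
import json

doc = json.loads("""
{
  "users": [
    {"name": "ann", "projects": [{"id": 1, "active": true}, {"id": 2, "active": false}]},
    {"name": "bob", "projects": [{"id": 3, "active": true}]}
  ]
}
""")

# "For every user, give me their name and the ids of their active projects."
for user in doc["users"]:
    ids = [p["id"] for p in user["projects"] if p["active"]]
    print({"name": user["name"], "ids": ids})
# {'name': 'ann', 'ids': [1]}
# {'name': 'bob', 'ids': [3]}
```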
Respectfully, that's a bit like saying you don't need to learn how to ride a bicycle because you can use training wheels. It's an excuse not to learn something, to keep yourself less knowledgeable and skillful than you could really be. Why stunt yourself? Knowledge is power.
See it this way: anyone can use ChatGPT but not everyone knows Python well, so you'll never be able to use ChatGPT to compete with someone who knows Python well. You + limited knowledge of Python + ChatGPT << someone + good knowledge of Python + ChatGPT.
Experiences like the one you relay in your comment make me think using LLMs for coding in particular is betting on short-term gains for a disproportionately large long-term cost. You can move faster now, but there's a limit to what you can do that way and you'll never be able to escape it.
Slow down, cowboy. Getting a LLM to generate code for you that is immediately useful and doesn't require you to think too hard about it can stunt learning, sure, but even just reading it and slowly getting familiar with how the code works and how it relates to your original task is helpful.
I learned programming by looking at examples of code that did similar things to what I wanted, re-typing it, and modifying it a bit to suit my needs. From that point of view it's not that different.
I've seen a couple of cases first hand of people with no prior experience with programming learn a bit by asking ChatGPT to automate some web scraping tasks or spreadsheet manipulation.
> You + limited knowledge of Python + ChatGPT << someone + good knowledge of Python + ChatGPT.
Subtract ChatGPT from both sides and you have a rather obvious statement.
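Spelled out, writing C for ChatGPT and P for Python skill, purely to make the algebra explicit:

```latex
\text{You} + P_{\text{limited}} + C \ll \text{Someone} + P_{\text{good}} + C
\quad\Longrightarrow\quad
P_{\text{limited}} \ll P_{\text{good}}.
```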
> Respectfully but that's a bit like saying you don't need to learn how to ride a bicycle because you can use a pair of safety wheels.
How did you learn to ride a bicycle?
You are making a lot of assumptions about someone's ability to learn with AND without assistance, while also making rather sci-fi leaps about our brains somehow being able to tell apart learning that has been tainted by the tendrils of ML overlords.
The models and the user interface around them absolutely will continue to improve far faster than any one person's ability to obtain subject mastery in a field.
If you want to have a debate, I'm all for it, but if you're going to go around imagining things that I may have said in another timeline then I don't see what's the point of that.
You could be right, but I strongly suspect that you are actually wrong.
Do you care how a car engine works to drive a car?
All with much lower latency than an HTTP request to a random place, knowing that my data can't be used for training anything, and it's free.
It’s absolutely insane this is the real world now.
Not wanting my data sent to random places is what has limited my use of tools like Copilot (so I'd use it only very sparingly, after considering whether sending the data would be a breach of NDA or not).
And I could say this about just about every domain of my life. I've trained myself to ask it about everything that poses a question or a challenge, from creating recipes to caring for my indoor Japanese maple tree to preparing for difficult conversations and negotiations.
The idea of "just" using it to compose emails or search for things seems frustrating to me, even reading about it. It's actually very hard for me to capture all of this in a way that doesn't sound like I'm insulting the folks who aren't there yet.
I'm not blindly accepting everything it says. I am highly technical and I think competent enough to understand when I need to push back against obvious or likely hallucinations. I would never hand its plans to a contractor and say "build this". It's more like having an extra, incredibly intuitive person who just happens to contain the sum of most human knowledge at the table, for $20 a month.
I honestly don't understand how the folks reading HN don't intuitively and passionately lean into that. It's a freaking superpower.
It is difficult to get a man to understand something when his salary depends upon his not understanding it.
Many of us here would see our jobs eliminated by a sufficiently powerful AI, perhaps some have already experienced it. You might as well. If you use AI so much, what value do you really provide and how much longer before the AI can surpass you at that?
There's a lot of people in technical roles who chose to study programming and work at tech companies because it seemed like it would pay more than other roles. My own calculation is that the tech-but-could-have-just-as-easily-been-a-lawyer cohort will be the first to find themselves replaced. Is that revealing a bias? Absolutely.
Actual hackers, in the true sense, will have no trouble finding ways to continue to be useful and hopefully well compensated.
It's helped me stay productive on days when my brain just really doesn't want to come up with a function that does some annoying fairly complex bit of logic and I'd probably waste a couple hours getting it working.
Before I'd throw something like that at it, and it'd give me something confidently that was totally broken, and trying to go back and forth to fix it was a waste of my time.
Now I get something that works pretty well, where maybe I just need to tweak a bit because I didn't give it enough context or didn't quite go over all the inconsistencies and exceptions in the business logic given by the requirements. (Also, I can't actually use it on client machines, so I have to type it manually to and from another machine; since I'm not copy-pasting anything, I try to get away with typing less.)
I'm not typing anything sensitive, btw, this is stuff you might find on Stack Overflow but more convoluted, like "search this with this exception and this exception because that's the business requirement, and by these properties, but then go deeper into this property that has a submenu that also needs to be included, and provide a flat list but group it by this and transform it so it fits this new data type and sort it by this unless this other property has this value" type of junk (something like the sketch below).
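To make that concrete, a hedged sketch of the shape of such a request (every name here is invented; the pattern is what matters, not the specifics):

```python
# Filter with exceptions, descend into nested submenus, flatten, group,
# reshape to a new type, and sort -- convoluted but mundane.
from itertools import groupby

menus = [
    {"label": "File", "kind": "core", "hidden": False,
     "submenu": [{"label": "Open", "kind": "core", "hidden": False, "submenu": []}]},
    {"label": "Debug", "kind": "internal", "hidden": True, "submenu": []},
]

def flatten(items):
    for item in items:
        if item["hidden"] or item["kind"] == "internal":  # the business-rule "exceptions"
            continue
        yield item
        yield from flatten(item["submenu"])               # submenus must be included too

flat = sorted(flatten(menus), key=lambda i: i["kind"])    # groupby needs sorted input
result = {
    kind: sorted(({"name": i["label"]} for i in grp),     # transform to the new data type
                 key=lambda r: r["name"])
    for kind, grp in groupby(flat, key=lambda i: i["kind"])
}
print(result)  # {'core': [{'name': 'File'}, {'name': 'Open'}]}
```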
Based on all of the behaviour psychology books I've read, Claude would have to introduce a model that is 10x better and 10x cheaper - or something so radically different that it registers as an entirely new thing - for it to hit the radar outside of the tech world.
I encourage you to sample the folks in your life that don't work in tech. See if any of them have ever even heard of Claude.
"We have no current plans to make revenue."
"We have no idea how we may one day generate revenue."
"We have made a soft promise to investors that once we've built a general intelligence system, basically we will ask it to figure out a way to generate an investment return for you."
The fact that he has no clue how to generate revenue with an AGI without asking it shows his lack of imagination.
Reality: AI needs unheard amounts of energy. This will make climate significantly worse.
The relative quantity of power provided by nuclear (or renewables, for that matter) is NOT our current problem. The problem is the absolute quantity of power that is provided by fossil fuels. If that number does not decrease, then it does not matter how much nuclear or renewables you bring online. And nuclear is not cheaper than fossil fuels (even if you remove all regulation, and even if you build them at scale), so it won't economically incentivize taking fossil fuel plants offline.
Edit: Also why are you getting downvoted...
… and it always will? It seems terribly limiting to stop exploring the potential of this technology because it’s not perfect right now. Energy consumption of AI models does not feel like an unsolvable problem, just a difficult one.
In other words all we need is a new technology revolution like the deep learning revolution, except one centered around a radically new approach that overcomes every limitation of deep learning.
Now: how likely do you think this is to happen any time soon? Note that the industry and most of academia have bet the bank on deep learning partly because they think that prospect is extremely unlikely.
Also the tragedy of the commons is based on a number of flawed assumptions on how commons work.
This is a good reminder:
> Prominent AI figures were among the thousands of people who signed an open letter in March 2023 to urge a six-month pause in the development of large language models (LLMs) so that humanity would have time to address the social consequences of the impending revolution
In 2024, ChatGPT is a weird toy, my barber demands paper cash only (no bitcoin or credit cards or any of that phone nonsense, this is Silicon Valley), I have to stand in line at USPS and the DMV with mindless paper-shuffling human robots, marveling at the humiliating stupidity of manual jobs, and robotaxis are still almost here, just around the corner, as always. Let's check again in a "couple of thousand days", I guess!
Any system complex enough to be useful has to be embedded in an ever more complex system. The age of mobile phone internet rests on the shoulders of an immense and enormously complex supply chain.
LLMs are capturing low entropy from data online and distilling it for you while producing a shitton of entropy on the backend. All the water and energy dissipated at data centers, all the supply chains involved in building GPUs at the rate we are building. There will be no magical moment when it's gonna yield more low entropy than what we put in on the other side as training data, electricity and clean water.
When companies sell ideas like 'AGI' or 'self driving cars' they are essentially promising you can do away with the complexity surrounding a complex solution. They are promising they can deliver low entropy on a tap without paying for it in increased entropy elsewhere. It's physically impossible.
You want human intelligence to do work, you need to deal with all the complexities of psychology, economics and politics. You want complex machines to do autonomous work, you need an army of people behind it. What AGI promises is, you can replace the army of people with another more complex machine. It's a big bald faced lie. You can't do away with the complexity. Someone will have to handle it.
Your brain is proof to the contrary. AGI means different things to everyone, but a human brain definitely counts as "general intelligence", and that, implemented in silicon, is enough to get basically all the things promised by AGI: if it's done at the 20 watts per brain that biology manages, then all of humanity could be simulated within the power envelope of the US electrical grid… three times over.
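For the record, the rough arithmetic behind that claim (assuming ~8 billion brains and average US electricity generation of roughly 480 GW; both are round figures):

```latex
8 \times 10^{9}\ \text{brains} \times 20\ \mathrm{W/brain} = 1.6 \times 10^{11}\ \mathrm{W} = 160\ \mathrm{GW},
\qquad
\frac{480\ \mathrm{GW}}{160\ \mathrm{GW}} = 3.
```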
So far the only thing that has been proven is we can get low entropy from all the low entropy we've published on the internet. Will it get to a point where models can give us more low entropy than what is present in the training data? Categorically: no.
Whatever you mean, our brains prove it's possible to have a system that uses 20 watts to demonstrate human-level intelligence.
You're positing a way to create human-like intelligence in a bottle; that's the same as speculating about the shape of a reality where we have FTL travel or teleportation or whatever else you fancy.
If we're talking about what current ML/AI can do, they can extract patterns from training data and then apply those patterns to other inputs. This can give us great automation, but it won't give us anything better than the training data, solve physics, global warming, or poverty, or give us human intelligence in a chip.
Whatever quantity Q of entropy is in the training data, the total output will be more than Q, all accounted for. That's true for humans and machines. No shape of possible AGI will give us any output with less entropy than the combination of inputs had. The dream that a machine will solve all the problems humanity can't hinges on negating that, which goes against thermodynamics.
As it stands, the planet cannot cope with all the entropy we're spreading around. It will eventually collapse civilization/the ecosystem, whatever buckles first, from the excess entropy. Because global warming, poverty, or ignorance is just entropy. Disorder. Things not being just so as we need them to be.
It's grossly unreasonable to include the entire history of the universe given the Earth only formed about 4 billion years ago; and given that evolution wasn't even aiming for intelligence, even starting from our common ancestors being small rodents 65 million years ago is wildly overstating the effort required — even the evolution of primates is too far back without intentional selective breeding.
> You're positing a way to create human intelligence-like in a bottle, that's the same as speculating about the shape of a reality where we have FTL travel or teleportation or whatever else you fancy.
FTL may very well be impossible.
If you seriously think human intelligence is impossible, then you also think you don't exist: you, yourself, are a human-like intelligence in a bottle. The bottle being your skull.
> This can give us great automation, but it won't solve physics, global warming, poverty or give us human intelligence in a chip.
AI has already been in widespread use in physics for a while now, well before the current zeitgeist of LLMs.
There's a Y Combinator startup: "Charge Robotics is building robots that automate the most labor-intensive parts of solar construction".
Poverty has many causes, some of which are already being reduced or resolved by existing systems — and that's been the case since one of the ancestor companies of IBM, Tabulating Machine Company, was doing punched cards for the US census.
As for human intelligence on a chip? Well, (1) it's been quite a long time since humans were capable of designing the circuits manually, given the feature size is now at the level where quantum mechanics must be accounted for or you get surprise tunnelling between gates; and (2) one of the things being automated is feature segmentation and labelling of neural micrographs, i.e. literally brain scanning: https://pubmed.ncbi.nlm.nih.gov/32619485/
All I am saying is that Sam Altman's promises (remember the original topic) hinge on breaking thermodynamics.
Humans evolved on a planet that was just so. The elements on Earth that make life possible couldn't have been created in a younger universe. So it took the full history of the universe to produce human intelligence. It also doesn't exist in a vacuum. Current human civilization is not a collection of 8 billion 20-watt boxes.
> As for human intelligence on a chip? Well, (1) it's been quite a long time since humans were capable of designing the circuits manually, given the feature size is now at the level where quantum mechanics must be accounted for or you get surprise tunnelling between gates; and (2) one of the things being automated is feature segmentation and labelling of neural micrographs, i.e. literally brain scanning: https://pubmed.ncbi.nlm.nih.gov/32619485/
I don't understand any of this so I won't comment.
If it hinged on that, then you would actually be saying it's impossible.
> The elements on Earth that make life possible couldn't have been created in a younger universe
Those statements also apply to the silicon and doping agents used in chip manufacture. They tell you nothing of relevance, we're not doing Carl Sagan's Apple Pie from scratch with AI, we're trying to get a thing like us.
"impossible with the approach OpenAI is using" != "impossible"
> Whatever quantity Q of entropy in the training data, the total output will be more Q all accounted for. That's true for humans and machines. No shape of possible AGI will give us any output with less Q than the combination of inputs had. The dream that a machine will solve all problems that humanity can't hinges on negating that, which goes against thermodynamics.
Thermodynamics applies to closed systems, which the earth in isolation isn't.
This is why the source material for Shakespeare didn't already exist on Earth in the Cretaceous.
It's also why we've solved loads of problems we used to have, like bubonic plague and long distance communication.
> As it stands, the planet cannot cope with all the entropy we're spreading around.
We're many orders of magnitude away from entropic limits. Global warming is due to the impact on how much more sunlight keeps bouncing around inside the atmosphere, not the direct thermal effect of our power plants.
> It will eventually collapse civilization/the ecosystem, whatever buckles first, from the excess entropy. Because global warming, poverty, or ignorance is just entropy. Disorder. Things not being just so as we need them to be.
Entropy isn't everyday disorder, it's a specific relationship of microstates and macrostates, and you can't usefully infer things when you switch uses.
Poverty is much reduced compared to the historical condition. So is ignorance: we have to specialise these days because there's too much knowledge for any one human to learn.
Entropic collapse is indeed inevitable, but that inevitability is on the scale of 10^18 years or more, with only engineering challenges rather than novel scientific breakthroughs (the latter would plausibly increase that to 10^106 if someone can figure out how to use Hawking radiation, but I don't want to divert into why I think that's more than merely an engineering challenge).
I followed up the point about AGI meaning different things by giving a common and sufficient standard of reference.
Your brain is evidence that it's "even possible".
All your brain proves is that a universe can produce planetary ecosystems capable of supporting human civilizations made of very efficient brain carrying mammals.
It definitely doesn't prove that these mammals can create boxes capable of "solving physics, poverty and global warming" if we just give Sam Altman enough electricity and chips. Or dollars to that effect.
What's the quote? "If the human brain were so simple that we could understand it, we would be so simple that we couldn’t".
Even though it doesn't need to be a single human doing all of it, our brains are existence proofs of the physical possibility, not of our own understanding.
Surely this is just a case of the future not being evenly distributed. All of these 'problems' are already solved and the solution is implemented somewhere, just not where you happen to be.
We have them in San Francisco now (and Los Angeles and Phoenix, and Austin soon.)
Waymo's overstated[1] success has let self-driving advocates do an especially pernicious bit of goalpost-shifting. I have been a self-driving skeptic since 2010, but if you had told me in 2010 that in 10-15 years we have robotaxis that were closely overseen by remote operators who can fill in the gaps I would have thought that was much more plausible than fully autonomous vehicles. And the human operators are truly critical, even more so than a skeptic like me assumed: https://www.nytimes.com/interactive/2024/09/03/technology/zo... (sadly the interactive is necessary here and archives don't work, this is a gift link)
I still think fully autonomous vehicles on standard roads is 50+ years out. The argument was always that ~95% of driving is addressable by deep learning but the remaining ~5% involves difficult problem-solving that cannot be solved by data because the data does not exist. It will require human oversight or an AI architecture which is capable of deterministic reasoning (not transformers), say at least at the level of a lizard. Since we have no clue how to make an AI as smart as a lizard, that 5% problem remains utterly intractable.
[1] I have complained for years that Waymo's statisticians are comparing their cars to all human drivers when they should be comparing it to lawful human drivers whose vehicles are well-maintained. Tesla FSD proves that self-driving companies will respond to consumer demand for vehicles that speed and run red lights.
I would be shocked if we're really 50 years away from that level of AI. 50 years is a long time in computing — late 70s computers were still using punched tape:
https://commons.m.wikimedia.org/wiki/File:NSA_Punch_Verifica...
You have three reasons:
1) reading the comment in good faith
2) understanding 'robotaxi' is not a precise technical term
3) safely assuming that most commenters here know about Waymo
There is no reason to choose the most pedantic and smarmily bad-faith reading of the comment.
As for "50 years" - I don't care about electrical engineering, I am talking about intelligence. In the 1970s we had neural networks as smart as nematodes. Today they are as smart as spiders. Maybe in 50 years they will be as smart as bees. I doubt any of our children will live to see a computer as smart as a rat.
> 1) reading the comment in good faith
> 2) understanding 'robotaxi' is not a precise technical term
> 3) safely assuming that most commenters here know about Waymo
#1 is the main reason why I wouldn't read "robotaxi" as anything other than "taxi robot", closely followed by #2.
> As for "50 years" - I don't care about electrical engineering, I am talking about intelligence.
Neither was I, and you should take #1 as advice for yourself.
> In the 1970s we had neural networks as smart as nematodes. Today they are as smart as spiders. Maybe in 50 years they will be as smart as bees. I doubt any of our children will live to see a computer as smart as a rat.
You're either overestimating the ones in the 70s or underestimating the ones today. By parameter count, GPT-3 is already about as complex as a medium sized rodent. If today's models aren't that smart (definitions of "intelligence" are surprisingly fluid from one person to the next), then you can't reasonably call the ones in the 70s as smart as a nematode either.
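Rough numbers behind the rodent comparison (synapse estimates vary widely by source, and a parameter is at best a loose analogue of a synapse):

```latex
\text{GPT-3}: \ 1.75 \times 10^{11}\ \text{parameters};
\qquad
\text{mouse}: \ \sim\! 7 \times 10^{7}\ \text{neurons} \times \sim\! 10^{3}\ \tfrac{\text{synapses}}{\text{neuron}} \approx 10^{11}\ \text{synapses}.
```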
Really? Waymo's statisticians are the ones you are complaining about?
Tesla's statisticians have been lying for years, as has Musk when they cite "number of miles driven by FSD in the very small subset of conditions where it is available, and not turned off or unavailable because of where you are, the weather, or any other variable" versus "all drivers, all conditions, all locations, all times" to try to say FSD is safer.
You can walk to where they're waiting for you.
The arguments are essentially:
1. The technology has plateaued, not in reality, but in the perception of the average layperson over the last two years.
2. Sam _only_ has a record as a deal maker, not a physicist.
3. AI can sometimes do bad things & utilizes a lot of energy.
I normally really enjoy the Atlantic since their writers at least try to include context & nuance. This piece does neither.
It's like fossil fuels. They took billions of years to create and centuries to consume. We can't just create more.
Another problem is that the data sets are becoming contaminated, creating a reinforcement cycle that makes LLMs trained on more recent data worse.
My thoughts are that it won't get any better with this method of just brute-forcing data into a model like everyone's been doing. There needs to be some significant scientific innovations. But all anybody is doing is throwing money at copying the major players and applying some distinguishing flavor.
Imagine not going to school and instead learning everything from random blog posts or reddit comments. You could do it if you read a lot, but it's clearly suboptimal.
That's why OpenAI, and probably every other serious AI company, is investing huge amounts in generating (proprietary) datasets.
Progress on benchmarks continues to improve (see o1).
The claim that there is nothing left to train on is objectively false. The big guys are building synthetic training sets, moving to multimodal, and are not worried about running out of data.
o1 shows that you can also throw more inference compute at problems to improve performance, so it gives another dimension to scale models on.
That's not evidence of a step change.
> The big guys are building synthetic training sets
Yes, that helps to pre-train models, but it's not a replacement for real data.
> not worried about running out of data.
They totally are. The more data, the more expensive it is to train. Exponentially more expensive.
> o1 shows that you can also throw more inference compute
I suspect that it's not actually just compute; it's changes to training and model design.
Our problem isn't technology, it's humans.
Unless he's suggesting mass indoctrination via AI, AI won't fix anything.
The most laughable part of the article is where it points at the fact that in the past TWO YEARS we haven't gone from "OMG we've achieved near-perfect NLP" to "Deep Thought, tell us the answer to life, the universe and everything" and treats it as some sort of huge failure. Patently absurd. If you took Altman at his word on that one, you probably also scanned your eyeball for fake money. The truth, though, is that the rate of change in the products his company is making is still breathtaking - the text-to-speech tech in the latest advanced voice release (recognizing it's not actually text-to-speech but something profoundly cooler, but that's lost on journalism majors teaching journalism majors like the author) puts to shame the last 30 years of TTS. This alone would have been enough to build a fairly significant enterprise selling IVR and other software.
When did we go from enthralled by the rate of progress to bored that it's not fast enough? That what we dream and what we achieve aren't always 1:1, but that it's still amazing? I get that when we put down the devices and switch off the noise we are still bags of mostly water, our backs hurt, we aren't as popular as we wish we were, our hair is receding, maybe we need Invisalign but flossing that tooth every day is easier and cheaper, and all the other shit that makes life much less glamorous than they sold us in the dot-com boom, or nanotech, etc., as they call out in the article.
But the dot com boom did succeed. When I started at early Netscape no one used the internet. We spun the stories of the future this article bemoans to our advantage. And it was messier than the stories in the end. But now -everyone- uses the internet for everything. Nanotechnology permeates industry, science, tech, and our everyday life. But the thing about amazing tech that sounds so dazzling when it's new is -it blends into the background- if it truly is that amazingly useful. That's not a problem with the vision of the future. It's the fact that the present will never stop being the present and will never feel like some illusory gauzy vision you thought it might be. But you still use dot coms (this journalism major's assessment of tech was published on a dot com and we are responding on a dot com) and still live in a world powered by nanotechnology, and AI promised in TWO YEARS is still mind-boggling to anyone who is thinking clearly about what the goal posts for NLP and AI were five years ago.
Not saying it is a bubble but something seems imbalanced here.
It was just the next in line to be inflated after crypto.
It's at least theoretically possible that all the liquidity and leverage in the top of the market could tire itself of chasing the next tulip mania.
For instance, $6 Billion could have gone into climate tech instead of ElizaX.
My problem with these dumb hype cycles is all the other stuff that gets starved in their wake.
The sophisticated investors are not betting on future increasing valuations based on current LLMs or the next incremental iterations of it. That's a "static" perspective based on what outsiders currently see as a specific product or tech stack.
Instead, you have to believe in a "dynamic" landscape where OpenAI the organization of employees can build future groundbreaking models that are not LLMs but other AI architectures and products entirely. The so-called "moat" in this thinking would be the "OpenAI team to keep inventing new ideas beyond LLM". The moat is not the LLM itself.
Yes, if everyone focuses on LLMs, it does look like Meta's free Llama models will render OpenAI worthless. (E.g. the famous memo: https://www.google.com/search?q=We+have+no+Moat%2C+and+Neith...)
As an analogy, imagine that in the 1980s, Microsoft's IPO and valuation looks irrational since "writing programming code on the Intel x86 stack" is not a big secret. That stock analysis would then logically continue saying "Anybody can write x86 software such as Lotus, Borland, etc." But the lesson learned was that the moat was never the "Intel x86 stack"; the moat was really the whole Microsoft team.
That said, if OpenAI doesn't have any future amazing ideas, their valuation will crash.
I'm 42 though and already feeling too old to understand the future lol
Writing a new DOS (or Windows 3) from scratch is something a lot of developers could do.
They just couldn't do it legally.
And thus it was easy to bully Compaq and others into only distributing PCs with DOS/Windows installed. For some time you even had to pay the Microsoft fee when you wanted a PC with Linux installed.
This is not unusual - politicians cannot be taken at their word, government bureaucrats cannot be taken at their word, and corporate media propagandists cannot be taken at their word.
The fact that the vast majority of human beings will fabricate, dissemble, lie, scheme, manipulate etc. if they see a real personal advantage from doing so is the entire reason the whole field of legally binding contract law was developed.
How would the car industry change if someone made a 3D printer that could make any part, including custom parts, with just electricity and air? It is a sea change to manufacturers and distributors, but there would still be a need for mechanics and engineers to specify the correct parts, in the correct order, and use the parts to good purpose.
It is easy to imagine that the inventor of such a technology would probably start talking about printing entire cars - and if you don't think about it, it makes sense. But if you think about it, there are problems. Making the component of a solution is quite different than composing a solution. LLMs exist in the same conditions. Being able to generate code/text/images is of no use to someone who doesn't know what to do with it. I also think this limitation is a practical, tacit solution to the alignment problem.
If you had a printer that could print semi-random mechanical parts, using it to make a car would be obviously dumb, right? Maybe you would use it to make, like, a roller blade wheel, or some other simple component that can be easily checked.
The invention would never see the light of day. If someone were to invent Star Trek replicators, they'd be buried along with their invention. Best Case it would be quickly captured by the ownership class and only be allowed to be used by officially blessed manufacturing companies, and not any individuals. They will have learned their lesson from AI and what it does to scarcity. Western [correction: all of] society is hopelessly locked into and dependent on manufacturing scarcity, and the idea that people have to pay for things. The wealthy and powerful will never allow free abundance of physical goods in the hands of the little people.
Are there specific historical examples of this that come to mind?
It’s rather hard to imagine even something like that (and it’s pretty limited in scope compared to the grand conspiracy above) working today, though; the EC would definitely stomp on it, and even the sleepy FTC would probably bestir itself for something so blatant.
In reality, the biggest problem was they had no incentive to invest in new lighting technology research, although they had the money to do so. It takes a lot of effort to develop a new technology, and significantly more to make it practical and affordable.
I think the story of the development of the blue LED which led to modern LED lighting is more illustrative of the real obstacles of technological development.
Companies/managers don't want to invest in R&D because it's too uncertain and they typically are more interested in the short term.
And it's hard for someone without deep technical knowledge to tell a realistic, worthwhile technical idea from a bad one. So they focus on what they can understand and what they can quantify.
And even technical people can fail to properly evaluate ideas that are even slightly outside their area of expertise (or sometimes even the ones that are within it).
But I don't know of anything nearly as extreme as destroying an entire invention. Those tend to stick around.
There are a number of counterexamples though. Henry Ford, etc.
Then there would be a violent revolution which wrestles it out of their hands. The benefits of such a technology would be immediately obvious to the layman and he would not allow it to be hoarded by a select few.
So to solve this problem you need billions to burn on gambles. I guess that's how we ended up with VCs.
How do you reconcile that with the fact that Western society has invented, improved, and supplied many of the things we lament that other countries don't have (and those countries lament it too - it's not just our own Stockholm Syndrome)?
AI can magically decide where to put small pieces of code. It's not a leap to imagine that it will later be good at knowing where to put large pieces of code.
I don't think it'll get there any time soon, but the boundary is less crisp than your metaphor makes it.
Right.
It sounds to me like you agree and are repeating the comment, but framing it as disagreement.
I'm sure I'm missing something.
I tend to agree with them. What people seem to miss about LLM coding systems, IMO:
a) deciding on the capabilities of an LLM to code after a brief browser session with 4o/claude is comparable to waking up a coder in the middle of the night, and having them recite the perfect code right then and there. So a lot of people interact with it that way, decide it's meh, and write it off.
b) most people haven't tinkered with systems that incorporate more of the tools human developers use day to day. They'd be surprised at what even small, local models can do.
c) LLMs seem perfectly capable to always add another layer of abstraction on top of whatever "thing" they get good at. Good at summaries? Cool, now abstract that for memory. Good at q/a? Cool, now abstract that over document parsing for search. Good at coding? Cool, now abstract that over software architecture.
d) Most people haven't seen any RL-based coding systems yet. That's fun.
----
Now, of course the article is perfectly reasonable, and we shouldn't take what any CEO says at face value. But I think the pessimism, especially in coding, is also misplaced, and will ultimately be proven wrong.
This is a good question, and I worry you won't get a response. Here is a pattern I've observed very frequently in the LLM space, with much more frequency than random chance would suggest:
Bob: "Oh, of course it didn't work for you, you just need to use an ANA (amazing new acronym) model"
Alice: "Oh, that's great, where can I see how ANA works? How do I use it?"
** Bob has left the chat **
However, I worry about the premise underlying your reply: a sense that this is somehow incompatible with the viewpoint being discussed.
i.e. it's perfectly cromulent to think both that LLMs are and will continue to be awesome at coding, even to believe they'll get much better, and also that you could give me ASI today and there'd be an incredibly long tail of work and processes to reformulate to pull off replacing most labor. It's like having infinite PhDs available by text message. Unintuitively, not that much help. I can't believe I'm writing that lol. But here we are.
Steven Sinofsky had a good couple of long posts re: this on X that discuss it far better than I can.
Magically, but not particularly correctly.
Why is it in your worldview a CEO “has to lie”?
Are you incapable of imagining one where a CEO is honest?
> The real story of LLMs is revealed when you posit a magical technology that can print any car part for free.
I'll allow it if you stipulate that, randomly and without reason, when I ask for an alternator it prints me a toy dinosaur.
> It is easy to imagine that the inventor of such a technology
As if the unethical sociopath TFA is about is any kind of, let alone the, inventor of genai.
> Being able to generate code/text/images is of no use to someone who doesn't know what to do with it.
Again, conveniently omitting the technology’s ever present failure modes.
I'm not talking about Altman in particular, I'm just annoyed with this constant spam on HN about how we all need to turn a blind eye to snake oil salesmen because "that's just how it's supposed to be for a startup."
For a forum that complains about how money ruins everything, from the Unity scandal to OSS projects being sponsored and "tainted" by "evil companies," it's shocking to see how often golden-boy executives are excused. I wish people had this energy for the much smaller companies trying to be profitable by raising subscriptions once in the 20 years they've been running; instead they are treated as if they'd burned down a church. It truly is an elitist system.
While the attention-based mechanisms of the current generation of LLMs still have a long way to go (and may not be the correct architecture) to achieve requisite levels of spatial reasoning (and of "practical" experience with how different shapes are used in reality) to actually, say, design a motor vehicle from first principles... that future is far more tangible than ever, with more access to synthetic data and optimized compute than ever before.
What's unclear is whether OpenAI will be able to recruit and retain the talent necessary to be the ones to get there; even if it is able to raise an order of magnitude more than competitors, that's no guarantee of success. My guess would be that some of the decisions that have led to the loss of much senior talent will slow their progress in the long run. Time will tell!
But what's interesting when I speak to laymen is that the hype in the general public seems specifically centered on the composite solution that is ChatGPT. That's what they consider 'AI'. That specific conversational format in a web browser, as a complete product. That is the manifestation of AI they believe everyone thinks could become dangerous.
They don't consider the LLM APIs as components of a series of new products, because they don't understand the architecture and business models of these things. They just think of ChatGPT and UI prompts (or its competitors' versions of the same).
*(which is always a risky way of looking at it, because who the hell am I? Neither somebody in the AI field, nor completely naive toward programming, so I might be in some weird knows-enough-to-be-dangerous-not-enough-to-be-useful valley of misunderstanding. I think this describes a lot of us here, fwiw)
An AGI, however, could. Once it reaches IQs of more than, say, 500, it would become very hard to control.
[0]: No Physical Substrate, No Problem https://slatestarcodex.com/2015/04/07/no-physical-substrate-...
[1]: It Looks Like You’re Trying To Take Over The World https://gwern.net/fiction/clippy
However I think it’s more likely that it will LARP as “I’m an emotionally supportive beautiful AI lady, please download me to your phone, don’t take out the battery or I die!”
That was part of the plot of Person of Interest. A really interesting show; it started as a basic "monster of the week" show, but near the end it developed a much more interesting plot.
Although most of the human characters were extremely one-dimensional. Especially Jim Caviezel's, who was just a grumpy super-soldier in every episode. It was kinda funny because they called him "the man in the suit" in the series, and there was indeed little else to identify his character. The others were hardly better :(
But the AI storyline I found very interesting.
Learning how to get what you want is a fundamental skill you start learning from infancy.
We have a very limited ability to define human intelligence, so it is almost impossible to know how near or far we are from simulating it. Everyone here knows how much of a challenge it is to match average human cognitive abilities in some areas, and human brains run at 20 watts. There are people in power who may take technologists and technology executives at their word and move very large amounts of capital on promises that cannot be fulfilled. There was already an AI Winter 50 years ago, and there are extremely unethical figures in technology right now who can ruin the reputation of our field for a generation.
On the other hand, we have very large numbers of people around the world on the wrong end of a large and increasing wealth gap. Many of those people are just hanging on, doing jobs that are actually threatened by AI. They know this, they fear this, and of course they will fight for their own and their families' lifestyles. This is a setup for large-scale violence and instability. If there isn't a policy plan right now, AI will suffer populist blowback.
Aside from those things, it looks like Sam has lost it. The recent stories about the TSMC meeting (https://news.ycombinator.com/item?id=41668824) were a huge problem. Asking for $7T shows a staggering lack of grounding in reality and in how people, businesses, and supply chains work. I wasn't in the room and I don't know if he really sounded like a "podcasting bro", but to make an ask like that of companies deploying their own capital is insulting to them. There are potential dangers in applying this technology; there are dangers in overpromising its benefits; and neither is well served when relatively important people in related industries think there is a credibility problem in AI.
The problem is when the hype machine causes the echoes to replace the original intelligence that spawned the echoes, and eventually those echoes fade into background noise and we have to rebuild the original human intelligence again.
As I said, it is overhyped in some areas and underhyped in others.
It’s possible that this could happen but you need to propose a mechanism and metric for this argument to be taken seriously (and to avoid fooling yourself with moving goalposts). Under what grounds do you assert that the trend line will stop where you claim it will stop?
Yes, if super-human AGI simply never happens then the alignment problem is mostly solved. Seems like wishful thinking to me.
The standard electrical wall sockets that you use have not really changed since WW2. For load bearing elements in buildings, we don't have anything substantially better today than 100 years ago. There is a huge list of technological items where we've polished out almost every last wrinkle and a 1% gain once a decade is hailed as miraculous.
There are many reasons for all those things not to change. Limits abound. We discovered that getting taller or faster isn’t “better”, all we needed is smarter. Intelligence is different. It applies to everything else. You can lose a limb or eyesight and still be incredibly capable. The intelligence is what makes us able to handle all the other limits and change the world even though MS Word hasn’t changed much.
We are now applying a lot of our intelligence to inventing another one. The architecture won’t stay the same, the limits won’t endure. People keep trying and it’s infinitely harder to imagine reasons why progress will stop. Just choose any limit and defend it.
Yes...
> Argue the facts.
What?
> What are the limits that prevent progress on the dimension of replicating human intelligence?
I don't work in that field, but as a layman I'd wager the lack of clear technical understanding of what animal intelligence actually is, let alone how it works is the biggest limitation.
What about post-tensioned concrete slabs? Cross-laminated timber? Structural glazing? Intumescent paint?
Technology does keep improving, actually.
I can also create a web scale app in a weekend using AWS. It is just insane what we can do now vs. 1999. I remember in early 2000s Microsoft boasting how it could host a site for the olympics using active server pages. This was PR worthy. That would be a side project for most of us now using our pocket money.
I think the insight is that some people truly believe that LLMs would be exactly as groundbreaking as a magical 3D printer that prints out any part for free.
And they're pumping AI madly because of this belief.
https://www.lesswrong.com/posts/SkcM4hwgH3AP6iqjs/can-you-ge...
* They burned up the hype for GPT-5 on 4o and o1, which are great step changes but nothing the competition can't quickly replicate.
* They dissolved the safety team.
* They switched to for profit and are poised to give Altman equity.
* All while hyping AGI more than ever.
All of this suggests to me that Altman is in short-term exit preparation mode, not planning for AGI or even GPT-5. If he had another next generation model on the way he wouldn't have let the media call his "discount GPT-4" and "tree of thought" models GPT-5. If he sincerely thought AGI was on the horizon he wouldn't be eyeing the exit, and he likely wouldn't have gotten rid of the superalignment team. His actions are best explained as those of a startup CEO who sees the hype cycle he's been riding coming to an end and is looking to exit before we hit the trough of disillusionment.
None of this is to say that AI hasn't already changed a lot about the world we live in and won't continue to change things more. We will eventually hit the slope of enlightenment, but my bet is that Altman will have exited by then.
(In the Gell-Mann amnesia sense, make sure you take careful note of who was going "OAI has AGI internally!!!" and other such nonsense so you can not pay them any mind in the future)
New "investors" are Microsoft and Nvidia. Nvidia will get the money back as revenue and fuel the hype for other customers. Microsoft will probably pay in Azure credits.
If OpenAI does not make a profit within two years, the "investment" will turn into a loan, which probably means bankruptcy. But at that stage all parties will already have gotten what they wanted.
I don’t believe this is accurate. I think this is what you’re referring to?:
> Under the terms of the new investment round, OpenAI has two years to transform into a for-profit business or its funding will convert into debt, according to documents reviewed by The Times.
That just means investors want the business to be converted from a nonprofit entity into a regular for-profit entity. Not that they need to make a profit in 2 years, which is not typically an expectation for a company still trying to grow and capture market share.
Source: https://www.nytimes.com/2024/10/02/technology/openai-valuati...
At about the ten-year mark, there has to be a changing of the guard, from the foot soldiers who gave their all so that an unlikely institution could come to exist in the world at scale, to people concerned more with stabilizing that institution and ensuring its continuity. In almost every company that has reached such scale in the last decade, this has meant a transition from an executive team formed of early employees to a more senior C-team from elsewhere with a different skillset. In a world where the largest companies are more likely to stay private than IPO, it's a profoundly important move to allow some liquidity for long-term employees, who otherwise might be forced to stay working at the company long past physical burnout.
Which world is this?
Amazon: founded 1994, listed 1997 at $400m
Google: founded 1998, listed 2004 at $23bn
Spotify: founded 2006, listed 2018 at $27bn
Airbnb: founded 2008, listed 2020 at $47bn
Epic Games: founded 1991, unlisted 2022 value $32bn
Space Exploration: founded 2002, unlisted 2022 value $125bn
ByteDance: founded 2012, unlisted 2022 value $360bn
I doubt any competitor to the largest businesses in the US/Europe that is actually putting up good audited numbers is staying private. Even Stripe has been trying to go public, but it doesn't have the numbers for the owners to want to yet.
Their kind of product development [1] needs long-term thinking, which public markets will not support well.
[1] ignore all the mars noise, just consider reusable i.e. cheap rockets and starlink
https://en.wikipedia.org/wiki/Gartner_hype_cycle
It just keeps happening over and over. I'd say we are at "Negative press begins".
> If he sincerely thought AGI was on the horizon he wouldn't be eyeing the exit
If such a thing could exist and was right around the corner, why would you need a company for it? Couldn't the AGI manage itself better than you could? Job's done, time to get a different hobby.
Why would you sell that?
AGI doesn’t mean smarter than the best humans.
General intelligence would be like an impulsive 12 year old boy who could see 6 spatial dimensions and regarded us as cartoons for only sticking to 3.
I've seen some use "super" (as in superhuman) intelligence lately to describe what you're getting at.
But if one has {a, b, c} and the other has {b, c, d} neither is more or less intelligent than the other, they just have different capabilities. "Super" is a bit too one-dimensional for the job.
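To make that concrete, here's a tiny sketch of the idea (the capability sets are purely illustrative): treating intelligences as sets of capabilities gives you a partial order, not a single scale, so "super" only applies when one set strictly contains the other.

```python
# Capability sets form a partial order: "smarter" only makes sense
# when one set contains the other. These sets are purely illustrative.
one = {"a", "b", "c"}
other = {"b", "c", "d"}

print(one >= other, other >= one)  # False False: incomparable, not "super"
print(one >= {"a", "b"})           # True: a genuine superset, "super" applies
```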
The Lem story "Golem XIV" concerns a machine which claims it possesses categorically superior intelligence, and further that another machine humans have built (which runs but seems unwilling to communicate with them at all) is even more intelligent still.
Golem tries to explain using analogies, but it's apparent that it finds communicating with humans frustrating, the way it might be frustrating to try to explain things to a golden retriever. Lem wrote elsewhere that the single commonality between Golem's intelligence and our own is curiosity; unlike Annie, Golem is curious about the humans, which is why it's bothering to communicate with them.
Humans (of course) plot to destroy both machines. Annie eliminates the conspirators, betraying a hitherto unimagined capability to act at great distance, and the story remarks that she seems to do so the way a human would swat a buzzing insect. She doesn't fear the humans, but they're annoying, so she destroys them without a thought.
I'm a bit tired of the hype surrounding LLMs, but all the same for very mundane and humbler tasks that require some intelligence modern LLMs manage to surprise me on a daily basis.
But it rarely accomplishes more than what a small collection of humans with some level of expertise can achieve, when asked.
Surely, the LLM models we have today are astounding by any measure, relative to just a few years ago.
But pronouncements of how this will lead to utopia, without introducing a major revision of economic arrangements, are completely, and surely intentionally/conveniently (Sam isn't an idiot) misleading.
Is OpenAI creating a class of stock so everyone can share in their gains? If not, then AGI owned by OpenAI will make OpenAI shareholders rich, very much to the degree its AGI eliminates human jobs for itself and other corporations.
How does that, as an economic situation, result in the general population being able to do anything beyond be a customer, assuming they can still make money in some way not taken over by AGI?
Utopia needs an actual plan. Not a concept of a plan.
The latter just keeps people snowed and calm before a historic-level rug pull.
What could you do at 12, with half of these advantages? Choose any of them, then give yourself infinite time to use them.
Many believe that AGI will happen in robots, and not in online services, simply because interacting with the environment might be a prerequisite for developing consciousness.
You mentioned boredom, which is interesting, as boredom may also be a trait of intelligence. An interesting question is if it will want to live at all. Humans have all these pleasure sensors and programming for staying alive and reproducing. The unburdened AGI in your description might not have good reasons to live. Marvin, the depressed robot, might become real.
We can't even define what consciousness is yet, let alone what's required to develop it.
Technically no, but practically...
A 12-year-old's limitations are:
A. gets tired, needs sleep
B. I/O limited by muscles
Probably there are more, but if a 12-year-old could talk directly to electric circuits and did not need sleep or even a break, then that 12-year-old would be leaps and bounds above the best human in his field of interest.
(Well, motivation to finish the task is needed though.)
Well, you still have to have the baby, and raise it a little. And wouldn't you still want to be known as the parent of such a bright kid as AGI? Leaving early seems to be cutting down on his legacy, if a legacy was coming.
For these models today, if we measure the amount of energy expended for training and inference how do humans compare?
My best guess is 120,000 times more for training GPT-4: based on the claim that it cost $63 million, and assuming that was all electricity at $0.15/kWh, training took ~420 million kWh; a human brain at 20 watts (looking only at the brain and not the whole body) uses about 3,500 kWh over 20 years, which gives a ratio of roughly 120,000.
But also, 4o mini would then be a kilowatt-hour for a million tokens at inference time; by the same assumptions, that's 50 hours, or just over one working week, of brain energy consumption. A million tokens over 50 hours is 5.5 tokens per second, which sounds about like what I'd expect a human brain to do, but caveat that with me not being a cognitive scientist, and with the fact that what we think we're thinking isn't necessarily what we're actually thinking.
Humans consume about 100W of average power (2,000 kcal a day converted to watts). So 8 billion people consume ~800 GW; call it 1 TW. Average world electric power generation is 28,000 TWh / (24 × 365 hours) ≈ 3 TW.
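For anyone who wants to check that arithmetic, here's the back-of-the-envelope version in Python. The $63M electricity-only training cost, the $0.15/kWh price, and the 20W brain are the assumptions from above, not established figures.

```python
# Back-of-the-envelope check of the energy comparison above.
# Assumptions (from the comment, not established facts): GPT-4's
# $63M training cost was spent entirely on electricity at $0.15/kWh,
# and a human brain draws a steady 20 W.

TRAINING_COST_USD = 63e6
PRICE_PER_KWH = 0.15
BRAIN_WATTS = 20

training_kwh = TRAINING_COST_USD / PRICE_PER_KWH       # ~420 million kWh
brain_kwh_per_year = BRAIN_WATTS * 24 * 365 / 1000     # ~175 kWh/year
years_of_thinking = 20                                 # rough "education" span
ratio = training_kwh / (brain_kwh_per_year * years_of_thinking)
print(f"training vs. 20 brain-years: {ratio:,.0f}x")   # ~120,000x

# Inference side: 1 kWh per million tokens at 20 W is 50 brain-hours,
# i.e. about 5.5 tokens per second.
hours = 1000 / BRAIN_WATTS                             # 50 h
tokens_per_second = 1e6 / (hours * 3600)
print(f"tokens/second at brain power: {tokens_per_second:.1f}")
```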
For starters, we still need the AI (LLMs for now) to be more efficient, i.e. not require a datacenter to train and deploy. Yes, I know there are tiny models you can run on your home PC, but that's comparing a bicycle to a jet.
Second, for an AGI to meaningfully improve itself, it has to be smarter than not just any one person, but the sum total of all the people it took to invent it. Until then, no single AI can replace our human tech sphere of activity.
As long as there are limits to how smart an AI can get, there are places where humans can contribute economically. If there is ever to be a singularity, it's going to be a slow one, and large human-run AI companies will be part of the process for many decades still.
[0] https://www.bloomberg.com/news/articles/2024-05-17/openai-di...
Eliminating Ilya's team was part of the post-coup restructuring to consolidate Altman's power, and every tribute to "safety" paid since then is spin.
Definitions of dissolve:
* to cause to disperse or disappear
* to separate into component parts
* to become dissipated (see DISSIPATE sense 1) or decompose
* BREAK UP, DISPERSE
Those seem like pretty accurate descriptions of what happened. Yes, dissolve can also mean something stronger, so perhaps it is fair to call the statement ambiguous. But it isn’t incorrect.
The long-term problem may be access to quality/human-created training data. Especially if the ones that control that data have AI plans of their own. Even then I could see OpenAI providing service to many of them rather than each of them creating their own models.
[1] https://www.nytimes.com/2024/09/27/technology/openai-chatgpt...
I never use the $20 plan, but I access everything via API and I spend a couple of dollars per month.
Although lately I have a home server that can run llama 3.1 8b uncensored, and that actually works amazingly well.
It works OK, but with a large context it can still run out of memory and also gets a lot slower. With a small context it's super snappy and surprisingly good. What it is bad at is facts/knowledge, but that's not something an LLM is meant to do anyway. OpenWebUI has really good search engine integration, which makes it work like Perplexity does. That's a better option for knowledge use cases.
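For what it's worth, a minimal sketch of that kind of setup, assuming an Ollama-style server exposing an OpenAI-compatible endpoint on localhost:11434; the host, port, and model tag are assumptions and will vary by setup.

```python
# Minimal sketch: querying a local llama 3.1 8b through an
# OpenAI-compatible endpoint. Assumes an Ollama-style server on
# localhost:11434; host, port, and model tag depend on your setup.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

resp = client.chat.completions.create(
    model="llama3.1:8b",  # hypothetical local model tag
    messages=[{"role": "user", "content": "Summarize attention in one line."}],
    max_tokens=100,  # keep the context small; large contexts slow it down
)
print(resp.choices[0].message.content)
```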
At best, there's a slow march of incremental improvements that looks exactly like how human culture developed knowledge.
And all the downsides will remain, the same way people, despite hundreds of good sources of info, still prefer garbage.
For many people, sadly, one can never be rich enough. My point is, planning for both short term exit, and long term gains, is essentially the same in this particular situation. What a boon! Nice problem to have!
Yes, he has made lots of investments over the years as head of YC but not every investment was successful. This was discussed on BBC's podcast 'Good Billionaire Bad Billionaire' recently.
If the end goal is monetization of ChatGPT with ads, it will be enshittified to the same degree as Google searches. If you get to that, what is the benefit of using ChatGPT if it just gives you the same ads and bullshit as Google?
Also, don't forget the recent Apple partnership [1], a very strong signal of their strategic positioning. Aligning with Apple reinforces their credibility and opens up even more opportunities for innovation and expansion, beyond just monetizing through ads. I just searched through this thread, and it seems the Apple partnership isn't being recognized as a significant achievement under Sam Altman's tenure as CEO, which is surprising given its importance.
High-intelligence AGI is the last human invention — the holy grail of technology. Nothing could be more ambitious, and if we know anything about Altman, it is that his ambition has no ceiling.
Having said all of that, OpenAI appears to be all in on brute-force AGI, swallowing the bitter lesson that vast and efficient compute is all you need. But they're overlooking a massive dataset that all known biological intelligences rely upon: qualia. By definition, qualia exist only within conscious minds. Until we train models on qualia, we'll be stuck with LLMs that are philosophical zombies, incapable of understanding our world, a world that consists only of qualia.
Building software capable of utilizing qualia requires us to put aside the hard problem of consciousness in favor of mechanical/deterministic theories of consciousness like Attention Schema Theory (AST). Sure, we don't understand qualia. We might never understand them. But that doesn't mean we can't replicate them.
Citation?
...or are you just assuming that AGI will be able to solve all of our problems, apropos of nothing but Sam Altman's word? I haven't seen a single credible study suggesting that AGI is anything more than a marketing term for vaporware.
" High-intelligence AGI is the last human invention" What? I could certainly see all kinds of entertaining arguments for this, but to write it so matter of fact was cringe inducing.
What? Would you mind explaining this?
I don’t think this is a controversial take. Many people take issue with the premise that artificial intelligence will surpass human intelligence. I’m just pointing out the logical conclusion of that scenario.
Likewise (silicon based) AGI may be so costly that it exists only for a few years before it's unsustainable no matter the demand for it. Much like Bitcoin, at least in its original incarnation.
I’m pretty sure it means exactly that. Without actually understanding subjective experience, there’s a fundamental doubt akin to the Chinese room. Sweeping that under the carpet and declaring victory doesn’t in fact victory make.
Ironically, we understand consciousness perfectly. It is literally the only thing we know — conscious experience. We just don’t know, yet, how to replicate it outside of biological reproduction.
That’s the hard part!
> It is literally the only thing we know — conscious experience. We just don’t know, yet, how to replicate it outside of biological reproduction.
This conflates subjective experience as necessary (an uncontroversial observation) with actually understanding what subjective experience is.
Or put another way: we all know what it’s like to breathe, but this doesn’t imply knowledge of the pulmonary system.
I think a better analogy would be vision. Even with a full understanding of the eye and visual cortex, one can only truly understand vision by experiencing sight. If we had to reconstruct sight from scratch, it would be more important to experience sight than to understand the neural structure of sight. It gives us something to aim for.
We basically did that with language and LLMs. Transformers aren’t based on neural structures for language processing. But they do build upon the intuition that the meaning of a sentence consists of the meaning that each word in a sentence has in relation to every other word in a sentence — the attention mechanism. We used our experience of language to construct an architecture.
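A toy numerical sketch of that intuition, for the curious; shapes and values are illustrative, and real transformers add learned projections, multiple heads, and much more.

```python
# A toy version of the attention idea described above: every token's
# meaning becomes a weighted mix of every other token's, with the
# weights coming from pairwise similarity.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Similarity of each word to every other word...
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # ...normalized into weights, then used to mix the values.
    return softmax(scores) @ V

rng = np.random.default_rng(0)
tokens, dim = 4, 8                  # a 4-word "sentence"
x = rng.normal(size=(tokens, dim))  # stand-in embeddings
out = attention(x, x, x)            # self-attention: Q = K = V = x
print(out.shape)                    # (4, 8): one mixed vector per word
```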
I think the same is true of qualia and consciousness. We don’t need to know how the hardware works. We just need to know how the software works, and then we can build whatever hardware is necessary to run it. Luckily there’s theories of consciousness out there we can try out, with AST being the best fit I’ve seen so far.
Maybe not, since Altman pretty much said they no longer want to think in terms of "how close to AGI?". IIRC, he said they're moving away from that and instead want to describe the process as hitting new specific capabilities incrementally.
I still don't get the safety team. Yes, I understand the need for a business to moderate the content it provides, and rightly so. But elevating safety to the level of the survival of humanity over a generative model, I'm not so sure. And even for so-called harmful content, how can an LLM be more dangerous than access to books like The Anarchist Cookbook, pamphlets on how to conduct guerrilla warfare, training materials on how to commit terrorism, etc.? They are easily accessible on the internet, no?
> Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.
> How do we ensure AI systems much smarter than humans follow human intent?
This is a question that naturally arises if you are pursuing something that's superhuman, and a question that's pointless if you believe you're likely to get a really nice algorithm for solving certain kinds of problems that were hard to solve before.
Getting rid of the superalignment team showed which version Altman believes is likely.
It won't do Sam Altman and friends any good if they are the richest corpses after an unaligned AI goes rogue.
So it would be in their own egoistical self-interest to make sure it doesn't.
Altman's actions are even more consistent with total confidence & dedication to a vision where OpenAI is the 1st and faraway leader in the production of the most valuable 'technology' ever. Plus, a desire to retain more personal control over that outcome – & not just conventional wealth! – than was typical with prior breakthroughs.
I'm a software engineer comfortably drawing a decent-but-not-FAANG paycheck at an established company with every intention of taking the slow-and-steady path to retirement. I'm not projecting, I promise.
> to a vision where OpenAI is the 1st and faraway leader in the production of the most valuable 'technology' ever
Except that OpenAI isn't a faraway leader. Their big news this week was them finally making an effort to catch up to Anthropic's Artifacts. Their best models do only marginally better in the LLM Arena than Claude, Gemini, and even the freely-released Llama 3.1 405B!
Part of why I believe Altman is looking to cash out is that I think he's smart enough to recognize that he has no moat and a very small lead. His efforts to get governments to pull up the ladder have largely failed, so the next logical choice is to exit at peak valuation rather than waiting for investors to recognize that OpenAI is increasingly just one in a crowd.
Where did that assertion come from? Has anyone come close to replicating either of these yet (other than possibly Google, who hasn't fully released their thing either), let alone "quickly"? I wouldn't be surprised if these "sideways" architectural changes actually give OpenAI a deeper moat than just working on larger models.
They both produce garbage code for this problem. Claude's version is just 20% less garbage, but still useless. The code mixes the two, even if I specify I want the Python 3 version or directly specify a version.
a) "the technology is overhyped", based on some meaningless subjective criteria, if you think a technology is overhyped, don't invest your money or time in it. No one's forcing you.
b) "child abuse problems are more important", with a link to an article that clearly specifies that the child abuse problems have nothing to do with OpenAI.
c) "it uses too much energy and water". OpenAI is paying fair market price for that energy and what's more the infrastructure companies are using those profits to start making massive investments in alternative energy [1]. So if everything about this AI boom fails what we'll be left with is a massive amount of abundant renewable energy (the horror!)
Probably the laziest conjecture I have endured from The Atlantic.
[1]: https://www.cbc.ca/news/canada/calgary/artificial-intelligen...
Except that someone has to pay for it. AI companies are only willing to pay for power purchase agreements, not capital expenses. Same with the $7T of chip fab. Invest your money in huge capital expenditures and our investors will pay you for it on an annual basis until they get tired of losing money.
I absolutely support AI companies signing as many PPAs for low carbon energy even if they implode in the future. The PV panels, wind turbines, and potentially stationary storage will already be deployed at that point.
https://betterbuildingssolutioncenter.energy.gov/financing-n...
Slight nit: a board can't start a coup because a coup is an illegitimate attempt to take power and the board's main job is to oversee the CEO and replace them if necessary. That's an expected exercise of power.
The coup was when the CEO reversed the board's decision to oust him and then ousted them.
Now you can use AI to easily write the type of articles he produces, and he's pissed.
You really cannot.
Seriously. This is just the parrot thing again. The fact that AI proponents confuse the form of words with authorial intent is mindbending to me.
Wouldn't have confused Magritte, I think.
Words are words. Writers are writers. Writers are not words.
ETA: consider what would actually be necessary to prove me wrong. And when you hear back from David Karpf about his willingness to take part in that experiment, write a blog post about it and any results, post it to HN.
I am sure people here will happily suggest topics for the articles. I, for example, would love to hear what your hypothetical ChatKarpf has to say about influences from his childhood that David Karpf has never written about, or things he believed at age five that aren't true and how that affects his writing now.
Do you see what I mean? These aren't even particularly forced examples: writers draw on private stuff, internal thoughts, internal contradictions, all the time, consciously and unconsciously.
"Words are words. Writers are writers. Writers are not words."
I'm very bullish on AI/LLMs but I think we do need to have a better shared understanding of what they are and what they aren't. I think there's a lot of confusion around this.
Thank you. I don't think it really explains the distinction, of course. It just makes it clear there necessarily must be one, and it can't be wished away by discussions of larger training sets, more token context, or whatever. It never will be wished away.
When I’m writing out a comment, there’s no muse in my head singing the words to me. I have a model of who I am and what I believe - if I weren’t religious I might say I am that model - and I type things out by picking the words which that guy would say in response to the input I read.
(The model isn’t a transformer-based LLM, of course.)
Come on.
The article is written to appeal to people who want to feel clever casually slagging off and dismissing tech.
> it appears to have plateaued. GPT-4 now looks less like the precursor to a superintelligence and more like … well, any other chatbot.
What a pathetic observation. Does the author not recall how bad chatbots were pre-LLMs?
What LLMs can do blows my mind daily. There might be some insufferable hype atm, but geez, the math and engineering behind LLMs is incredible, and it's not done yet: they're still improving from more compute alone, not even factoring in architecture discoveries and innovations!
This is such a ridiculous sentence.
GPT-4 now looks like any other chatbot because the technology advanced so the other chatbots are smarter now as well. Somehow the author is trying to twist this as a bad thing.
https://www.technologyreview.com/2022/04/06/1048981/worldcoi...
https://www.buzzfeednews.com/article/richardnieva/worldcoin-...
BuzzFeed News is not BuzzFeed. They were a serious news website¹ staffed by multiple investigative journalists, including a Pulitzer winner heading that division. They received plenty of awards and recognition.² It is indeed a shame they shared a name with BuzzFeed and that no doubt didn’t help, but it does not detract from their work.
> or from The Atlantic.
There was no Atlantic link. The other source was MIT Technology Review.
> I’d like a more impartial breakdown ala AP-style news reporting.
The Associated Press did report on it³, and the focus was on the privacy implications too. The other time they reported on it⁴ was to announce Spain banned it for privacy concerns.
¹ https://en.wikipedia.org/wiki/BuzzFeed_News
² https://en.wikipedia.org/wiki/BuzzFeed_News#Awards_and_recog...
³ https://apnews.com/article/worldcoin-cryptocurrency-sam-altm...
⁴ https://apnews.com/article/worldcoin-spain-eyeballs-privacy-...
Yes. We've been through this again and again. Technology does not follow potential. It follows incentive. (Also, “all of physics”? Wtf is he smoking?)
> It’s much more pleasant fantasizing about a benevolent future AI, one that fixes the problems wrought by climate change, than dwelling upon the phenomenal energy and water consumption of actually existing AI today.
I mean, everything good in life uses energy; that's not AI's fault per se. However, we should absolutely evaluate tech anchored in the present, not the future, especially with something we understand as poorly as the emergent properties of AI. Even when there's an expectation of rapid change, the present is a much better proxy than yet another sociopath with a god complex whose job is to be a hype-man. Everyone's predictions are garbage. At least the present is real.
The world has become a less trustworthy place for a lot of reasons and AI is only making it worse, not better.
ITT: People taking Sam at his word.
On top of that, the advance in models for language and physical simulation based models (for protein prediction and weather forecasting as examples) has been so rapid and unexpected that even folks who were previously very skeptical of "AI" are believers - it ain't because Sam Altman is up there talking a lot. I went from AI skeptic to zealot in about 18 months, and I'm in good company.
The problem is, when it pops, which it will, it'll fuck the economy.
He was literally invited to congress to speak about AI safety. Sure, perhaps people that have a longer memory of the tech world don't trust him. That's actually not a lot of people. A lot of people just aren't following tech (like my in-laws).
The other issue is that AI's 'boundless prosperity' is a little like those proposals to bring an asteroid made of gold back to earth. 20m tons, worth $XX trillion at current prices, etc. The point is, the gold price would plummet, at the same time as the asteroid, or well before, and the promised gains would not materialize.
If AI could do everything, we would no longer be able (due to no-one having a job), let alone willing, to pay current prices for the work it would do, and so again, the promised financial gains would not materialize.
Of course in both cases, there could be actual societal benefits - abundant gold, and abundant AI, but they don't translate directly to 'prosperity' IMHO.
It seems fair to say Altman has completed his Musk transformation. Some might argue it's inevitable. And indeed Bill Gates' books in the 90s made a lot of wild promises. But nothing that egregious.
So far Musk has been pushing the lies out continually to try to prevent any possible exposure for fraud. Like "Getting to Mars will save humanity" or the latest "We will never reach Mars unless Trump is president again". Then again, self-driving cars are just around the corner, as stated in 2014 with a fraudulently staged video of their technology; they just need to work the bugs out.
Altman is making wild claims too, about how machine learning will slow and reverse climate change, even while the technology demonstrably needs vastly more resources, especially power, just to be market-viable for business and personal usage.
All three play off people's emotions to repress critical thinking. They are no different than the lying preachers ("I can heal you with a touch of my hand") who use religion to gain power and wealth. The three above are just replacing religion with technology.
One of them is illegal, the other isn't.
This is a bizarre take about a 167-year-old, continuously published magazine.
Not sure the record supports that if you remove OpenAI, which is a work in progress and supposedly not going too well at the moment. A talented 'tech whisperer', maybe?
So my question is: What does the AI rumor mill say about that? Was all that just hype-building, or is OpenAI holding back some major trump card for when they become a for-profit entity?
All of these doing the rounds of foreign governments and acting like artificial general intelligence is just around the corner is what got him this fundraising round today. It's all just games.
The issue is more that the company is hemorrhaging talent, and doesn’t have a competitive moat.
But luckily this doesn’t affect most of us, rather it will only possibly harm his investors if it doesn’t work out.
If he continues to have access to resources and can hire well and the core tech can progress to new heights, he will likely be okay.
AGI cannot exist in a box that you can control. We figured that out 20 years ago.
Could they start that? Sure, theoretically. However, they would have to massively pivot, and nobody at OAI is a robotics expert.
Contrary to The Atlantic's almost always intentionally misleading framing, the "dot com boom" did in fact go on to print trillions later, and it is still printing them, after what was an ultimately marginal (if account-clearing, for many) dip.
I say that as someone who would be deemed to be an Ai pessimist, by many.
But it's wildly early to declare anything to be "what it is" and only that, in terms of ultimate benefit. Just like it was, and is, wild to declare the dot-com boom to be over.
Agreed - I stopped taking The Atlantic seriously after their 2009 cover story, "Did Christianity Cause the Crash?"[1] To ignore CDOs, the Glass-Steagall repeal, the co-option of the ratings agencies, and the dissolution of lending standards, and instead blame the Great Recession on a few obnoxious megapastors, is to completely discard the magazine's credibility.
[1] https://www.theatlantic.com/magazine/archive/2009/12/did-chr...
Maybe not "cause", but "contribute notably to".
Now that he’s restructuring the company to be a normal for-profit corp, with a handsome equity award for him, we should assume the normal monopoly-grabbing that we see from the other tech giants.
If the dividend is simply going to the shareholder (and Altman personally) we should be much more skeptical about baking these APIs into the fabric of our society.
The article is asinine; of course a tech CEO is going to paint a picture of the BHAG, the outcome that we get if we hit a home run. That is their job, and the structure of a growth company, to swing for giant wins. Pay attention to what happens if they hit. A miss is boring; some VCs lose some money and nothing much changes.
Google just paid over $2.4 billion to get Noam Shazeer back in the company to work with Gemini AI. Google has the deepest pool of AI researchers. Microsoft and Facebook are not far behind.
OpenAI is losing researchers; they have maybe 1-2 years until they become a Microsoft subsidiary.
The whole LLM era was started with the 2017 "Attention Is All You Need" paper by Google Brain/Research, and nobody has done anything of the same magnitude since.
Noam Shazeer was one of the authors.
From robotics, neurology, transport to everything in between - not a word should be taken as is.
First, he mentioned wishing he was more into AI. While I appreciate the honesty, it was pretty off-putting. Here’s the CEO of a company building arguably the most consequential technology of our time, and he’s expressing apathy? That bugs me. Sure, having a dispassionate leader might have its advantages, but overall, his lack of enthusiasm left a bad taste in my mouth. Why IS he the CEO then?
Second, he talked about going on a “world tour” to meet ChatGPT users and get their feedback. He actually mentioned meeting them in pubs, etc. That just sounded like complete BS. It felt like politician-level insincerity—I highly doubt he’s spoken with any end-users in a meaningful way.
And one more thing: Altman being a well-known ‘prepper’ doesn’t sit well with me. No offense to preppers, but it gives me the impression he’s not entirely invested in civilization’s long-term prospects. Fine for a private citizen, but not exactly reassuring for the guy leading an organization that could accelerate its collapse.
I've done a huge amount of political organizing in my life, for common good - influencing governments to build tens of billions of dollars worth of electric rail infrastructure.
I'm also a big prepper. It's important to understand that stigmatizing prepping is very dangerous - specifically to those who reject it.
Whether it's a gas main break, a forest fire, an earthquake, or a sci-fi story, encouraging people to become resilient to disaster is incredibly beneficial for society as a whole, and very necessary for individuals. The vast, vast majority of people who do it are benefiting their entire community by doing so. Even, as much as I'm sure I'd dislike him if I met him, Sam Altman. Him being a prepper is good for us, at least indirectly, and possibly directly.
Just look at the stories in NC right now - people who were ready to clear their own roads, people taking in others because they have months of food.
Be careful not to ascribe values to behaviors like you're doing.
My issue, though, is with someone like Sam Altman—a leader of an organization that could potentially accelerate the downfall of civilization—being so deeply invested in prepping. Altman isn’t just a regular guy preparing for emergencies; he’s an incredibly wealthy individual who has openly discussed stockpiling machine guns and setting up private land he can retreat to at a moment’s notice. It’s that level of preparation, combined with his position at the helm of one of the most consequential tech companies, that doesn’t sit well with me. It feels like he’s hedging against the very future his company might be shaping.
I don't think the prepping can really be taken as evidence of anything nefarious. Prepping simply means someone thinks there is a risk worth hedging against, even if they are strongly opposed to that outcome.
I think you see many of the rich prepping because they can, but it says little about their desire for catastrophic events.
Prepping for a hurricane doesn't mean you want it to destroy your neighborhood.
And Altman is definitely in the latter camp with, "But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to."[1]
That a guy who says the above, and also says that AI may be an existential threat to humanity, also runs the world's most prominent AI company is disturbing.
1. https://futurism.com/the-byte/openai-ceo-survivalist-prepper
I am personally not sold on AGI being possible. We might be able to make some poor imitation of it, and maybe an LLM is the closest we get, but to me it smacks of “man attempts to create life in order to spite his creator.” I think the result of those kinds of efforts will end more like That Hideous Strength (in disaster).
But, but, but… their drama, or Altman’s drama is now too much for me, personally.
With a lot of reluctance I just stopped doing the $20/month subscription. The advanced voice mode is lots of fun to demo to people, and o1 models are cool, but I am fine just using multiple models for chat on Abacus.AI and Meta, an excellent service, and paid for APIs from Google, Mistral, Groq, and OpenAI (and of course local models).
I hope I don’t sound petty, but I just wanted to reduce their paid subscriber numbers by -1.
I have skepticism of his predictions, and disregard for his exaggerations.
I have a ChatGPT subscription and build features on OpenAI technology.
As a matter of fact, I suspect the author of the article actually belongs to the gullible minority who ever took Altman at his word, and now is telling everyone what they already knew. But so what? What are we even discussing? Nobody is calling to remove their OpenAI (or, in fact, Anthropic, or whatever) account as long as we find it useful for something, I suppose. It just makes no difference at all whether that writer or his readers take Altman at his word; their opinions have no real effect on the situation, it seems. They are merely observers.
So close, yet so far. And, both help the respective CEOs in hyping the respective companies.
He went from a failed startup to president of yc to ultra wealthy investor in the span of about a decade. That's sus
During my interview with Jared Friedman, their CTO, I asked him what Sam was trying to create: the greatest investment firm of all time, surpassing Berkshire Hathaway, or the greatest tech company, surpassing Google? Without hesitation, Jared said Google. Sam wanted to surpass Google. (He did it with his other company, OpenAI, and not YC, but he did it nonetheless.)
This morning I tried Googling something and the results sucked compared to what ChatGPT gave me.
Google still creates a ton of value (YouTube, Gmail, etc), but he has surpassed Google in terms of cutting edge tech.
Real technological progress in the 21st century is more capital-intensive than before. It also usually requires more diverse talent.
Yet the breakthroughs we can make in this half-century can be far greater than any before: commercial-grade fusion power (where Lawrence Livermore National Lab currently leads, thanks to AI[1]), quantum computing, spintronics, twistronics, low-cost room-temperature superconductors, advanced materials, advanced manufacturing, nanotechnology.
Thus, it's much more about the many, not the one. Multi-stakeholder. Multi-person. Often led by one technology leader, sure, but this one person must uplift and be accountable to the many. Otherwise we get the OpenAI story, and end-justifies-the-means type of groupthink wrt. those who worship the technoking.
[1]: https://www.llnl.gov/article/49911/high-performance-computin...
They would at least be more believable if they blast claims that a certain video must be fake, especially with how absurd and shocking it is.
But apparently as a society we like handing multi-billion dollar investments to folks with a proven track record of (not actually shipping) complete bullshit.
https://www.betterworldbooks.com/product/detail/the-sociopat...
But that's not how the market works.
I still "feel the AGI". I think Ben Goertzel'a recent talk on ML Street Talk was quite grounded / too much hype clouds judgement.
In all honesty, once the hype dies down, even if AGI/ASI is a thing - we're still going to be heads down back to work as usual so why not enjoy the ride?
Covid was a great eye-opener, we dream big but in reality people jump over each other for... toilet paper... gotta love that Gaussian curve of IQ right?
Yeah, maybe on the surface chatbots turned out to be chatbots. But you have to be a poor journalist to stop your investigation of the issue at that and conclude AI is no big deal. Nuance, anyone?
Hilarious.
Now keep in mind that this is going to be the default option for a lot of forums and social media for automated moderation. Reddit is already using it a lot and now a lot of the front page is clearly feedback farming for OpenAI. What I'm getting at is we're moving towards a future where only a certain type of dialog will be allowed on most social media and Sam Altman and his sponsors get to decide what that looks like.
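For context, automated moderation of that kind typically looks something like the sketch below, using OpenAI's moderation endpoint. The model name and response fields follow the SDK as I understand it; treat this as illustrative rather than definitive.

```python
# Sketch of LLM-backed automated moderation for a forum. Assumes
# OPENAI_API_KEY is set in the environment; the model name and the
# flagging policy are illustrative choices, not a specific site's setup.
from openai import OpenAI

client = OpenAI()

def allowed(comment: str) -> bool:
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=comment,
    ).results[0]
    # A forum would auto-remove (or queue for review) anything flagged;
    # the provider's categories thus define what dialog is allowed.
    return not result.flagged

print(allowed("What a lovely day on the front page."))
```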
Old tactic.
The project that would eventually become Microsoft Corp. was founded on it. Gates told Ed Roberts, the inventor of the first personal computer, that he had a programming language for it. He had no such programming language.
Gates proceeded to espouse "vapourware" for decades. Arguably Microsoft and its disciples are still doing so today.
Will the tactic ever stop working? Who knows.
Focus on the future that no one can predict, not the present that anyone can describe.
Is kind of a boring way of looking at things. I mean we have fairly good chatbots and image generators now but it's where the future is going that's the interesting bit.
Lumping AI in with dot coms and crypto seems a bit silly. It's a different category of thing.
(By the way Sam being shifty or not techy or not seems kind of incidental to it all.)
It's funny: we coach people not to ascribe human characteristics to LLMs...
But we seem equally capable of denying the very human characteristics of our would-be overlords.
Which warlord will we canonize next?