Imo video models are the closest thing we have to “spatial intelligence.” They generate in three dimensions (2D images + time), scale just like image and probably language models, and, given the right controls, can model 3D worlds (https://gamengen.github.io/) interactively. Not sure there’s a need to directly model polygons or point clouds (assuming that’s what they’re trying to do?) when there’s so much video data to enable massive scaling of video models. I expect we’ll soon see video models used as planners for robotics as well.
Why would anyone think they can own, boss around and rent out an intelligent entity?
It can't be goaded with threats of starvation and exposure. I wonder why anyone would think that manufacturing an intelligent entity would result in a benevolent and super productive slave mind.
Do we not already do this with existing biological intelligence, like horses?
> It can't be goaded with threats of starvation and exposure
I think the reason we do this for biological intelligence is that evolution has made the entity act in accordance with self-preservation, which isn't in itself a useful goal to us, so we set up conditions under which completing our goal becomes a proxy for it (the way to get food is to do useful work).
For artificial intelligence we can instead directly set the loss function the model is trained to act in accordance with. We can also set up proxy goals in the same way: an LLM's immediate objective is to generate sensible next tokens, but we can set up the context such that the way to complete that task is to fulfill the user's request, e.g. to generate a poem.
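To make that concrete, here's a toy sketch (hypothetical vocabulary and probabilities, not any real model) of what "directly setting the loss function" usually means in practice: next-token cross-entropy. The model is never optimised for "write poems"; it's optimised to make the correct next token likely, and the prompt is what makes poem-writing the low-loss path.

```python
import math

# Toy vocabulary and a toy model output: a probability distribution
# over the next token after seeing the context "roses are".
# All numbers here are made up for illustration.
predicted = {"roses": 0.05, "are": 0.05, "red": 0.8, "violets": 0.05, "blue": 0.05}

def cross_entropy(probs, target_token):
    """Loss for one step: -log P(correct next token)."""
    return -math.log(probs[target_token])

loss = cross_entropy(predicted, "red")       # low loss: model expects "red"
bad_loss = cross_entropy(predicted, "blue")  # high loss: unlikely continuation
print(round(loss, 3), round(bad_loss, 3))
```

Training pushes the first number down across billions of contexts; everything we'd call "following instructions" is downstream of that single objective.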
And the vast majority of humans
Then they pivoted to profiting from this civilisational danger instead, which is itself a danger sign they warned against:
https://openai.com/index/introducing-openai/
> it’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.
> Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.
That already seems to imply that AGI would somehow be closer to a god than a program. Otherwise it's easy: you just switch it off.