Was this written pre-ChatGPT? I'm amazed the author decided this was an insightful take worth sharing on the internet. They managed to confuse computational capacity with actual capability, while remaining completely blind to the fact that nobody expects AI development to follow this sort of curve, or to resemble a walk up a ranked list of intelligent animal species. We didn't expect that 10 years ago, and we especially don't expect it now that it is so prominently not happening.
Thank you for helping discredit the AGI space
In particular the "futurism" school, which tries to extrapolate existing trends into the future. To me it seems pretty clear that the path of technological development does not follow a predictable roadmap: the things that seem important now end up being supplanted by unanticipated things, and the things that will seem important in the future will matter in ways we can't predict.
So far I feel this has been borne out: each concrete prediction by people like Kurzweil has been ruled out or turned out to be irrelevant or uninteresting. Yet the school persists, because it can always offer a post hoc adjustment toward the same end.
I have a question for people who believe in Kurzweil's theory -- what would it take to disprove it? To ultimately say "yeah, this was just an incorrect model of the future"?
Irving J. Good said it best: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."
The singularity is an inevitability. It's a recognition of a fact about the universe: there will come a point when humans are no longer at the top of the food chain and no longer in control of technological development. It's not any specific set of technologies, or a particular threshold of research hours per day, or anything that can be easily quantified like that. It's the collective output being greater than the sum of its parts, in ways that are fundamentally unpredictable, with unpredictable consequences.
Kurzweil's predictions have been irritatingly accurate. This blog post is irritatingly ignorant.
Brains are fleshy computers in bone vats. You simulate your very own Matrix every single conscious moment of your existence. Computation is what brains do. Disputing this is putting ignorance of either brains or computers on display.
Computation is sufficient to explain anything and everything brains do. It's the simplest explanation, it's scientifically coherent, and there is a total absence of evidence for any phenomenon or feature that is unexplained by brains-as-computers. The more functions of intelligence we implement in computer software, the faster technological development occurs, and the more of that intellectual development starts to occur in artificial minds.
This seems very imprecise. I don't know how literally to take this. We passed this point just about 10,000 years ago. Probably even further back than that.
> The singularity refers to a point in time specifically when technology is being iterated and developed so rapidly that nobody can know what tomorrow will bring.
Is this literally tomorrow or figuratively tomorrow? I'm looking for a more precise statement, and I feel like I'm getting metaphors that are already literally true and have been for thousands of years.
The quote about ultraintelligent machines seems fundamentally unsound. The unit of consciousness or intelligent capacity is not well defined, so I don't see why, e.g., a group of people would not be considered "ultraintelligent" with respect to the individuals that comprise it. How fast this "machine" thinks is important too: if designing each next level of intelligence takes exponentially more time than the last, then I don't think we have to worry.
We could similarly talk about a program that solves the halting problem and how it would be the last program we ever have to write. That doesn't mean such a thing is possible, much less inevitable. In the case of intelligence, I don't think we understand enough to say whether the proposition about ultraintelligent machines is even well-posed.
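For what it's worth, here's a minimal sketch (my own illustration, in Python) of why a general halting solver can't exist: feed a hypothetical halts() oracle a program built to contradict its own prediction.

```python
def halts(program, data):
    """Hypothetical oracle: return True iff program(data) eventually halts.
    Placeholder only -- the argument below shows no correct implementation can exist."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the oracle predicts about program(program).
    if halts(program, program):
        while True:   # oracle said "halts" -> loop forever
            pass
    return            # oracle said "loops" -> halt immediately

# If halts() were real, consider halts(paradox, paradox):
#   True  -> paradox(paradox) loops forever, contradicting the oracle.
#   False -> paradox(paradox) halts at once, contradicting the oracle.
# Either answer is wrong, so no such halts() can exist.
```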
As far as I've seen, almost none of Kurzweil's predictions have turned out accurate, aside from some trivial ones about speech recognition (his actual area of research), and even those are true in an almost irrelevant way -- we have voice interfaces and they can recognize speech, but they suck and people prefer to use non-speech interfaces.
Intelligence is one of those "we know it when we see it" problems. It literally encompasses the entirety of human cognition. Moravec described AI capabilities in terms of human equivalence, avoiding the need for any specific, well-defined unit of measurement. If and when AI capabilities exceed human capabilities in any and every possible capacity, AI will be more intelligent than humans, and we can be certain that AGI has been achieved. When we run out of "but machines can't do X as well as a human!" statements, we'll have software that, when reproduced, optimized, and tasked en masse, will represent superintelligence.
There are multiple paths to get there. There are dozens, perhaps hundreds, of books covering the topic in depth.
There's no "exponentially more time than the last" issue with AI development. The halting problem can't be solved - that's the point of the halting problem.
We have sufficient understanding of computation and intelligence to say that nothing about the laws of physics, or the way biological brains work, even remotely suggests that anything brains do is impossible, or even particularly difficult, for silicon chips to achieve. The bottleneck in reproducing brains in silicon right now is data. Because of ethical and technical challenges, we can't gain access to the low-level details of living brain tissue, but the science of connectome mapping, neurochemistry, and real-time electrical activity tracking continues to improve, and we will absolutely have mapped the entire human brain down to the molecular level at some point in the next century.

Given that information, we'll be able to recreate the important structures in software, produce functional equivalents of things like cortical neural columns and minicolumns, and gain access to the Hebbian dynamics that govern the wiring and growth of synapses, allowing us to recreate the rules that comprise human intelligence. That's the upper bound on when we'll achieve artificial general intelligence. It may take a decade or two after we've got the map, but it's likely to happen long before we get the complete map, because much of what the brain does is based on highly repeated structures distributed throughout the neocortex: figure out how they work, how their behavior is coordinated, and how that produces learning and behavior, and you crack the problem of intelligence.
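To make the "Hebbian dynamics" mention a bit more concrete, here is a toy sketch of the basic Hebbian rule -- synapses between units that are active together get strengthened. All sizes, thresholds, and rates are made up purely for illustration.

```python
import numpy as np

# Toy Hebbian update: "neurons that fire together wire together".
rng = np.random.default_rng(0)
n_pre, n_post = 8, 4
weights = rng.random((n_post, n_pre)) * 0.5        # weak random initial synapses
learning_rate = 0.05

for _ in range(200):
    pre = (rng.random(n_pre) > 0.5).astype(float)   # presynaptic spikes this step
    post = (weights @ pre > 1.0).astype(float)      # crude thresholded postsynaptic response
    weights += learning_rate * np.outer(post, pre)  # Hebb: co-active pairs get stronger
    weights *= 0.99                                 # mild decay keeps weights bounded

print(np.round(weights, 2))
```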
This is the singularity as far as I'm concerned: when AI is smart enough to design a better AI, which will be even smarter and therefore able to design an even better AI, ad infinitum. We are absolutely nowhere close to that today.
You might say: but TSMC and nVidia are using AI to help them design better chips for the AI to run on! (I don't know if that's true, but I'm sure it will be within the next few years.) But that's no different from saying: look, these better hammers have led to better metal forging, which has led to even better hammers! Our tools have always been improving in this way. For the singularity to happen, the AI has to be able to do the entire cycle on its own, so that all you need to throw at it is more compute and you get exponential (or at least superlinear) growth of intelligence out.
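A toy way to see the distinction (entirely my own illustration, with invented numbers): compare a loop where tools improve output by a fixed factor each generation against a loop where the improvement rate itself scales with the current capability.

```python
# Toy comparison: tools-improve-tools vs. AI-improves-AI.
# All parameters are invented purely to show the shape of the curves.
generations = 10

# Better hammers: each generation improves output by a constant factor, because
# the designer (a human) stays the same -- ordinary compounding, nothing singular.
tool_quality, tools = 1.0, []
for _ in range(generations):
    tool_quality *= 1.2
    tools.append(round(tool_quality, 2))

# Self-improving designer: the improvement factor grows with current capability,
# so each generation accelerates the next.
capability, selfimp = 1.0, []
for _ in range(generations):
    capability *= 1.0 + 0.2 * capability
    selfimp.append(round(capability, 2))

print("tools:   ", tools)      # steady geometric growth
print("self-imp:", selfimp)    # super-exponential blow-up
```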
As for the actual theory, the one with all the dates in it: well, the dates are enough. We are not there.
But if you are complaining about the general idea that machines will eventually be more capable than humans at every task, and that we then can't predict what will happen... If you give up on the "every task" part, because it's clearly not necessary, technology has already done that a few times. I don't see how anybody can complain about it, except on the grounds that it's not an interesting observation.
But $1000 of time on Claude Opus will buy 13 million tokens of output, or about 52,000 human-hours' worth of content at typical human author writing rates. The content will be well-formatted, logical, and, if well-prompted, virtually indistinguishable from that of an above-average human writer.
$1000 on DALLE-3 will generate 8300 images, some fraction of which will pass an artistic Turing test.
And $1000 on AlphaFold will do things that no human can do.
So it seems Kurzweil was right on target, and AI did surpass human capabilities around 2023?
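Rough back-of-envelope behind those figures. The prices and rates used here are my own assumptions (roughly $75 per million Opus output tokens, ~0.75 words per token, ~190 finished words per hour for a human author, ~$0.12 per DALLE-3 image), not numbers taken from this thread.

```python
# Back-of-envelope check of the "$1000 of Claude Opus" comparison above.
# All rates below are assumptions for illustration, not authoritative prices.
budget_usd = 1000

opus_usd_per_million_output_tokens = 75      # assumed output-token price
tokens = budget_usd / opus_usd_per_million_output_tokens * 1_000_000
words = tokens * 0.75                        # rough tokens-to-words ratio
human_words_per_hour = 190                   # assumed finished-prose drafting rate
human_hours = words / human_words_per_hour

dalle_usd_per_image = 0.12                   # assumed price per image
images = budget_usd / dalle_usd_per_image

print(f"~{tokens / 1e6:.1f}M tokens, ~{human_hours:,.0f} human-hours of writing")
print(f"~{images:,.0f} images")
```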
I personally consider physically embodied tasks extremely challenging for AI, because they involve continuous real-time sensory integration and extensive, nuanced tool use.
Those robots are low mass and sensing is done off-board. But it’s real time. Boy, is it.
> games progress faster than humans can really follow, let alone participate in
Call me when Standard Platform League is like that ;)
They didn't use to be very good at all. There's a lot of achievement packed into that 'of course'. Progress was made where progress was available.
Of course humans are good at real soccer. We are endurance-running social tribal animals with incredibly efficient actuators and energy storage. The game seems tailored for us. Robots are good at other things.
Edit for footnote: The space of things robots are good at is not very large compared to humans. I've been in the robot business thirty years and watching their ecological niches grow oh-so-slowly.
Don't all "successful" prophecies have that?
We can certainly build a grenade-dropper that sometimes picks the wrong target and gets intercepted by hostile actors. We have LLMs that can't do some of the things we might task a single human brain with.
It's really a matter of product-market fit.
When our army of bug-smart drones wipes out civilians and heritage sites, does something cruel or otherwise distasteful, and/or requires deploying a million drones to get the job done, folks will realize they actually wanted something with human-level capabilities... like remote-controlled drones.
When we build an AI chat bot that's often wrong and can't do math all that well, folks complain that they didn't really want any old brain, they want something better.
I'm not sure I buy everything Kurzweil's selling TBH, but I don't think this article is making a great argument.
Call me naive, but I would like to believe that at least part of the reason we don't have these autonomous murder machines is that we choose not to develop them.
Analog and digital computing may be mathematically equivalent, but the devil is in the implementation, which basically destroys any sense of equivalence in "processing power". The modes and methods of processing are so distinct (even if the quantities of "information flow" are equal) that reducing everything to FLOPS isn't just irrelevant, it's horribly misleading.