Singularity Missed
35 points by telotortium 13 days ago | 24 comments
  • Quinner 13 days ago |
    What this is really saying is that hardware is a necessary precondition for, but not determinative of, intelligence. If you have a human brain's worth of neurons firing randomly, you don't get consciousness. The structure and instruction set are crucial. And we are perhaps 20 years away from figuring out how to structure the compute power we have to get the consciousness of a human adult.
  • throwuxiytayq 13 days ago |
    > So we shouldn't expect human like intelligence before late 2040's. That is of course if we actually get an insect brain this year. Which we won't.

    Was this written pre-ChatGPT? I am amazed that the author decided that this is an insightful take he'd like to share on the internet. They managed to confuse computational capacity with actual capability, while remaining completely blind to the fact that nobody expects AI development to happen along this sort of curve, or to resemble a walk up the list of intelligent animal species. We didn't expect it 10 years ago, and we especially don't expect it now that it's super prominently not happening.

    • telotortium 13 days ago |
      It does show that Kurzweil's model of AI development is significantly deficient. Kurzweil isn't considered widely discredited in the AGI space, so it's a useful argument.
      • timdiggerm 13 days ago |
        > Kurzweil isn't considered widely discredited in the AGI space

        Thank you for helping discredit the AGI space

  • andrewla 13 days ago |
    I'm of the school of thought that says that the Singularity Theory is just fundamentally unsound.

    In particular the "futurism" school, which tries to extrapolate existing trends into the future. To me it seems pretty clear that the path of technological development does not follow a predictable roadmap: the things that seem important now end up being supplanted by unanticipated things, and the things that will seem important in the future will be important in ways that we can't predict.

    So far I feel this has been borne out, as each concrete prediction from people like Kurzweil has either failed to materialize or turned out to be irrelevant or uninteresting, but the school persists because it can always offer a post hoc adjustment towards the same end.

    I have a question for people who believe in Kurzweil's theory -- what would it take to disprove it? To ultimately say "yeah, this was just an incorrect model of the future"?

    • marssaxman 13 days ago |
      I was able to get more excited about these exponential curve models when I was younger, but these days there is always a skeptical voice in the back of my head predicting that a logistic function will prove to be a better fit, considering that we live on a finite planet.
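
      For concreteness, this is the kind of comparison I mean - fit both curves to whatever capability metric you trust and see which one the data prefers. A minimal sketch (the data, noise, and starting guesses below are invented purely for illustration):

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(0)
        years = np.arange(30, dtype=float)
        # Hypothetical "capability" data that looks exponential early on but flattens.
        observed = 100 / (1 + np.exp(-(years - 15) / 3)) + rng.normal(0, 2, years.size)

        def exponential(t, a, b):
            return a * np.exp(b * t)

        def logistic(t, k, t0, s):
            return k / (1 + np.exp(-(t - t0) / s))

        exp_p, _ = curve_fit(exponential, years, observed, p0=(1.0, 0.2), maxfev=20000)
        log_p, _ = curve_fit(logistic, years, observed, p0=(80.0, 10.0, 2.0), maxfev=20000)

        def rss(f, p):
            # Residual sum of squares: lower means a better fit to the same data.
            return float(np.sum((observed - f(years, *p)) ** 2))

        print("exponential fit residual:", round(rss(exponential, exp_p), 1))
        print("logistic fit residual:   ", round(rss(logistic, log_p), 1))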
    • observationist 13 days ago |
      You've fundamentally misunderstood what the singularity is. It's when technological progress outpaces the capacity for any person to control or understand what's happening. We get a taste of this with the internet, for example: no country or person or entity can control what gets published online. We see an enormous number of bots posting random slop on platforms, spam emails, or outright spam websites being published. We see a huge number of invalid scientific papers published to "legitimate" journals. That information far exceeds any person's capacity to read it all, let alone understand or control it. The singularity refers specifically to a point in time when technology is being iterated and developed so rapidly that nobody can know what tomorrow will bring.

      Irving J Good said it best: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."

      The singularity is an inevitability. It's a recognition of a fact about the universe. There will come a point in time where humans aren't the top of the food chain anymore. We won't be in control of technological development. It's not any specific set of technologies or a particular threshold of research hours per day or anything which can be easily quantified like that. It's the collective output being greater than the sum of its parts, in ways that are fundamentally unpredictable, with unpredictable consequences.

      Kurzweil's predictions have been irritatingly accurate. This blog post is irritatingly ignorant.

      Brains are fleshy computers in bone vats. You simulate your very own Matrix every single conscious moment of your existence. Computation is what brains do. Disputing this is putting ignorance of either brains or computers on display.

      Computation is sufficient to explain anything and everything brains do. It's the simplest explanation, it's scientifically coherent, and there is a total absence of evidence for any phenomenon or feature that is unexplained by brains-as-computers. The more we implement functions of intelligence in computer software, the faster technological development occurs, and the more of that intellectual development starts to occur in artificial minds.

      • andrewla 13 days ago |
        > It's when technological progress outpaces the capacity for any person to control or understand what's happening

        This seems very imprecise. I don't know how literally to take this. We passed this point just about 10,000 years ago. Probably even further back than that.

        > The singularity refers to a point in time specifically when technology is being iterated and developed so rapidly that nobody can know what tomorrow will bring.

        Is this literally tomorrow or figuratively tomorrow? I'm looking for more of a precise statement and I feel like I'm getting metaphors that are already literally true and have been for thousands of years.

        The quote about ultraintelligent machines seems fundamentally unsound. The unit of consciousness or intelligent capacity is not well defined, so I don't see a reason that e.g. a group of people would not be considered "ultraintelligent" with respect to the individuals that comprise the group. How fast this "machine" thinks is important too, because if designing each next level of intelligence takes exponentially more time than the last, then I don't think we have to worry.

        We could similarly talk about having a program that solves the halting problem and how it would be the last program that we have to write. That doesn't mean such a thing is possible, much less inevitable. In the case of intelligence, I don't think we have enough understanding to even say whether the proposition about ultraintelligent machines is well-posed.

        As far as I've seen, there are almost zero predictions that Kurzweil has made that are accurate, aside from some trivial ones about speech recognition (his actual area of research), and even those are almost irrelevantly true - we have voice interfaces and they can recognize speech, but they suck ass and people prefer to use non-speech interfaces.

        • observationist 12 days ago |
          You need to read the statement in the context of "in principle" vs "in practice": the claim is about when, in principle, no human has the capacity to understand or control a thing. A forest fire is a great thought experiment - we can, in principle, control and understand any forest fire. Given enough time and resources, no forest fire will ever exceed our theoretical capacity to control it. At the singularity, the rate of technological progress will exceed humanity's capacity to understand or control it even in principle. It will become a thing which happens to us, as opposed to us doing things.

          Intelligence is one of those "we know it when we see it" problems. It literally encompasses the entirety of human cognition. Moravec described AI capabilities in terms of human equivalence, avoiding the need for any specific and well defined unit of measurement. If and when AI capabilities exceed human capabilities in any and every possible capacity, AI will be more intelligent than humans, and we can be certain that AGI is achieved. When we run out of "but machines can't do X as well as a human!" statements, we'll have achieved software that, when reproduced, optimized, and tasked en masse, will represent superintelligence.

          There are multiple paths to get there. There are dozens, perhaps hundreds, of books covering the topic in depth.

          There's no "exponentially more time than the last" issue with AI development. The halting problem can't be solved - that's the point of the halting problem.

          We have sufficient understanding of computation and intelligence to say that there is nothing about the laws of physics, or about the way biological brains work, that even remotely suggests that anything brains do is impossible, or even particularly difficult, for silicon chips to achieve.

          The bottleneck in reproducing brains in silicon right now is data. Because of ethical and technical challenges, we can't gain access to the low-level details of living brain tissue, but the science of connectome mapping, neurochemistry, and real-time electrical activity tracking continues to improve, and we will absolutely have mapped the entirety of the human brain down to the molecular level at some point in the next century. Given that information, we'll be able to recreate important structures in software, and produce functional equivalents to things like cortical neural columns and minicolumns, and we'll gain access to the Hebbian dynamics that govern the wiring and growth of synapses, allowing us to recreate the rules that comprise human intelligence.

          That's the upper bound on when we're going to achieve artificial general intelligence - it may take a decade or two after we've got the map, but it's likely to happen a long time before we get the complete map, because much of what the brain is doing is based on highly repeated structures distributed throughout the neocortex. Figure out how they work, how their behavior is coordinated, and how that produces learning and behavior, and you crack the problem of intelligence.
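
          To make "Hebbian dynamics" concrete: these are local learning rules of the "neurons that fire together wire together" flavor. A toy sketch using Oja's rule (a bounded Hebbian variant) on made-up data - purely illustrative, not a claim about how cortex actually implements it:

            import numpy as np

            rng = np.random.default_rng(0)
            inputs = rng.normal(size=(1000, 8))   # hypothetical presynaptic activity vectors
            w = 0.01 * rng.normal(size=8)         # synaptic weights onto one postsynaptic neuron
            eta = 0.01                            # learning rate

            for x in inputs:
                y = w @ x                         # postsynaptic activity (linear neuron)
                w += eta * y * (x - y * w)        # Oja's rule: Hebbian growth plus a decay term

            print("learned weights:", np.round(w, 3))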

      • feoren 13 days ago |
        > an ultraintelligent machine could design even better machines

        This is the singularity as far as I'm concerned: when AI is smart enough to design a better AI, which will be even smarter and therefore able to design an even better AI, ad infinitum. We are absolutely nowhere close to that today.

        You might say: but TSMC and nVidia are using AI to help them design better chips for the AI to run on! (I don't know if that's true, but I'm sure it will be within the next few years.) But that's no different than saying: look, these better hammers have led to better metal forging which has led to even better hammers! Our tools have always been improving in this way. For the singularity to happen, the AI has to be able to do the entire cycle on its own, so that all you need to throw at it is more compute and you get exponential (or at least superlinear) growth of intelligence out.
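
        A toy model makes the distinction clear: the loop only "explodes" if each generation keeps buying a comparable multiplicative improvement. All numbers here are made up; the point is just the shape of the two curves:

          # Capability of generation g+1 = capability of generation g times the
          # improvement factor that generation manages to design into its successor.
          def run(generations, improvement_factor):
              capability = 1.0
              for g in range(generations):
                  capability *= improvement_factor(g)
              return capability

          constant_returns = run(20, lambda g: 1.5)                # same 1.5x gain every cycle
          diminishing_returns = run(20, lambda g: 1 + 0.5 / 2**g)  # gains shrink each cycle

          print(f"constant returns after 20 generations:    {constant_returns:.1f}")
          print(f"diminishing returns after 20 generations: {diminishing_returns:.2f}")

        In the first case capability grows without bound; in the second it levels off near a constant, which is just the "better hammers" regime we've always been in.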

    • marcosdumay 13 days ago |
      > what would it take to disprove it?

      For the actual theory - the one with all the dates in it - well, the dates are enough. We are not there.

      But if you are complaining about the general idea that machines will eventually be more capable than humans in every task, and then we can't predict what will happen... If you give up on that "every task" part, because it's clearly not necessary, technology has done that a few times already. I don't know how anybody can complain about it except for the reason that it's not an interesting observation.

  • hyeonwho4 13 days ago |
    I must be missing something. The article claims that $1000 of compute (hardware?) was supposed to surpass an insect brain about 23 years ago, and we haven't achieved that benchmark yet.

    But $1000 of time on Claude Opus will buy 13 million tokens of output, or about 52,000 human hours of output content at typical human author writing rates. The content will be well-formatted, logical, and if well-prompted, virtually indistinguishable from that of an above-average human writer.
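
    Rough arithmetic behind that figure (the per-token price and the words-per-hour rate are assumptions I'm plugging in for illustration, not quoted numbers):

      price_per_m_output_tokens = 75.0    # assumed USD per million output tokens
      budget = 1000.0                     # USD
      tokens = budget / price_per_m_output_tokens * 1_000_000
      words = tokens * 0.75               # rough rule of thumb: ~0.75 words per token
      words_per_hour = 190                # assumed finished-prose rate for a human author
      print(f"{tokens:,.0f} tokens ~= {words:,.0f} words ~= {words / words_per_hour:,.0f} hours")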

    $1000 on DALLE-3 will generate 8300 images, some fraction of which will pass an artistic Turing test.

    And $1000 on AlphaFold will do things that no human can do.

    So it seems Kurzweil was right on target, and AI did surpass human capabilities around 2023?

    • aquilaFiera 13 days ago |
      In both the article and Kurzweil's case, it just depends on how you want to set up the goal posts (which the article somewhat alludes to). If you want to measure flops and compare that way (which is similar to what you're suggesting), sure, we've surpassed human-level intelligence. If you want to measure capabilities like autonomous navigation (like the article does) there is still a ways to go before we have animal-level capabilities. Both discussions have merit. But it's a question of measurements and goal posts.
    • aabhay 13 days ago |
      Umm.. this is severe recency bias. What about all the things that humans can do that robots/AI haven’t yet shown promise in at all? Like basically any sculptural art, glass blowing, knot tying, or almost any athletic sport at a competitive level, like soccer. While you might say “that’s just a hardware problem, not an intelligence problem”, you ignore the fact that robotics researchers have been working on this problem for decades and are nowhere near this goal. Even in simulation, we don’t have a compelling mechanism for autonomous completion of a task like driving cross-country if it involves fixing a tire.

      I personally consider physically embodied tasks extremely challenging for AI because they involve continuous real-time sensory integration and extensive, nuanced tool use.

      • robotresearcher 13 days ago |
        In the small league of RoboCup, games progress faster than humans can really follow, let alone participate in.

        Those robots are low mass and sensing is done off-board. But it’s real time. Boy, is it.

        • aabhay 13 days ago |
          I am not arguing that humans are better than AI at everything. But if you set the goalpost at human comparison, you have to go apples to apples. Even something that isn't AI - a digital calculator - is better than every math genius combined at finding the 10 billionth digit of pi.
        • xenocratus 13 days ago |
          Never heard of this before, but watching a few videos, the difference between small league and standard platform league is shocking, and highlights why the small league seems so impressive. Basically, in the small league the interaction with the real world is as sanitised as possible. It all seems tailored to make it easier to build those robots, so of course they're pretty good at it.

          > games progress faster than humans can really follow, let alone participate in

          Call me when Standard Platform League is like that ;)

          • robotresearcher 13 days ago |
            > It all seems tailored to make it easier to build those robots, so of course they're pretty good at it.

            They didn't use to be very good at all. There's a lot of achievement packed into that 'of course'. Progress was made where progress was available.

            Of course humans are good at real soccer. We are duration-running social tribal animals with incredibly efficient actuators and energy storage. It seems tailored for us. Robots are good at other things.

            Edit for footnote: The space of things robots are good at is not very large compared to humans. I've been in the robot business thirty years and watching their ecological niches grow oh-so-slowly.

      • drewcoo 13 days ago |
        > Umm.. this is a severe recency bias.

        Don't all "successful" prophecies have that?

  • natch 13 days ago |
    More like "point missed." The singularity is about brains, not bodies. Of course physical instantiations of embodied agents will lag a short while behind when we first hit AGI.
  • FrameworkFred 13 days ago |
    I disagree that we're greatly off-target.

    We can certainly build a grenade-dropper that sometimes picks the wrong target and gets intercepted by hostile actors. We have LLMs that aren't doing some things that we might task a single human brain with.

    It's really a matter of product-market fit.

    When our army of bug-smart drones wipes out civilians and heritage sites, does something cruel or otherwise distasteful, and/or requires deploying a million drones to get the job done, folks will realize they actually wanted something with human-level capabilities...like remote-controlled drones.

    When we build an AI chat bot that's often wrong and can't do math all that well, folks complain that they didn't really want any old brain, they want something better.

    I'm not sure I buy everything Kurzweil's selling TBH, but I don't think this article is making a great argument.

  • cen4 13 days ago |
    I have been trying to kill a mosquito for about 20 minutes now. Can't believe what those speck-of-dust brains are doing.
  • falcor84 13 days ago |
    > So we should expect to be able to have e.g. a combat drone that can take a grenade, navigate for miles to enemy territory, pick on an enemy target, drop the grenade, and navigate back to the base.

    Call me naive, but I would like to believe that at least part of the reason we don't have these autonomous murder machines is that we choose not to develop them.

  • uoaei 13 days ago |
    This article highlights a gripe I've had for a while surrounding this conversation:

    Analog and digital computing may be mathematically equivalent, but the devil is in the implementation, basically destroying any sense of equivalence in "processing power". The modes and methods of processing are so distinct (even if the quantities of "information flow" are equal) that reducing everything to FLOPS is not just irrelevant, it's horribly misleading.