• keikobadthebad 15 hours ago |
    This doesn't feel like it will age well.
    • 082349872349872 13 hours ago |
      we do GI with about 100W, so it's obviously computationally tractable
      • jqpabc123 9 hours ago |
        We do GI with about 100W --- but we don't do it "computationally".

        We still don't know exactly how the human brain works. But what we do know is that it is analog and not digital.

        Some speculate it may even be quantum in nature. And we know that quantum computation can address some problems believed to be intractable with digital computation; integer factorization, which Shor's algorithm solves in polynomial time, is the canonical example.

        For all we know, "intelligence" could be an inherent attribute of the universe that is being manifested somehow in the human brain.

        • mindcrime 4 hours ago |
          FWIW, "computational" doesn't have to mean "digital computer". Analog computers are a thing, and depending on who you ask, quantum computers are too (or will be someday).
          • jqpabc123 4 hours ago |
            FWIW, "computational" doesn't have to mean "digital computer".

            In the context of existing AI "computational" models, it does. It's all we've got.

            • 082349872349872 4 hours ago |
              Sorry, I had no clue about that context; in my world "computationally tractable" only means polynomial time.
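
              (Complexity-theory aside, since the terms are doing real work here; this is the standard textbook definition, not anything specific to the paper: a problem is tractable when some algorithm solves every size-n instance in time

                  T(n) = O(n^k)   for some fixed constant k,

              i.e. polynomial time, and intractable when no polynomial-time algorithm exists, e.g. problems whose best possible running time is T(n) = 2^{\Omega(n)}.)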
  • alexander2002 15 hours ago |
    eli5 from chatgpt: Imagine human thinking is like a super complicated puzzle. When cognitive science (studying how we think and learn) was just starting, people thought of Artificial Intelligence (AI) as a special toolbox that could help solve parts of this puzzle. But now, many people working on AI are trying to build robots or computers that can solve the entire puzzle by themselves, just like a human would.

    This paper says that's really, really hard—so hard that we probably can't do it. The paper also says that if we believe these robots or computers are just like us, we're getting the wrong idea about how our own minds work. It's like using a map of a different place to try and find your way home—it doesn't work and just makes things confusing.

    The paper suggests we should use AI like a toolbox again, to help us understand our minds better, but we need to be careful not to make the same mistakes we did before.
  • mindcrime 14 hours ago |
    The opening few pages of this read like they were written by somebody with an axe to grind, which makes me suspicious of the rest. Why? Well, because having an "axe to grind" may motivate one to start with a conclusion and go looking for ways to justify it. And you can almost always talk yourself into believing you've proven something you already want to believe.

    "But mindcrime, there's a mathematical proof. How can you argue with math?"

    To be fair, I didn't read their entire proof. I skimmed some bits of it, and while I can't say it's wrong, I didn't find it very convincing at first blush. My initial read left me thinking that the proof rests on assumptions that may not hold up.

    Some of my skepticism may also be rooted in the way the paper seemed to weave back and forth between claiming that "AGI is computationally intractable" and that "AGI is unachievable in the short term". Those are two substantially different arguments, and it's still not clear to me which one the authors were really aiming for.
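
    (To make the gap concrete with back-of-envelope numbers, assuming ~10^9 operations per second: an O(2^n) algorithm finishes n = 40 in about 18 minutes but needs on the order of 10^13 years at n = 100, while an O(n^3) algorithm handles n = 10,000 in about 17 minutes. Asymptotic intractability is a claim about how cost scales with problem size; short-term unachievability is a claim about engineering at some fixed size. Neither implies the other.)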

    I dunno. I gave up before getting through it all. I'll wait to see if others find it compelling and decide whether or not it's worth going back to.

    Also, see earlier discussion:

    https://news.ycombinator.com/item?id=41689558