• Perspectivist@feddit.uk
    1 day ago

    I can think of only two ways that we don’t reach AGI eventually.

    1. General intelligence is substrate dependent, meaning that it’s inherently tied to biological wetware and cannot be replicated in silicon.

    2. We destroy ourselves before we get there.

    Other than that, we’ll keep incrementally improving our technology and we’ll get there eventually. Might take us 5 years or 200 but it’s coming.

      • Perspectivist@feddit.uk
        14 hours ago

        The same argument applies to consciousness as well, but I’m talking about general intelligence now.

        • Valmond@lemmy.world
          10 hours ago

          Well, I’m curious then, because I’ve never seen, heard, or read anywhere that general intelligence would need some kind of wetware. Why would it? It’s just computation.

          I have heard and read about consciousness potentially having that barrier though, but only as a potential problem, and only if you want conscious robots, of course.

          • Perspectivist@feddit.uk
            4 hours ago

            I don’t think it does, but it seems conceivable that it potentially could. Maybe there’s more to intelligence than just information processing - or maybe it’s tied to consciousness itself. I can’t imagine the added ability to have subjective experiences would hurt anyone’s intelligence, at least.

    • FaceDeer@fedia.io
      1 day ago

      If it’s substrate dependent then that just means we’ll build new kinds of hardware that includes whatever mysterious function biological wetware is performing.

      Discovering that this is indeed required would involve some world-shaking discoveries about information theory, though, that are not currently in line with what’s thought to be true. And yes, I’m aware of Roger Penrose’s theories about non-computability and microtubules and whatnot. I attended a lecture he gave on the subject once. I get the vibe of Nobel disease from his work in that field, frankly.

      If it really turns out to be the case though, microtubules can be laid out on a chip.

      • phutatorius@lemmy.zip
        15 hours ago

        Penrose has always had a fertile imagination, and not all his hypotheses have panned out. But he does have the gift that, even when wrong, he’s generally interestingly wrong.

      • panda_abyss@lemmy.ca
        1 day ago

        I could see us gluing third world fetuses to chips and saying not to question it before reproducing it.

        • voronaam@lemmy.world
          22 hours ago

          This is a funny graph. What’s the Y-axis? Why the hell are DVDs a bigger innovation than the Steam Engine or the Light Bulb? They get a way bigger jump on the Y-axis.

          In fact, the top 3 innovations since 1400 according to the chart are

          1. Microprocessors
          2. Man on Moon
          3. DVDs

          And I find it funny that in the year 2025 there are no people on the Moon and most people do not use DVDs anymore.

          And speaking of Microprocessors, why the hell are Transistors not on the chart? Or even Computers in general? Where did humanity place its Microprocessors before the Apple Macintosh was designed (that’s an innovation? The IBM PC was way more impactful…)

          Such a funny chart you shared. Great joke!

          • Perspectivist@feddit.uk
            17 hours ago

            The chart is just for illustration purposes to make a point. I don’t see why you need to be such a dick about it. Feel free to reference any other chart you like better that displays the progress of technological advancement throughout human history - they all look the same: for most of history nothing happened, and then everything happened. If you don’t think this progress has been accelerating explosively over the past few hundred years, then I don’t know what to tell you. People 10k years ago had basically the same technology as people 30k years ago. Now compare that with what has happened even just during your lifetime.

            • Womble@piefed.world
              15 hours ago

              It’s not a chart - to be that, it would have to show some sort of relation between things. What it is is a list of inventions placed onto an exponential curve to try and back up loony singularity narratives.

              Trying to claim there was vastly less innovation in the entire 19th century than there was in the past decade is just nonsense.

              • Perspectivist@feddit.uk
                15 hours ago

                Trying to claim there was vastly less innovation in the entire 19th century than there was in the past decade is just nonsense.

                And where have I made such a claim?

                • Womble@piefed.world
                  14 hours ago

                  The “chart” you posted showed barely any increase in the 1800s and massive increases in the last few decades.

                  • Perspectivist@feddit.uk
                    14 hours ago

                    The chart is just for illustration to highlight my point. As I already said - pick a different chart if you prefer; it doesn’t change the argument I’m making.

                    It took us hundreds of thousands of years to go from stone tools to controlling fire. Ten thousand years to go from rope to fish hook. And then just 60 years to go from flight to space flight.

                    I’ll happily grant you rapid technological progress even over the past thousand years. My point still stands - that’s yesterday on the timeline I’m talking about.

                    If you lived 50,000 years ago, you’d see no technological advancement over your entire lifetime. Now, you can’t even predict what technology will look like ten years from now. Never before in human history have we taken such leaps as we have in the past thousand years. Put that on a graph and you’d see a steady line barely sloping upward from the first humans until about a thousand years ago - then a massive spike shooting almost vertically, with no signs of slowing down. And we’re standing right on top of that spike.

                    Throughout all of human history, the period we’re living in right now is highly unusual - which is why I claim that on this timeline, AGI might as well be here tomorrow.

        • panda_abyss@lemmy.ca
          1 day ago

          If we make this graph in 100 years, almost nothing modern - hybrid cars, DVDs, etc. - will be on it.

          Just like this graph excludes a ton of improvements in metallurgy that enabled the steam engine.

          • anomnom@sh.itjust.works
            10 hours ago

            There’s also no reason for it to be a smooth curve; in my head it looks more like a series of steps with varying flat spots between them.

            And we are terrible at predicting how long a flat spot will be between improvements.

    • ExLisper@lemmy.curiana.net
      14 hours ago

      You’re talking about consciousness, not AGI. We will never be able to tell if AI has “real” consciousness or not. The goal is really to create an AI that acts intelligent enough to convince people that it may be conscious.

      Basically, we will “hit” AGI when enough people start treating it like it’s AGI, not when we achieve some magical technological breakthrough and declare “this is AGI”.

      • Perspectivist@feddit.uk
        14 hours ago

        The same argument applies to consciousness as well, but I’m talking about general intelligence now.

        • ExLisper@lemmy.curiana.net
          13 hours ago

          I don’t think you can define AGI in a way that would make it substrate dependent. It’s simply about behaving in a certain way. A sufficiently complex set of ‘if -> then’ statements could pass as AGI. The limitation is computational power and the practicality of creating the rules. We already have supercomputers that could easily emulate AGI, but we don’t have a practical way of writing all the ‘if -> then’ rules, and I don’t see how creating the rules could be substrate dependent.

          Edit: Actually, I don’t know if current supercomputers could process input fast enough to pass as AGI, but it’s still about computational power, not substrate. There’s nothing suggesting we won’t be able to keep increasing computational power without some biological substrate.
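
          To illustrate what I mean by ‘if -> then’ rules, here’s a toy sketch in Python (the rules are invented and obviously nothing this simple is AGI) - the point is just that it’s pure computation, with nothing substrate-specific about it:

          ```python
          # Toy production-rule system: purely illustrative, the rules are made up.
          rules = [
              (lambda state: "hungry" in state, "look for food"),
              (lambda state: "threat" in state, "run away"),
              (lambda state: True, "do nothing"),  # default rule, always matches last
          ]

          def act(state: set[str]) -> str:
              """Return the action of the first rule whose condition matches the state."""
              for condition, action in rules:
                  if condition(state):
                      return action

          print(act({"hungry"}))  # -> look for food
          print(act({"threat"}))  # -> run away
          ```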

    • Chozo@fedia.io
      1 day ago

      General intelligence is substrate dependent, meaning that it’s inherently tied to biological wetware and cannot be replicated in silicon.

      We’re already growing meat in labs. I honestly don’t think lab-grown brains are as far off as people are expecting.

    • wirehead@lemmy.world
      1 day ago

      Well, think about it this way…

      You could hit AGI by fastidiously simulating the biological wetware.

      Except that each atom in the wetware is going to require n atoms’ worth of silicon to simulate. Simulating 10^26 atoms or so seems like a very, very large computer - maybe planet-sized? It’s beyond the amount of memory you can address with 64-bit pointers.
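
      Back-of-envelope, using the same rough 10^26 figure (order-of-magnitude arithmetic only, nothing more):

      ```python
      # Rough check: atoms to simulate vs. a flat 64-bit address space.
      atoms_to_simulate = 1e26       # order-of-magnitude estimate used above
      addressable_bytes = 2 ** 64    # ~1.8e19 bytes reachable with 64-bit pointers

      print(f"64-bit address space: {addressable_bytes:.1e} bytes")
      print(f"Atoms to simulate:    {atoms_to_simulate:.0e}")
      # Even at just one byte of state per atom, we are short by a factor of millions:
      print(f"Shortfall: ~{atoms_to_simulate / addressable_bytes:.0e}x")
      ```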

      General computer research (e.g. smaller feature size) reduces n, but eventually we reach the physical limits of computing. We might be getting uncomfortably close right now, barring fundamental developments in physics or electronics.

      The goal of AGI research is to give you a better improvement in n than mere hardware improvements. My personal concern is whether LLMs are actually getting us much of an improvement in the AGI value of n. Likewise, LLMs still have many orders of magnitude fewer parameters than a full human brain simulation would, so many of the advantages that let us train a single LLM model might not hold for an AGI model.
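
      For a sense of scale, a comparison using rough, commonly cited public estimates (none of these numbers are precise):

      ```python
      # Order-of-magnitude comparison; every figure is a rough public estimate.
      llm_parameters = 1e12   # frontier LLMs are reported to be around a trillion parameters
      brain_synapses = 1e14   # commonly cited estimate of ~100 trillion synapses
      brain_atoms    = 1e26   # atom-level simulation target from the paragraph above

      print(f"Synapses vs. LLM parameters: ~{brain_synapses / llm_parameters:.0e}x")
      print(f"Atoms vs. LLM parameters:    ~{brain_atoms / llm_parameters:.0e}x")
      ```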

      Coming up with an AGI system that uses most of the energy and data-center space of a continent, and that manages to be about as smart as a very dumb human or maybe even just a smart monkey, would be an achievement in AGI - but it doesn’t really get you anywhere compared to the competition, which is accidentally making another human in a drunken one-night stand and feeding them an infinitesimal fraction of the energy and data-center space of a continent.

      • Frezik@lemmy.blahaj.zone
        1 day ago

        I see this line of thinking as more useful as a thought experiment than as something we should actually do. Yes, we can theoretically map out a human brain and simulate it in extremely high detail. That’s probably both inefficient and unnecessary. What it does do is get us past the idea that it’s impossible to make a computer that can think like a human. Without relying on some kind of supernatural soul, there must be some theoretical way we could do this. We just need to know how without simulating individual atoms.

        • kkj@lemmy.dbzer0.com
          12 hours ago

          It might be helpful to make one full brain simulation, so that we can start removing parts and seeing what needs to stay. I definitely don’t think that we should be mass-producing them, though.

    • panda_abyss@lemmy.ca
      1 day ago

      I don’t think our current LLM approach is it, but I don’t think intelligence is unique to humans at all.

    • realitista@piefed.world
      1 day ago

      Well, it could also just depend on some mechanism that we haven’t discovered yet. Even if we could technically reproduce it, we don’t understand it, haven’t managed to just stumble into it, and may not for a very long time.