We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.
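
To illustrate that “guess what comes next” idea (and only the idea), here is a toy next-word predictor built from nothing but word counts. It is a deliberate caricature: real systems use neural networks over sub-word tokens, not count tables.

```python
# Toy next-word "prediction" from raw counts (a deliberate caricature;
# real LLMs use neural networks over sub-word tokens, not count tables).
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    following[current_word][next_word] += 1

# "Guess" the most probable word after "the", based only on the data seen.
print(following["the"].most_common(1))  # -> [('cat', 2)]
```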

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the question of how our physical bodies give rise to conscious experience the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations of the body (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

  • Imgonnatrythis@sh.itjust.works · 2 months ago

    Good luck. Even David Attenborough can’t help but anthropomorphize. People will feel sorry for a picture of a dot separated from a cluster of other dots. The play by AI companies is that it’s human nature for us to want to give just about every damn thing human qualities. I’d explain more but as I write this my smoke alarm is beeping a low battery warning, and I need to go put the poor dear out of its misery.

  • benni@lemmy.world · 2 months ago

    I think we should start by not following this marketing speak. The sentence “AI isn’t intelligent” makes no sense. What we mean is “LLMs aren’t intelligent”.

  • dsilverz@friendica.world · 1 month ago

    @technocrit While I agree with the main point that “AI/LLMs has/have no agency”, I must be the boring, ackchyually person who points out and remembers some nerdy things.

    tl;dr: indeed, AIs and LLMs aren’t intelligent… but we aren’t as intelligent as we think we are, either, because we hold no “exclusivity” of intelligence within the biosphere (corvids, dolphins, etc.) and because there’s no such thing as non-deterministic “intelligence”. We’re just biologically compelled to think that we can think and that we’re the only ones who do, and this is just anthropocentric and naive of us (yeah, me included).

    If you have the patience to read a long and quite verbose text, it’s below. If you don’t, well, no problems, just stick to my tl;dr above.

    -----

    First and foremost, everything is ruled by physics. Deep down, everything is just energy and matter (the latter of which, to quote the famous Einstein equation E = mc², is energy as well), and this inexorably includes living beings.

    Bodies, flesh, brains, nerves and other biological parts, they’re not so different from a computer case, CPUs/NPUs/TPUs, cables and other computer parts: to quote Sagan, it’s all “made of star stuff”, it’s all a bunch of quarks and other elementary particles clumped together and forming subatomic particles forming atoms forming molecules forming everything we know, including our very selves…

    Everything is compelled to follow the same laws of physics, everything is subjected to the same cosmic principles, everything is subjected to the same fundamental forces, everything is subjected to the same entropy, everything decays and ends (and this comment is just a reminder, a cosmic-wide Memento mori).

    It’s bleak, but this is the cosmic reality: cosmos is simply indifferent to all existence, and we’re essentially no different than our fancy “tools”, be it the wheel, the hammer, the steam engine, the Voyager twins or the modern dystopian electronic devices crafted to follow pieces of logical instructions, some of which were labelled by developers as “Markov Chains” and “Artificial Neural Networks”.

    Then, there’s also the human non-exclusivity among the biosphere: corvids (especially Corvus moneduloides, the New Caledonian crow) are scientifically known for their intelligence, as are dolphins, chimpanzees and many other eukaryotes. Humans love to think we’re exclusive in that regard, but we’re not, we’re just fooling ourselves!

    IMHO, every time we try to argue “there’s no intelligence beyond humans”, it’s highly anthropocentric and quite biased/bigoted against the countless other species that currently exist on Earth (and possibly beyond this Pale Blue Dot as well). We humans often forget that we are a species ourselves (taxonomically classified as “Homo sapiens”). We tend to carry on our biological existence as if we were some kind of “deities” or “extraterrestrials” among a “primitive, wild life”.

    Furthermore, I could point out a myriad of philosophical points, such as the one raised by the mere mention of “senses” (“Because it’s bodiless. It has no senses, …”): “my senses deceive me” is the starting point for Cartesian (René Descartes) doubt. While Descartes’ conclusion, “Cogito ergo sum”, is highly anthropocentric, it’s often ignored or forgotten by those who hold anthropocentric views on intelligence, as people often ground the seemingly “exclusive” nature of human intelligence on the ability to “feel”.

    Many other philosophical musings deserve to be mentioned as well: the lack of free will (stemming from the very fact that we were unable to choose our own births), the nature of “evil” (both the Hobbesian line regarding “human evilness” and the Epicurean paradox regarding “metaphysical evilness”), social compliance (I must point to Derren Brown’s documentaries on this subject), the inevitability of Death, among other deep topics.

    All these deep principles and ideas converge, IMHO, on the same bleak reality, one where we (supposedly “soul-bearing beings”) are no different from a “soulless” machine, because we’re both part of an emergent phenomenon (Ordo ab chao, the (apparent) order out of chaos) that has been taking place for Æons (billions of years and beyond, since the dawn of time itself).

    Yeah, I know how unpopular this worldview can be and how downvoted this comment will probably get. Still, I don’t care: someone who gazed into the abyss must remember how the abyss always gazes back into us, even those of us who didn’t dare to gaze into it yet.

    I’m someone compelled by my very neurodivergent nature to remember how we humans are just another fleeting arrangement of interconnected subsystems known as a “biological organism”, one that “managed” to throw stuff beyond the atmosphere (spacecraft) while still being unable to understand ourselves. We’re biologically programmed, just like other living beings, to “fear Death”, even though our very cells are programmed to terminate on a regular basis (apoptosis) and we are subject to the inexorable fall towards “cosmic chaos” (entropy, as defined: “as time passes, the degree of disorder increases irreversibly”).

  • mechoman444@lemmy.world · 2 months ago

    In that case let’s stop calling it AI, because it isn’t, and use the correct abbreviation: LLM.

  • hera@feddit.uk · 2 months ago

    Philosophers are so desperate for humans to be special. How is outputting things based on things it has learned any different to what humans do?

    We observe things, we learn things and when required we do or say things based on the things we observed and learned. That’s exactly what the AI is doing.

    I don’t think we have achieved “AGI” but I do think this argument is stupid.

    • counterspell@lemmy.world · 2 months ago

      No it’s really not at all the same. Humans don’t think according to the probabilities of what is the likely best next word.

        • aesthelete@lemmy.world · 2 months ago

          You’re a meat based copy machine with a built in justification box.

          Except of course that humans invented language in the first place. So uh, if all we can do is copy, where do you suppose language came from? Ancient aliens?

          • Zexks@lemmy.world · 2 months ago

            No, we invented “human” language. There are dozens of other animals out there that all have their own languages, completely independent of ours.

            We simply refined base calls to be more and more specific. Differences evolved because people are bad at telephone and lots of people have to be special/different and use slight variations every generation.

            • aesthelete@lemmy.world · 2 months ago

              Are you saying human languages are a derivative of bird language or something? If so, I’d like to see the proof of that.

                • aesthelete@lemmy.world · 2 months ago

                  What does any of this have to do with anything anyway?

                  Humans invented the first human language. People have ideas that aren’t simple derivatives of other ideas.

    • Mistic@lemmy.world · 2 months ago

      It’s not. It’s a math formula that predicts an output based on parameters it deduced from training data.

      Say you have the following sets of data.

      1. Y = 3, X = 1
      2. Y = 4, X = 2
      3. Y = 5, X = 3

      We can calculate a regression model using those numbers to predict what Y would equal if X were 4.

      I won’t go into much detail, but

      Y = 2 + 1x + e

      e is our model’s error term; in an ideal world it equals 0 (which it does in this case), and it’s typically required to stay within 5% or 1% (at least in econometrics). b0 = 2 is our model’s bias (intercept). And b1 = 1 is the parameter that determines how much the input X contributes when predicting Y.

      If x = 4, then

      Y = 2 + 1×4 + 0 = 6

      Our model just predicted that if X is 4, then Y is 6.

      In a nutshell, that’s what AI does, but instead of numbers, it’s tokens (think symbols, words, pixels), and the formula is much much more complex.
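
      For illustration, here’s that same toy regression as a short Python sketch (assuming NumPy; purely illustrative, not how an LLM is actually built):

      ```python
      # The toy regression above, fitted by ordinary least squares (NumPy assumed).
      import numpy as np

      X = np.array([1.0, 2.0, 3.0])
      Y = np.array([3.0, 4.0, 5.0])

      # Fit Y = b0 + b1 * X; polyfit returns [slope, intercept].
      b1, b0 = np.polyfit(X, Y, deg=1)
      print(b0, b1)            # -> ~2.0, ~1.0

      # Predict Y for X = 4: pure extrapolation of a pattern, no "understanding".
      print(b0 + b1 * 4)       # -> ~6.0
      ```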

      This isn’t intelligence, and it isn’t deduction. It’s only prediction. This is the reason why AI often fails at common sense. The error builds up, and you end up with nonsense, and since it’s not thinking, it will sound just as confident when it’s incorrect as when it’s correct.

      Companies calling it “AI” is pure marketing.

  • pachrist@lemmy.world · 2 months ago

    As someone who’s had two kids since AI really vaulted onto the scene, I am enormously confused as to why people think AI isn’t or, particularly, can’t be sentient. I hate to be that guy who pretends to be the parenting expert online, but most of the people I know personally who take the non-sentient view on AI don’t have kids. The other side usually does.

    When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

    People love to tout this as some sort of smoking gun. That feels like a trap. Obviously, we can argue about the age children gain sentience, but my year and a half old daughter is building an LLM with pattern recognition, tests, feedback, hallucinations. My son is almost 5, and he was and is the same. He told me the other day that a petting zoo came to the school. He was adamant it happened that day. I know for a fact it happened the week before, but he insisted. He told me later that day his friend’s dad was in jail for threatening her mom. That was true, but looked to me like another hallucination or more likely a misunderstanding.

    And as funny as it would be to argue that they’re both sapient, but not sentient, I don’t think that’s the case. I think you can make the case that without true volition, AI is sentient but not sapient. I’d love to talk to someone in the middle of the computer science and developmental psychology Venn diagram.

    • terrific@lemmy.ml · 2 months ago

      I’m a computer scientist that has a child and I don’t think AI is sentient at all. Even before learning a language, children have their own personality and willpower which is something that I don’t see in AI.

      I left a well paid job in the AI industry because the mental gymnastics required to maintain the illusion was too exhausting. I think most people in the industry are aware at some level that they have to participate in maintaining the hype to secure their own jobs.

      The core of your claim is basically that “people who don’t think AI is sentient don’t really understand sentience”. I think that’s both reductionist and, frankly, a bit arrogant.

      • jpeps@lemmy.world · 2 months ago

        Couldn’t agree more - there are some wonderful insights to gain from seeing your own kids grow up, but I don’t think this is one of them.

        Kids are certainly building a vocabulary and learning about the world, but LLMs don’t learn.

        • stephen01king@lemmy.zip · 2 months ago

          LLMs don’t learn because we don’t let them, not because they can’t. It would be too expensive to re-train them on every interaction.

          • terrific@lemmy.ml · 2 months ago

            I know it’s part of the AI jargon, but using the word “learning” to describe the slow adaptation of massive arrays of single-precision numbers to some loss function is a very generous interpretation of that word, IMO.
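
            For what it’s worth, here’s a minimal, made-up sketch of what that “adaptation to a loss function” amounts to (one weight instead of billions; illustrative only):

            ```python
            # A made-up, minimal picture of "learning" in the ML sense:
            # one weight instead of billions, nudged by gradient descent.
            weight = 0.0           # the "massive array" reduced to a single number
            target = 3.0           # the value the training data pushes the weight towards
            learning_rate = 0.1

            for step in range(50):
                loss = (weight - target) ** 2        # squared-error loss
                gradient = 2 * (weight - target)     # d(loss)/d(weight)
                weight -= learning_rate * gradient   # adjust the number slightly

            print(round(weight, 3))  # -> 3.0: the number has "learned" the target
            ```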

            • stephen01king@lemmy.zip · 2 months ago

              But that’s exactly how we learn stuff, as well. Artificial neural networks are modelled after how our neurons affect each other while we learn and store memories.
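
              As a rough, hypothetical sketch of that analogy: a single artificial “neuron” is just a weighted sum of inputs passed through an activation (real neurons are vastly more complicated), as below.

              ```python
              # A toy artificial "neuron": a weighted sum of inputs passed through an
              # activation. Illustrative only; biological neurons are far more complex.
              def neuron(inputs, weights, bias):
                  total = sum(i * w for i, w in zip(inputs, weights)) + bias
                  return max(0.0, total)  # ReLU activation: output only above a threshold

              # "Learning" would mean adjusting these made-up weights and bias so the
              # output better matches training data.
              print(neuron([1.0, 0.5], weights=[0.8, -0.2], bias=0.1))  # -> ~0.8
              ```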

              • terrific@lemmy.ml · 2 months ago

                Neural networks are about as much a model of a brain as a stick man is a model of human anatomy.

                I don’t think anybody knows how we actually, really learn. I’m not a neuroscientist (I’m a computer scientist specialised in AI), but I don’t think the mechanism of learning is that well understood.

                AI hype-people will say that it’s “like a neural network” but I really doubt that. There is no loss-function in reality and certainly no way for the brain to perform gradient descent.

  • scarabic@lemmy.world · 2 months ago

    My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up - is that true “reasoning?”

    It’s similar to the debate about self driving cars. Are they perfectly safe? No, but have you seen human drivers???

    • Puddinghelmet@lemmy.world · 2 months ago

      Human brains are much more complex than a mirroring script xD. AI and supercomputers have only a fraction of the number of neurons in your brain. But you’re right, for you it’s probably not much different from AI.

      • TangledHyphae@lemmy.world · 2 months ago

        The human brain contains roughly 86 billion neurons, while ChatGPT, a large language model, has 175 billion parameters (often referred to as “artificial neurons” in the context of neural networks). While ChatGPT has more “neurons” in this sense, it’s important to note that these are not the same as biological neurons, and the comparison is not straightforward.

        86 billion neurons in the human brain isn’t that much compared to some of the larger neural networks with 1.7 trillion parameters, though.

  • aceshigh@lemmy.world · 2 months ago

    I’m neurodivergent, I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me because I don’t use my senses or emotions like everyone else, and I didn’t know it… AI excels at mirroring and support, which was exactly missing from my life. I can see how this could go very wrong with certain personalities…

    E: I use it to give me ideas that I then test out solo.

    • Snapz@lemmy.world · 2 months ago

      This is very interesting… because the general saying is that AI is convincing for non experts in the field it’s speaking about. So in your specific case, you are actually saying that you aren’t an expert on yourself, therefore the AI’s assessment is convincing to you. Not trying to upset, it’s genuinely fascinating how that theory is true here as well.

      • aceshigh@lemmy.world · 2 months ago

        I use it to give me ideas that I then test out. It’s fantastic at nudging me in the right direction, because all that it’s doing is mirroring me.

        • innermachine@lemmy.world · 2 months ago

          If it’s just mirroring you, one could argue you don’t really need it? Not trying to be a prick: if it’s a good tool for you, use it! It sounds to me as though you’re using it as a sounding board, and that’s just about the perfect use for an LLM if I could think of one.