just terrific

  • 0 Posts
  • 9 Comments
Joined 2 months ago
Cake day: June 11th, 2025




  • Do you have any expertise on the issue?

    I hold a PhD in probabilistic machine learning and advise businesses on how to use AI effectively for a living, so yes.

    IMHO, there is simply nothing indicating that it’s close. Sure, LLMs can do some incredibly clever-sounding word extrapolation, but the current “reasoning models” still don’t actually reason. They are just LLMs with some extra steps.

    There is lots of information out there on the topic so I’m not going to write a long justification here. Gary Marcus has some good points if you want to learn more about what the skeptics say.




  • Neural networks are about as much a model of a brain as a stick man is a model of human anatomy.

    I don’t think anybody knows how we actually, really learn. I’m not a neuroscientist (I’m a computer scientist specialised in AI), but I don’t think the mechanism of learning is that well understood.

    AI hype people will say that it’s “like a neural network”, but I really doubt that. There is no loss function in reality, and certainly no way for the brain to perform gradient descent.
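    To make that contrast concrete: gradient descent needs an explicit, global loss function and exact derivatives of every parameter with respect to it. A minimal toy sketch (a hypothetical one-parameter linear model, not anyone’s actual training code) shows just how much machinery the analogy assumes:

    ```python
    # Toy illustration of what gradient descent requires: an explicit
    # loss function and its exact analytic gradient for every parameter.
    # Nothing suggests biological learning has either ingredient.

    def loss(w, x, y):
        # squared error of a one-parameter linear model y ≈ w * x
        return (w * x - y) ** 2

    def grad(w, x, y):
        # exact derivative of the loss with respect to w
        return 2 * x * (w * x - y)

    def train(x, y, w=0.0, lr=0.1, steps=50):
        for _ in range(steps):
            w -= lr * grad(w, x, y)  # the update rule backprop relies on
        return w

    # Fitting the single data point (x=2, y=6) drives w toward 3.
    print(round(train(x=2.0, y=6.0), 3))
    ```

    Every weight update here presupposes a globally defined objective and a way to differentiate through the whole system, which is precisely the machinery that, as far as I know, nobody has found in the brain.
    
    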



  • I’m a computer scientist who has a child, and I don’t think AI is sentient at all. Even before learning a language, children have their own personality and willpower, which is something that I don’t see in AI.

    I left a well-paid job in the AI industry because the mental gymnastics required to maintain the illusion was too exhausting. I think most people in the industry are aware at some level that they have to participate in maintaining the hype to secure their own jobs.

    The core of your claim is basically that “people who don’t think AI is sentient don’t really understand sentience”. I think that’s both reductionist and, frankly, a bit arrogant.