• 0 Posts
  • 24 Comments
Joined 10 days ago
Cake day: June 20th, 2025

  • That’s a very emphatic restatement of your initial claim.

    I can’t help but notice that, for all the fancy formatting, that wall of text doesn’t contain a single line which actually defines the difference between “learning” and “statistical optimization”. It just repeats the claim that they are different without supporting that claim in any way.

    Nothing in there precludes the alternative hypothesis: that human learning is entirely (or almost entirely) an emergent property of “statistical optimization”. Without some definition of what the difference would be, we can’t even theorize a test.

  • Pollution per GDP is a better measure. https://ourworldindata.org/grapher/co2-intensity Pollution per GNP would be even better, but I can’t find it.
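    The chart linked above plots roughly that ratio: emissions divided by economic output. A minimal sketch of the arithmetic, with placeholder numbers that are purely illustrative and not real country data:

```python
def co2_intensity(emissions_kg, gdp_usd):
    """Kilograms of CO2 emitted per dollar of GDP."""
    return emissions_kg / gdp_usd

# Hypothetical economies: the larger one emits far more in total,
# yet less per dollar of output than the small, extraction-heavy one.
big = co2_intensity(5e12, 2e13)    # 0.25 kg per dollar
small = co2_intensity(4e11, 8e11)  # 0.50 kg per dollar
```

    Absolute totals would rank these two the other way around, which is the point of using intensity.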

    Individuals don’t pollute much; it’s mostly industry. Really poor countries often don’t pollute much because they can’t afford to. Sometimes they pollute prodigiously because the only thing they can afford to do is destructive resource extraction. Rich countries can often outsource their pollution to poorer countries.

    China has been making mind-boggling investments in renewables. They have been expanding all their energy sources, but renewables account for the lion’s share of the growth.

    They’ve been building roads and all kinds of infrastructure. That’s what the BRI is all about, even if they’re being a bit quieter about saying the phrase. They like to build their long-haul roads on elevated columns, not only because it’s less disruptive to wildlife but because it lets them use giant road-laying robots to place prefab highway segments.

    They dropped the one-child policy a while back, but they’re having some trouble getting people to have more babies. That said, there’s some research suggesting that rural populations around the world are severely undercounted, so they may have a bunch more subsistence farmers than they, or anyone else, realize.

  • You may be correct but we don’t really know how humans learn.

    There’s a ton of research on it and a lot of theories but no clear answers.
    There’s general agreement that the brain is a bunch of neurons; there are no convincing ideas on how consciousness arises from that mass of neurons.
    The brain also has a bunch of chemicals that affect neural processing; there are no convincing ideas on how that gets you consciousness either.

    We modeled perceptrons after neurons, and we’ve been working to make them more like neurons. Neurons don’t have any obvious capabilities that perceptrons lack.
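    The classic perceptron is a small thing: a weighted sum of inputs pushed through a step threshold, with weights nudged toward each training example it gets wrong. A minimal sketch of that original rule (illustrative, not any particular library’s API):

```python
def predict(weights, bias, inputs):
    # Step activation: fire iff the weighted sum clears the threshold.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

def train(samples, labels, lr=0.1, epochs=20):
    # Rosenblatt-style rule: nudge weights toward misclassified examples.
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = y - predict(weights, bias, x)
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

# Learn logical AND, a linearly separable function.
w, b = train([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 0, 0, 1])
```

    A single unit like this can only learn linear separations, which is why modern systems stack enormous numbers of them; whether stacking ever adds up to what neurons do is exactly the open question.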

    That’s the big problem with any claim that “AI doesn’t do X like a person”; since we don’t know how people do it, we can neither verify nor refute that claim.

    There’s more to AI than just being non-deterministic, but anything that’s too deterministic definitely isn’t an intelligence, natural or artificial. Video compression algorithms are definitely very far removed from AI.

  • I’d say there are two issues with it.

    First, it’s a very new article with only 3 citations. The authors seem like serious researchers, but the paper itself is still in the “hot off the presses” stage and wouldn’t qualify as “proven” yet.

    Second, it doesn’t exactly say that books are copies. It says that in some models, it’s possible to extract some portions of some texts. They cite “1984” and “Harry Potter” as two books that can be extracted almost entirely, under some circumstances. They also find that, in general, extraction rates are below 1%.