• 0 Posts
  • 20 Comments
Joined 3 years ago
Cake day: March 14th, 2022

  • More AI pearl clutching by crapmodo because this type of outrage porn sells. Yeah the engagement fine tuning sucks but it’s no different than other dopamine hacking engagement systems used in big social networks. No outrage porn about algorithmic echo chambers driving people insane though because it’s not as clickbaity.

    Anyway, people don’t randomly get psychosis because anyone or anything validated some wonky beliefs and misinformed them about this and that. Both of these examples involved people who were already diagnosed with something, and the exact same thing would have happened if they were watching Alex Jones and interacting with other viewers. That’s basically how the flat earth bs spread.

    The issue here is the abysmal level of psychiatric care, the lack of socialized medicine, the lack of mental health awareness in the wider population, and police interactions with mentally ill people being abnormally lethal, not crackpot theories about AI causing delusions. That’s not how delusions work.

    Also casually quoting Yudkowsky? The Alex Jones of sci-fi AI fear mongering? The guy who said abortions should be allowed up until a baby develops qualia at 2-3 years of age? That’s the voice of reason for crapmodo? Lmao.





  • Honestly you can’t even meaningfully summarize just the 20th century and the late 19th in a single comment. Unleashing the Pinkertons on strikers, for example. Prez Wilson mass deporting Italians because they were anarchists and socialists, while also designing little ethnostates after WWI and creating the excuse for WWII.

    But that’s where most would agree. If I said that even the liberal-leaning (not leftist, liberal) dominant discourse in mass media constantly frames US politics in terms of “stupidity”, i.e. innate, unchanging characteristics, rather than a criminally underfunded public education system that leaves the working class more easily manipulated by bougie populists on both sides, then I’m not sure most people would either get it or want to admit that even the liberal dominant norms are lowkey eugenicist race science nazi shit, just without calling it eugenics explicitly.

    But it’s a thing that once you see it you can’t unsee it.


  • Yeah okay sure, when the US was putting down mass insurrections in Detroit etc. with the National Guard and there was armed combat in the streets (1960s), the US wasn’t officially a fascist hellhole. Nor was the entire period up to the 1965 Voting Rights Act and the multiple Civil Rights Acts (which didn’t end in the 60s) that it took for “constitutional rights” to apply to all residents, because simply being a resident didn’t make you a full citizen of the “constitutional republic” if you didn’t have the right skin tone; that also wasn’t officially a fascist hellhole.

    What else? Henry Ford publishing a ton of dodgy anti-Semitic literature in the 1920s and inspiring Hitler and the Euro OG fascists? Not officially fascist hellhole enough.

    Blood quantum laws, one drop rules, eugenics programs etc. that also directly inspired the Nazis but were sometimes rejected by them as too extreme (the Nuremberg blood laws that determined Jewish ancestry are significantly more lenient than US one drop laws)? Not officially a fascist hellhole.

    Yeah, USA was straight up Weimar Republic or maybe the USSR or maybe anarchist Catalonia during the Spanish Civil War until Trump won in 2016.

    Can people start pushing back against this self-serving center-lib romantic bs already? Because simply put, if it all got ruined because of Trump then come right out and say openly that you’re fine with everything that came before because it mostly affected natives, blacks and foreigners but not US WASPs. Or I don’t know, relax because saying it’s always been a total hellhole doesn’t mean you support Trump.

    Maybe I should mention that this is Cold War era propaganda about the USA representing “freedom and democracy” (the “free world” bs) vs totalitarianism etc., from back when segregation and putting people in jail for sodomy were supposedly freedom-loving acts?




  • Your claim was this, “supported” by some corporate unpublished preprint (which is really funny considering you have the nerve to ask for citations):

    It can’t. It just fucking can’t. We’re all pretending it does, but it fundamentally can’t.

    You don’t need a citation for LLMs being able to “reason for code”; doubting AI coding abilities is delusional online yapping considering how well documented it is now that it’s deployed all over the place. So how about you prove that writing code, with things like control flow, conditionals etc., can be done without reasoning? Try doing that instead of spamming incoherent replies.

    Nobody cares if you’re a professional vibe coder all of a sudden; if you can’t code without Copilot, maybe you shouldn’t have an opinion based on Apple’s “research”.

    But until then, are Palantir’s AIs fundamentally incapable of reasoning? Yes or no? None of you anti-AI warriors are clear on this: should we not worry about corporate AI surveillance because apparently AI isn’t really “I”, or not? Simple question, but maybe ask Copilot for help. You seem to bug out when it comes to corporate propaganda contradictions, it’s really interesting.


  • I never said it’s going to replace teachers or that it “stores context”, but your sloppily googled preprints supporting your “fundamentally can’t reason” statement were demonstrably garbage. You didn’t say “show me it’s close” even once, though you think you said it several times. Either your reading comprehension is worse than an LLM’s and you wildly confabulate, which means an LLM could replace you, or you’re a bot. Anyway, so far you’ve proved nothing, and you already said they can write code; that’s a non-trivial cognitive task you can’t perform without several higher-order abilities, so cope and seethe I guess.

    So, what about Palantir AI? Is that also “not close”? Why are you avoiding surveillance AI? They’re both neural networks. Some are LLMs.


  • You’re less coherent than a broken LLM lol. You made the claim that transformer-based AIs are fundamentally incapable of reasoning, or something vague like that, using gimmicky af “I tricked the chatbot into getting confused therefore it can’t think” unpublished preprints (while asking for peer review). Why would I need to prove anything? LLMs can write code; that’s an undeniable demonstration that they handle abstract logic fairly well, something that can’t be faked with probability alone, and it would be a complete waste of time to explain it to anyone who is either dealing with cognitive dissonance or, less often, intentionally trying to spread misinformation.

    Are the AIs developed by Palantir “fundamentally incapable” of their demonstrated effectiveness or not? It’s a pretty valid question when we’re already surveilled by them but some people like you indirectly suggest that this can’t be happening. Should people not care about predictive policing?

    How about the industrial control AIs that you “critics” never mention, do power grid controllers fake it? You may need to tell Siemens; they’re not aware their deployed systems supposedly don’t work. And while on that, we shouldn’t be concerned about monopolies controlling public infrastructure with closed source AI models because those models are “fundamentally incapable” of operating?

    I don’t know, maybe this “AI skepticism” thing is lowkey intentional industry misdirection and most of you fell for it?


  • Another unpublished preprint that hasn’t been through peer review? Funny how that somehow doesn’t matter when something seemingly supports your talking points. Too bad it doesn’t exactly mean what you want it to mean.

    “Logical operations and definitions” = Booleans and propositional logic formalisms. You don’t do that either, because humans don’t think like that, but I’m not surprised you’d avoid mentioning the context and go for the kinda over-the-top, easy-to-misunderstand conclusion.
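
    For illustration (my example, not the preprint’s), the kind of formalism meant here is something like:

    ```latex
    % modus ponens written as a propositional logic formalism
    ((P \rightarrow Q) \land P) \vdash Q
    ```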

    It’s really interesting how you get people constantly doubling down on chatbots specifically being useless, citing random things from google, while somehow Palantir gets great use out of its AIs for mass surveillance and policing. What’s the talking point there, that they’re too dumb to operate and nobody should worry?


  • You made huge claims using a non-peer-reviewed preprint with garbage statistics and abysmal experimental design, where they put together 21 bikes and 4 race cars to bury OpenAI’s flagship models under the group trend and went to the press with it. I’m not going to go over all the flaws, but all the performance drops happen when they spam the model with the same prompt several times and then suddenly add or remove information, while using greedy decoding, which causes artificial averaging artifacts. It’s context poisoning with extra steps, i.e. not logic testing but prompt hacking.
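
    For anyone unsure what greedy decoding means here, a toy sketch (my illustration, not anything from the paper’s code): greedy always takes the single highest-probability token, so repeating the same prompt gives the same output every time and a brittle failure shows up in every run instead of being averaged out by sampling.

    ```python
    import torch

    logits = torch.tensor([2.3, 2.2, 0.1])  # toy next-token scores
    probs = torch.softmax(logits, dim=0)

    greedy_token = torch.argmax(logits).item()                      # always index 0
    sampled_token = torch.multinomial(probs, num_samples=1).item()  # varies between runs
    print(greedy_token, sampled_token)
    ```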

    This is Apple (which is falling behind in its AI research) attacking a competitor with fake FUD, and it doesn’t even count as research, which you’d know if you had looked it up and seen, you know, the opinions of peers.

    You’re just protecting an entrenched belief based on corporate slop, so what would you even do with peer-reviewed anything? You didn’t bother to check the one you posted yourself.

    Or you post corporate slop on purpose and are now trying to turn the conversation away from that. Usually the case when someone conveniently bypasses absolutely all of your arguments lol.


  • And here’s experimental verification that humans lack formal reasoning when sentences don’t precisely spell things out for them: all the models they tested except the chatGPT4 and o1 variants are 27B parameters or smaller, all the way down to Phi-3, which is an SLM, a small language model with only 3.8B parameters. ChatGPT4 has 1.8T parameters.

    1.8 trillion > 3.8 billion

    ChatGPT4’s performance difference (accuracy drop) relative to the regular benchmarks was a whopping -0.3, versus a -9.2 drop for Mistral 7B.

    Yes there were massive differences. No, they didn’t show significance because they barely did any real stats. The models I suggested you try for yourself are not included in the test and the ones they did use are known to have significant limitations. Intellectual honesty would require reading the actual “study” though instead of doubling down.

    Maybe consider the possibility that:

    a. STEMlords in general may know how to run benchmarks but not how to do cognitive-testing-style testing or use the statistical methods from that field
    b. this study is an example of the “I’m just messing around trying to confuse LLMs with sneaky prompts instead of doing real research because I need a publication without work” type of study, equivalent to students making chatGPT do their homework
    c. 3.8B models are between 1.8 and 2.2 gigabytes in size on disk (see the arithmetic sketch below)
    d. not that “peer review” is required for criticism lol, but uh, that’s a preprint on arxiv; the “study” itself hasn’t been peer reviewed or properly published anywhere (how many months are there between October 2024 and May 2025?)
    e. showing some qualitative difference between quantitatively different things without reporting p-values or using weights is garbage statistics
    f. you can try the experiment yourself, because the models I suggested have visible Chain of Thought and you’ll see if and over what they get confused
    g. when there are graded performance differences, with several models reliably not getting confused at least more than half the time, saying they “fundamentally can’t reason” may mean you’re fundamentally misunderstanding what the word means
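
    For point c, a back-of-the-envelope sketch of the size arithmetic (my illustration; the 1.8-2.2 GB range matches a 4-bit quantized download, not the full-precision weights):

    ```python
    # size ≈ parameter_count × bytes_per_parameter
    params = 3.8e9  # Phi-3-mini, 3.8B parameters
    for precision, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
        print(f"{precision}: ~{params * bytes_per_param / 1e9:.1f} GB")
    # fp16: ~7.6 GB, int8: ~3.8 GB, int4: ~1.9 GB
    ```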

    Need more clarifications instead of reading the study or performing basic fun experiments? At least be intellectually curious or something.


  • The faulty logic was supported by a previous study from 2019

    This directly applies to the human journalist. Studies on other models from 6 years ago are pretty much irrelevant, and this one apparently tested very small distilled models that you can run on consumer hardware at home (Llama3 8B lol).

    Anyway, this study seems like trash if their conclusion is that small and fine-tuned models (fine-tuning for user compliance includes not suspecting intentionally wrong prompts) failing to account for human misdirection somehow means “no evidence of formal reasoning”. That means formal logic and formal operations, not reasoning in general; we use informal reasoning for the vast majority of what we do daily, and we also rely on “sophisticated pattern matching” lmao, it’s called cognitive heuristics. Kahneman won a Nobel prize for his work on heuristics, the stuff behind type 1 and type 2 thinking in humans.

    Why don’t you go repeat the experiment yourself on huggingface (accounts are free, there are over ten models to test, and many are actually the same ones the study used) and see what actually happens? Try it with reasoning models like R1 or QwQ that show their chain of thought, something like the sketch below, and just see for yourself and report back. It would be intellectually honest to verify things since we’re talking about critical thinking here.
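
    A minimal sketch of that experiment with the Hugging Face transformers pipeline (the model name is just a placeholder, and the kiwi problem is paraphrased from the preprint’s GSM-NoOp example):

    ```python
    from transformers import pipeline

    # any open instruct model works; swap in a reasoning model to see its chain of thought
    pipe = pipeline("text-generation", model="Qwen/Qwen2.5-7B-Instruct")

    base = ("Oliver picks 44 kiwis on Friday and 58 kiwis on Saturday. On Sunday he picks "
            "double the number he picked on Friday. How many kiwis does Oliver have?")
    # GSM-NoOp-style perturbation: an irrelevant clause that changes nothing numerically
    noop = base.replace("How many", "Five of the Sunday kiwis were a bit smaller than "
                        "average. How many")

    for prompt in (base, noop):
        out = pipe(prompt, max_new_tokens=256, do_sample=False)
        print(out[0]["generated_text"], "\n---")
    # if the model subtracts the five "smaller" kiwis, that's the failure mode the paper
    # reports; if it ignores the irrelevant clause, it handled the perturbation fine
    ```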

    Oh, and add a control group here, a comparison with average human performance, to see the really funny but hidden part. Pro-tip: CS STEMlords catastrophically suck at larping as cognitive scientists.


  • And besides this, it’s not like there’s no labour aristocracy that primarily gains from this while other working class groups get much less and get ideologically gaslit about not being members of some union that is potentially either fully corrupt or workerist, with zero radical ultimate aims.

    Even the global North(west) contains highly exploited groups with only a minority getting the benefits.



  • via mechanisms including scraping, APIs, and bulk downloads.

    Omg exactly! Thanks. Yet nothing about having to use logins to stop bots, because that kinda isn’t a thing when you already provide data dumps and an API for Wikimedia Commons.

    While undergoing a migration of our systems, we noticed that only a fraction of the expensive traffic hitting our core datacenters was behaving how web browsers would usually do, interpreting javascript code. When we took a closer look, we found out that at least 65% of this resource-consuming traffic we get for the website is coming from bots, a disproportionate amount given the overall pageviews from bots are about 35% of the total.

    Source for this traffic being scraping for training data: they don’t run javascript, therefore bots, therefore AI crawlers, just trust me bro.


  • Kay, and that has nothing to do with what I said. Scrapers, bots =/= AI. It’s not even the same companies that make the unfree datasets. The scrapers and bots that hit your website are not some random “AI” feeding on data lol. This is what some models are actually trained on; it’s already free, so it doesn’t need to be individually rescraped, and it’s mostly garbage quality data: https://commoncrawl.org/ Nobody wastes resources rescraping this whole SEO-infested dump.

    Your issue has more to do with SEO than anything else. Btw, before you diss Common Crawl, it’s used in research quite a lot, so it’s not some evil thing that threatens people’s websites. Add a robots.txt maybe, something like the sketch below.
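
    If someone really wants the crawlers out, a minimal robots.txt along these lines (these are the publicly documented user-agent strings for the big crawlers; double-check each one’s docs before relying on it):

    ```
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /
    ```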