AI is the future. Sure, you can hate on it all you like. Can’t stop progress.
That’s like saying that colonies on Mars are the future. Maybe they’re the direction things are going (assuming we don’t global-warm ourselves to death first), but we’re not there yet. AI has yet to prove itself.
This really depends on what you consider “progress”. Some forms of AI are neat pieces of tech, there’s no denying that. However, all I’ve really seen them do in an industrial sense is shrink workforces to save a buck via automation, and produce a noticeably worse product.
That quality is sure to improve, but what won’t change is the fact that real humans with skill and talent are out of a job because of a fancy piece of software. I personally don’t think of that as progress, but that’s just me.
Typographers saw the same thing with personal computing in the latter half of the 90s. Almost overnight, everyone started printing their own documentation, and Comic Sans became their canary in the coal mine. It was progress, but progress is rarely good for everyone. There’s always a give and a take.
As another user said, typographers still exist. And, until now, computers weren’t really a threat to their job security; they were just a new set of tools they had to adapt to. But, if I were running a business and had little regard for ethics, why would I hire a typographer when I could just ask an AI to generate a new font for my billboard, and have it done in 30 seconds for free?
I get the argument that AI is a tool that lowers the barrier of entry to certain fields, which is absolutely true. If I wanted to be a graphic designer today, I could do it with AI. But, when I went to sell my logo to the small company down the street, I’d have to come to terms with the fact that the owner of that business also happened to become a graphic designer that very morning, and all of a sudden my career is over before it started.
Except typographers still exist; we need them to create fonts that aren’t Comic Sans.
If someone said this in 1970 it would be just as true as you saying it today. Would you have used generative AI tools for video game development back then?
💯%. No doubt; advancements don’t stop because people are upset about them.
I meant more like, AI is the future but it may be of limited use right now.
Heh. Out of curiosity, how many NFTs did you buy?
Zero. I took a deep dive into NFTs and determined they were problematic.
All I ask is: in what way are LLMs progress? The ability to generate a lot of slop is pretty much the only thing LLMs are good for. Even that is not really cheap, especially factoring in the environmental costs.
How much do you know about transformers?
Have you ever programmed an interpreter for interactive fiction / MUDs, before all this AI crap? It’s a great example of what even super tiny models can accomplish. NLP interfaces are a useful thing for people.
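For anyone who never wrote one, here’s a minimal sketch of the kind of verb-noun parser classic IF engines shipped with, long before transformers; the vocabulary and world below are made up:

```python
# Classic IF-style verb-noun parsing: no model at all, just a tiny vocabulary.
# VERBS maps synonyms to canonical actions; NOUNS is the hypothetical world.
VERBS = {"take": "take", "get": "take", "grab": "take",
         "look": "look", "examine": "look", "go": "go"}
NOUNS = {"lamp", "sword", "north", "door"}

def parse(command: str):
    """Reduce free-form input to a (verb, noun) pair, or None if no verb is found."""
    words = [w for w in command.lower().split() if w not in ("the", "a", "at")]
    verb = next((VERBS[w] for w in words if w in VERBS), None)
    noun = next((w for w in words if w in NOUNS), None)
    return (verb, noun) if verb else None

print(parse("take the lamp"))   # ('take', 'lamp')
print(parse("examine sword"))   # ('look', 'sword')
```

Even a small trained model only has to improve on this baseline to be useful as an interface.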
Also consider that Firefox or Electron apps require more RAM and CPU, and waste more energy, than small language models do. A Gemma SLM can translate things into English using less energy than it takes to open a modern browser. And I know that because I’m literally watching the resources get used.
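Roughly how you can watch that yourself, assuming llama-cpp-python and a quantized Gemma GGUF on disk; the model filename here is hypothetical:

```python
# Rough sketch: compare the memory a small local model actually adds against
# what a browser uses. Assumes llama-cpp-python; the GGUF path is made up.
import psutil
from llama_cpp import Llama

proc = psutil.Process()
before = proc.memory_info().rss

llm = Llama(model_path="gemma-2-2b-it-q4.gguf", n_ctx=512, verbose=False)
out = llm("Translate to English: 'Guten Morgen, wie geht es dir?'",
          max_tokens=64)

after = proc.memory_info().rss
print(out["choices"][0]["text"].strip())
print(f"Model added roughly {(after - before) / 2**20:.0f} MiB of RSS")
```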
I am not implying that transformer-based models have to be huge to be useful. I am only talking about LLMs. I am questioning the purported goal of LLMs, i.e., to replace all humans in as many creative fields as possible, in the context of its cost, both environmental and social.
LLMs are actually spectacular at indexing large amounts of text data and pulling out the answer to a query. Combine that with natural language processing and it is literally what we all thought Ask Jeeves was back in the day. If you ever spent time sifting through Stack Overflow pages or parsing discussion threads, that is what it is good at. And many models actually provide ways to get a readout of the “thought process” and links to pages that support the answer, which drastically reduces the impact of hallucinations.
And many of those don’t necessarily require significant power usage… relative to what is already running in data centers.
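To make the “index it and pull out a cited answer” pattern concrete, here is a toy sketch. Real systems use embeddings and an LLM for the final answer; plain word overlap stands in for retrieval here, and the documents and URLs are made up:

```python
# Toy retrieval-with-citation: score documents by word overlap with the query,
# return the best passage plus the source link a reader can verify.
import string

DOCS = {
    "https://example.com/threads/1": "To reset the router, hold the button for ten seconds.",
    "https://example.com/threads/2": "The API returns 429 when you exceed the rate limit.",
}

def tokens(text: str) -> set:
    """Lowercase, strip punctuation, split on whitespace."""
    return set(text.lower().translate(str.maketrans("", "", string.punctuation)).split())

def retrieve(query: str):
    """Return the (url, passage) pair sharing the most words with the query."""
    q = tokens(query)
    return max(DOCS.items(), key=lambda kv: len(q & tokens(kv[1])))

url, passage = retrieve("why does the API return 429?")
print(f"Answer (draft): {passage}")
print(f"Source: {url}")  # the link that lets you check for hallucination
```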
The problem is that people use it, decide it is “like magic”, and then insist on using it for EVERYTHING. And you go from “Write me a simple function to interface with this specific API” to “Write me an application to do my taxes and then file them for me”.
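For scale, the first kind of request really is small. A hypothetical version of “interface with this specific API”, endpoint and fields invented for illustration:

```python
# The grunt-work tier: a thin wrapper around one hypothetical REST endpoint.
import requests

def get_user_email(user_id: int, base_url: str = "https://api.example.com") -> str:
    """Fetch a user record and return its email field."""
    resp = requests.get(f"{base_url}/users/{user_id}", timeout=10)
    resp.raise_for_status()  # surface HTTP errors instead of silently continuing
    return resp.json()["email"]
```

A tax-filing application is thousands of decisions like this one, plus legal liability; that is the gap people skip over.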
Of course, there is also the issue of where training data comes from. That is why so much of the “generative AI” stuff is so disgusting: it is just stealing copyrighted data left and right, rather than behaving like the search-engine-style LLMs that mostly just ignore the proverbial README_FBI.txt file.
And the “this is magic” attitude is on both sides. The evangelists are demonstrably morons. But the rabid anti-AI/“AI” crowd are just as bad with “it gave you a wrong answer, it is worthless”. Think of it less like a magic box and more like asking a question on a message board: you are gonna get a LOT of FUD, and it is on you to do additional searches to corroborate when it actually matters.
Like a lot of things AI/“AI”, they are REALLY good at replacing intern/junior level employees (and all the consequences of that…) and are a way to speed through grunt work. And, much like farming a task out to that junior level employee, you need to actually supervise it and check the results. Whether that is making sure it actually does what you want it to do or making sure they didn’t steal copyrighted work.
Sure, everything has meager beginnings. The AI you’re upset about existing may find the cure to many diseases. It may save the planet one day.
The kinds of AI that researchers are building to try to cure diseases are not LLMs, so they’re not the stuff that is running behind this kind of tech for games.
Or it may be a silly, half-witted race to build out the infrastructure (because they’re smoking their own product) that could crash the economy.
You’re only seeing the upsides - making nifty pictures, AI music, whatever - because the entire shitshow is a free or exceptionally underpriced preview of what’s to come. Meanwhile, everyone from Google to Grok to your mom is failing to find a way to actually profit off of it all; once they have to figure the costs of the water, power, training data, lawsuits, and other shit into the actual equation, it blows up.
These aren’t my ideas - please, take a break from your preconceptions and read:
https://futurism.com/data-centers-financial-bubble
https://www.zdnet.com/article/todays-ai-ecosystem-is-unsustainable-for-most-everyone-but-nvidia-warns-top-scholar/
https://www.dailykos.com/stories/2025/8/22/2339789/-Why-The-AI-Bubble-Will-Burst
https://www.wheresyoured.at/the-haters-gui/
Where is the idea that LLMs will ever cure diseases coming from? What is the possible mechanism? LLMs generate text from probability distributions. There is no reason to trust their output, because they have no built-in concept of true or false. When one cannot judge the quality of the output, how can one reliably use it as a tool for any purpose, let alone scientific research?
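To be concrete about the mechanism: generation is weighted sampling over next tokens. A toy illustration with made-up probabilities; nothing in the procedure checks whether the sampled continuation is true:

```python
# Sampling from a next-token distribution. The probabilities are invented;
# the point is that truth never enters the loop, only likelihood.
import random

next_token_probs = {"Paris": 0.62, "Lyon": 0.20, "Berlin": 0.18}  # hypothetical

def sample(dist: dict) -> str:
    """Draw one token in proportion to its probability."""
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The capital of France is"
print(prompt, sample(next_token_probs))  # usually 'Paris', sometimes not
```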
There are other kinds of AI that can look over imaging like CAT scans and, in some situations, catch things a doctor can’t.
There are also ones that can simulate drug interactions with the body and can be used to model novel drugs for treatments.
These are not LLMs though.
Ya, you can: stop using it, and don’t go back. No use means no VC money and no customers. Business, baby.
I can guarantee you that there will not be a point in time at which everybody on the planet just decides to stop using AI out of the goodness of their hearts.
It could be stopped, just like climate change, but apparently we won’t, and we’ll kill humanity instead.
We as humans can take steps to lessen our impact on the planet, but we cannot stop climate change. The planet, by design, will always change climates. It has changed without human influence and it will continue to change after we are gone.
Don’t be pedantic. Anyone with half a brain knows that when someone brings up “climate change” they’re referring to “human-made climate change” — and it’s completely uncontroversial that the changes we’ve made since the industrial revolution have greatly outweighed the changes of the Earth’s natural climate cycles.
Yep, that’s absolutely not what people are talking about when they say ‘climate change’ in this context; they mean anthropogenic climate change, and you know it. Your bad-faith response shows you have no interest in an honest discussion.