Do you have any ideas or thoughts about this?
Brazil must invent Lua 2.
AI is a tech debt generator.
Any programmer who has worked with legacy code knows the situation where something was written by a former employee or a contractor without many comments or much documentation, making it difficult to modify (because of complexity or readability) or to replace (because of non-existent business documentation and/or peculiar bugs and features).
AI accelerates these situations, except now the person doesn’t even exist. Which, IMO, is the main thing that needs to be called out.
Yeah, I’ve been trying to call this out at my company. Junior programmers, especially, don’t seem to know how to turn AI responses into maintainable code.
I find it ironic, since I’ve mostly been on the QA side of dev. I’ve spent decades pointing out the stats showing that code is much more expensive to maintain than it is to write the first time, so now AI puts us in a position of writing something the first time a little faster, but in a way that’s even more expensive to maintain. Does not compute.
Not if you use it correctly. You don’t write code with AI, you get inspiration to get over sticking points. You pick out the relevant bits, make certain you understand how they work, save hours of banging your head.
Not if you use it correctly.
Ah! “Git gud” elitism to paper over the risk.
The issue still stands: the few seniors you still have at the shop who can tell people WHY something is a bad idea are now distracted by juniors submitting absolute shit code for review and needing to be taught why that structure is a bad idea.
“Well everyone else is doing it” was a bad rebuttal when you wanted to go to Chuck’s party and Mom said no. Laundering “this is what everyone else writes” through an AI concentrator, when two generations of coders are self-taught and unmentored after the great post-Y2K purge of mentors and writers, isn’t a better situation.
AI for the win in figuring out how to use code libraries with minimal to non-existent documentation scattered across the entire web.
Ah yes, “just use it correctly”. All these programmers convinced that they are one of the chosen few that “get it” and can somehow magically make it not a damaging, colossal waste of time.
“Inspiration”, yeah, in the same way we can draw “inspiration” from a monkey throwing shit at a wall.
Not in IT, huh? Because you missed my entire point. This isn’t like making a lame email that screams fake.
I got stuck on a Google Calendar/Sheets integration. Almost no documentation or examples out there. After banging my head for hours it occurred to me to try this new AI thing.
ChatGPT spit out some code. It didn’t work, of course, but I saw a new path I hadn’t considered and one I never knew existed! Picked out the bits I needed and got the script stood up within an hour, after wasting hours trying to do it from scratch.
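For anyone hitting the same wall, the glue ends up being roughly this shape. A minimal sketch in Python using google-api-python-client, assuming you already have authorized OAuth credentials; the spreadsheet ID and sheet name are placeholders, and this is not the actual script from that project:

```python
from datetime import datetime, timedelta, timezone
from googleapiclient.discovery import build

def copy_week_of_events_to_sheet(creds, spreadsheet_id, sheet_name="Events"):
    # `creds` is assumed to be an already-authorized google.oauth2 Credentials
    # object with Calendar read and Sheets write scopes.
    calendar = build("calendar", "v3", credentials=creds)
    sheets = build("sheets", "v4", credentials=creds)

    now = datetime.now(timezone.utc)
    events = calendar.events().list(
        calendarId="primary",
        timeMin=now.isoformat(),
        timeMax=(now + timedelta(days=7)).isoformat(),
        singleEvents=True,
        orderBy="startTime",
    ).execute().get("items", [])

    # One row per event: start time and title.
    rows = [
        [e["start"].get("dateTime", e["start"].get("date")), e.get("summary", "")]
        for e in events
    ]

    sheets.spreadsheets().values().append(
        spreadsheetId=spreadsheet_id,
        range=f"{sheet_name}!A1",
        valueInputOption="RAW",
        body={"values": rows},
    ).execute()
```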
People like you were criticizing the use of fire back in the day. “Oog burned hut new fire thing!” “Oog antelope shit head, no use fire good.” “Fire bad FIRE BAD!”
Cute. I’m a senior software engineer that has trained many different models (NLP, image classification, computer vision, LIDAR analysis) before this stupid fucking LLM craze. I know precisely how they work (or rather, I know how much people don’t know how they work, because of the black box approach to training). From the outset, I knew people believed it was much more capable than it actually is, because it was incredibly obvious as someone who’s actually built the damn things before (albeit with much less data/power).
Every developer I see who loves LLMs is pretty fucking clueless about them and thinks of them as some magical device that has actual intelligence (just like everybody does, I guess, but I expect better of developers). It has no semantic understanding whatsoever. It’s stochastic generation of sequences of tokens to loosely resemble natural language. It’s old technology recently revitalized because large corporations plundered humanity in order to brute-force their way into models with astronomically high numbers of parameters, so they are now “pretty good” at resembling natural language, compared to before. But that’s all it fucking is. Imitation. No understanding, no knowledge, no insight. So calling it “inspiration” is a fucking joke, and treating it as anything other than a destructive amusement (due to the mass ecological and sociological catastrophe it is) is sheer stupidity.
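To make the “stochastic generation of sequences of tokens” point concrete, here’s a toy sketch (made-up scores and a five-word vocabulary, nothing remotely like a production model): the only thing the model ever emits is a probability distribution over the next token, and “generation” is just repeated sampling from it.

```python
import math
import random

def sample_next(logits):
    # Softmax over the model's raw scores, then sample: this is the entire
    # "decision" made at each step of generation.
    exps = [math.exp(x) for x in logits]
    probs = [e / sum(exps) for e in exps]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

vocab = ["the", "cat", "sat", "on", "mat"]
fake_logits = [2.0, 0.5, 1.0, 0.1, 0.3]  # made-up scores standing in for a model
print(vocab[sample_next(fake_logits)])   # run it twice, get different output
```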
I’m pissed off about it for many reasons, but especially because my peers at work are consistently wasting my fucking time with LLM slop and it’s fucking exhausting to deal with. I have to guard against way more garbage now to make sure our codebase doesn’t turn into utter shit. The other day, an engineer submitted an MR for me to review that contained dozens of completely useless/redundant LLM-generated tests that would have increased our CI time a shitload and bloated our codebase for no fucking reason. And all of it is for trivial, dumb shit that’s not hard to figure out or do at all. I’m so fucking sick of all of it. No one cares about their craft anymore. No one cares about being a good fucking engineer and reading the goddamn documentation and just figuring shit out on their own, with their own fucking brain.
By the way, no actual evidence exists of this supposed productivity boost people claim, whereas we have a number of studies demonstrating the problems with LLMs, like MIT’s study on its effects on human cognition, or this study from the ACM showing how LLMs are a force multiplier for misinformation and deception. In fact, not only do we not have any real evidence that it boosts productivity, we have evidence of the opposite: this recent METR study found that AI usage increased completion time by 19% for experienced engineers working on large, mature, open-source codebases.
Guess I’ll just pull the Terry A. Davis here and say it’s God.
I am in IT. CTO, yet also still doing development.
Anyone who delivered a pure AI project to me, I would reject immediately and have them first go look at what the hell it actually is.
That is the biggest issue with AI: people only use it for ready-to-go solutions. Nobody checks what comes out of it.
I use AI in my IDE exactly like you mentioned; it gives me a wrong answer (because of course) and even though the answer is wrong, it might give me a new idea. That’s fine.
The problem is the ready-to-go idiots who will just blindly trust AI, i.e. 90% of the humans in this world.
Unionize
next question
- just don’t use it
- become extremely knowledgeable on how it works, so that you can have coherent and pointed arguments with people on the pitfalls that the systems have. You don’t need to use it to understand the technical foundations.
- understand the infrastructural shortfalls - that is, investment cycle/ROI impossibility given the generational cycles that have become apparent in ML infra, the power requirements and impact (both direct and second order), as well as the broader ecological impact (beyond just power, but also exacerbated by the makeup of most energy grids outside of China)
- understand the copyright and licensing implications, and the hypocrisy, of the vast majority of LLM and generative platforms making copyrighted and licensed material an implicit yet pervasive part of their training sets, and how it is currently understood to be algorithmically impossible to excise one particular element from a training set post-training (thus implicitly violating the GDPR’s “right to be forgotten”)
- check out the studies that indicate over-reliance on generative platforms can meaningfully degrade cognitive aptitude and reasoning ability.
There’s a lot more you can dig into, and that is by no means an exhaustive list. The more you learn about the nuance of how this shit works, the more you’ll be able to poke huge fucking holes in pretty much any argument anyone makes.
Moving away from GitHub to other git hosting sites.
Abandoning forges would make it harder for humans while bots could still download any publicly available repo.
E: Looks like I misread “to” as “and.”
No. You archive your GH code with the README.md saying all the new stuff is at GitLab, Codeberg, Bitbucket, etc., and a link to it.
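Something as short as this at the top of the archived repo’s README does the job (hypothetical project name and URL):

```markdown
# my-project (archived)

This repository is no longer maintained on GitHub.
Development continues at https://codeberg.org/example/my-project.
```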
I still look for answers on Stack Overflow, instead of waiting for an AI summary of the same answer.
I mean, agentic AIs are getting good at outputting working code. Thousands of lines per minute; talking trash about it won’t work.
However, I agree that losing the human element of writing code means losing a very important part of programming. So I believe there should be a strong resistance against this. Don’t feel pressured to answer if you think your plans shouldn’t be revealed, but it would be nice to know if someone is preparing a great resistance out there.
This is honestly a lot of the problem: code generation tools can output thousands of lines of code per minute. Great, committable, defendable code.
There is basically no circumstance in which a project’s codebase growing at a rate of thousands of lines per minute is a good thing. Code is a necessary evil of programming: you can’t always avoid having it, but you should sure as hell try, because every line of code is capable of being wrong and will need to be read and understood later. Probably repeatedly.
Taking the approach to solving a problem that involves writing a lot of code, rather than putting in the time to find the setup that lets you express your solution in a little code, or reworking the design so code isn’t needed there at all, is a mistake. It relinquishes the leverage that is the very point of software engineering.
A tool that reduces the effort needed to write large amounts of human-facing, gets-committed-to-the-source-tree code, so that it’s much easier and faster than finding the actual right way to parse your problem, is a tool that makes your project worse and that makes you a worse programmer when you hold it.
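As a trivial, hypothetical illustration of what “a little code” buys you (not taken from any real codebase): both versions below do the same thing, but one is a fraction of the surface area someone has to read, review, and keep correct later.

```python
# Verbose version, the shape an autocomplete tool happily churns out:
def active_emails_verbose(users):
    result = []
    for user in users:
        if user.get("active") is True:
            email = user.get("email")
            if email is not None:
                result.append(email.lower())
    return result

# Same behaviour, far less to read and maintain:
def active_emails(users):
    return [
        u["email"].lower()
        for u in users
        if u.get("active") is True and u.get("email") is not None
    ]
```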
Maybe eventually someone will create a thinking machine that itself understands this, but it probably won’t be someone who charges by the token.
It’s just a greater level of abstraction. First we talked to the computers on their own terms with punch cards.
Then assembly came along to simplify the process, allowing humans to write readable mnemonics that get assembled into machine code so the computers can run it.
Then we used higher-level languages like C to generate the assembly code required.
Then we created languages like Python, which are even more human-readable and do a lot more of the heavy lifting than C.
I understand the concern, but it’s just the latest step in a process that has been playing out since programming became a thing. At every step we give up some control, for the benefit of making our jobs easier.
I disagree. Even high-level languages will consistently produce the same results. There may be low-level differences depending on the compiler and the system’s architecture, but if those are consistent, you will get the same results.
AI coding isn’t an extremely human-readable, higher-level programming language. Using an LLM to generate code adds a literal black box, plus the interpretation of the user’s and the LLM’s human language (which humans can’t even do consistently), to the equation.
That’s fair, but I’m not arguing that it’s a higher-level language. I was trying to illustrate that it’s just another step to help people code more easily, as all of the other steps were.
If you asked ten programmers to turn a given set of instructions into code, you’d end up with ten different blocks of code. That’s the nature of turning English into code.
The difference is that this is a tool that does it, not a person. You write things in English, it produces code.
FWIW, I enjoy using a hex-editor to tinker around with Super Famicom ROMs in my free time - I’m certainly not anti-coding. As OP said, though, AI is now pretty good at generating working code - it’s daft not to use it as a tool.
Most programmers are embracing AI, as it’s the use case where it acts as the biggest force multiplier.
Shhhh don’t tell them. We’re trying to leave these guys in the dust.
They will adapt or die. If they haven’t adapted already, telling them isn’t gonna change their minds.