Interesting what he wrote about LLMs’ inability to “zoom out” and see the whole picture. I use Gemini and ChatGPT sometimes to help debug admin / DevOps problems. It’s a great help for extra input, a bit like rubberducking on steroids.
Examples of how it went:
Problem: an Apache cluster and a connected Keycloak cluster, with odd problems in the login flow. Reducing Keycloak to one node solves it, so the LLM says we need to debug node communication and how to set the debug log settings. A lot of analysis together. But after a while it’s pretty obvious that the Apache cluster doesn’t apply sticky sessions correctly and forwards requests to the wrong Keycloak node in the middle of the login flow. The LLM does not see that; it wanted to keep digging deeper and deeper into supposedly “odd” details of the communication between the Keycloak nodes, although the combined logs of all nodes showed that the error was in the load balancing.
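In case it helps anyone, correct stickiness for this kind of setup looks roughly like the sketch below (hostnames and route names are invented; this is a minimal mod_proxy_balancer example, not our production config). Keycloak’s AUTH_SESSION_ID cookie ends in the route of the node that started the flow, so the balancer can pin subsequent requests to that node:

```
# Hypothetical sketch: pin each login flow to one Keycloak node.
# The AUTH_SESSION_ID cookie value ends in ".<route>", which must
# match the route= of the BalancerMember that issued it.
<Proxy "balancer://keycloak">
    BalancerMember "http://kc-node1.internal:8080" route=node1
    BalancerMember "http://kc-node2.internal:8080" route=node2
    ProxySet stickysession=AUTH_SESSION_ID
</Proxy>
ProxyPass        "/auth/" "balancer://keycloak/auth/"
ProxyPassReverse "/auth/" "balancer://keycloak/auth/"
```

If a member is missing its route=, or the stickysession cookie name is wrong, requests hop between nodes mid-flow exactly like we saw.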
Problem: Apache from a different cluster often returns 413 (payload too large). Indeed it happens with pretty large requests; the limit where it triggers is a bit over 8kB without the body. But the incoming request is valid. So I ask both Gemini and ChatGPT for a complete list of things that cause Apache to do that. They do a decent job at that. And one item is close: check for mod_proxy_ajp, since the observed limit could come from building an AJP packet to communicate with backend servers. That was not the cause; the actual module was mod_jk, which also uses AJP. It helped me focus on watching out for anything using AJP when reviewing the whole config manually, so I found it, and the “rubberducking” helped indirectly. But the LLM said we must forget about AJP and focus on other possible causes - a dead end. When I told it the solution, it was like: of course, mod_jk. (A 413 sounds like the request TO Apache is wrong, but actually Apache internally tries to create an invalid AJP packet over 8kB, and when that fails it blames the incoming request.)
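For anyone hitting the same wall: the AJP packet default is 8kB, and as far as I know it can be raised on both sides. A sketch with an invented worker name:

```
# workers.properties (hypothetical worker name): raise the AJP
# packet size so request headers over the 8kB default still fit.
worker.list=tomcat1
worker.tomcat1.type=ajp13
worker.tomcat1.host=backend.internal
worker.tomcat1.port=8009
worker.tomcat1.max_packet_size=65536
```

The backend has to agree, e.g. Tomcat’s AJP Connector takes a matching packetSize="65536", otherwise the bigger packets get rejected there instead.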
The LLM worship has to stop.
It’s like saying a hammer can build a house. No, it can’t.
It’s useful to pound in nails and automate a lot of repetitive and boring tasks but it’s not going to build the house for you - architect it, plan it, validate it.
It’s similar to the whole 3D printing hype. You can 3D print a house! No you can’t.
You can 3D print a wall, maybe a window.
Then have a skilled craftsman put it all together for you, ensure fit and finish, and essentially build the final product.
You’re making a great analogy with the 3D printing of a house.
However, if we consider the 3D-printed house scenario: that skilled craftsman is now able to do things on his own that he would have needed a team for in the past. Most, if not all, of the less skilled members of that team are not getting any experience within the craft at that point. They’re no longer necessary when one skilled person can now do things on their own.
What happens when the skilled and highly experienced craftsmen that use AI as a supplemental tool (and subsequently earn all the work) eventually retire, and there’s been no juniors or mid-levels for a while? No one is really going to be qualified without having had exposure to the trade for several years.
Absolutely. This is a huge problem and I’ve read about this very problem from a number of sources. This will have a huge impact on engineering and information work.
Interestingly enough, a similar shortage occurred in the trades when information work was up and coming and the trades were shunned as a career path for many. Now we don’t have enough plumbers and electricians. Tradespeople are now finding their skills in high demand and charging very high rates.
The trades problem is a typical small business problem with toxic work environments. I knew plenty that washed out of the trades because of that. The “nobody wants to work anymore” tradesmen, when really it’s “nobody wants to work with me for what I’m willing to pay.”
I hate the simulated intelligence nonsense at least as much as you, but you should probably know about this if you’re saying you can’t 3d print a house: https://youtu.be/vL2KoMNzGTo
Yeah I’ve seen that before and it’s basically what I’m talking about. Again, that’s not “printing a 3D house” as hype would lead one to believe. It is extruding cement to build the walls around very carefully placed framing, heavily managed and coordinated by people, and finished with plumbing, electrical, etc.
It’s cool that they can bring this huge piece of equipment to extrude cement to form some kind of wall. It’s a neat proof of concept. I personally wouldn’t want to live in a house that looked anything like or was constructed that way. Would you?
I mean, “to 3d print a wall” is a massive, bordering on disingenuous, understatement of what’s happening there. They’re replacing all of the construction work of framing and finishing all of the walls of the house, interior and exterior, plus attaching them and insulating them, with a single step.
My point is if you want to make a good argument against LLMs, your metaphor should not have such an easy argument against it at the ready.
Huh? They just made the walls. Out of cement.
Making the walls of a house is one of the easiest steps, if not the easiest. And these would still need insulation, electrical, etc. And they look like shit.
Spoken like a person who has never been involved in the construction of a home. It’s effectively doing the job of (poorly) pouring concrete, which isn’t the difficult or time-consuming part.
My dude, I worked home renovations for many years. Nice try to discredit me rather than my argument though.
Did you see another video about this? The one linked only showed the walls and still showed them doing interior framing. Nothing about windows, electrical, plumbing, insulation, etc.
What they showed could speed up construction but there are tons of other steps involved.
I do wonder how sturdy it is since it doesn’t look like rebar or anything else is added.
I’m not an expert on it, I’ve only watched a few videos on it, but from what I’ve seen they add structural elements between the layers at certain points which act like rebar.
There’s no framing of the walls, but they do set up scaffolds to support overhangs (because you can’t print onto nothing)
I’m with you on this. We can’t just casually brush aside a machine that can create the frame of a house unattended - just because it can’t also do wiring. It was a bad choice of image to use to attack AI. In fact it’s a perfect metaphor for what AI is actually good for: automating certain parts of the work. Yes you still need an electrician to come in, just like you also need a software engineer to wire up the UI code their LLM generated to the back end, etc.
You circled all the way back to the original point lol. The whole thrust of this conversation is “AI can be used to automate parts of the work, but you still need knowledgeable people to finish it”. Just like “a concrete 3d printer can be used to automate parts of building a house, but you still need knowledgeable people to finish it.”
3d printed concrete houses exist. Why can’t you 3d print a house? Not the best metaphor lol
You don’t like glass windows? Air conditioning? A door?
You can certainly 3D print a building, but can you really 3D print a house? Can it 3D print doors and windows that can open and close and be locked? Can it 3D print the plumbing and wiring and have it be safe and functional? Can it 3D print the foundation? What about bathroom fixtures, kitchen cabinets, and things like carpet?
It’s actually not a bad metaphor. You can use a 3D printer to help with building a house, and to 3D print some of the fixtures and bits and pieces that go into the house. Using a 3D printer would automate a fair amount of the manual labor that goes into building a house today (at least how it is done in the US). But you’re still going to need people who know what they are doing to put it all together and transform the building into a functional home. We’re still a fair ways away from just being able to 3D print a house, just like we’re a fair ways away from having an LLM write a large, complex piece of software.
Exactly this.
LLMs are useful for providing generic examples of how a function works. This is something that would previously take an hour of searching the docs and online forums, but the LLM can do very quickly, and I appreciate that. But I have a library I want to use that was just updated with entirely new syntax. The LLMs are pretty much useless for it. Back to the docs I go! Maybe my terrible code will help to train the model. And in my field (marine biogeochemistry), the LLM generally cannot understand the nuances of what I’m trying to do. Vibe coding is impossible. And I doubt the training set will ever be large or relevant enough for vibe coding to be feasible.
I think it’s going to require a change in how models are built and optimized. Software engineering requires models that can do more than just generate code.
You mean to tell me that language models aren’t intelligent? But that would mean all these people cramming LLMs in places where intelligence is needed are wasting their time?? Who knew?
Me.
I have a solution for that, I just need a small loan of a billion dollars and 5 years. #trustmebro
Only one billion?? What a deal! Where’s my checkbook!?
Clearly LLMs are useful to software engineers.
Citation needed. I don’t use one. If my coworkers do, they’re very quiet about it. More than half the posts I see promoting them, even as “just a tool,” are from people with obvious conflicts of interest. What’s “clear” to me is that the Overton window has been dragged kicking and screaming to the extreme end of the scale by five years of constant press releases masquerading as news and billions of dollars of market speculation.
I’m not going to delegate the easiest part of my job to something that’s undeniably worse at it. I’m not going to pass up opportunities to understand a system better in hopes of getting 30-minute tasks done in 10. And I’m definitely not going to pay for the privilege.
LLMs have made it really clear when previous concepts actually grouped things that were distinct. Not so long ago, Chess was thought to be uniquely human, until it wasn’t, and language was thought to imply intelligence behind it, until it wasn’t.
So let’s separate out some concerns and ask what exactly we mean by engineering. To me, engineering means solving a problem. For someone, for myself, for theory, whatever. Why we want to solve the problem, what we want to do to solve it, and how we do that are often blurred together. Now, AI can supply the how in abundance. Too much abundance, even. So humans should move up the stack, focus on what problem to solve and why we want to solve it. Then, go into detail to describe what that solution looks like. So for example, making a UI in Figma or writing a few sentences on how a user would actually do the thing. Then, hand that off to the AI once you think it’s sufficiently defined.
The author misses a step in the engineering loop that’s important, though. Plans almost always involve hidden assumptions and undefined or underdefined behavior that implementation will uncover. Even more so with AI: you can’t just throw a plan at it and expect good results; the humans need to come back, figure out what was underdefined or not actually what they wanted, and update the plan. People can ‘imagine’ rotating an apple in their head, but most of them will fail utterly if asked to draw it; they’re holding the idea of rotating an apple, not actually rotating the apple, and application forces realization of the difference.
To those who have played around with LLM code generation more than me, how are they at debugging?
I’m thinking of Kernighan’s Law: “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.” If vibe coding reduces the complexity of writing code by 10x, but debugging remains just as difficult as before, then Kernighan’s Law needs to be updated to say debugging is 20x as hard as vibe coding. Vibe coders have no hope of bridging that gap.
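Spelling the arithmetic out (a sketch, taking the hypothetical 10x figure at face value):

```latex
% Kernighan: debugging effort D is twice the writing effort W.
D = 2W
% Suppose vibe coding cuts writing effort tenfold, W_vibe = W/10,
% while debugging effort is unchanged. Then:
D = 2W = 2 \cdot 10\,W_{\mathrm{vibe}} = 20\,W_{\mathrm{vibe}}
```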
They’re not good at debugging. The article is pretty spot on, IMO - they’re great at doing the work, but you are still the brain. You’re still deciding what to do, and maybe 50% of the time how to do it, you’re just not executing the lowest level anymore. Similar for debugging - this is not an exercise at the lowest level, and needs you to run it.
> deciding what to do, and maybe 50% of the time how to do it, you’re just not executing the lowest level anymore
And that’s exactly what I want. And I don’t get why people want more. Having more means you have less and less control or influence on the result. What I want is for other fields to become like programming is now, so that you can micromanage every step and have great control over the result.
I’ve used AI by just pasting code, then asking if there’s anything wrong with it. It would find things wrong with it, but would also say some things were wrong when they were actually fine.
I’ve used it in an agentic-AI setup (Cursor), and it’s not good at debugging any slightly-complex code. It would often get “stuck” on errors that were obvious to me, making wrong, sometimes nonsensical changes.
Definitely not good. Sometimes they can solve issues but you gotta point them in the direction of the issue. Other times they write hacky workarounds that do the job for the moment but crash catastrophically with the next major dependency update.
I saw an LLM override the casting operator in C#. An evangelist would say “genius! what a novel solution!” I said “nobody at this company is going to know what this code is doing 6 months from now.”
It didn’t even solve our problem.
> I saw an LLM override the casting operator in C#. An evangelist would say “genius! what a novel solution!” I said “nobody at this company is going to know what this code is doing 6 months from now.”
Before LLMs, people were often saying this about people smarter than the rest of the group: “Yeah, he was too smart and overengineered solutions that no one could understand after he left.” This is btw one of the reasons why I increasingly dislike programming as a field over the years and happily delegate the coding part to AI nowadays. This field celebrates conformism, and that’s why humans shouldn’t write code manually. Perfect field to automate away via LLMs.
Wow you just completely destroyed any credibility about your software development opinions.
Why though? I think hating and maybe even disrespecting programming, and wanting your job to be made as redundant and replaceable as possible, is actually the best mindset for a programmer. Maybe in the past it was a nice mindset for becoming a team lead or a project manager, but nowadays with AI it’s a mindset for programmers.
> Before LLMs, people were often saying this about people smarter than the rest of the group: “Yeah, he was too smart and overengineered solutions that no one could understand after he left.”
This part.
The fact that I dislike that software engineering turned out not to be a good place for self-expression, or for demonstrating your power level or the beauty and depth of your intricate thought patterns through advanced constructs and structures you come up with, doesn’t mean that I disagree that it’s true.
> Before LLMs, people were often saying this about people smarter than the rest of the group.
Smarter by whose metric? If you can’t write software that meets the bare minimum of comprehensibility, you’re probably not as smart as you think you are.
Software engineering is an engineering discipline, and conformity is exactly what you want in engineering — because in engineering you don’t call it ‘conformity’, you call it ‘standardization’. Nobody wants to hire a maverick bridge-builder, they wanna hire the guy who follows standards and best practices because that’s how you build a bridge that doesn’t fall down. The engineers who don’t follow standards and who deride others as being too stupid or too conservative to understand their vision are the ones who end up crushed to death by their imploding carbon fiber submarine at the bottom of the Atlantic.
AI has exactly the same “maverick” tendencies as human developers (because, surprise surprise, it’s trained on human output), and until that gets ironed out, it’s not suitable for writing anything more than the most basic boilerplate — which is stuff you can usually just copy-paste together in five minutes anyway.
You’re right, of course, and engineering as a whole is first in line for AI. Everything that has strict specs, standards, and invariants will benefit massively from it, and conforming is what AI inherently excels at, as opposed to humans. Complaints like the one this subthread started with are usually people being bad at writing requirements rather than AI being bad at following them. If you approach requirements like in actual engineering fields, you will get corresponding results, while humans will struggle to fully conform, or even try to find tricks and loopholes in your requirements to sidestep them and assert their will while technically remaining in “barely legal” territory.
I feel like this isn’t quite true, and it’s something I hear a lot of people say about AI: that it’s good at following requirements and conforming and being a mechanical and logical robot, because that’s what computers are like and that’s how it is in sci-fi.
In reality, it seems like that’s what they’re worst at. They’re great at seeing patterns and creating ideas but terrible at following instructions or staying on task. As soon as something is a bit bigger than they can track context for, they’ll get “creative” and if they see a pattern that they can complete, they will, even if it’s not correct. I’ve had copilot start writing poetry in my code because there was a string it could complete.
Get it to make a pretty-looking static web page with fancy CSS where it gets to make all the decisions? It does it fast.
Give it an actual, specific programming task in a full sized application with multiple interconnected pieces and strict requirements? It confidently breaks most of the requirements, and spits out garbage. If it can’t hold the entire thing in its context, or if there’s a lot of strict rules to follow, it’ll struggle and forget what it’s doing or why. Like a particularly bad human programmer would.
This is why AI is automating art and music and writing and not more mundane/logical/engineering tasks. Great at being creative and balls at following instructions for more than a few steps.
> That it’s good at following requirements and conforming and being a mechanical and logical robot, because that’s what computers are like and that’s how it is in sci-fi.
They’re good at that because they are ANNs.
> In reality, it seems like that’s what they’re worst at. They’re great at seeing patterns and creating ideas but terrible at following instructions or staying on task. As soon as something is a bit bigger than they can track context for, they’ll get “creative” and if they see a pattern that they can complete, they will, even if it’s not correct. I’ve had copilot start writing poetry in my code because there was a string it could complete.
> Get it to make a pretty-looking static web page with fancy CSS where it gets to make all the decisions? It does it fast.
> Give it an actual, specific programming task in a full sized application with multiple interconnected pieces and strict requirements? It confidently breaks most of the requirements, and spits out garbage. If it can’t hold the entire thing in its context, or if there’s a lot of strict rules to follow, it’ll struggle and forget what it’s doing or why. Like a particularly bad human programmer would.
> This is why AI is automating art and music and writing and not more mundane/logical/engineering tasks. Great at being creative and balls at following instructions for more than a few steps.
My experience is opposite.
How are they at debugging? In a silo, they’re shit.
I’ve been using one LLM to debug the other this past week for a personal project, and it can be a bit tedious sometimes, but it eventually does a decent enough job. I’m pretty much vibe coding things that are a bit out of my immediate knowledge and skill set, but I know how they’re supposed to work. For example, I’ve got some Python scripts using Rekognition to scan photos for porn or other explicit stuff before they get sent to an S3 bucket. After that happens, there’s now a dashboard that gives me results on how many images were scanned and then marked as either acceptable or flagged as inappropriate. After a threshold of too many inappropriate images being sent in, it’ll shadowban them from sending any more dick pics in.
For someone that’s never taken a coding course, I’m relatively happy with the results I’m getting so far. Granted, this may be small potatoes for someone with an actual development background; but as someone that’s been working adjacent to those folks for several years, I’m happy with the output.
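For what it’s worth, the moderation gate in a setup like that is conceptually small. A minimal sketch of the idea (the bucket name, key, and threshold are invented, and the shadowban counter lives elsewhere):

```python
# Hypothetical sketch of a Rekognition moderation gate before S3 upload.
import boto3

rekognition = boto3.client("rekognition")
s3 = boto3.client("s3")

MIN_CONFIDENCE = 80.0  # only report labels Rekognition is >=80% sure about


def scan_and_upload(image_bytes: bytes, key: str) -> bool:
    """Upload the image if it's clean; return False if it was flagged."""
    resp = rekognition.detect_moderation_labels(
        Image={"Bytes": image_bytes},
        MinConfidence=MIN_CONFIDENCE,
    )
    if resp["ModerationLabels"]:
        # Flagged: the caller increments the sender's strike count,
        # and enough strikes trigger the shadowban.
        return False
    s3.put_object(Bucket="example-photos", Key=key, Body=image_bytes)
    return True
```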
The company I work for has recently mandated that we must start using AI tools in our workflow and is tracking our usage, so I’ve been experimenting with it a lot lately.
In my experience, it’s worse than useless when it comes to debugging code. The class of errors that it can solve is generally simple stuff like typos and syntax errors — the sort of thing that a human would solve in 30 seconds by looking at a stack trace. The much more important class of problem, errors in the business logic, it really really sucks at solving.
For those problems, it very confidently identifies the wrong answer about 95% of the time. And if you’re a dev who’s desperate enough to ask AI for help debugging something, you probably don’t know what’s wrong either, so it won’t be immediately clear if the AI just gave you garbage or if its suggestion has any real merit. So you go check and manually confirm that the LLM is full of shit, which costs you time… then you go back to the LLM with more context and ask it to try again. Its second suggestion will sound even more confident than the first (“Aha! I see the real cause of the issue now!”), but it will still be nonsense. You go waste more time to rule out the second suggestion, then go back to the AI to scold it for being wrong again.
Rinse and repeat this cycle enough times until your manager is happy you’ve hit the desired usage metrics, then go open your debugging tool of choice and do the actual work.
> we must start using AI tools in our workflow and is tracking our usage
Reads to me as “Please help us justify the very expensive license we just purchased and all the talented engineers we just laid off.”
I know the pain. Leadership’s desperation is so thick you can smell it. They got FOMO’d, now they’re humiliated, so they start lashing out.
Funny enough, the AI shift is really just covering for the over-hiring mistakes in 2021. They can’t admit they fucked up in hiring too many people during Covid, so they’re using AI as the scapegoat. We all know it’s not able to actually replace people yet; but that’s happening anyway.
There won’t be any immediate ramifications, we’ll start to see that in probably 12-18 months or so. It’s just another form of kicking the can down the road.
Maybe it’s just me, but I find typos to be the most difficult because my brain can easily see them as correct, so the whole code looks correct. It’s like the way you can take the vowels out of sentences and people can still immediately read them.
The nastiest typos are autocompleted similarly named (and correctly typed) variables, functions, or types. Which is why it’s a good idea to avoid such name clashes in the first place. If this is impossible or not practical, at least put the part that differs at the start of the name.
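A tiny illustration (names invented):

```python
# Differing part at the end: autocomplete offers both, and a glance
# can't tell them apart.
request_timeout_ms_read = 5000
request_timeout_ms_write = 2000

# Differing part at the start: the wrong pick jumps out immediately.
read_timeout_ms = 5000
write_timeout_ms = 2000
```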
Thing is, having the differing part at the end is nicer for sorting.
What do you mean? For what purpose would you sort variables or functions?
Sorry. I was thinking of hostnames or other endpoints, and connecting that back to typos: dev78usc03 instead of dev78usc02 or such.
I am working at a big AI company on LLM-generated code for automation. I’ve had Cursor solve a bug that was occurring in prod, after prompting it and asking questions about its responses. It took a few rounds, but it found a really obscure interaction between the app and the host, and it thanked me for the insight. 😀 I deployed the fix and it worked.
The problem I have is that I remember it solving this bug, and I remember being impressed, but I don’t remember the bug. I took a screenshot of it, but currently don’t have access to those. I am disconnected from the code that the LLM has generated, but I am very aware of how the app works and what it should do, because I had to write the requirements and design doc.
Well, they will simply fire many and leave the required number of workers to work with AI. This is exactly what they will want to do at any convenient opportunity. But those who remain will still have to check everything carefully in case the AI made a mistake somewhere.
I don’t work in IT, but I do know you need creativity to work in the industry, something which the current LLM/AI doesn’t possess.
Linguists also dismiss LLMs in a similar vein, because LLMs can’t grasp context. It is always funny to be sarcastic and ironic with an LLM.
Soft skills and culture are what the current iteration of LLMs lacks. However, I do think there is still huge potential for AI development in decades to come, but I want this AI bubble to burst as an “in your face” to companies.