I’ve seen a few articles claiming that instead of hating AI, the real quiet programmers, young and old, are loving it and have a renewed sense of purpose coding with LLM helpers (this article was also hating on Ed Zitron, which makes sense given its angle).
Is this total bullshit? I have to admit, even though it makes me ill, I’ve used LLMs a few times to help me learn simple code syntax quickly (I’m an absolute noob who’s wanted my whole life to learn to code but can’t grasp it very well). But yes, a lot of the time it’s wrong.
I don’t see how it could be more efficient to have AI generate something that you then have to review and verify actually works than to just write the code yourself, unless you don’t know enough to code it yourself and just accept the AI-generated code as-is without further review.
The junior developer can (hopefully) learn and improve.
if the only point of hiring junior devs were to skill them up so they’d be useful in the future, nobody would hire junior devs
LLMs aren’t the brain: they’re exactly what they are… a fancy autocomplete…
type a function header, let it fill the body… as long as you’re descriptive enough and the function is simple enough to understand (as all well-structured code should be) it usually gets it pretty right: it’s somewhat of a substitute for libraries, but not for your own structure (first sketch at the end of this comment)
let it generate unit tests: doesn’t matter if it gets it wrong because the test will fail; it’ll write a pretty solid test suite using edge cases you may have forgotten
fill in lines of data based on other data structures: it can transform text quicker than you can write a regex, and i’ve never had it fail at this (second sketch at the end of this comment)
let it name functions based on a description… when you can’t think of the words, an LLM has a very wide vocabulary and - whilst not knowledge - it does have a pretty good handle on synonyms, summarising etc
there are loads of things LLMs are good for, but unless you’re just learning something new and you know your code will be garbage anyway, none of those things replace your brain: they replace the repetitive crap you probably hate anyway, the stuff so simple you could explain it to a non-programmer and they could carry out the task
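e.g. the autocomplete thing looks roughly like this (made-up function, and the body is the kind of thing the completion gives you from just the signature and docstring, so treat it as a sketch, not gospel):

```python
def chunk_list(items: list, size: int) -> list[list]:
    """Split items into consecutive chunks of at most `size` elements."""
    # everything below is the kind of body the autocomplete typically fills in
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]
```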
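and the data filling is this sort of mechanical transform (data is made up, but it’s the pattern: paste the raw lines in a comment, let it spit out the structure):

```python
# raw lines pasted from a spreadsheet:
#   US  United States   +1
#   DE  Germany         +49
#   JP  Japan           +81

# …and the structure filled in from them:
COUNTRIES = {
    "US": {"name": "United States", "dial_code": "+1"},
    "DE": {"name": "Germany", "dial_code": "+49"},
    "JP": {"name": "Japan", "dial_code": "+81"},
}
```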
I never said that, and even a single review will make a junior dev better right off the bat
I agree, but then you say…
…which says the opposite. Implementing a function isn’t a job for a “fancy autocomplete”, it’s a job for a brain. Unless all you do is reinvent the wheel; then yeah, it can generate a decent wheel for you.
Fuck no. If it gets the test wrong, it won’t necessarily fail. It might very well pass even when it should fail, and that’s something you won’t know unless you review every single line it spits out. That’s one of the worst areas to use an LLM.
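Contrived sketch of the failure mode (both the function and the test are made up): the code has an off-by-one bug, the generated test just encodes the buggy output as the expectation, and the suite stays green.

```python
def days_between(start_day: int, end_day: int) -> int:
    # bug: should be end_day - start_day
    return end_day - start_day + 1

def test_days_between():
    # an LLM inferring the "spec" from the buggy code will happily assert this
    assert days_between(1, 3) == 3  # passes, but the correct answer is 2
```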
I’m not sure what you mean by that.
I agree with that: naming or even documenting is a good way to use an LLM. With supervision, of course, but an imprecise name or imprecise documentation isn’t critical.