I’ve seen a few articles claiming that instead of hating AI, the real quiet programmers, young and old, are loving it and have a renewed sense of purpose coding with LLM helpers (the article was also hating on Ed Zitron, which explains why it would say that).

Is this total bullshit? I have to admit, even though it makes me ill, I’ve used LLMs a few times to help me learn simple code syntax quickly (I’m an absolute noob who’s wanted my whole life to learn to code but can’t grasp it very well). But yes, a lot of the time it’s wrong.

  • AlphaOmega@lemmy.world · 3 days ago

    I’m a full stack web dev. I use it for HTML and CSS (sometimes); otherwise it’s a big waste of time trying to get working modern PHP and JS out of it. At least, that’s been my experience.

  • jasory@programming.dev · 5 days ago

    I’m not a software dev but rather a mathematical researcher. I see zero use for myself or for designing any advanced or critical systems. LLM coding is like relying on Stack Overflow: if you want to solve a novel or sophisticated problem, relying on them is the wrong approach.

  • southernbrewer@lemmy.world · 6 days ago

    I’m enjoying it, mostly. It’s definitely great at some tasks and terrible at others. You get a feel for what those are after a while:

    1. Throwaway projects - proofs of concept, one-off static websites, that kind of thing: absolutely ideal. Weeks of dev become hours, and you barely need to bother reviewing it if it works.

    2. Research (find a tool for doing XYZ) where you barely know the right search terms: ideal. The research mode on claude.ai is especially amazing at this.

    3. Anything where the language is unfamiliar. AI bootstraps past most of the learning curve. Doesn’t help you learn much, but sometimes you don’t care about learning the codebase layout and you just need to fix something.

    4. Any medium-sized project with a detailed up-front description.

    What it’s not good for:

    1. Debugging in a complex system
    2. Tiny projects (one line change), faster to do it yourself
    3. Large projects (500+ line change) - the diff becomes unreviewable fairly quickly and can’t be trusted (much worse than the same problem with a human where you can at least trust the intent)

  • iglou@programming.dev · 6 days ago

    I’m not against AI use in software development… But you need to understand what the tools you use actually do.

    An LLM is not a dev. It doesn’t have the capability to think on a problem and come up with a solution. If you use an LLM as a dev, you are an idiot pressing buttons on a black box you understand nothing about.

    An LLM is a predictive tool. So use it as a predictive tool.

    • Boilerplate code? It can do that, yeah. I don’t like to use it that way, but it can do that.
    • Implementing a new feature? Maybe, if you’re lucky, it has been trained on enough data that it can put something together. But you need to consider its output completely untrustworthy, and therefore it will require so much reviewing that it’s just better to write it yourself in the first place.
    • Implementing something that solves a problem not solved before? Just don’t. Use your own brain, for fuck’s sake. That’s what you have been trained on.

    The one use of AI, at the moment, that I actually like and actually improves my workflow is JetBrains’ full line completion AI. It very often accurately predicts what I want to write when it’s boilerplate-ish, and shuts up when I write something original.

  • I don’t see how it could be more efficient to have AI generate something that you then have to review and make sure actually works, over just writing the code yourself - unless you don’t know enough to code it yourself and just accept the AI-generated code as-is without further review.

    • Quibblekrust@thelemmy.club · 6 days ago

      I don’t see how it could be more efficient to have [a junior developer write] something that you then have to review and make sure actually works over just writing the code yourself…

      • iglou@programming.dev · 6 days ago

        1. A junior dev won’t be a junior dev their whole career; code reviews also educate them.
        2. You can’t trust the quality of a junior’s work, but you can trust that they are able to understand the project and their role in it. LLMs are by definition unable to think and understand - just pretty good at pretending they are. Which leads to the third point:
        3. When you “vibe code”, you don’t “just” have to review the produced code, you also have to constantly tell the LLM what you want it to do. And fight with it when it fucks up.

        • Pup Biru@aussie.zone · 6 days ago

          if the only point of hiring junior devs were to skill them up so they’d be useful in the future, nobody would hire junior devs

          LLMs aren’t the brain: they’re exactly what they are… a fancy auto complete…

          type a function header, let it fill the body… as long as you’re descriptive enough and the function is simple enough to understand (as all well structured code should be) it usually gets it pretty right: it’s somewhat of a substitute for libraries, but not for your own structure
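
          e.g. (a made-up sketch: a header and docstring descriptive enough that verifying the filled-in body is trivial):

```python
def chunk_list(items: list, size: int) -> list[list]:
    """Split items into consecutive chunks of at most `size` elements."""
    # the kind of body a completion model reliably fills in from the header alone
    return [items[i:i + size] for i in range(0, len(items), size)]
```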

          let it generate unit tests: doesn’t matter if it gets it wrong because the test will fail; it’ll write a pretty solid test suite using edge cases you may have forgotten

          fill lines of data based on other data structures: it can transform text quicker than you can write regex and i’ve never had it fail at this

          let it name functions based on a description… you can’t think of the words, but an LLM has a very wide vocabulary and - whilst not knowledge - does have a pretty good handle on synonyms and summary etc

          there’s loads of things LLMs are good for, but unless you’re just learning something new and you know your code will be garbage anyway, none of those things replace your brain: just the repetitive crap you probably hate to start with, because you could explain it to a non-programmer and they could carry out the tasks

          • iglou@programming.dev · 6 days ago

            if the only point of hiring junior devs were to skill them up so they’d be useful in the future, nobody would hire junior devs

            I never said that, and even a single review will make a junior dev better right off the bat

            LLMs aren’t the brain: they’re exactly what they are… a fancy auto complete

            I agree, but then you say…

            type a function header, let it fill the body… as long as you’re descriptive enough and the function is simple enough to understand (as all well structured code should be) it usually gets it pretty right: it’s somewhat of a substitute for libraries, but not for your own structure

            …which says the opposite. Implementing a function isn’t a job for a “fancy autocomplete”, it’s for a brain to do. Unless all you do is reinvent the wheel - then yeah, it can generate a decent wheel for you.

            let it generate unit tests: doesn’t matter if it gets it wrong because the test will fail; it’ll write a pretty solid test suite using edge cases you may have forgotten

            Fuck no. If it gets the test wrong, it won’t necessarily fail. It might very well pass even when it should fail, and that’s something you won’t know unless you review every single line it spits out. That’s one of the worst areas to use an LLM.
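
            To illustrate (function and bug made up): a model that infers the “expected” values from the code under test will happily encode the bug as the expectation:

```python
# Hypothetical function: should clamp a percentage to [0, 100],
# but has a bug (10 instead of 100).
def clamp_percent(value: int) -> int:
    return max(0, min(value, 10))

# A generated test that derives its expectations from the buggy code itself:
# the suite passes, so the bug survives unnoticed.
def test_clamp_percent():
    assert clamp_percent(-5) == 0     # genuinely correct
    assert clamp_percent(50) == 10    # passes, but the right answer is 50
    assert clamp_percent(150) == 10   # passes, but the right answer is 100
```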

            fill lines of data based on other data structures: it can transform text quicker than you can write regex and i’ve never had it fail at this

            I’m not sure what you mean by that.

            let it name functions based on a description… you can’t think of the words, but an LLM has a very wide vocabulary and - whilst not knowledge - does have a pretty good handle on synonyms and summary etc

            I agree with that, naming or even documenting is a good way to use an LLM. With supervision of course, but an imprecise name or documentation is not critical.

  • Naich@lemmings.world · 7 days ago

    You can either spend your time generating prompts, tweaking them until you get what you want, and then using more prompts to refine the code until it does what you want…

    or you can just fucking write it yourself. And there’s the bonus of understanding how it works.

    AI is probably fine for generating boilerplate code or repetitive simple stuff, but personally I wouldn’t trust it any further than that.

    • hono4kami@piefed.social · 3 days ago

      Not sure why you’re sharing this. This is one of the worst blog posts I’ve read this year. The amount of name-calling is unnecessary and childish. It’s just not good.

  • Beej Jorgensen@lemmy.sdf.org · 7 days ago

    I’m pretty sure every time you use AI for programming your brain atrophies a little, even if you’re just looking something up. There’s value in the struggle.

    So it can definitely speed you up, but be careful how you use it. There’s no value in a programmer who can only blindly recite LLM output.

    There’s a balance to be struck in there somewhere, and I’m still figuring it out.

    • 0x1C3B00DA@piefed.social · 7 days ago

      I’m pretty sure every time you use AI for programming your brain atrophies a little, even if you’re just looking something up. There’s value in the struggle.

      I assume you were joking, but some studies have come out recently finding that this is exactly what happens, and for more than just programming. (Sorry, it was a while ago so I don’t have links.)

      • Honytawk@lemmy.zip · 6 days ago

        There are similar studies on the effects of watching a YouTube video instead of reading a manual.

  • JoeKrogan@lemmy.world · 7 days ago

    I use it now and again but not integrated into an ide and not to write large bits of code.

    My uses are like so:

    Rewrite this rant to shut the PO/PM up. Explain why this is a waste of time.

    Convert this Excel row into a custom model.

    Given these tables, give me the SQL to do xyz.

    Sometimes for troubleshooting an environment issue.

    Do I need it? No. But if it saves me some time on bullshit tasks, then that’s more time for me.

    • andyburke@fedia.io · 7 days ago

      My brother in programming,

      please don’t use AI for data format conversion when deterministic energy efficient means are readily available.

      • an old man.
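
      (For the Excel-row example upthread, the deterministic route really is only a few lines; a sketch, with a hypothetical column layout and model:)

```python
import csv
import io
from dataclasses import dataclass

# Hypothetical record layout; adjust the fields to the real spreadsheet.
@dataclass
class Order:
    sku: str
    quantity: int
    unit_price: float

def rows_to_orders(csv_text: str) -> list[Order]:
    """Deterministically convert exported spreadsheet rows into models."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        Order(sku=row["sku"],
              quantity=int(row["quantity"]),
              unit_price=float(row["unit_price"]))
        for row in reader
    ]

orders = rows_to_orders("sku,quantity,unit_price\nA-1,3,9.99\n")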

  • Sicklad@lemmy.world · 7 days ago

    From my experience it’s great at doing things that have been done 1000x before (which makes sense given the training data), but when it comes to building something novel it really struggles, especially if there are third-party libraries involved that aren’t commonly used. So you end up spending a lot of time and money hand-holding it through things that likely would have been quicker to do yourself.

    • kewjo@lemmy.world · 7 days ago

      the 1000x-before bit has quite a few side effects to it as well.

      • lesser-used languages suffer because there’s not enough training data. this gets annoying quickly when it overrides your static tools and suggests nonsense.
      • larger training sets contain more vulnerabilities, as most code is pretty terrible and may just be snippets that someone used once and threw away. owasp has a top 10 for a reason. take input validation: if I’m working on parsing a string there’s usually context, such as whether the data is trusted or untrusted. if I don’t have that mental model where I’m thinking about the data, I might see generated code and think it looks correct when in reality it’s dangerously insecure.
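
      a toy sketch of that distinction (names made up): the naive parse looks correct and is fine for trusted internal data, but untrusted input needs explicit validation before use:

```python
def parse_page_size(raw: str) -> int:
    # what generated code often looks like: fine for trusted, internal data
    return int(raw)

def parse_page_size_untrusted(raw: str) -> int:
    # for untrusted input, validate shape and range before use
    # (otherwise "999999999" or "-1" flows straight into a query)
    if not raw.isdigit():
        raise ValueError("page size must be a non-negative integer")
    value = int(raw)
    if not 1 <= value <= 100:
        raise ValueError("page size out of range")
    return value
```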