A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing amount of “LLM generated commits”. To which the Lutris creator replied:

It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way. But it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.

  • JensSpahnpasta@feddit.org · 9 days ago

    I really hate this new trend of FOSS developers being attacked and harassed for using AI. You might not like if they are using AI. Or you might not like AI at all, but there’s no reason to harass people who are providing you free software. Let them develop it like they want. If you don’t like that they used AI, use another software. Or fork the software before they started using AI. But attacking people like that is not okay on so many levels. It’s not okay to attack people for the software they are using. It’s not okay to attack developers providing a free service and it’s not okay to attack people at all.

      • FauxLiving@lemmy.world · 9 days ago

        That’s twisting the order of events.

        The developer was marking code when AI was used.

        Anti-AI drones started harassing him on Discord, the forums and GitHub PRs.

        The developer stopped marking code when AI was used.

        The Anti-AI assholes are not participating in development in good faith, this is a harassment campaign. He’s taking steps to mitigate the harassment.

        The fault and blame here is entirely on the people who thought it was okay to dog pile on a volunteer developer.

    • selokichtli@lemmy.ml · 9 days ago

      I’m not so sure anymore whether THEY are providing FOSS or just approving slop PRs. I do not like harassment at all, even less against a guy who says he is/was dealing with depression. That’s why I comment here. That being said, it’s kind of a jerk move to just hide the fact he is using this tool for development under his own name. As a teacher, few things would make me angrier than having a student do this.

      • JensSpahnpasta@feddit.org · 9 days ago

        I mean, does it really matter? People are using his software and it is totally irrelevant how he is developing it. It doesn’t matter if he is scribbling it on paper or using VS Code or something else or an AI tool. He can develop the software as he likes. You can’t waltz into an open source project and demand that the project work according to your own standards. We are not in school - in school, cheating with AI hinders your learning progress, but if you want to use a coding agent as an adult, just do it. And you don’t have to disclose it, because why should you? You also don’t disclose that you are using code auto-completion or some other technology.

        • selokichtli@lemmy.ml · 9 days ago

          That’s where we are different. I don’t use Lutris, and now I’m sure I won’t be trying it. Of course there are people who think this is relevant. I didn’t think I’d have to explain to you that it is not about them using AI as a tool; it’s their not giving enough information about authorship that concerns me. I suppose they accept donations; well, it is important to me that my donations don’t end up with companies that hurt the environment and people’s jobs. Did auto-completion technologies steal almost all of humankind’s constructed knowledge? Do they need catastrophic amounts of environmental resources to work? Do they cause grave diseases in whole populations near them? Did they disrupt several markets just by being created? How many jobs did they ruin?

          • JensSpahnpasta@feddit.org · 9 days ago

            Does it really matter? You do not know how commercial software is being developed. In most cases you do not even know which language is used and AI usage is also not disclosed. You do not know this about your phone, you do not know about your appliances, you do not know how the software in your car is being made, you do not know how everything you’re using every day is being created. I’m not sure why you have other standards for open source software. And please do not tell me that you are only using open source software in your car or in the train you’re using as public transit.

            And yes, there might be harm, but let’s be honest, every other company out there is doing harm to someone or the environment. And if an open source dev uses AI, this is really not the fight you want to pick. Fight against oil companies or something like that: factory farming, the car industry, big tech and all the other industries doing real harm. Attacking the dev of Lutris is not the fight anyone should put effort into.

            • selokichtli@lemmy.ml · 8 days ago

              As I said before, it matters to me and I will keep doing this for as long as it’s possible. I know I’m not using as much AI through software as someone who just doesn’t care about it. It’s not a categorical thing to use it or not; sometimes you don’t even get to choose, that’s life. I believe it’s as important to take this fight against the AI industry as a whole as against the oil companies, probably even more important. Honestly, the developer already made their decision, and that’s okay with me.

    • ilickfrogs@lemmy.world · 8 days ago

      The only degenerates attacking open source devs wouldn’t even have the capacity to vibe code something themselves. Literal losers. If they’re so much better than the devs, fork it and make it yourself fucktard.

  • Evotech@lemmy.world · 8 days ago

    Moral of the story is don’t let Claude do commits. It insists on crediting itself

    Also stop harassing open source developers.

    Also be transparent when you have vibecoded commits. There’s no reason to hide it. Just say that parts of your codebase are vibecoded or coded with AI assist, and those who don’t like it can fork it or use something else.

    • Tony Bark@pawb.social (OP) · 8 days ago

      Also be transparent when you have vibecoded commits. There’s no reason to hide it.

      I find it rather ironic that the one thing they are transparent about is covering up the evidence that proves it was vibecoded. Apparently, they never heard of the Streisand effect.

    • oortjunk@sh.itjust.works · 8 days ago

      This. I’m fine with auto-complete, snippets and LLM suggestions. But I still need to read, review and, critically, comprehend the code. I always review its work, and most often make changes that I know are better practices.

        • oortjunk@sh.itjust.works · 8 days ago

          I, too, sense a rising anti-intellectualism in today’s world in general. Just the other day some guy came by my place asking to “read the gas meter”. Whatever happened to the classics?!

  • Cyv_@lemmy.blahaj.zone · 10 days ago

    I mean, I get if you wanna use AI for that, it’s your project, it’s free, you’re a volunteer, etc. I’m just not sure I like the idea that they’re obscuring what AI was involved with. I imagine it was done to reduce constant arguments about it, but I’d still prefer transparency.

    • Tony Bark@pawb.social (OP) · 10 days ago

      I tried fitting AI into my workloads just as an experiment and failed. It’ll frequently reference APIs that don’t even exist or over-engineer the shit out of something that could be written in just a few lines of code. Often it would be a combo of the two.

      • Scrollone@feddit.it · 10 days ago

        Yeah I mean. It’s not like AI can think. It’s just a glorified text predictor, the same as you have on your phone keyboard.

        • yucandu@lemmy.world · 10 days ago

          It’s like having an idiot employee that works for free. Depending on how you manage them, that employee can either do work to benefit you or just get in your way.

          • daikiki@lemmy.world · 10 days ago

            Only it’s not free. If you run it in the cloud, it’s heavily subsidized and proactively destroying the planet, and if you run it at home, you’re still using a lot of increasingly unaffordable power, and if you want something smarter than the average American politician, the upfront investment is still very significant.

            • yucandu@lemmy.world · 10 days ago

              Yeah I’m not buying the “proactively destroying the planet” angle. I’d imagine there’s a lot of misinformation around AI, given that the products surrounding it are mostly Western, like vaccines…

          • BackgrndNoize@lemmy.world · 10 days ago

            Not even free, just cheaper than an actual employee for now, but greed is inevitable and AI is computationally expensive, it’s only a matter of time before these AI companies start cranking up the prices.

      • Vlyn@lemmy.zip · 10 days ago

        You might genuinely be using it wrong.

        At work we have a big push to use Claude, but as a tool and not a developer replacement. And it’s working pretty damn well when properly set up.

        Mostly using Claude Sonnet 4.6 with Claude Code. It’s important to run /init and check the output, that will produce a CLAUDE.md file that describes your project (which always gets added to your context).

        Important: Review everything the AI writes, this is not a hands-off process. For bigger changes use the planning mode and split tasks up, the smaller the task the better the output.

        Claude Code automatically uses subagents to fetch information, e.g. API documentation. Nowadays it’s extremely rare that it hallucinates something that doesn’t exist. It might use outdated info and need a nudge, like after the recent upgrade to .NET 10 (but just adding that info to the project context file is enough).
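For illustration, a minimal CLAUDE.md of the kind `/init` generates might look something like this (the file contents here are made up for the sake of example, not any real project’s file):

```markdown
# CLAUDE.md

## Project overview
Backend API in C# / .NET 10; tests live in `tests/`.

## Build and test
- Build: `dotnet build`
- Tests: `dotnet test` (xUnit)

## Conventions
- Nullable reference types are enabled; do not suppress warnings.
- New endpoints need an integration test before merging.

## Notes
- The project was recently upgraded to .NET 10; prefer current APIs
  over pre-10 patterns.
```

Because this file is always added to the context, it is the natural place for the kind of nudge mentioned above (e.g. the .NET 10 note).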

        • P03 Locke@lemmy.dbzer0.com · 10 days ago

          Agreed, I don’t understand people not even giving it a chance. They try it for five minutes, it doesn’t do exactly what they want, they give up on it, and shout how shit it is.

          Meanwhile, I put the work in, see it do amazing shit after figuring out the basics of how the tech works, write rules and skills for it, have it figure out complex problems, etc.

          It’s like handing your 90-year-old grandpa the Internet, and they don’t know what the fuck to do with it. It’s so infuriating.

          Probably because, like your 90-year-old grandpa with the Internet, you have to know how to use the search engine. You have to know how to communicate ideas to an LLM, in detail, with fucking context, not just “me needs problem solvey, go do fix thing!”

          • Vlyn@lemmy.zip · 10 days ago

            It’s not really that simple. Yes, it’s a great tool when it works, but in the end it boils down to being a text prediction machine.

            So a nice helper to throw shit at, but I trust the output as much as a random Stackoverflow reply with no votes :)

            • P03 Locke@lemmy.dbzer0.com · 10 days ago

              but in the end it boils down to being a text prediction machine.

              And we’re barely smarter than a bunch of monkeys throwing piles of shit at each other. Being reductive about its origins doesn’t really explain anything.

              I trust the output as much as a random Stackoverflow reply with no votes :)

              Yeah, but that’s why there are unit tests. Let it run its own tests and solve its own bugs. How many mistakes have you or I made because we hate writing unit tests? At least the LLM has no problem writing the tests, once you know the code works.

            • dream_weasel@sh.itjust.works · 10 days ago

              I feel like there needs to be a dedicated post (and I don’t want to write it, but maybe I eventually will) that outlines what a model really is. It is not just a statistical text prediction machine unless you are being so loose with the definition of “statistical” that it doesn’t even mean anything anymore.

              A decent example of a statistical text prediction machine is the middle word suggested by your phone when you’re using the keyboard. An LLM is not that.
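That difference can be made concrete. A phone-keyboard-style predictor really is just co-occurrence counting, as in this toy sketch (the corpus and function names are made up for illustration):

```python
from collections import Counter, defaultdict

# Toy bigram "phone keyboard" predictor: pure co-occurrence counts,
# no semantics, no context beyond the previous word.
corpus = "the capital of austria is vienna . the capital of france is paris .".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1  # count how often `nxt` follows `prev`

def suggest(word):
    # Most frequent word seen after `word` in the corpus, else None.
    counts = bigrams[word]
    return counts.most_common(1)[0][0] if counts else None

print(suggest("capital"))  # of
```

The claim above is that an LLM is not this: there is no table of counts, and the "context" is the entire preceding text, not one word.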

              In the most general terms, this kind of language model tokenizes a corpus of text based on a vocabulary (which is probably more than just the words in the dictionary), uses an embedding model to translate these tokens into a vector of semantic “meaning” that minimizes loss in a bidirectional encoding (probably), is then trained against a rubric for one or more topic-area questions, retrained for instruction-following and explainability, retrained with reinforcement learning from human feedback to provide guardrails, and retrained again to make use of supplemental materials not part of the original training corpus (retrieval-augmented generation), then distilled, then probably scaled and fine-tuned against topic areas of choice (like coding or Korean or whatever) and maybe THEN made available to people to use. There are generally more parts to curriculum learning even than that, but it’s a representative-ish start.

              My point being that, yes, it would be nuts to pose ANY question to a predictor that says “with 84% probability, the word that is most likely follows ‘I really like’ is ‘gooning’ on reddit”, but even Grok is wildly more sophisticated than that and Grok is terrible.

              Edit: And also I really like your take at the start of this thread: user error is a pretty huge problem in this space.

              • Vlyn@lemmy.zip · 10 days ago

                The training is sophisticated, but inference is unfortunately really a text prediction machine. Technically token prediction, but you get the idea.

                For every single token/word. You input your system prompt, context, user input, then the output starts.

                The

                Feed the entire context back in and add the reply “The” at the end.

                The capital

                Feed everything in again with “The capital”

                The capital of

                Feed everything in again…

                The capital of Austria

                It literally works like that, which sounds crazy :)

                The only control you as a user can have is the sampling, like temperature, top-k and so on. But that’s just to soften and randomize how deterministic the model is.

                Edit: I should add that tool and subagent use makes this approach a bit more powerful nowadays. But it all boils down to text prediction again. Even the tools are described per text for what they are for.
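The loop described above can be sketched in a few lines of Python. The `next_token` lookup table below is a stand-in for the actual model (that is what makes this a toy); the shape of the loop is the point: the entire context goes in, one token comes out, it is appended, and the whole thing repeats. A real LLM would also apply sampling (temperature, top-k, …) instead of a deterministic lookup.

```python
# Toy autoregressive generation loop. `next_token` stands in for a real
# model's forward pass over the full context.
def next_token(context):
    lookup = {
        (): "The",
        ("The",): "capital",
        ("The", "capital"): "of",
        ("The", "capital", "of"): "Austria",
        ("The", "capital", "of", "Austria"): "is",
        ("The", "capital", "of", "Austria", "is"): "Vienna",
    }
    return lookup.get(tuple(context), "<eos>")

def generate(prompt_tokens, max_new_tokens=8):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        tok = next_token(tokens)   # the ENTIRE context is fed in every step
        if tok == "<eos>":         # stop when the model emits end-of-sequence
            break
        tokens.append(tok)         # the reply grows one token at a time
    return tokens

print(" ".join(generate([])))  # The capital of Austria is Vienna
```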

          • Zos_Kia@jlai.lu · 9 days ago

            Just yesterday I had one of those moments of grace that are becoming commonplace.

            Basically I have to migrate a service from a n8n workflow to an actual nodejs server for performance reasons. I spent 15 minutes carefully scoping the migration, telling it exactly what tools to use and code style to adopt. Gave it the original brief and access to the n8n workflows.

            The whole thing was done in 4 minutes and 30 seconds. It even noticed a bug which had been in production unnoticed for the past year. It gave me some good documentation on how to set up the Google service account, and the kind of memory usage to expect so I can dimension the instance accordingly. Another five minutes and I had a whole test suite with decent coverage. I had negotiated with the client that it would take around a week, well, that was the under-promise of the year…

            People who go around telling it doesn’t work are incompetent, out of their minds or straight up lying.

      • Fatal@piefed.social · 10 days ago

        At a minimum, the agent should be compiling the code and running tests before handing things back to you. “It references non-existent APIs” isn’t a modern problem.

      • aloofPenguin@piefed.world · 10 days ago

        I had the same experience. I asked a local LLM about using only Qt Wayland stuff for keyboard input, and the only documentation was the official one (which wasn’t a lot for a noob), with no examples of it being used online, and all my attempts at making it work failing. It hallucinated some functions that didn’t exist, even when I let it do web search (NOT via my browser). This was a few years ago.

        • P03 Locke@lemmy.dbzer0.com · 10 days ago

          This was a few years ago.

          That’s 50 years in LLM terms. You might as well have been banging two rocks together.

      • CompassRed@discuss.tchncs.de · 10 days ago

        The symptoms you describe are caused by bad prompting. If an AI is providing over-complicated solutions, 9 times out of 10 it’s because you didn’t constrain your problem enough. If it’s referencing tools that don’t exist, then you either haven’t specified which tools are acceptable or you haven’t provided the context required for it to find the tools. You may also be wanting too much out of AI. You can’t expect it to do everything for you. You still have to do almost all the thinking and engineering if you want a quality project - the AI is just there to write the code. Sure, you can use an AI to help you learn how to be a better engineer, but AIs typically don’t make good high-level decisions. Treat AI like an intern, not like a principal engineer.

          • CompassRed@discuss.tchncs.de · 10 days ago

            It’s not about stupid or smart. It’s a tool, not a person. If you don’t get the same results that other people get with the same tool, then what could possibly be the problem other than how the person is using the tool?

        • Bronzebeard@lemmy.zip · 10 days ago

          “it’s your fault that it just made up tools that don’t exist” is a bold statement, bro.

          • Zos_Kia@jlai.lu · 9 days ago

            The junior analogy comes to mind. If you hire a fresh face and they ship code that doesn’t work, it’s definitely on you, bro.

          • CompassRed@discuss.tchncs.de · 10 days ago

            No, it’s not. It doesn’t have intention. It’s literally just a tool. If you don’t get the results you expect with a tool when other people do get those results, then the problem isn’t the tool.

    • XLE@piefed.social · 10 days ago

      Considering the amount of damage AI has done to well-funded projects like Windows and Amazon’s services, I agree with this entirely. It might be crucial to help fix bigger issues down the line.

    • Fizz@lemmy.nz · 10 days ago

      I’m the opposite. It’s weird to me for someone to add an AI as a co-author. Submit it as normal.

      • svtdragon@lemmy.world · 10 days ago

        It’s mostly not a thing developers do. It’s a thing the tools themselves do when asked to make a commit.
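For context, the attribution in question is a commit-message trailer that the tool appends by default. It looks roughly like this (the subject line is made up; the trailer is the part of interest, and exact wording varies by version):

```
Fix game launch regression

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
```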

  • PerogiBoi@lemmy.ca · 8 days ago

    Aaaaand just uninstalled lutris. There are many other ways to install windows games and applications that aren’t ensloppified.

  • r1veRRR@feddit.org · 10 days ago

    From his perspective, he’s investing his free time and likely money into a project for people that are 99% of the time just leechers, as in they never contribute back and only complain.

    Now he has a tool that he feels helps him deal with all the FREE labor he is doing for everyone, and the very same people now want to tell him how to do the FREE labor he does for them.

    I completely understand being pissed off by that.

    • Auli@lemmy.ca · 9 days ago

      So he is no longer maintaining it and Claude is. And what bullshit, choosing a company that doesn’t work with the military. Does he know what the military is using right now, at this very instant, for AI?

      • Stitch0815@feddit.org · 9 days ago

        wtf are you talking about

        AI is a tool

        Claude does not take over any maintainer position.

        You are just inventing facts to be angry about. Don’t use Lutris if you disagree with him.

        But don’t harass the dev

    • qaeta@lemmy.ca · 9 days ago

      I mean, a reasonable person would choose to stop rather than becoming an unethical egotistical fuckwit…

      • kungen@feddit.nu · 9 days ago

        A reasonable person would have forked the repo and maintained the project themselves, or used something else. I’m also deathly allergic to LLM code, but I don’t come into someone else’s free project and tell them how they should live their life.

        But I agree that it was bad style to remove the co-author attribute. He should have just said “yeah, I slop, so what?”

  • lohky@lemmy.world · 9 days ago

    There hasn’t been anything I haven’t been able to run between Heroic and Steam. I didn’t like using lutris anyway. ¯\_(ツ)_/¯

  • nialv7@lemmy.world · 10 days ago

    You can criticise them, but ultimately they are an unpaid developer making their work freely available to the benefit of us all. At least don’t harass the developer.

    • TrickDacy@lemmy.world · 10 days ago

      You make a fair point, but I feel like the trolling reaction they gave was asking for more backlash. Not responding was probably the best move.

      • Zos_Kia@jlai.lu · 10 days ago

        It’s typical of dev burnout, though. Communication starts becoming more impulsive and less constructive, especially in the face of conflicts of opinions.

        I’ve seen it play out a few times already. A toxic community will take a dev who’s already struggling, troll them, screenshot their problematic responses, and use that in a campaign across relevant places such as GitHub, Reddit, Lemmy… Maybe add a little light harassment on the side, as a treat. It’s a fun activity! The dev spirals, posts increasingly unhinged responses and often quits as a result.

        The fact that the thread is titled “is lutris slop now” is a clear indication that the intention of the poster wasn’t to contribute anything constructive but to attack the dev and put them on their back foot.

          • Zos_Kia@jlai.lu · 10 days ago

            Yeah same. I’d like to think i’d answer “I’ll use AI, if you don’t like it you can fork the project and i wish you good luck. Go share your opinion on AI in an appropriate place.”. But realistically there’s a high chance it catches me on a bad day and i get stupid.

        • MousePotatoDoesStuff@lemmy.world · 10 days ago

          … You’re right. I definitely wouldn’t be above such a response.

          The problem is, a lot of people here - myself included - were/are also being impulsive about their responses to this issue, at least partially due to all the shitty stuff caused by GenAI.

          There might be some toxic people too, I wouldn’t be surprised - but this can happen without them, too.

          • Zos_Kia@jlai.lu · 10 days ago

            The thing is, toxic people thrive in mob situations and are often found leading or even manufacturing them. I tend to be wary around this kind of setup, as it is easy to get caught up in and hard to get out of.

        • Venia Silente@lemmy.dbzer0.com · 9 days ago

          The fact that the thread is titled “is lutris slop now” is a clear indication that the intention of the poster wasn’t to contribute anything constructive but to attack the dev and put them on their back foot.

          No, it was literally an important question to have answered. And booooy did the dev answer.

          • Zos_Kia@jlai.lu · 8 days ago

            Is it appropriate to ask a stranger a question by first calling their work “slop”? Is that how you communicate with people? How is that working out irl?

            Y’all are so immersed in bully culture that this seems normal to you smh

            • Venia Silente@lemmy.dbzer0.com · 8 days ago

              Wow, calling asking to identify if something is a thing by the name of the thing that it’s being asked about is “bully culture” now? This is a whole new low level of argument in the pro-AI take.

              • Zos_Kia@jlai.lu · 8 days ago

                So yes, you think this is normal human behaviour. Good luck with that shit, i hope the world treats you with the same energy.

        • TrickDacy@lemmy.world · 10 days ago

          I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.

          Seems pretty obvious to me that they knew this wouldn’t go over well. It was inflammatory by design.

          • aksdb@lemmy.world · 10 days ago

            Yeah ok. True. I think the rest of the post has much more weight, though. But yeah, he should have swallowed that last sentence.

    • 4am@lemmy.zip · 10 days ago

      They want to put clanker code that they freely admit they don’t validate into a product that goes on the computers of people whose experience with Linux is “I heard it’s faster for games”.

      It’s irresponsible to hide it from review. It doesn’t matter if AI tools got better; AI tools still aren’t perfect, so you still have to do the legwork. Or at least let your community do it.

      Also, you should let your community make ethics decisions about whether to support you.

      Overall it was a rash reaction to being pressured rudely in a GitHub thread; but you know AI is a contentious topic and you went in anyway. It’s weak AF to then have a tantrum and spit in the community’s face about it.

      • Voroxpete@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        1
        ·
        10 days ago

        Nothing is being hidden from review. The code is open source. They removed the specific attribution that indicates which parts of the code were created using Claude. That changes absolutely nothing about the ability to review the code, because a code review should not distinguish between human written code and machine written code; all of it should be checked thoroughly. In fact, I would argue that specifically designating code as machine written is detrimental to code review, because there will be a subconscious bias among many reviewers to only focus on reviewing the machine code.

        • P03 Locke@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          1
          ·
          edit-2
          10 days ago

          In fact, I would argue that specifically designating code as machine written is detrimental to code review, because there will be a subconscious bias among many reviewers to only focus on reviewing the machine code.

          Oh, it’s more than subconscious, as you can see in this thread.

          Lutris developer makes a perfectly sane and nuanced response to a reactionary “is lutris slop now” comment, and gets shit on for it, because everybody has to fight in black-and-white terms. There are no grey opinions, only battle lines to be drawn for these people.

          What? Are you all going to shit on your lord and savior Linus himself for also saying he uses LLMs? Oh, what, you didn’t know?!?

          • aksdb@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            10 days ago

            The response is only nuanced until the “good luck” sentence. If he had swallowed that, it would have been an almost perfect response. But that sentence is quite a big “fuck you”.

            • P03 Locke@lemmy.dbzer0.com
              link
              fedilink
              English
              arrow-up
              1
              ·
              10 days ago

              It’s not as much of a “fuck you” as much as “I’m tired of this same fucking response, when all I’m trying to do is get some work done, which I do for fucking free, by the way”.

              • aksdb@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                9 days ago

                Yes, and I didn’t say that. I even argued in favor of his response throughout this whole post (getting a shit ton of downvotes all along). But I think that doesn’t invalidate my point either: without this one sentence, his whole chain of arguments would have been pretty good and reasonable. It was just unnecessary to then add this snarky remark. It’s understandable if he’s pissed, but just because you are pissed when you say something doesn’t make what you said a clever move.

  • magikmw@piefed.social
    link
    fedilink
    English
    arrow-up
    2
    ·
    10 days ago

    Worth mentioning that the user who started the issue jumps around projects and creates inflammatory issues to the same effect. I’m not surprised Lutris’ maintainer went off like they did; the issue was not made in good faith.

    • Zos_Kia@jlai.lu
      link
      fedilink
      English
      arrow-up
      2
      ·
      10 days ago

      Yes, both threads are led by two accounts with probably fewer than 50 commits to their names during the last year, none of which are of any relevance to the subject they are discussing.

      In a world where you could contribute your time to making things better, there is a certain category of people who seek out nice things specifically to harm them. As open source enters mainstream culture, it also appears on the radar of this kind of person. It’s dangerous to catch their attention: once they have you, they’ll coordinate over reddit, lemmy, github and discord to ruin your reputation. The reputation of some guy who never did them any harm, apart from bringing them something they needed, for free, but in a way that doesn’t 100% satisfy them. Pure vicious entitlement.

      I’d sooner have a drink with a salesman from OpenAI than with one of them.

  • Crozekiel@lemmy.zip
    link
    fedilink
    English
    arrow-up
    1
    ·
    9 days ago

    AI is actively destroying the environment and harming people. Data centers have been caught using methane burner generators (which are banned by the EPA), significantly increasing health risks for residents who live nearby (cancer and asthma rates are already significantly elevated). Then there are the ridiculous effects it is having on computer hardware markets, energy and water infrastructure, and prices.

    Then after all of that, the AIs themselves are hallucinating somewhere in the neighborhood of 25% of the time, and multiple studies have found that people who use them regularly are losing their own skills.

    I can’t figure out why people would choose to use them. I can’t figure out why programming is the one place where people who might otherwise be considered experts in the field are excited to use them. Writers, artists, lawyers, doctors: in basically every other professional field that AI companies have suggested these would be good for, they get trashed by the field’s experts for producing garbage. I have a hard time believing the only thing AI can do well is write code when it sucks so badly at everything else it does. Does development suck this much? Do developers have so little idea what they are doing that this seems like a good idea?

    • antihumanitarian@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      9 days ago

      If you’re honestly asking: LLMs are much better at coding than at any other skill right now. On one hand there’s a ton of high-quality open source training data that was appropriated; on the other, code is structured language, so it’s very well suited to what models “are”. Plus, code is mechanically verifiable. If you have a bunch of tests, or have the model write tests, it can check its work as it goes.
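      That check-as-you-go loop doesn’t need anything fancy. A minimal sketch in Python, with a hypothetical candidate function standing in for model output (all names here are made up for illustration):

      ```python
      # Sketch: treat a (possibly generated) implementation as untrusted
      # and accept it only if it passes a small test suite.

      def candidate_slugify(title: str) -> str:
          # Pretend this body came back from a model; we don't trust it yet.
          return "-".join(title.lower().split())

      def run_checks(fn) -> bool:
          """Return True only if every test case passes."""
          cases = [
              ("Hello World", "hello-world"),
              ("  spaced  out  ", "spaced-out"),
              ("single", "single"),
          ]
          return all(fn(given) == expected for given, expected in cases)

      print("accepted" if run_checks(candidate_slugify) else "rejected")  # → accepted
      ```

      Agentic tools essentially wire the “rejected” branch back into another generation round, which is what lets the model check its own work as it goes.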

      Practically, the new high-end models, GPT 5.4 or Claude Opus 4.6, can write better code faster than most people can type. It’s not like two years ago, when the code mostly wouldn’t build; now they can write hundreds or thousands of lines of code that work first try. I’m no blind supporter of AI, and it’s very emotionally complicated watching it after years honing the craft, but for most tasks it’s simple reality that you can do more with AI than without it. Whether it’s higher quality, higher volume, or integrating knowledge you don’t have.

      Professionally I don’t feel like I have a choice, if I want to stay employed in the field at least.

      • Crozekiel@lemmy.zip
        link
        fedilink
        English
        arrow-up
        1
        ·
        6 days ago

        I was honestly asking, I constantly see artists and writers wishing AI didn’t exist because all it makes is garbage… But I also regularly see developers lashing out at AI hate, fighting tooth and nail to keep it and get more of it. That’s a really strange dichotomy to see “in the wild”.

      • Venia Silente@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        2
        ·
        9 days ago

        Professionally I don’t feel like I have a choice, if I want to stay employed in the field at least.

        On the contrary!

        I’ve seen quite a number of “AI cleanup specialist” job offerings so far, and even a few consulting positions on training juniors away from using AI in development.

        (No, I have not seen any position open on training management away from using AI…)

    • Netrunner@programming.dev
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      9 days ago

      You can’t fathom why someone would use AI and maybe hurt the environment a little, while 18 F150s each driven by one person go by as you drink through a paper straw and a billionaire flies a private jet overhead to the neighboring city.

      Okay. Sounds like jealousy that you’re masking in social justice.

      Ever been a developer? If you have, it’s very easy to see why having AI give you a second wind on a project you’ve given up on is a massive boon.

      I’ve been a developer my entire life and AI is amazing. Sorry you hate it. Does it make mistakes? Yes. Can I fix them? Yes. Can I build skyscrapers now? Yes.

      • FauxLiving@lemmy.world
        link
        fedilink
        English
        arrow-up
        0
        ·
        9 days ago

        A rational person would ask why, when confronted with evidence against their beliefs, they conclude that the evidence is wrong rather than the beliefs.

        It could indicate that the person’s beliefs are not built on rational grounds.

    • Ephera@lemmy.ml
      link
      fedilink
      English
      arrow-up
      1
      ·
      10 days ago

      Yeah, management wants us to use AI at $DAYJOB, and one of the strategies we’ve considered for lessening its negative impact on productivity is to always put generated code into an entirely separate commit.

      Because it will guess design decisions at random while generating, and you want to know afterwards whether a design decision was made by the randomizer or by something with intelligence. Much like you want to know whether a design decision was made by the senior (then you should think twice about overriding it) or by the intern who knows none of the project context.

      We haven’t actually started doing these separate commits, because it’s cumbersome in other ways, but yeah, deliberately obfuscating whether the randomizer was involved, that robs you of that information even more.
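      A lighter-weight variant that keeps the information without separate-commit discipline is a commit trailer. A sketch, assuming plain git (the `--trailer` flag needs git 2.32 or newer; the repo, commits and trailer name are all hypothetical):

      ```shell
      # Sketch: mark machine-generated commits with a trailer so reviewers
      # can filter for them later. Uses a throwaway repo with empty commits.
      set -e
      repo=$(mktemp -d) && cd "$repo"
      git init -q
      git -c user.name=dev -c user.email=dev@example.com commit -q \
          --allow-empty -m "Add config parser"          # hand-written work
      git -c user.name=dev -c user.email=dev@example.com commit -q \
          --allow-empty -m "Add helper functions" \
          --trailer "Generated-By: llm"                 # generated work
      # Show only the commits that carry the trailer:
      git log --format='%s' --grep='Generated-By: llm'
      ```

      This still relies on everyone remembering to add the trailer, of course, which has the same discipline problem as separate commits.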

    • Holytimes@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      0
      ·
      10 days ago

      Well, when you have a massive problem of harassment, death threats and fucking retarded shit stains screaming at every single dev who is even theorized to use AI, regardless of whether it’s true or not.

      I blame fucking no one for hiding the fact.

      This is on the users not the dev. The users are fucking animals and created this very problem.

      Blaming the wrong people and attacking them is the yuck.

      Scream at the executives and giant corpos who created the problem not some random indie dev using a tool.

      • Auli@lemmy.ca
        link
        fedilink
        English
        arrow-up
        1
        ·
        9 days ago

        Then just quit; it isn’t worth it. I know AI has uses and is useful.

  • ipkpjersi@lemmy.ml
    link
    fedilink
    English
    arrow-up
    1
    ·
    edit-2
    9 days ago

    Honestly, I agree. It IS unfortunately helpful, and if you’re a competent developer using AI tooling, you can make sure it doesn’t generate slop. You are responsible for your code, at the end of the day.

    AI does generate societal damage, but that’s mostly because of how companies abuse it and less because of the technology itself.

  • Skankhunt420@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    1
    ·
    10 days ago

    Open source stuff is awesome and I really like people improving Linux in their spare time

    But, to do it this way is basically saying “fuck you” to the community which is fucked up.

    Could have talked about how AI helps him or how he uses it for templates or whatever, and damn, even if I didn’t agree with those points either, that’s a lot better than being like “alright, good luck finding it now then, bitch”.

    I wouldn’t mess with anything this guy does anymore after this.

    • pheelicks@lemmy.zip
      link
      fedilink
      English
      arrow-up
      0
      ·
      9 days ago

      Are you talking about his way of communicating or about his AI use? I think it could have been said a bit more level-headedly, but I mostly agree with what he’s said. I also see no issue with the part “good luck finding it then” that seems to sound malicious to you. To me it means: “if you can’t find a difference in quality, your whole complaint is invalid, because there basically is no difference in quality”. Yes, it’s still AI and should not be viewed as more than a knowledgeable intern, yada yada, but I hope the point comes across…

      • Senal@programming.dev
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        9 days ago

        Think of it like a jeweller suddenly announcing they were going to start mixing blood diamonds in with their usual diamonds: “good luck finding them”.

        Functionally, blood diamonds aren’t different.

        Leaving aside that you might not want blood diamonds, are you really going to trust someone who essentially says “Fuck you, I’m going to hide them because you’re complaining”?

        If you don’t know what blood diamonds are, it’s easily searchable.

        I’ll go on record as saying the aesthetic diamond industry is inflationist monopolist bullshit, but that doesn’t alter the analogy.


        Secondly, it seems you don’t really understand why LLM-generated code can be problematic. I’m not going to go into it fully here, but here’s a relevant outline.

        LLM generated code can (and usually does) look fine, but still not do what it’s supposed to do.

        This becomes more of an issue the larger the codebase.

        The amount of effort needed to find this reasonable looking, but flawed, code is significantly higher than just reading a new dev’s version.

        Hiding where this code is makes it even harder to find.

        Hiding the parts where you really should want additional scrutiny is stupid and self-defeating.

        • pheelicks@lemmy.zip
          link
          fedilink
          English
          arrow-up
          0
          ·
          9 days ago

          Thanks, I think your first point is a really valid one. AI technology is far from clean, especially in a political scope.

          To your second point. I see that, but on the other hand, it makes an impression on me as if human code would be free of such errors. I would not put human code on an (implied) pedestal (especially not mine), but maybe I’m missing your point. I think being suspicious about AI code is good but same goes for human code. To me it sounds like nobody should ever trust AI code because there can or will be mistakes you can’t see, which is reasonably careful at best and paranoid at worst. At some point there is no difference anymore between “it looks fine” and “it is fine”.

          • Senal@programming.dev
            link
            fedilink
            English
            arrow-up
            0
            ·
            9 days ago

            Let’s assume we’re skipping the ethical and moral concerns about LLM usage and just discuss the technical.

            it makes an impression on me as if human code would be free of such errors

            Nobody who knows anything about coding is claiming human code is error free, that’s why code reviews, testing and all the other aspects of the software development lifecycle exist.

            To me it sounds like nobody should ever trust AI code

            Nobody should trust any code unless it can be verified that it does what is required consistently and predictably.

            because there can or will be mistakes you can’t see, which is reasonably careful at best and paranoid at worst

            This is a known thing, paranoia doesn’t really apply here, only subjectively appropriate levels of caution.

            Also it’s not that they can’t be seen, it’s just that the effort required to spot them is greater and the likelihood to miss something is higher.

            Whether or not these problems can be overcome (or mitigated) remains to be seen, but at the moment it still requires additional effort around the LLM parts, which is why hiding them is counterproductive.

            At some point there is no difference anymore between “it looks fine” and “it is fine”.

            This is important because it’s true, but it’s only true if you can verify it.

            This whole issue should theoretically be negated by comprehensive acceptance criteria and testing but if that were the case we’d never have any bugs in human code either.


            Personally I think the “uncanny valley code” issue is an inherent part of the way LLMs work and there is no “solution” to it; the only option is to mitigate as best we can.

            I also really really dislike the non-declarative nature of generated code, which fundamentally rules it out as a reliable end to end system tool unless we can get those fully comprehensive tests up to scratch, for me at least.

            • pheelicks@lemmy.zip
              link
              fedilink
              English
              arrow-up
              1
              ·
              9 days ago

              Thanks for taking the time to reply.

              Also it’s not that they can’t be seen, it’s just that the effort required to spot them is greater and the likelihood to miss something is higher.

              Greater compared to human code? Not sure about that, but I’m not disagreeing either. Greater compared to verified able programmers, sure, but in general?..

              I also really really dislike the non-declarative nature of generated code, which fundamentally rules it out as a reliable end to end system tool unless we can get those fully comprehensive tests up to scratch, for me at least.

              I don’t think I’m getting your point here. Do you mean by that, the code basically lacks focus on an end goal? Or are you talking about the fuzzyness and randomization of the output?

              • Senal@programming.dev
                link
                fedilink
                English
                arrow-up
                1
                ·
                edit-2
                9 days ago

                Greater compared to human code? Not sure about that, but I’m not disagreeing either. Greater compared to verified able programmers, sure, but in general?..

                Both.

                The reasons are quite hard to describe, which is why it’s such a trap, but if you spend some time reviewing LLM code you’ll see what I mean.

                One reason is that it isn’t coding for logical correctness; it’s coding for linguistic passability.

                Internally there are mechanisms for mitigating this somewhat, but it’s not an actual fix, so problems slip through.

                I don’t think I’m getting your point here. Do you mean by that, the code basically lacks focus on an end goal? Or are you talking about the fuzzyness and randomization of the output?

                The latter: if you give it the exact same input in the exact same conditions, it’s not guaranteed to give you the same output.

                The fact that it’s sometimes close to the same actually makes it worse, because then you can’t tell at a glance what has changed.

                It also isn’t as simple as using a diff tool, at least for anything non-trivial, because its variations can be in logical progression as well as language.

                Meaning you need to track these differences across the whole contextual area which, if you are doing end to end generation, is the whole codebase.

                As I said, there are mitigations, but they aren’t fixes.
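                To make that concrete, here’s a contrived example (hypothetical names, not from any real model run): two “regenerated” variants whose textual diff is a single character, but which disagree on a boundary input.

                ```python
                # Two near-identical variants of the same check. The diff is one
                # character (">" vs ">="), easy to skim past in review, but the
                # behaviour diverges exactly at the boundary value.

                def is_adult_v1(age: int) -> bool:
                    return age > 18

                def is_adult_v2(age: int) -> bool:
                    return age >= 18

                print(is_adult_v1(18), is_adult_v2(18))  # → False True
                ```

                A line-level diff flags the change, but nothing tells you which variant matches the intended spec; that judgement still takes contextual review.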

  • teawrecks@sopuli.xyz
    link
    fedilink
    English
    arrow-up
    0
    ·
    10 days ago

    Every extra person using all these AI tools is only adding to the issue.

    No, literally the opposite. They are going to do this until it is not financially viable. The more frugal and conscientious people are with their AI, the longer it is financially viable. If you want to pop the bubble, go set up a bot to hammer their free systems with bogus prompts. Run up their bills until they can’t afford to be speculative any more.

    • utopiah@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      10 days ago

      Run up their bills until they can’t afford to be speculative any more.

      Sadly I don’t think you’ve met venture capitalists… they will use your usage as a KPI for success. They have a runway longer than you can imagine, check the history of Amazon or Uber. They can be unprofitable for years, heck longer than a decade, and they are fine with it because they are claiming (and sadly sometimes right) to be cornering a trillion dollar market.

      • teawrecks@sopuli.xyz
        link
        fedilink
        English
        arrow-up
        0
        ·
        9 days ago

        Amazon is the exception, not the rule. Check the history of the dotcom bubble, including Amazon. Uber is no longer allowed to lose money like it once was; that’s why they’ve switched from cheap rides and good pay to algorithmic pricing and shit pay.

        • utopiah@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          9 days ago

          I’m not saying it’s a good strategy, just that since SoftBank it’s basically core to the VC default playbook.

          I believe it’s been tweaked, thanks to Musk, Enron and the banks, with subsidies transitioning into “too big to fail”.

          So, it might not work, ever, but I still think if you look at the large VC rounds, that’s what they are funding, to be so big nobody can reach you at any cost.

  • Retail4068@lemmy.world
    link
    fedilink
    English
    arrow-up
    0
    ·
    10 days ago

    You’re going to screech at this guy for contributing his time and code, who in all likelihood will pump out more features. Absurd. Prejudice and fear have blinded a significant portion of the FOSS community.

    • pheelicks@lemmy.zip
      link
      fedilink
      English
      arrow-up
      0
      ·
      9 days ago

      Yup, single dev here, can confirm. I’m coding for a living but am mediocre at it, since I jumped from civil engineering to something I kind of enjoy. To me, coding assistants are a huge help: finding solutions, discussing ideas, writing down implementation plans. I can’t do all that stuff with my colleagues since they have no clue about my work.