• 6nk06@sh.itjust.works
      23 hours ago

      I’m not the one trying to prove anything, and I think it’s all bullshit. I’m waiting for your proof though. Even with a free open-source black box.

          • ryannathans@aussie.zone
            36 minutes ago

            At work, the software is not open source.

            I would use it for contributions to open-source projects, but I don’t pay for any AI subscriptions, and I can’t use my employee Copilot Enterprise account for non-work projects.

            Every week for the last year or so I have been testing various Copilot models against customer-reported software defects, and it’s seriously at the point now where, with a single prompt, Gemini 2.5 Pro solves the entire defect, unit tests included. Some need no changes in review and are good to go.
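            If you’re wondering what “a single prompt” means in practice, the loop is conceptually no fancier than the sketch below: hand the model the defect report plus the suspect source file and ask for a patched file with tests. This is a minimal illustration, not my actual harness; the google-generativeai package call is real, but the model id, file names, and prompt wording are assumptions.

            ```python
            # Hypothetical sketch: ask Gemini for a defect fix plus unit tests.
            # Assumes `pip install google-generativeai` and GEMINI_API_KEY set;
            # defect.txt and buggy_module.py are placeholder file names.
            import os
            from pathlib import Path

            import google.generativeai as genai

            genai.configure(api_key=os.environ["GEMINI_API_KEY"])
            model = genai.GenerativeModel("gemini-2.5-pro")  # assumed model id

            defect = Path("defect.txt").read_text()       # customer defect report
            source = Path("buggy_module.py").read_text()  # file suspected to hold the bug

            prompt = (
                "Fix the defect described below in the given source file. "
                "Return the corrected file and pytest unit tests that "
                "reproduce the defect and pass after the fix.\n\n"
                f"Defect report:\n{defect}\n\nSource:\n{source}"
            )

            response = model.generate_content(prompt)
            print(response.text)  # proposed patch + tests, still needs human review
            ```

            The print at the end is the point: the output is a proposal that goes into review, which is how some land with no changes needed.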

            As an open-source maintainer of a large project, I have noticed a huge uptick in PRs, which has created a larger review workload; I’m almost certain these are due to LLMs. The quality of a typical PR has not decreased since LLMs became available, and I am thus far very glad.

            If I were to speculate, I’d guess the huge increase in context windows is what has made the tools viable; models like GPT-5 are garbage on any sizable codebase.