Mozilla is in a tricky position. It contains both a nonprofit organization dedicated to making the internet a better place for everyone, and a for-profit arm dedicated to, you know, making money. In the best of times, these things feed each other: The company makes great products that advance its goals for the web, and the nonprofit gets to both advocate for a better web and show people what it looks like. But these are not the best of times. Mozilla has spent the last couple of years implementing layoffs and restructuring, attempting to explain how it can fight for privacy and openness when Google pays most of its bills, while trying to find its place in an increasingly frothy AI landscape.

Fun times to be the new Mozilla CEO, right? But when I put all that to Anthony Enzor-DeMeo, the company’s just-announced chief executive, he swears he sees opportunity in all the upheaval. “I think what’s actually needed now is a technology company that people can trust,” Enzor-DeMeo says. “What I’ve seen with AI is an erosion of trust.”

Mozilla is not going to train its own giant LLM anytime soon. But there’s still an AI Mode coming to Firefox next year, which Enzor-DeMeo says will offer users their choice of model and product, all in a browser they can understand and from a company they can trust. “We’re not incentivized to push one model or the other,” he says. “So we’re going to try to go to market with multiple models.”

-_-

  • darkkite@lemmy.ml · 7 days ago

    the same people who are most opposed to AI.

    programmers almost exclusively use LLMs

        • ThisSeriesIsFalse@lemmy.ca · 7 days ago

          Not elitist to say that people who use what are essentially weighted random word generators for programming, a career that requires one to know exactly how their code works in case it breaks, are bad at their jobs. Just like how it’s not elitist to say that generated images are not art, and that flying a plane into a building doesn’t make you a good pilot.

          • darkkite@lemmy.ml · 7 days ago

            a career that requires one to know exactly how their code works in case it breaks

            Using AI doesn’t mean you lose the ability to reason about, debug, or test generated code. All code merges should be peer-reviewed and tested.

            We’re not discussing images, nor planes.

            The claim was tech savvy people, the same people who are most opposed to AI.

            There’s data to suggest otherwise: people who are technically inclined engage with AI more and have a more positive reception of it than less experienced users do.

            Unless you have additional data to support that they are in fact “dog-shit programmers,” this appears to be an emotional claim colored by your own personal bias. Though if you’re a “pure” programmer who is better than the dog-shit developers, I’d love to see some of your work or writings.