It’s everywhere. I was just trying to find some information on starting seeds for the garden this year and I was met with AI article after AI article just making shit up. One even had a “picture” of someone planting some seeds and their hand was merged into the ceramic flower pot.
The AI fire hose is destroying the internet.
I fear the day they learn a different layout. Right now they're usually obvious, but soon I won't be able to tell slop from intelligence.
One could argue that if the AI response is not distinguishable from a human one at all, then they are equivalent and it doesn’t matter.
That said, current LLM designs have no ability to do that, and so far every effort to improve them beyond where they are today has made them worse at it. So I don't think any tweaking or fiddling with the model will ever do anything toward what you're describing, except possibly produce a different, but equally cookie-cutter, way of responding that may look different from the old output but will be much like other new output. It will still be obvious and predictable shortly after we learn its new tells.
The reason they can't make it better anymore is that they're trying to do so by giving it ever more information to consume, in the misguided notion that once it has enough data it will be smarter overall. That isn't true, because it has no way to distinguish good data from garbage, and they've already read and consumed the whole Internet.
Now, when they try to consume more new data, a ton of it was actually already generated by an LLM, maybe even the same one, so it contains no new information, but still takes more CPU to read and process. That redundant data also reinforces what it thinks it knows, counting its own repetition of a piece of information as another corroboration that the data is accurate. It thinks conjecture might be fact because it saw a lot of "people" say the same thing. It could have been one crackpot talking nonsense that was then repeated as gospel on Reddit by 400 LLM bots. 401 people said the same thing; it MUST be true!
I think the point is rather that it's distinguishable for someone knowledgeable on the subject, but not for someone who isn't, thus making it harder to evolve from the latter into the former.
You will be able to tell slop from intelligence.
However, you won't be able to tell AI slop from human slop. We've had human slop around for ages, and it was already overwhelming, but nothing compared to the volume of LLM slop.
In fact, reading AI slop text reminds me a lot of human slop I’ve seen, whether it’s ‘high school’ style paper writing or clickbait word padding of an article.
Why do people try to contribute to projects they don't even work on? AI slop isn't helping at all.
CV padding and main character syndrome.
just deny PRs that are obvious bots and ban them from ever contributing.
even better if you’re running your own git instance and can outright ban IP ranges of known AI shitlords.
The bots, they don’t like when you do that.

If my own mother can't shame me, a glorified sex bot has a snowball's chance in hell of doing it.
Get off of Github and I bet you those drop to nearly zero. Using Github is a choice, with all of the AI slop it enables. They aren't getting rid of it any time soon. They want agents and people making shitty code PRs—that's money sent Microsoft's way in their minds.
Now that they see what the cost of using Github is maybe Godot will (re?)consider codeberg or a self-hosted forgejo instance that they control.
A lot of programmers with thigh-high striped socks should take one for the team and take back Godot and such. Seriously!
I don't contribute to open source projects (not talented enough at the moment; I can do basic stuff for myself sometimes), but I wonder if you could implement some kind of requirement to prove that your code works, to avoid this issue.
Like, you’re submitting a request that fixes X thing or adds Y feature, show us it doing it before we review it in full.
Tests, what you are asking for are automated tests.
Can that be done on github?
Yep, take a look into GitHub actions. Basically you can make it so that a specific set of tests are run every time a PR is opened against your code repo. In the background it just spins up a container and runs any commands you define in a YAML config file.
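As a rough sketch of what that looks like (the file name, trigger, and test command below are placeholders, not any specific project's real setup):

```yaml
# .github/workflows/pr-tests.yml — hypothetical example
name: PR tests
on:
  pull_request:          # run whenever a PR is opened or updated
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # check out the PR's code
      - name: Run test suite
        run: ./run_tests.sh         # placeholder for the project's own test command
```

GitHub spins up the runner, checks out the PR branch, and marks the check failed if the test command exits non-zero, so obviously broken submissions get flagged before a human ever looks at them.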
Better yet, don’t use GitHub!
The trouble is just volume and time. Even just reading through the description and "proof it works" would take a few minutes, and if you're getting tens of these a day it can easily eat up the time needed to find the ones worth reviewing. (And these volunteers are working in their free time after a normal work day, so wasting 15 or 30 minutes out of a volunteer's one or two hours is throwing away a lot of time.)
Plus, when volunteering is annoying, the volunteers stop showing up, which kills projects.
gzdoom just simply banned ai code, and made a new fork that tries to stay clean. Why can't they do the same?
Is all AI code tagged “hey, Claude made this puddle of piss code”?
This is a real “just catch all the criminals” type comment.
Much in the same way that laws don’t prevent crime, a project banning AI contributions doesn’t stop people from trying to sneak in LLM slop, it instead lets the project ban them without argument.
These people are flooding free projects with shite code: they lack that level of self-awareness.
But you believe a formal declaration that they don’t want AI crap code will stop complaints from the degenerates who then try to sneak it in? Or the people who complain that they’re “needlessly denying good code”? People will always complain and argue.
I'm not awake enough (nor qualified enough) to get into "laws" and what they're "actually for", but suffice it to say I don't think the analogy applies to a curated resource. Sure, it's free, but it does have an owner, and you can't stop the owner from doing what they want with it, including unilaterally canning random contributions. You just fork it.
gzdoom just simply banned ai code
You got that wrong. Graf Zahl added AI code and all the other contributors left to fork it to create UZDoom.
Maybe we need a way to generate checksums during version creation (like file version history) and during test runs that would be submitted alongside the code, as a sort of proof of work that AI couldn't easily recreate. It would make code creation harder for actual developers as well, but it might reduce people trying to quickly contribute code the LLMs shit out.
A lightweight plugin that runs in your IDE, maybe. So anytime you're writing code and testing it, the plugin modifies a validation file that records what you were doing and the results of your tests and debugging. You could then write an algorithm that gives a confidence score to the validation file and either triggers manual review or submits obviously bespoke code.
This could, in theory, also be used by universities to validate submitted papers to weed out AI essays.
What exactly would you checksum? All intermediate states that weren’t committed, and all test run parameters and outputs? If so, how would you use that to detect an LLM? The current agentic LLM tools also do several edits and run tests for the thing they’re writing, then edit more until their tests work.
So the presence of test runs and intermediate states isn’t really indicative of a human writing code and I’m skeptical that distinguishing between steps a human would do and steps an LLM would do is any easier or quicker than distinguishing based on the end result.
You could timestamp changes and progress to a file, record the results of tests and output, and give an approximate algorithmic confidence rating for how bespoke the process of writing that code was. Even agentic AI rapidly spits out code like a machine would, whereas humans take time and think about things as they go. They make typos and go back and correct them. Code tests fail, and debugging looks different between an agent and a human. We need to fingerprint how agents write code and compare what agent-written code looks like under this sort of validation versus what the same process looks like for humans.
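A toy version of that confidence scoring might look like the sketch below. Everything here is hypothetical: the function name, the 30-second threshold, and the whole heuristic are illustrative only, and trivially gameable by an agent that fakes delays.

```python
from statistics import mean, stdev

def humanness_score(edit_timestamps):
    """Toy heuristic: humans edit in irregular bursts with pauses to think,
    while an agent tends to emit changes at a fast, regular cadence.
    Takes a list of edit timestamps (in seconds) and returns a score in
    [0, 1], higher meaning more human-like. Purely illustrative."""
    if len(edit_timestamps) < 3:
        return 0.0  # not enough history to judge
    gaps = [b - a for a, b in zip(edit_timestamps, edit_timestamps[1:])]
    avg, spread = mean(gaps), stdev(gaps)
    # Long average pauses and high variance between edits both push the score up.
    pause_factor = min(avg / 30.0, 1.0)  # a 30s+ average gap maxes this out
    burst_factor = min(spread / avg, 1.0) if avg > 0 else 0.0
    return round(0.5 * pause_factor + 0.5 * burst_factor, 2)

# A machine-like cadence (one edit per second) scores low...
print(humanness_score([0, 1, 2, 3, 4, 5]))        # → 0.02
# ...while irregular, pause-heavy editing scores high.
print(humanness_score([0, 40, 45, 160, 170, 400]))  # → 1.0
```

The obvious weakness, as the replies point out, is that this amounts to surveillance of the contributor's editing session, and anything an IDE plugin records, an agent can forge.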
This basically amounts to a key/interaction logger in the IDE. I suspect it would stop many people from contributing to projects that used something like that; I, at least, wouldn't install such a plug-in.
It would be a keylogger within the IDE. How else do you prove you were the one doing the work? Otherwise, AI slop. I guess pick your poison.
Before hitting submit I’d worry I’ve made a silly mistake which would make me look a fool and waste their time.
Do they think the AI-written code Just Works™? Do they feel so detached from that code that they don't feel embarrassment when it's shit? It's like calling yourself a fiction writer and putting "written by (your name)" on the cover when you didn't write it, and it's nonsense.
I’d worry I’ve made a silly mistake which would make me look a fool and waste their time.
AI bros have zero self awareness and shame, which is why I continue to encourage that the best tool for fighting against it is making it socially shameful.
Somebody comes along saying "Oh look, the image is just genera…" and you cut them off with "looks like absolute garbage, right? Yeah, I know, AI always sucks, imagine seriously enjoying that, hahah. So anyway, what were you saying?"
Not good enough, you need to poison the data
I don’t want my data poisoned, I’d rather just poison the AI bros.
Yeah but then their Facebook accounts will keep producing slop even after they’re gone.
Tempting, but even that is not good enough as another reply pointed out
the data eventually poisons itself when it can do nothing but refer to its own output from however many generations of hallucinated data
LLM code generation is the ultimate Dunning-Kruger enhancer. They think they're 10x ninja wizards because they can generate unmaintainable demos.
They’re not going to maintain it - they’ll just throw it back to the LLM and say “enhance”.
Sigh. Now in CSI, when they enhance a grainy image, the AI will make up a fake face and send them searching for someone who doesn't exist, or it'll use the face of someone in the training set and they'll go after the wrong person.
Either way, I have a feeling there'll be some ENHANCE-failure episode due to AI.
From what I have seen Anthropic, OpenAI, etc. seem to be running bots that are going around and submitting updates to open source repos with little to no human input.
Doesn’t someone have to review those submissions before they’re published?
You guys, it's almost as if AI companies are trying to kill FOSS projects intentionally by burying them in garbage code. Sounds like they took something from Steve Bannon's playbook by flooding the zone with slop.
at least with foss the horseshit is being done in public.
Can Cloudflare help prevent this?
Do they think the AI written code Just Works
yes.
literally yes.
It’s insane
That’s how you know who never even tried to run the code.
Reminds me of one job I had, where my boss asked shortly after I started there whether their entry test was too hard. They had gotten several submissions from candidates that wouldn't even run.
I envision these types of people are now vibe coding.
Super lazy job applications… can’t even bother to put two minutes into vibing.
that’s the annoying part.
LLM code can range from "doesn't even compile" to "it actually works as requested".
The problem is, depending on what exactly was asked, the model will move mountains to actually get it running as requested, and will absolutely trash anything in its way, from "let's abstract this with 5 new layers" to "I'm going to refactor this whole class of objects to get this simple method in there".
The requested feature might actually work. 100%.
It’s just very possible that it either broke other stuff, or made the codebase less maintainable.
That's why it's important that people actually know the codebase and know what they/the model are doing. Just going "works for me, glhf" is not a good way to keep a maintainable codebase.
LOL. So true.
On top of that, an LLM can also take you on a wild goose chase. When it gives you trash, you tell it to find a way to fix it. It introduces new layers of complication and installs new libraries without ever really approaching a solution. It's up to the programmer to notice a wild goose chase like that and pull the plug early on. That's a fun little mini-game that comes with vibe coding.
Nowadays people use OpenClaw agents which don’t really involve human input beyond the initial “fix this bug” prompt. They independently write the code, submit the PR, argue in the comments, and might even write a hit piece on you for refusing to merge their code.
I would think they'll have to combat AI code with an AI-code-recognizer tool that auto-flags a PR or issue as AI, so they can simply run through and auto-close them. If the contributor doesn't come back, explain the code, and show test results proving it works, the PR gets auto-closed after a week or so.
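As a sketch of what such auto-flagging might look like: this is purely a toy keyword heuristic (reliable AI detection doesn't exist), and the tell phrases, function names, and threshold are all made up for illustration.

```python
# Toy PR triage sketch: score a PR description for common LLM-slop tells
# and queue high scorers for "explain yourself or get auto-closed".
# This is a keyword heuristic, not a real AI detector.

SLOP_TELLS = (
    "as an ai", "i hope this helps", "comprehensive solution",
    "delve", "leverage", "seamlessly", "robust and scalable",
)

def slop_score(pr_description: str) -> int:
    """Count how many known tell phrases appear (case-insensitive)."""
    text = pr_description.lower()
    return sum(1 for phrase in SLOP_TELLS if phrase in text)

def triage(pr_description: str, threshold: int = 2) -> str:
    """Return the action a triage bot might take on this PR."""
    if slop_score(pr_description) >= threshold:
        return "flag: request explanation + test results, auto-close in 7 days"
    return "pass: route to normal review queue"

# Slop-flavored boilerplate trips the flag; a plain bugfix note passes.
print(triage("This comprehensive solution leverages the engine seamlessly."))
print(triage("Fixes #123: off-by-one in the tilemap culling rect."))
```

A keyword list like this is easy to evade, of course, which is why the "respond within a week or it closes" part does most of the real filtering: drive-by bots rarely come back to argue.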
Damn, Godot too? I know Curl had to discontinue their bug bounties over the absolutely tidal volume of AI slop reports… Open source wasn’t ever perfect, but whatever cracks in there were are being blown a mile wide by these goddamn slop factories.
Unfortunately it's a general theme in open source. I've lost almost all motivation for programming in my free time because of all these AI-slop PRs. It's kinda sad how that art (among others) is being flooded with slop…
Open source wasn’t ever perfect, but whatever cracks in there were are being blown a mile wide by these goddamn slop factories.
This is the perpetual issue, not just with AI: Any system will have flaws and weaknesses, but often, they can generally be papered over with some good will and patience…
Until selfish, immoral assholes come and ruin it for everyone.
From teenagers using the playground to smoke and bury their cigs in the sand, so now parents with small children can’t use it any more, over companies exploiting legal loopholes to AI slop drowning volunteers in obnoxious bullshit: Most individual people might be decent, but a single turd is all it takes to ruin the punch bowl.
Then get ready for people just making slop libraries, not because they're dissatisfied with existing solutions (as I was when I made iota, a direct media layer similar to SDL but with better access to some low-level functionality, OOP-ish, in a memory-safe lang), but just because they can.
I got a link to a popular rectpacking algorithm pretty quickly after asking in a Discord server. Nowadays I’d be asked to “vibecode it”.
Can confirm the last part. I am in Uni and if anyone ever asks questions on the class groupchats then first 5-6 answers will be “ask chatgpt.”
I think moving off of GitHub to their own forge would be a good first step to reduce this spam.
To Codeberg we go!
Codeberg is cool but I would prefer not having all FOSS project centralised on another platform. In my opinion projects of the size of Godot should consider using their own infrastructure.
Let’s be realistic. Not everyone is going to move to Codeberg. Godot moving to Codeberg would be decentralizing.
Hosting a public code repo can be expensive. However, they could run a private repo using Forgejo and mirror to Codeberg, creating redundancy and a public copy of the code that doesn't eat so much monthly revenue, if they even have revenue.
Back to sourceforge it is then.
“Real men just upload their important stuff on ftp, and let the rest of the world mirror it.” - Linus Torvalds.
Don’t underestimate legitimate contributions from people who only do it because they already have an account.
It’s discussed in the Bluesky thread, but the CI costs are too high on Gitlab and Codeberg for Godot’s workflow.
That’s a shame. Did they take the wasted developer time dealing with slop into account in that discussion?
ot, but any other libre 1s u rec.?
Did you intend to encrypt your comment or was it an accident?
B cuz u talk liek ths
fr no cap
This is big tech trying to kill FOSS.
Which is funny because most of them rely on it
This was honestly my biggest fear for a lot of FOSS applications.
Not necessarily in a malicious way (although there’s certainly that happening as well). I think there’s a lot of users who want to contribute, but don’t know how to code, and suddenly think…hey…this is great! I can help out now!
Well meaning slop is still slop.
Look. I have no problems if you want to use AI to make shit code for your own bullshit. Have at it.
Don’t submit that shit to open Source projects.
You want to use it? Use it for your own shit. The rest of us didn't ask for this. I'm really hoping the AI bubble bursts in a big way very soon. Microsoft is going to need a bailout, OpenAI is fucking doomed, and X/Twitter/Grok could go either way, honestly.
Who in their right fucking mind looks at the costs of running an AI datacenter, and the fact that it's more economically feasible to buy a fucking nuclear power plant to run it all, and then says "yeah, this is reasonable"?
The C-whatever-O’s are all taking crazy pills.