AI has made it vastly easier for malicious hackers to identify anonymous social media accounts, a new study has warned.
In most test scenarios, large language models (LLMs) – the technology behind platforms such as ChatGPT – successfully matched anonymous online users with their actual identities on other platforms, based on the information they posted.
The AI researchers Simon Lermen and Daniel Paleka said LLMs make it cost-effective to perform sophisticated privacy attacks, forcing a “fundamental reassessment of what can be considered private online”.
In their experiment, the researchers fed anonymous accounts into an AI and got it to scrape all the information it could. They gave a hypothetical example of a user talking about struggling at school and walking their dog, Biscuit, through Dolores Park.
In that hypothetical case, the AI then searched elsewhere for those details and matched @anon_user42 to the known identity with a high degree of confidence.
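The linking step the study describes can be sketched roughly like this. This is a toy illustration, not the researchers’ actual system: every pattern, username, and profile below is invented.

```python
# Toy sketch of detail-based account linking: extract distinctive attributes
# from an anonymous account's posts, then score candidate public profiles by
# how many of those attributes they share. All data here is made up.
import re

DETAIL_PATTERNS = {
    "pet_name": r"\bmy dog (\w+)\b",
    "place": r"\b([A-Z]\w+ [Pp]ark)\b",
}

def extract_details(posts):
    """Pull matchable attributes out of a list of post strings."""
    details = {}
    for post in posts:
        for key, pattern in DETAIL_PATTERNS.items():
            m = re.search(pattern, post)
            if m:
                details[key] = m.group(1).lower()
    return details

def link_score(anon_details, candidate_details):
    """Fraction of the anonymous account's details the candidate shares."""
    if not anon_details:
        return 0.0
    shared = sum(1 for k, v in anon_details.items()
                 if candidate_details.get(k) == v)
    return shared / len(anon_details)

anon_posts = ["Walked my dog Biscuit through Dolores Park again today."]
candidates = {
    "jane_doe": {"pet_name": "biscuit", "place": "dolores park"},
    "john_roe": {"pet_name": "rex", "place": "hyde park"},
}
anon = extract_details(anon_posts)
best = max(candidates, key=lambda c: link_score(anon, candidates[c]))
```

An LLM replaces the brittle regexes with free-form extraction and search, which is what makes the attack cheap at scale.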
Study link - https://arxiv.org/abs/2602.16800
Identify Deez nuts
I’ve said it before and I will reiterate it a thousand times if need be. AI-based analysis, particularly LLM-based analysis, will strictly lead to bogus results. The utility for this is intimidation and culling (!) of the have-nots.
Imagine entering a country and being flagged for antidemocratic rhetoric because the computer said so. It doesn’t matter if you said it or not; the machine has a claimed 0.01% error rate. It doesn’t even matter whether that error rate is correct, because how the machine got its results is a process that is impossible to pry open. Anyone who manages such an algorithm can plant something, and it’s very difficult to know who did it, because the kind of person able to do so will also take the precaution of wiping their tracks.
That said, you can mark my words. You’ll be seeing a lot more of this type of brutalisation in the future. The target is anyone who is among the patsies of state spun narratives, trans people, immigrants, specific nationalities, yadda-yadda.
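The base-rate problem in that claimed error rate can be made concrete. The numbers here are invented for illustration, not taken from any real system:

```python
# Back-of-the-envelope base-rate check on a claimed "0.01% error rate".
# Both figures below are hypothetical.
false_positive_rate = 0.0001        # the claimed 0.01%
travellers_per_year = 100_000_000   # hypothetical screening volume

expected_false_flags = false_positive_rate * travellers_per_year
# Even a "tiny" error rate flags 10,000 innocent people at this scale.
```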
LLMs don’t work like this at all. They’re not analysis machines. They’re probability-based, glorified autocorrectors.
My girlfriend used to spend weeks stalking through years old comments to unearth info about a person. Now she’s being replaced by AI.
This is why we have to stop AI.
This is only possible if you have other social media where you’re not anonymous, which could just as easily have been linked to your anonymous accounts without the use of AI. If your entire online identity is anonymous, there is nothing to link your real identity to.
Anyone who uses their real name on a website has never been truly anonymous. Ever.
Shit used to be common knowledge that you do not use your real name and you do not share photos online. Now that’s what 90% of people exclusively use the internet to do.
I’m sorry, this paper is redundant and dumb.
We’ve had Stylometry tools like R-Stylo for over a decade now, as well as defensive tools against it.
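The core idea behind stylometry tools like the ones mentioned above can be sketched in a few lines: represent each author as a character n-gram frequency profile and compare profiles by cosine similarity. The texts below are invented for illustration.

```python
# Minimal stylometry sketch: authors are compared by the relative
# frequencies of the character n-grams in their writing.
from collections import Counter
import math

def ngram_profile(text, n=3):
    """Relative frequencies of character n-grams in a text."""
    text = text.lower()
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.items()}

def cosine(p, q):
    """Cosine similarity between two frequency profiles."""
    dot = sum(p[g] * q.get(g, 0.0) for g in p)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

known = ngram_profile("I reckon the whole thing is overblown, frankly.")
anon = ngram_profile("Frankly, I reckon this panic is overblown too.")
other = ngram_profile("BUY CHEAP WATCHES NOW limited offer!!!")
```

Here `cosine(known, anon)` comes out well above `cosine(known, other)`, because the first two texts share many trigrams; defensive tools work by deliberately disturbing exactly these frequency profiles.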
Ah, The Guardian accidentally wrote “hackers” instead of “US Oligarchy” or “Corporations”. Better to hide that fact by deferring the actor to “those pesky hackers” - they are always anonymous.
The Guardian totally removed information about the Oligarchs - as usual…
Just use a VPN bro! Lmao