Wherever I wander I wonder whether I’ll ever find a place to call home…

  • 0 Posts
  • 27 Comments
Joined 27 days ago
Cake day: December 31st, 2025


  • It sounds like you’re contradicting yourself now. You’re right, Signal is more secure because its source code is open-source and auditable. So what’s the issue? It seems you’ve been arguing otherwise, and you’re only now coming around to it without admitting that you were wrong in the first place.

    The client-side app is also open-source and auditable, and you can monitor outgoing traffic on your device to see whether the Signal app is sending data it shouldn’t. It sounds like people have verified that it doesn’t, but if you don’t want to take their word for it, why not see for yourself?
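    For instance, here’s a minimal sketch of checking where a running app actually connects, assuming Linux and the third-party psutil package; the process name “signal-desktop” is my assumption for the desktop client, and you may need elevated privileges to inspect another user’s processes:

    ```python
    # Minimal sketch: list outbound connections for a running app.
    import psutil

    TARGET = "signal-desktop"  # assumed process name for the desktop client

    for proc in psutil.process_iter(["name", "pid"]):
        name = (proc.info["name"] or "").lower()
        if TARGET in name:
            # psutil renamed connections() to net_connections() in v6; handle both.
            getter = getattr(proc, "net_connections", None) or proc.connections
            for conn in getter(kind="inet"):
                if conn.raddr:  # only connections with a remote endpoint
                    print(f"{name} (pid {proc.info['pid']}) -> "
                          f"{conn.raddr.ip}:{conn.raddr.port} [{conn.status}]")
    ```

    That only shows you where traffic goes, not what’s in it, but it’s a start if you don’t want to take anyone’s word for anything.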


  • You’re talking about E2E encryption as if it prevents side-channel attacks

    That’s literally what E2E encryption does. In order to attack it from outside you would have to break the encryption itself, and modern encryption is so robust that it would require quantum computing to break, and that capability hasn’t been developed yet.

    The only reason the other commenter’s words sound like spam to you is that you don’t understand them, which you plainly reveal when you say "(as long as there isn’t a backdoor in the published [audited] code)."



  • AI bot swarms threaten to undermine democracy

    When AI Can Fake Majorities, Democracy Slips Away

    Full article

    A joint essay with Daniel Thilo Schroeder & Jonas R. Kunst, based on a new paper on swarms with 22 authors (including myself) that just appeared in Science. (A preprint version is here, and you can see WIRED’s coverage here.)

    Automated bots that purvey disinformation have been a problem since the early days of social media, and bad actors have been quick to jump on LLMs as a way of automating the generation of disinformation. But as we outline in the new article in Science, we foresee something worse: swarms of AI bots acting in concert.

    The unique danger of a swarm is that it acts less like a megaphone and more like a coordinated social organism. Earlier botnets were simple-minded, mostly just copying and pasting messages at scale—and in well-studied cases (including Russia’s 2016 IRA effort on Twitter), their direct persuasive effects were hard to detect. Today’s swarms, now emerging, can coordinate fleets of synthetic personas—sometimes with persistent identities—and move in ways that are hard to distinguish from real communities. This is not hypothetical: in July 2024, the U.S. Department of Justice said it disrupted a Russia-linked, AI-enhanced bot farm tied to 968 X accounts impersonating Americans. And bots already make up a measurable slice of public conversation: a 2025 peer-reviewed analysis of major events estimated roughly one in five accounts/posts in those conversations were automated. Swarms don’t just broadcast propaganda; they can infiltrate communities by mimicking local slang and tone, build credibility over time, and then adapt in real time to audience reactions—testing variations at machine speed to discover what persuades.

    Why is this dangerous for democracy? No democracy can guarantee perfect truth, but democratic deliberation depends on something more fragile: the independence of voices. The “wisdom of crowds” works only if the crowd is made of distinct individuals. When one operator can speak through thousands of masks, that independence collapses. We face the rise of synthetic consensus: swarms seeding narratives across disparate niches and amplifying them to create the illusion of grassroots agreement. Venture capital is already helping industrialize astroturfing: Doublespeed, backed by Andreessen Horowitz, advertises a way to “orchestrate actions on thousands of social accounts” and to mimic “natural user interaction” on physical devices so the activity appears human. Concrete signs of industrialization are already emerging: the Vanderbilt Institute of National Security released a cache of documents describing “GoLaxy” as an AI-driven influence machine built around data harvesting, profiling, and AI personas for large-scale operations.

    Because humans update their views partly based on social evidence—looking to peers to see what is “normal”—fabricated swarms can make fringe views look like majority opinions. If swarms flood the web with duplicative, crawler-targeted content, they can execute “LLM grooming,” poisoning the training data that future AI models (and citizens) rely on. Even so-called “thinking” AI models are vulnerable to this kind of poisoning.

    We cannot ban our way out of the threat of generative-AI-fueled swarms of misinformation bots, but we can change the economics of manipulation. We need five concrete shifts.

    First, social media platforms must move away from the “whack-a-mole” approach they currently use. Right now, companies rely on episodic takedowns—waiting until a disinformation campaign has already gone viral and done its damage before purging thousands of accounts in a single wave. This is too slow. Instead, we need continuous monitoring that looks for statistically unlikely coordination. Because AI can now generate unique text for every single post, looking for copy-pasted content no longer works. We must look at network behavior instead: a thousand users might be tweeting different things, but if they exhibit statistically improbable correlations in their semantic trajectories, or propagate narratives with a synchronized efficiency that defies organic human diffusion, that pattern itself gives the swarm away.
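    As a toy illustration of what “statistically unlikely coordination” could look like in code (using numpy; the embedding function and thresholds below are placeholders I made up, not anything specified in the paper), one could score account pairs by how similarly their posts move through topic space and how tightly their posting times line up:

    ```python
    # Toy coordination scoring: flag account pairs whose content AND timing
    # are suspiciously aligned. embed() and the thresholds are hypothetical.
    from itertools import combinations
    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Placeholder text embedding (hashed bag-of-words, unit-normalized)."""
        vec = np.zeros(64)
        for token in text.lower().split():
            vec[hash(token) % 64] += 1.0
        norm = np.linalg.norm(vec)
        return vec / norm if norm else vec

    def trajectory_similarity(posts_a, posts_b):
        """Mean cosine similarity between two accounts' post sequences."""
        n = min(len(posts_a), len(posts_b))
        return float(np.mean([embed(a) @ embed(b)
                              for a, b in zip(posts_a[:n], posts_b[:n])]))

    def timing_synchrony(times_a, times_b, window=60.0):
        """Fraction of A's posts that have a post from B within `window` seconds."""
        return float(np.mean([any(abs(t - u) < window for u in times_b)
                              for t in times_a]))

    def flag_pairs(accounts, sim_thresh=0.8, sync_thresh=0.7):
        """Return account pairs whose content and timing are both aligned."""
        flagged = []
        for (id_a, a), (id_b, b) in combinations(accounts.items(), 2):
            if (trajectory_similarity(a["posts"], b["posts"]) > sim_thresh
                    and timing_synchrony(a["times"], b["times"]) > sync_thresh):
                flagged.append((id_a, id_b))
        return flagged
    ```

    Real detection would need calibrated baselines for what organic diffusion looks like, but the point is that the signal lives in joint behavior across accounts, not in any single post.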

    Second, we need to stop waiting for attackers to invent new tactics before we build defenses. A defense that only reacts to yesterday’s tricks is destined to fail. We should instead proactively stress-test our defenses using agent-based simulations. Think of this like a digital fire drill or a vaccine trial: researchers can build a “synthetic” social network populated by AI agents, and then release their own test-swarms into that isolated environment. By watching how these test-bots try to manipulate the system, we can see which safeguards crumble and which hold up, allowing us to patch vulnerabilities before bad actors act on them in the real world.
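    A minimal sketch of what such a simulation could look like (a deliberately crude opinion-dynamics model; every number here is an illustrative assumption, not a value from the paper):

    ```python
    # Toy "digital fire drill": measure how much a coordinated swarm shifts
    # apparent consensus in a synthetic population. All parameters are made up.
    import random

    def run_simulation(n_humans=1000, n_bots=0, rounds=50, sample_size=5, seed=0):
        """Return the mean human opinion after `rounds` of social updating."""
        rng = random.Random(seed)
        humans = [rng.random() for _ in range(n_humans)]  # opinions in [0, 1]
        bots = [1.0] * n_bots                             # swarm pushes opinion 1.0
        for _ in range(rounds):
            population = humans + bots
            for i in range(n_humans):
                # Social proof: nudge toward the average of a random sample of peers.
                peers = rng.sample(population, sample_size)
                humans[i] += 0.1 * (sum(peers) / sample_size - humans[i])
        return sum(humans) / n_humans

    baseline = run_simulation(n_bots=0)
    attacked = run_simulation(n_bots=100)  # 100 coordinated bots among 1000 humans
    print(f"mean opinion without swarm: {baseline:.2f}, with swarm: {attacked:.2f}")
    ```

    Swapping in candidate defenses (rate limits, provenance checks, detection-and-removal) and re-running the same attack is the “vaccine trial” part.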

    Third, we must make it expensive to be a fake person. Policymakers need to incentivize cryptographic attestations and reputation standards to strengthen provenance. This doesn’t mean forcing every user to hand over their ID card to a tech giant—that would be dangerous for whistleblowers and dissidents living under authoritarian regimes. Instead, we need “verified-yet-anonymous” credentialing. Imagine a digital stamp that proves you are a unique human being without revealing which human you are. If we require this kind of “proof-of-human” for high-reach interactions, we make it mathematically difficult and financially ruinous for one operator to secretly run ten thousand accounts.
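    As a rough sketch of the platform-side half of such a scheme (the token format, issuer, and verify_signature() are hypothetical stand-ins; a real system would use blind signatures or zero-knowledge proofs so the credential cannot be linked back to a person):

    ```python
    # Sketch of "one verified human, one high-reach account" using an anonymous
    # credential with a unique nullifier. All crypto is stubbed out.
    used_nullifiers: set[str] = set()

    def verify_signature(token: dict) -> bool:
        """Placeholder for checking the issuer's signature over the credential."""
        return token.get("issuer_sig") == "valid"  # stand-in for real crypto

    def register_high_reach_account(token: dict) -> bool:
        """Admit an account only if its credential is valid and not yet used."""
        if not verify_signature(token):
            return False                    # not issued to a verified human
        nullifier = token["nullifier"]      # unlinkable to identity, unique per human
        if nullifier in used_nullifiers:
            return False                    # this human already claimed an account
        used_nullifiers.add(nullifier)
        return True
    ```

    The platform never learns who the person is, only that the same credential is not being reused to spin up thousands of “people.”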

    Fourth, we need mandated transparency through free data access for researchers. We cannot defend society if the battlefield is hidden behind proprietary walls. Currently, platforms restrict access to the data needed to detect these swarms, leaving independent experts blind. Legislation must guarantee vetted academic and civil society researchers free, privacy-preserving access to platform data. Without a guaranteed “right to study,” we are forced to trust the self-reporting of the very corporations that profit from the engagement these swarms generate.

    Finally, we need to end the era of plausible deniability with an AI Influence Observatory. Crucially, this cannot be a government-run “Ministry of Truth.” Instead, it must be a distributed ecosystem of independent academic groups and NGOs. Their mandate is not to police content or decide who is right, but strictly to detect when the “public” is actually a coordinated swarm. By standardizing how evidence of bot-like networking is collected and publishing verified reports, this independent watchdog network would prevent the paralysis of “we can’t prove anything,” establishing a shared, factual record of when our public discourse is being engineered.

    None of this guarantees safety. But it does change the economics of large-scale manipulation.

    The point is not that AI makes democracy impossible. The point is that when it costs pennies to coordinate a fake mob and moments to counterfeit a human identity, the public square is left wide open to attack. Democracies don’t need to appoint a central authority to decide what is “true.” Instead, they need to rebuild the conditions where authentic human participation is unmistakable. We need an environment where real voices stand out clearly from synthetic noise.

    Most importantly, we must ensure that secret, coordinated manipulation is economically punishing and operationally difficult. Right now, a bad actor can launch a massive bot swarm cheaply and safely. We need to flip those physics. The goal is to build a system where faking a consensus costs the attacker a fortune, where their network collapses like a house of cards the moment one bot is detected, and where it becomes technically impossible to grow a fake crowd large enough to fool the real one without getting caught.

    – Daniel Thilo Schroeder, Gary Marcus, Jonas R. Kunst

    Daniel Thilo Schroeder is a Research Scientist at SINTEF. His work combines large-scale data and simulation to study coordinated influence and AI-enabled manipulation (danielthiloschroeder.org).

    Gary Marcus, Professor Emeritus at NYU, is a cognitive scientist and AI researcher with a strong interest in combatting misinformation.

    Jonas R. Kunst is a professor of communication at BI Norwegian Business School, where he co-leads the Center for Democracy and Information Integrity.


  • Would something like Anubis or Iocaine prevent what you’re worried about?

    I haven’t used either, but from what I understand they’re both lightweight programs to prevent bot scraping. I think Anubis sits in front of your site and makes suspicious clients solve a small proof-of-work challenge before letting them through, while Iocaine takes a different approach: it generates a maze of garbage data to redirect bots into, in order to poison the AI itself and consume excessive resources on the end of the companies attempting to scrape the data.
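    Just to illustrate the maze idea (a generic sketch using Python’s standard library, not Iocaine’s actual implementation; the word list and port are made up), every page is procedurally generated nonsense that only links to more nonsense:

    ```python
    # Generic "garbage maze" sketch: deterministic nonsense pages that link to
    # more nonsense, so a crawler can wander forever and only collect junk.
    import random
    from http.server import BaseHTTPRequestHandler, HTTPServer

    WORDS = ["lorem", "ipsum", "quantum", "syrup", "ballast", "mirror", "ostrich"]

    class MazeHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            rng = random.Random(self.path)  # same path -> same garbage page
            text = " ".join(rng.choice(WORDS) for _ in range(200))
            links = "".join(f'<a href="/{rng.randrange(10**9)}">more</a> '
                            for _ in range(10))
            body = f"<html><body><p>{text}</p>{links}</body></html>".encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), MazeHandler).serve_forever()
    ```

    In practice you’d only route suspected crawlers into something like this and let normal visitors through.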

    Obviously what others have said about firewalls, VPNs, and antivirus still applies; maybe also a rootkit hunter and Linux Malware Detect? I’m still new to this though, so you probably know more about all that than I do. Sorry if I’m stating the obvious.

    Not sure if this is overkill but maybe Network Security Toolkit might have some helpful tools as well?


  • I don’t support her; I think she’s a self-interested opportunist doing everything for the wrong reasons, and deep down she’s still a bigot and a grifter. But lately she’s actually had some reasonable takes. I find it mildly annoying whenever she says something I can agree with, but any fracturing of the maga base is a good thing in my view.

    Of course, it’s mostly attributable to broken-clock syndrome, but strangely enough, that demonic look on her face has started to fade in recent photos. It’s almost as if breaking with maga is akin to having an exorcism. Weird…



  • You’re deliberately ignoring the fact that in vernacular terms, “carbon” is used to refer to “carbon dioxide” in contexts where the meaning is obvious.

    People using the term that way aren’t “morons” with “no clue about chemistry.” They’re just using a commonly-understood shorthand for saying “carbon dioxide.” They understand perfectly well that carbon dioxide has a molecular structure of CO2. You’re being willfully obtuse. [Edit: People also sometimes refer to table salt as “sodium,” so your example is really poorly thought-out.]

    Also, while there’s a commentary to be made about corporate greenwashing using phrases like “carbon neutral” and “net zero” to mask their true impacts on the environment, there certainly is such a thing as “carbon neutral,” and it absolutely is a scientifically useful term.

    Going for a walk is a carbon neutral activity, unless you happen to fart. Planting trees to compensate for burning fossil fuels is not carbon neutral, although it may meet the regulatory definition required of corporations to use the term. That doesn’t mean the concept itself is mythical.

    Planting trees or sowing a wildflower meadow is carbon-negative. While that can’t displace emissions from regularly burning fossil fuels, it might neutralize the carbon-positive processes of manufacturing a bicycle, meaning riding your bike to work might also be carbon neutral.

    A circular process that only emits as much CO2 as it removes from the atmosphere is, by definition, carbon-neutral. And rejecting novel processes solely because the concept didn’t exist previously is nothing short of dogmatism.


  • See, you have more experience in the matter than I do, hence the caveat that I’m not an expert. Thanks for sharing your experience.

    Then again, I’d consider 128GB of memory to be fairly serious hardware, but if that’s common among hobbyists then I stand corrected. I was operating on the assumption that 64GB of RAM is already a lot.

    All in all, 106 billion parameters on 128GB of memory with quantization doesn’t surprise me all that much. But again, I’m just going off of the vague notions I’ve gathered from reading about it.
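    For what it’s worth, the back-of-the-envelope arithmetic does work out, assuming roughly 4-bit quantization and several GB of overhead for the KV cache and runtime (both assumptions; the exact figures vary by model and backend):

    ```python
    # Rough check: does a 106B-parameter model fit in 128 GB at 4-bit?
    params = 106e9
    bytes_per_param = 0.5              # 4-bit quantization
    overhead_gb = 8                    # guess for KV cache, activations, runtime
    weights_gb = params * bytes_per_param / 1e9
    print(f"~{weights_gb:.0f} GB weights, ~{weights_gb + overhead_gb:.0f} GB total")
    # -> ~53 GB weights, ~61 GB total, comfortably under 128 GB
    ```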

    The focus of my original comment was more on the fact that self-hosting is an option; I wasn’t trying to be too precise with the specs. My bad if it came off that way.



  • If Microsoft cared about privacy then they wouldn’t have made Windows practically spyware. Even if they install AI locally in the OS, it’s still proprietary software that constantly sends data back to the mothership, consuming your electricity and RAM to do so. Linux has so many options that there’s really no reason not to switch.

    Small LLMs already exist for local self-hosting, and there are open-source options which won’t steal your data and turn you into a product.

    https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/

    Bear in mind that the number of parameters your system can handle is limited by how much memory is available; running a quantized version lets you fit more parameters in the same amount of memory.

    Unless you have some really serious hardware, 24 billion parameters is probably the maximum that would be practical for self-hosting on a reasonable hobbyist set-up. But I’m no expert, so do some research and calculate for yourself what your system can handle.
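    As a rough rule of thumb (the 20% overhead figure is my own guess and varies a lot with context length and backend):

    ```python
    # Rough rule of thumb for how many parameters fit in a given amount of memory.
    def max_params_billion(memory_gb: float, bits_per_param: int,
                           overhead: float = 0.2) -> float:
        usable_bytes = memory_gb * 1e9 * (1 - overhead)
        return usable_bytes / (bits_per_param / 8) / 1e9

    for bits in (16, 8, 4):
        print(f"{bits}-bit on 32 GB: ~{max_params_billion(32, bits):.0f}B params")
    # 16-bit: ~13B, 8-bit: ~26B, 4-bit: ~51B -- which is roughly why ~24B feels
    # like a comfortable ceiling for a typical hobbyist box without heavy quantization.
    ```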