Vote manipulation is getting more common. Some recent examples:
While the accounts were banned, the votes they had already cast remained in place.
Should admins have the ability to discard votes, and if so, which admins? Should community mods have that ability? Can you think of any ways that tools like this could be abused?


In my experience it almost always is, but I said "usually" in case someone came up with a very unusual situation.
Are you saying that because they would get more upvotes, they could offset the downvotes they receive? Potentially, but this is where the second metric comes in (giving a lot of downvotes), and as we said, the two are almost always present at the same time.
Karma farming is an issue when users can see karma as an absolute value. That's not possible on Piefed, which only shows two percentages: attitude (based on downvotes given, visible to everyone: https://piefed.zip/u/Blaze ) and reputation (based on downvotes received, visible only to admins).
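A minimal sketch of how the two signals described above might combine into a warning flag. All names, formulas, and thresholds here are hypothetical illustrations, not Piefed's actual implementation:

```python
# Hypothetical sketch of the two signals discussed above:
# "attitude"   = share of a user's own votes that are downvotes (public),
# "reputation" = share of votes a user *receives* that are downvotes (admin-only).
# The threshold values are illustrative, not Piefed's real ones.
from dataclasses import dataclass


@dataclass
class VoteStats:
    upvotes_given: int
    downvotes_given: int
    upvotes_received: int
    downvotes_received: int


def attitude(s: VoteStats) -> float:
    """Percentage of this user's own votes that are downvotes."""
    total = s.upvotes_given + s.downvotes_given
    return 100.0 * s.downvotes_given / total if total else 0.0


def reputation(s: VoteStats) -> float:
    """Percentage of votes this user receives that are downvotes."""
    total = s.upvotes_received + s.downvotes_received
    return 100.0 * s.downvotes_received / total if total else 0.0


def warn(s: VoteStats, attitude_limit: float = 60.0,
         reputation_limit: float = 60.0) -> bool:
    """Flag only when BOTH signals are elevated, since (as noted above)
    the two are almost always present together on bad-faith accounts."""
    return attitude(s) > attitude_limit and reputation(s) > reputation_limit


troll = VoteStats(upvotes_given=10, downvotes_given=90,
                  upvotes_received=5, downvotes_received=95)
print(warn(troll))  # True: downvotes heavily dominate in both directions
```

Requiring both signals at once is what keeps a merely unpopular user (high reputation percentage, normal attitude) from being flagged.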
Right, though it’s a mitigating factor. I guess there’s something I don’t know about Piefed: Lemmy comments all start with a default upvote from their author, which the author can revoke. Does Piefed work the same way? My thought only applies if that’s the case.
The upvote you give yourself is there, but IIRC it doesn’t count for your score.
While we are talking, this is the kind of user who gets the two warnings: https://piefed.zip/u/grimreaper@sopuli.xyz
That’s an interesting example of a user this is designed for/around.
The general system of up/downvotes seems to be doing its job as intended: their views appear routinely unpopular and there’s a seemingly pretty strong community consensus around that.
It looks like their threads have comments that solidly and clearly refute the garbage manosphere stuff. For some people it’s the opportunity to express a refutation of it publicly and directly. The public viewer gets to read those responses too.
So with that example: what do the flags do that the content of their posts don’t already communicate?
It warns other users that this commenter may be a bad faith user / troll.
Usually when I encounter a troll, I check their profile to see if they are indeed a troll. The warning saves some time on that, and is accurate the vast majority of the time.
I guess I approach it inversely. I encounter what looks like a troll post and I’ll only check profiles when either I am interacting with them, or there’s such deep downvoting already I’m just doing a morbid dive into someone’s history.
Most of the time though the user just has a deeply downvoted argument but otherwise normal and/or low engagement posts, so they wouldn’t be flagged by this.
So I understand that it can save some time with some niche cases.
But I can’t help but note that the system seems intentionally blind to targeted harassment, which can be a source, if not cause, of bad faith accounts. (And likely those need different approaches since those are also niche cases themselves.)
And maybe it’s all just because of my instance’s Local feed, so that’s what I see as a prominent problem on Lemmy.
If you mean using puppet accounts to massively downvote someone, that’s also tracked, but with another tool.
Not necessarily puppet accounts, just brigading in general.
It’s the rationale many instances used to defederate hexbear. (Even though iirc hexbear disables downvotes, so they’re defederated for users mass posting, usually that hogshit image, instead of mass voting.) It wasn’t puppets or bot accounts at any rate.
But then there’s repost communities where users share comments (especially in places they or their audience is banned from) or DMs for a group response.
Not to mention the whole ‘block and downvote all .ml on sight’ mentality. But hopefully that might be something this tool could catch.