• 0 Posts
  • 3 Comments
Joined 2 years ago
Cake day: June 11th, 2023

  • Hard disagree.

    The aggregation actually simplifies much of that detection.

    “Instance X dumps thousands of −1s (or odd patterns of +1s) on comment/story Y” is a much easier pattern to look for and run anomaly detection on, especially since bad actors will likely operate at the instance level (either creating many accounts on a low-barrier instance or simply standing up bad instances).

    Yes, totally open votes help with the edge case of detecting “account X always +1’s account Y” across instances, but we’d be paying a very heavy privacy cost to support an expensive-to-detect edge case that’s trivial to defeat (just use multiple puppets). And individual instance operators can still do this analysis.
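    The cross-instance correlation mentioned above (“account X always +1’s account Y”) can be sketched as a simple frequency check over a vote log. Everything here is hypothetical for illustration: the shape of the log, the voter/author names, and the 0.6 concentration threshold are all assumptions, not any real federation API.

    ```python
    from collections import Counter

    # Hypothetical vote log an instance operator might hold:
    # (voter, author_of_target, vote_value)
    upvotes = [
        ("alice", "carol", 1),
        ("alice", "carol", 1),
        ("alice", "dave", 1),
        ("bob", "carol", 1),
    ]

    # Count upvotes per (voter, author) pair and total upvotes per voter.
    by_voter = Counter((voter, author) for voter, author, v in upvotes if v > 0)
    voter_totals = Counter(voter for voter, author, v in upvotes if v > 0)

    # Flag voters who send most of their upvotes to a single author
    # (threshold and minimum-volume cutoff are arbitrary choices).
    suspicious = {
        (voter, author): n / voter_totals[voter]
        for (voter, author), n in by_voter.items()
        if voter_totals[voter] >= 2 and n / voter_totals[voter] > 0.6
    }
    ```

    Note this only works on votes the operator can see individually, which is exactly the point being debated: aggregation hides the per-voter log from remote instances while each home instance keeps it.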

    If the number of puppets is small, the correct fix is to rethink the scoring (a tiny number of votes shouldn’t be allowed to distort things so much that we need to go to these extremes)
    If the number of puppets is not tiny, then it’ll be easier to spot in the instance aggregations (user X always gets +N votes from instance Y)
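    The instance-level detection argued for above can be sketched as an outlier check on vote volume per (instance, target) pair. This is a minimal illustration under assumptions: the record shape, the instance/post names, and the z-score > 2 cutoff are all hypothetical, and a real system would need a more robust statistic than a plain z-score.

    ```python
    from collections import defaultdict
    from statistics import mean, stdev

    # Hypothetical aggregated vote records: (source_instance, target, value).
    # 40 instances each cast one normal vote; one instance dumps 50 downvotes.
    votes = [(f"inst-{i}.example", f"post-{i % 5}", 1) for i in range(40)]
    votes += [("bad.example", "post-0", -1)] * 50

    # Total vote volume per (instance, target) pair.
    totals = defaultdict(int)
    for instance, post, value in votes:
        totals[(instance, post)] += abs(value)

    # Flag pairs whose volume is a large outlier versus the overall mean.
    counts = list(totals.values())
    mu, sigma = mean(counts), stdev(counts)
    flagged = [pair for pair, n in totals.items() if sigma and (n - mu) / sigma > 2]
    ```

    Because the signal lives in the aggregate totals, this check works even when individual voters are hidden, which is the core of the argument.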



  • On some level I think you’re both right - this is roughly the problem that happened with email and spam.

    At one point it was trivial to run your own mail server. That got harder and harder as spam got worse: providers started blackholing servers they didn’t know and trust, which drove ever more centralization and a need for server-level monitoring/moderation, because a few bad actors could get a whole server blocked.

    We can know that bad actors will exist, both at the user and at the server level. We can also know that this has a history of driving centralization. All of this should be kept in mind as the community discusses and designs moderation tools.

    Ideally, I hope we can settle on systems and norms that allow small leaf nodes to exist and interconnect while also keeping out bad actors.