Admin of lemmy.blahaj.zone

I can also be found on the microblog fediverse at @ada@blahaj.zone or on matrix at @ada:chat.blahaj.zone

  • 80 Posts
  • 553 Comments
Joined 3 years ago
Cake day: January 2nd, 2023



  • Your gender is how society perceives you. It is a spectrum between masculine and feminine

    Not quite. It’s got nothing to do with how people perceive you. A closeted trans woman is still a woman, even though she’s perceived as a man.

    It’s also not inherently defined by femininity or masculinity. You can be a masculine woman or a feminine man, or you can simply not give a shit about masculinity or femininity (this is me). Society defines what we consider masculine and feminine, and creates powerful associations between these behaviours and gender, but the association is “after the fact”.






  • I’ve been using my real name on the internet for 30 years or so now. I’ve hosted public radio shows, I run/admin several online LGBTQ communities and I’ve had newspaper articles done about my transition and activism.

    It’s absolutely possible that someone with the desire could utilise that against me. But it’s unlikely and it hasn’t happened yet. And in the meantime, having to hold myself back and be constantly on edge about what I say and where I say it would impact my use of the internet in a way I don’t like, every single time I use it.

    So for me, it’s worth the risk.






  • This is just regular moderation, though.

    It’s using the existing tools, but making a small portion of them (approving applications) available to a much larger pool of people.

    It doesn’t resolve the question I raised about what happens when two instances disagree about whether an account is a bot.

    If the instance that hosts it doesn’t think it’s a bot, then it stays, but is blocked by the instance that does think it’s a bot.

    And if the instance that thinks it’s a bot also hosts it, it gets shut down.

    That is regular fediverse moderation.
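
    To make the disagreement case concrete, here’s a minimal sketch of the three possible outcomes. The names are mine for illustration, not anything from Lemmy’s codebase:

    ```python
    from enum import Enum

    class Outcome(Enum):
        STAYS = "visible everywhere"
        BLOCKED_REMOTELY = "stays up, but blocked by the objecting instance"
        SHUT_DOWN = "removed at the source"

    # The home instance has the final say over the account itself; a remote
    # instance can only stop the content from reaching its own users.
    def resolve(home_thinks_bot: bool, remote_thinks_bot: bool) -> Outcome:
        if home_thinks_bot:
            return Outcome.SHUT_DOWN
        if remote_thinks_bot:
            return Outcome.BLOCKED_REMOTELY
        return Outcome.STAYS
    ```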


  • Yeah, but that’s after the fact, and after their content has federated to other instances.

    It doesn’t solve the bot problem, it just plays whack-a-mole with them, whilst creating an ever larger amount of moderation work, due to their content federating to multiple instances.

    Solving the bot problem means stopping the content from federating, which either means stopping the bot accounts from registering, or stopping them from federating until they’re known to be legit.


  • I mean, for approving users, you just let your regular, established users approve instance applications. All they need to do is stop the egregious bots from getting through. And if there are enough of them, the applications will be processed really quickly. If there is any doubt about an application, let them through, because they can be caught afterwards. And historical applications are already visible, and easily checked if someone has a complaint.

    And if you don’t like the idea of trusted users being able to moderate new accounts, you can tinker with it. Let accounts start posting before their application has been approved, but stop their content from federating outwards until an instance staff member approves them. It would let people post right away without requiring approval, and still get some interaction, but it would mitigate the damage that bots can do by containing them to a single instance (see the sketch below).

    My point is, there are options that could be implemented. The status quo of open sign-ups, with a growing number of bots, doesn’t have to be the unquestioned approach going forward.
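
    Here’s a rough sketch of that containment idea, assuming a hypothetical hook in the outbound federation path (none of these names exist in Lemmy today):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Account:
        name: str
        approved: bool = False

    @dataclass
    class Instance:
        held: list = field(default_factory=list)       # posts awaiting approval
        federated: list = field(default_factory=list)  # posts sent outwards

        def post(self, account: Account, activity: str) -> None:
            # Posting always works locally; only outbound federation is gated.
            if account.approved:
                self.federated.append(activity)
            else:
                self.held.append((account, activity))

        def approve(self, account: Account) -> None:
            # Staff approval releases anything the account posted while pending.
            account.approved = True
            remaining = []
            for acct, activity in self.held:
                if acct is account:
                    self.federated.append(activity)
                else:
                    remaining.append((acct, activity))
            self.held = remaining

    # A new account can post and get local interaction immediately,
    # but nothing leaves the instance until someone approves it.
    home = Instance()
    newbie = Account("newbie")
    home.post(newbie, "hello fediverse")
    assert home.federated == []
    home.approve(newbie)
    assert home.federated == ["hello fediverse"]
    ```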



  • Make sign-ups require approval and create a “trusted user” permission level that lets the regular trusted users on the instance see and process pending sign-up requests and suspend/delete brand-new spam accounts (say, under 24 hours old) that slip through the cracks. You can have dozens of people across all timezones capable of approving requests as they are made, and capable of shutting down the bots that slip through (a rough sketch of those permission checks follows below).

    Boom, bot problem solved
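
    As a sketch, the permission side of that could be as simple as the checks below. The role names and the 24-hour window are illustrative, not an existing Lemmy feature:

    ```python
    from datetime import datetime, timedelta, timezone

    STAFF = {"admin", "moderator"}
    SIGNUP_PROCESSORS = STAFF | {"trusted_user"}

    def can_process_signups(role: str) -> bool:
        # Trusted users see the pending application queue alongside staff,
        # so approvals can happen quickly across all timezones.
        return role in SIGNUP_PROCESSORS

    def can_remove_account(role: str, created: datetime) -> bool:
        # Staff can act on any account; trusted users are limited to
        # brand-new accounts (spam that slipped through the cracks).
        if role in STAFF:
            return True
        age = datetime.now(timezone.utc) - created
        return role == "trusted_user" and age < timedelta(hours=24)
    ```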