I once read that there are some states in the U.S. where firefighters won’t put out fires in houses whose owners haven’t paid a monthly subscription.
I’m particularly concerned about the potential for automods to become a problem on Lemmy, especially if it gains popularity the way Reddit did. I believe a Discourse-style trust level system could be a better approach to Lemmy’s moderation, but instead of rewarding “positive contributions,” which often leads to karma farming, the system should primarily recognize engagement based on time spent on the platform and reading content. Users would gradually earn privileges through consistent presence and familiarity with the community’s culture, rather than through their ability to game the system or create popular content. This approach would naturally distribute moderation responsibilities among seasoned users who are genuinely invested in the community. It would help maintain a healthier balance between user freedom and community standards, and reduce the reliance on bot-driven moderation and the arbitrary rule enforcement that plagues many Reddit communities.
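To make the idea concrete, here’s a rough sketch of how such a trust level could be derived purely from time-based engagement. The struct fields and thresholds are made up for illustration; none of this reflects what Lemmy actually tracks:

```rust
/// Hypothetical per-user engagement stats; field names and thresholds
/// are illustrative, not part of Lemmy's actual data model.
struct UserActivity {
    account_age_days: u32,
    days_visited: u32,
    posts_read: u32,
    reading_time_minutes: u32,
}

#[derive(Debug)]
enum TrustLevel {
    New,     // can comment and vote
    Member,  // can post links, report content
    Regular, // can help with flagging and light moderation
}

fn trust_level(u: &UserActivity) -> TrustLevel {
    // Privileges depend only on sustained presence and reading,
    // not on votes received, so karma farming doesn't help.
    if u.account_age_days >= 60
        && u.days_visited >= 30
        && u.posts_read >= 500
        && u.reading_time_minutes >= 600
    {
        TrustLevel::Regular
    } else if u.days_visited >= 7 && u.posts_read >= 50 {
        TrustLevel::Member
    } else {
        TrustLevel::New
    }
}

fn main() {
    let lurker = UserActivity {
        account_age_days: 90,
        days_visited: 45,
        posts_read: 800,
        reading_time_minutes: 900,
    };
    println!("{:?}", trust_level(&lurker)); // prints "Regular"
}
```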
A more robust approach could combine multiple engagement metrics, such as votes, reading time, and number of comments, with a ranking that compares each post to its community’s averages. Such a system would be less susceptible to manipulation by new accounts or brigading, since it would take genuine engagement across several factors to influence a post’s ranking.
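Roughly what I mean, as a sketch in Rust; the field names, the equal weights, and the simple ratio-to-average formula are illustrative assumptions, not Lemmy’s actual Scaled Sort:

```rust
/// Illustrative engagement numbers for a post and its community's averages;
/// this does not mirror Lemmy's real schema or its Scaled Sort formula.
struct PostStats {
    upvotes: f64,
    comments: f64,
    reading_time_mins: f64,
}

struct CommunityAverages {
    upvotes: f64,
    comments: f64,
    reading_time_mins: f64,
}

/// Rank a post by how it compares to its own community's averages,
/// so posts in small niche communities aren't drowned out by raw counts.
fn relative_score(post: &PostStats, avg: &CommunityAverages) -> f64 {
    // Each ratio > 1.0 means "above average for this community".
    // Averaging several metrics makes brigading any single one
    // (e.g. drive-by downvotes) less effective.
    let vote_ratio = post.upvotes / avg.upvotes.max(1.0);
    let comment_ratio = post.comments / avg.comments.max(1.0);
    let reading_ratio = post.reading_time_mins / avg.reading_time_mins.max(1.0);
    (vote_ratio + comment_ratio + reading_ratio) / 3.0
}

fn main() {
    let niche_post = PostStats { upvotes: 12.0, comments: 8.0, reading_time_mins: 40.0 };
    let niche_avg = CommunityAverages { upvotes: 5.0, comments: 3.0, reading_time_mins: 15.0 };
    println!("relative score: {:.2}", relative_score(&niche_post, &niche_avg));
}
```

The point of using ratios to the community average rather than raw totals is that a niche community’s best post can outrank a mediocre post from a huge community; the specific numbers and weights above are arbitrary.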
Incorporating User Engagement Metrics in Lemmy’s Sorting Algorithms
Reminds me of Custom Feeds
The decentralized nature of Lemmy, while appealing in theory, creates significant frustration in practice due to widespread instance blocking. Finding an ideal instance becomes a daunting task, as users must navigate a complex web of inter-instance politics and restrictions. This challenge is further compounded for those who prioritize factors like low latency or specific content policies. Lemmy’s architecture heavily favors instance-level configurations, leaving individual users with limited control over their experience. The only reliable solutions seem to be either hosting a personal instance—a technical hurdle for many—or simply hoping that your chosen instance’s admins align with your preferences and don’t block communities you enjoy. This politicking ultimately undermines the platform’s potential.
There were several GitHub issues with proposals for solving the low visibility of small instances. However, after Scaled Sort was implemented, all of those issues were closed, yet the problem persists. I continue to use Reddit much as before, because I mainly used it for niche communities, which are lacking here. The few times I’ve posted to a niche community here, I’ve either received no answers or been hit with drive-by downvotes, likely from users who aren’t even subscribed to the community. As a result, I now only post on Lemmy when the post is aimed at a large community, and I use Reddit for the rest.
Even large social media platforms have trouble dealing with bots, and with advances in AI, these bots will only get smarter. It feels like an almost hopeless task. You could implement rules, but you would likely only catch the obvious bots, the ones that are meant to be helpful. More sophisticated bots attempting to manipulate votes are much harder to detect, especially on a federated platform.
I remember there was a lot of drama around this, I can’t believe it’s still an issue.
Nightmare on Lemmy Street (A Fediverse GDPR Horror Story)