I’d like to invite you all to share your thoughts and ideas about Lemmy. This feedback thread is a great place to do that: the tree-like comment structure makes discussion easier than on GitHub, and it’s also where the community already is.
Here’s how you can participate:
- Post one top-level comment per complaint or suggestion about Lemmy.
- Reply to comments with your own ideas or links to Github issues related to the complaints.
- Be specific and constructive. Avoid vague wishes and focus on specific issues that can be fixed.
- This thread is a chance for us to not only identify the biggest pain points but also work together to find the best solutions.
By creating this periodic post, we can:
- Track progress on issues raised in previous threads.
- See how many issues have been resolved over time.
- Gauge whether the developers are responsive to user feedback.
Your input may be valuable in helping prioritize development efforts and ensuring that Lemmy continues to meet the needs of its community. Let’s work together to make Lemmy even better!
Downvotes, as currently implemented, are an inherently unequal proposition. A single downvote can mean anything from a close friend who respectfully disagrees, to a rando with a day-one account who doesn’t know what the community is about, to a brigading event organized in some larger community elsewhere (on Reddit, Matrix, Discord, or the like). For example, I can block a user or even an entire instance, but in retaliation they can still see my profile and downvote everything I have ever posted, or have a bot do so within seconds of new material coming out, which hurts its discoverability.
Potential solutions: make downvotes no longer anonymous, and/or make blocking cut both ways, so that when you block a user or an instance they can no longer downvote your content, just like a user-level defederation. As it stands, user-level blocks are extremely weak; a blocked user can still deliver notifications to you simply by tagging your username.
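The two-way-block proposal above can be sketched as a single check at vote time. This is a hypothetical illustration only; the function and field names are invented and do not reflect Lemmy’s actual API.

```python
# Hypothetical sketch of the proposed rule: a downvote is rejected when the
# content author has blocked the voter, or the voter's whole home instance.
# All names here are made up for illustration.

def downvote_allowed(author_blocks: set[str],
                     author_instance_blocks: set[str],
                     voter: str) -> bool:
    """Return True if `voter` (e.g. 'alice@example.social') may downvote."""
    instance = voter.split("@", 1)[1]
    if voter in author_blocks:
        return False   # user-level block acts like a mini-defederation
    if instance in author_instance_blocks:
        return False   # instance-level block covers all of its users
    return True
```

The point is that the block list, which today only hides content from the blocker, would also gate incoming votes.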
A more robust approach could involve combining multiple user engagement metrics like votes, reading time and number of comments, along with a system that sorts posts depending on how they compare to their community averages. This system would be less susceptible to manipulation by new accounts or brigading, as it would require genuine engagement across multiple factors to influence a post’s ranking.
Incorporating User Engagement Metrics in Lemmy’s Sorting Algorithms
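A minimal sketch of the multi-metric idea, assuming three invented signals (votes, reading time, comment count) each normalized against its community’s average so posts are scored relative to their peers. The weights and field names are placeholders, not anything Lemmy actually implements:

```python
# Illustrative-only ranking sketch: combine several engagement signals,
# each divided by the community average for that signal, so a post is
# scored relative to its peers rather than in absolute terms.

def engagement_score(post: dict, community_avg: dict,
                     weights=(0.5, 0.3, 0.2)) -> float:
    w_votes, w_read, w_comments = weights

    def rel(key: str) -> float:
        avg = community_avg[key]
        return post[key] / avg if avg else 0.0

    return (w_votes * rel("votes")
            + w_read * rel("read_seconds")
            + w_comments * rel("comments"))
```

A vote-brigade only moves one of the three terms, so a post with genuine reading time and discussion still outranks one that was merely mass-downvoted.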
I’d like to suggest filtering, by term and by source URL. It would help people customize their individual feeds, making the news easier and perhaps more comfortable to navigate.
Example A, term filtering: This should be fairly obvious. Say I’m a Linux user who couldn’t care less about KDE. But people keep gushing over it in the Linux communities I subscribe to, and the damn developers keep pushing new releases that also get posted. Argh! Filter out posts (maybe even comments) that mention KDE, and Bob’s your uncle: I can still enjoy all those delicious GNOME posts. Definitely not a real-world-inspired scenario.
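Term filtering like this amounts to a muted-words list checked against each post. A toy sketch, with hypothetical field names and case-insensitive whole-word matching as one possible design choice:

```python
import re

# Toy keyword filter: hide posts whose title or body mentions any muted
# term. Whole-word, case-insensitive matching is assumed here, so muting
# "KDE" would not also hide a post about "kdenlive".

def keyword_filter(posts: list[dict], muted_terms: list[str]) -> list[dict]:
    patterns = [re.compile(rf"\b{re.escape(t)}\b", re.IGNORECASE)
                for t in muted_terms]
    return [p for p in posts
            if not any(pat.search(p["title"] + " " + p.get("body", ""))
                       for pat in patterns)]
```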
Example B, URL filtering: Simply(!) filtering out link posts by source URL. Not a fan of Fox News and/or WaPo? Filter out one site or the other by root URL, like `*.foxnews.com` or `*.washingtonpost.com`. Me, I’d gladly filter out any and all YouTube links unseen by default; that’s a constant noise generator I could genuinely live without. But I digress. I hope the examples illustrate my point, because I could clearly never explain a feature request succinctly nor to the point.
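The `*.domain.com` idea above boils down to matching a link’s hostname against a blocklist, including subdomains. A sketch under that assumption (not an actual Lemmy feature):

```python
from urllib.parse import urlparse

# Root-URL filter sketch: a blocked domain like "foxnews.com" matches both
# the bare domain and any subdomain (the "*.foxnews.com" idea).

def url_blocked(url: str, blocked_domains: set[str]) -> bool:
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in blocked_domains)
```

Matching on the parsed hostname rather than the raw string avoids false positives like `notfoxnews.com`.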
Reminds me of Custom Feeds
- Inspired by Firefish’s Antennas feature
- Similar to Reddit’s multireddit functionality
- Follow specific users, communities, and instances
- Include/exclude tags or keywords
- Choose post types (posts, comments, or both)
- Set custom feeds as default
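One way the custom-feed wishlist above could be expressed is as a per-user config object. Every field name here is invented for illustration; nothing like this exists in Lemmy today:

```python
from dataclasses import dataclass, field

# Hypothetical custom-feed definition covering the bullet points above:
# followed sources, include/exclude keywords, post types, and a default flag.

@dataclass
class CustomFeed:
    name: str
    users: list[str] = field(default_factory=list)
    communities: list[str] = field(default_factory=list)
    instances: list[str] = field(default_factory=list)
    include_keywords: list[str] = field(default_factory=list)
    exclude_keywords: list[str] = field(default_factory=list)
    post_types: tuple[str, ...] = ("posts",)  # "posts", "comments", or both
    is_default: bool = False

# Example: a Linux feed that mutes KDE, set as the default landing feed.
linux_feed = CustomFeed(
    name="Linux without KDE",
    communities=["linux@lemmy.ml"],
    exclude_keywords=["KDE"],
    is_default=True,
)
```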
I think by default bots should not be allowed anywhere. But if that’s a bridge too far, then their use should have to be regularly justified and explained to communities. Maybe it should even be a rule that their full code has to be released on a regular basis, so users can review it themselves and be sure nothing fishy is going on. I’m specifically thinking of the Media Bias Fact Checker Bot (I know, I harp on it too much). It’s basically a spammer bot at this point, cluttering up our feeds even when it can’t figure out the source, and providing bad and inaccurate information when it can. And mods refuse to answer for it.
Even large social media platforms have trouble dealing with bots, and with AI advancements, these bots will become more intelligent. It feels like a hopeless task to address. While you could implement rules, you would likely only eliminate the obvious bots that are meant to be helpful. There may be more sophisticated bots attempting to manipulate votes, which are more difficult to detect, especially on a federated platform.
For sure, it’s not an easy problem to address. But I’m not willing to give up on it just yet. Bad actors will always find a way to break the rules and go under the radar, but we should be making new rules and working to improve these platforms in good faith, with the assumption that most people want healthy communities that follow the rules.
I’m particularly concerned about the potential for automods to become a problem on Lemmy, especially if it gains popularity like Reddit. I believe a Discourse-style trust level system could be a better approach for Lemmy’s moderation, but instead of rewarding “positive contributions,” which often leads to karma farming, the system should primarily recognize engagement based on time spent on the platform and reading content. Users would gradually earn privileges through their consistent presence and understanding of the community’s culture, rather than their ability to game the system or create popular content. This approach would naturally distribute moderation responsibilities among seasoned users who are genuinely invested in the community. It would help maintain a healthier balance between user freedom and community standards, and reduce the reliance on bot-driven moderation and arbitrary rule enforcement that plagues many Reddit communities.
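A trust ladder keyed to presence and reading, rather than karma, might look like the following sketch. The thresholds and privilege descriptions are arbitrary placeholders, loosely modeled on Discourse’s trust levels, not a real Lemmy design:

```python
# Hedged sketch of a presence-based trust ladder. A user's level is the
# highest tier whose time-active and posts-read thresholds they both meet,
# so popularity (votes, karma) plays no role.

TRUST_LEVELS = [
    # (level, min_days_active, min_posts_read)
    (0, 0, 0),        # new user: basic participation only
    (1, 7, 50),       # reader: report content, vote freely
    (2, 30, 500),     # member: flags carry extra weight
    (3, 120, 2000),   # veteran: community moderation privileges
]

def trust_level(days_active: int, posts_read: int) -> int:
    level = 0
    for lvl, min_days, min_read in TRUST_LEVELS:
        if days_active >= min_days and posts_read >= min_read:
            level = lvl
    return level
```

Because both thresholds must be met, a farmed account that posts heavily but reads nothing stays at a low level.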
There’s currently no way to delete an uploaded image. That’s especially problematic since pasting any image into a reply box auto-uploads it. So if your finger slips and you upload something sensitive, or if you want to take down something you uploaded previously, there’s no way to do it. What should happen is: whenever you upload an image, the image and a delete key get stored in some special part of your Lemmy account. Then, from the Lemmy account management page, you can see all your uploaded images and delete them individually or in bulk.
Update: it seems you can now do this. Profile → Uploads shows you all your uploads. Go Lemmy!
I remember there was a lot of drama around this, I can’t believe it’s still an issue.
I really wish Lemmy had tags like RES
There were several issues on GitHub regarding proposals on how to solve the low visibility of small instances. However, after the Scaled Sort was implemented, all those issues were closed, yet the problem persists. I continue to use Reddit the same as before because I primarily used it for niche communities, which are lacking here. The few times I’ve posted to a niche community here, I’ve either received no answers or been subject to drive-by downvotes, likely from users not even subscribed to the community. As a result, I now only post on Lemmy when the post is directed to a large community, and I use Reddit for the rest.
The decentralized nature of Lemmy, while appealing in theory, creates significant frustration in practice due to widespread instance blocking. Finding an ideal instance becomes a daunting task, as users must navigate a complex web of inter-instance politics and restrictions. This challenge is further compounded for those who prioritize factors like low latency or specific content policies. Lemmy’s architecture heavily favors instance-level configurations, leaving individual users with limited control over their experience. The only reliable solutions seem to be either hosting a personal instance—a technical hurdle for many—or simply hoping that your chosen instance’s admins align with your preferences and don’t block communities you enjoy. This politicking ultimately undermines the platform’s potential.