• 11 Posts
  • 33 Comments
Joined 4 months ago
Cake day: March 2nd, 2024







  • The idea of a federated, decentralized Wikipedia alternative is intriguing, but implementing it successfully faces major hurdles. Federating moderation policies and privileges across different instances seems incredibly complex. I believe it would also require some kind of web of trust system. Quality control is also a huge challenge without centralized oversight and clear guidelines enforced universally.

    While it could potentially replace commercial wiki farms like Wikia/Fandom for niche topics, realistically replacing Wikipedia’s dominance as a general reference work seems highly ambitious and unlikely, at least in the short term. But as they say - shoot for the stars, and you may just land on the moon.

    That said, ambitious goals can spur innovation. Even if Ibis falls short of usurping Wikipedia, it could blaze new trails and pioneer federated wiki concepts that feed back into Wikipedia and other platforms. The federated model allowing more perspectives and focused communities is worth exploring, despite the technical obstacles around distributed moderation and content integration. The proof-of-concept shows the core pieces are in place as a starting point.


  • Would it be feasible to expose the metadata for posts in such a way that search queries could be customized to sort a front page any way a user wants to see it?

    There is already such an API endpoint which is available for mods and admins.

    @nutomic@lemmy.ml in https://discuss.online/comment/6718715

    Yeah, it would definitely be feasible to expose post metadata for customized search queries. Currently, the data is restricted to admins and mods, but having an API endpoint for users could enhance the sorting options without significant strain on the server. It could lead to more tailored and engaging user experiences on the platform.

    https://discuss.online/comment/6718201
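
    To make this concrete, here’s a rough sketch of what client-side custom sorting could look like once post metadata is exposed. The endpoint name and field names are made up for illustration; this is not Lemmy’s actual API.

    ```python
    # Minimal sketch: fetch post metadata and let the user pick their own sort key.
    # The endpoint path and field names below are hypothetical, not Lemmy's real API.
    import json
    from urllib.request import urlopen

    def fetch_post_metadata(instance: str) -> list[dict]:
        # Hypothetical endpoint returning [{"id": ..., "score": ..., "comments": ..., "published": ...}, ...]
        with urlopen(f"https://{instance}/api/v3/post/metadata") as resp:
            return json.load(resp)["posts"]

    def custom_front_page(posts: list[dict], sort_key: str, descending: bool = True) -> list[dict]:
        # Sort by whatever metadata field the user asks for, e.g. "comments" or "published".
        return sorted(posts, key=lambda p: p.get(sort_key, 0), reverse=descending)

    # Example: a front page ordered by comment count instead of score.
    # posts = fetch_post_metadata("discuss.online")
    # for post in custom_front_page(posts, "comments")[:20]:
    #     print(post["id"], post["comments"])
    ```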

    Perhaps some form of sentiment analysis would also be interesting to see: serious discussion, jokes and memes, informative posters, political conversation (left or right), etc.

    This reminds me of Slashdot moderation and Media Bias Fact Check Integration

    Slashdot moderation

    This was something I loved about Slashdot moderation. When voting, people had to specify the reason for the vote: +1 Funny, +1 Insightful, +1 Informative, -1 Troll, -1 Misleading, etc.

    That way you can, for example, set in your user preferences to ignore positive votes for comedy, and put extra value on informative votes.

    Then, to keep people from spamming up/down votes and to encourage them to think about their choices, they only gave out a limited number of moderation points to readers. So you’d have to choose which comments to spend your 5 points on.

    Then finally, they had ‘meta moderation’, where you’d be shown a comment and asked “would a vote of insightful be appropriate for this comment?” to catch people who down-voted out of disagreement or personal vendetta. Any users who regularly mis-voted would stop receiving the ability to vote.

    I don’t think this is directly applicable to a federated system, but I do think it’s one of the best-thought-out voting systems ever created for a discussion board.

    Edit: a couple of other points I liked about it:

    Comments were capped at (iirc) +5 and -1. Further votes wouldn’t change the comment’s score.

    User karma wasn’t shown as a number. The user page would just say Karma: Good, or Excellent, or Poor, or some other vague term.

    https://beehaw.org/comment/208569
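
    As a rough illustration of the Slashdot-style system described above (categorical votes, a +5/-1 score cap, a limited budget of moderation points, and per-reader reweighting of vote categories), here is a toy sketch; all names and numbers are invented and this is not Slashdot’s or Lemmy’s actual code.

    ```python
    # Toy model of Slashdot-style moderation: categorical votes, capped scores,
    # limited moderation points, and per-reader reweighting of vote categories.
    # Everything here is illustrative, not an existing implementation.
    from collections import Counter

    SCORE_CAP_HIGH, SCORE_CAP_LOW = 5, -1
    MOD_POINTS_PER_READER = 5  # each moderator gets a small budget of votes

    class Comment:
        def __init__(self, text: str, base_score: int = 1):
            self.text = text
            self.base_score = base_score
            self.votes = Counter()  # e.g. {"insightful": 3, "funny": 2, "troll": 1}

        def score(self) -> int:
            up = self.votes["insightful"] + self.votes["informative"] + self.votes["funny"]
            down = self.votes["troll"] + self.votes["misleading"]
            return max(SCORE_CAP_LOW, min(SCORE_CAP_HIGH, self.base_score + up - down))

        def personalized_score(self, weights: dict[str, float]) -> float:
            # A reader who ignores "funny" votes would set weights["funny"] = 0.
            total = self.base_score + sum(
                weights.get(cat, 1.0) * n * (1 if cat in ("insightful", "informative", "funny") else -1)
                for cat, n in self.votes.items()
            )
            return max(SCORE_CAP_LOW, min(SCORE_CAP_HIGH, total))

    def moderate(comment: Comment, category: str, points_left: int) -> int:
        # Spend one moderation point on a categorical vote; refuse if the budget is used up.
        if points_left <= 0:
            return 0
        comment.votes[category] += 1
        return points_left - 1

    # Example: a reader who values informative votes and ignores jokes.
    c = Comment("RTFA, the benchmark methodology is flawed.")
    points = MOD_POINTS_PER_READER
    points = moderate(c, "insightful", points)
    points = moderate(c, "funny", points)
    print(c.score())                                                  # capped overall score
    print(c.personalized_score({"funny": 0.0, "informative": 2.0}))   # reader-specific view
    ```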









  • Regrettably, complaining tends to be a common pastime for many people. I understand your frustration with users who seem entitled or unappreciative of the considerable effort you’ve dedicated to developing Lemmy. Still, shifting toward a mindset that treats complaints as opportunities for improvement can be transformative. Establishing a transparent set of rules or guidelines for how you prioritize issues and feature requests would help manage expectations and foster a more collaborative relationship with your community. Not every complaint is actionable, but actively listening to feedback and explaining your prioritization criteria goes a long way toward building trust and goodwill. Open communication and a willingness to consider diverse perspectives lead to a stronger, more user-centric product in the long run.

    The philosophy of Complaint-Driven Development provides a simple, transparent way to prioritize issues based on user feedback:

    1. Get the platform in front of as many users as possible.
    2. Listen openly to all user complaints and feedback. Expect a lot of it.
    3. Identify the top 3 most frequently reported issues/pain points.
    4. Prioritize fixing those top 3 issues.
    5. Repeat the process, continuously improving based on prominent user complaints.

    Following these straightforward rules allows you to address the most pressing concerns voiced by your broad user community, rather than prioritizing the vocal demands of a few individuals. It keeps development efforts focused on solving real, widespread issues in a transparent, user-driven manner.
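
    In code terms, the triage step of this loop is just frequency counting. A minimal sketch (the complaint labels are invented examples) might look like:

    ```python
    # Minimal sketch of Complaint-Driven Development triage: tally complaints
    # collected from a feedback thread and surface the top 3 for the next cycle.
    # The complaint labels below are invented examples.
    from collections import Counter

    complaints = [
        "no way to group communities",   # hypothetical labels assigned while reading the thread
        "search is unreliable",
        "no way to group communities",
        "notifications are confusing",
        "search is unreliable",
        "no way to group communities",
    ]

    top_three = Counter(complaints).most_common(3)
    for issue, count in top_three:
        print(f"{count}x  {issue}")
    ```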

    Here’s a suggestion that could help you implement this approach: Consider periodically making a post like What are your complaints about Lemmy? Developers may want your feedback. This post encourages users to leave one top-level comment per complaint, allowing others to reply with ideas or existing GitHub issues that could address those complaints. This will help you identify common complaints and potential solutions from your community.

    Once you have a collection of complaints and suggestions, review them carefully and choose the top 3 most frequently reported issues to focus on for the next development cycle. Clearly communicate to the community which issues you and the team will be prioritizing based on this user feedback, and explain why you’ve chosen those particular issues. This transparency will help users understand your thought process and feel heard.

    As you work on addressing those prioritized issues, keep the community updated on your progress. When the issues are resolved, make a new release and announce it to the community, acknowledging their feedback that helped shape the improvements.

    Then, repeat the process: Make a new post gathering complaints and suggestions, review them, prioritize the top 3 issues, communicate your priorities, work on addressing them, release the improvements, and start the cycle again.

    By continuously involving the community in this feedback loop, you foster a sense of ownership and leverage the collective wisdom of your user base in a transparent, user-driven manner.






  • It certainly doesn’t help that Lemmy had and still has absolutely no sensible way to actually surface niche communities to its subscribers. Unlike Reddit, it doesn’t weigh posts by their relative popularity within the community but only by total popularity/popularity within the instance. There’s also zero form of community grouping (like Reddit’s multireddits) - all of which effectively eliminates all niche communities from any sensible main view mode and floods those with shitty memes and even shittier politics only. This pretty much suffocated the initially enthusiastic niche tech communities I had subscribed to. They stood no chance to thrive and their untimely death was inevitable.

    There are some very tepid attempts to remedy this in upcoming Lemmy builds, but I fear it’s too little too late.

    I fear that Lemmy was simply nowhere near mature enough when it mattered and it has been slowly bleeding users and content ever since. I sincerely hope I’m wrong, though.

    @PurpleTentacle@sh.itjust.works https://sh.itjust.works/comment/4451602
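
    The difference being pointed at here can be shown with a toy ranking function: normalizing a post’s score against its own community’s typical activity lets a 20-vote post in a small niche community compete with a 600-vote meme from a huge one. This is only an illustration, not Lemmy’s actual ranking algorithm.

    ```python
    # Toy illustration of ranking by popularity relative to a post's home community
    # rather than by raw, instance-wide score. Not Lemmy's actual algorithm.
    from statistics import mean

    posts = [  # (community, score) -- invented numbers
        ("memes", 520), ("memes", 480), ("memes", 610),
        ("embedded_rust", 14), ("embedded_rust", 9), ("embedded_rust", 22),
    ]

    def relative_rank(posts: list[tuple[str, int]]) -> list[tuple[str, int, float]]:
        # Score each post against the average score of its own community.
        avg = {c: mean(s for cc, s in posts if cc == c) for c, _ in posts}
        ranked = [(c, s, s / avg[c]) for c, s in posts]
        return sorted(ranked, key=lambda t: t[2], reverse=True)

    for community, score, rel in relative_rank(posts):
        print(f"{community:15s} raw={score:4d} relative={rel:.2f}")
    ```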






  • I did read the links, and I still strongly feel that no automated mechanical system of weights and measures can outperform humans when it comes to understanding context.

    But this is not a way to replace humans; it’s just a method to grant users moderation privileges based on their tenure on a platform. Currently, most federated platforms only offer moderator and admin levels of moderation, making setting up an instance tedious due to the time spent managing the report inbox. Automating the assignment of moderation levels would streamline this process, allowing admins to simply adjust the trust level of select users to customize their instance as desired.
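
    A minimal sketch of what tenure-based trust levels could look like is below; the thresholds and level names are hypothetical, loosely inspired by Discourse-style trust levels, and not an existing Lemmy feature.

    ```python
    # Hypothetical sketch: derive moderation-related trust levels from activity data
    # (account age, posts read, days visited) instead of manual assignment.
    # Thresholds and level names are invented; this is not an existing Lemmy API.
    from dataclasses import dataclass

    @dataclass
    class UserActivity:
        days_since_signup: int
        posts_read: int
        days_visited: int

    def trust_level(a: UserActivity) -> int:
        # 0 = new user, 1 = basic, 2 = member (can flag), 3 = regular (can handle reports)
        if a.days_since_signup >= 100 and a.posts_read >= 2000 and a.days_visited >= 50:
            return 3
        if a.days_since_signup >= 15 and a.posts_read >= 100 and a.days_visited >= 15:
            return 2
        if a.posts_read >= 30 and a.days_visited >= 5:
            return 1
        return 0

    # Admins could still raise or lower a user's level manually to customize their instance.
    print(trust_level(UserActivity(days_since_signup=120, posts_read=3500, days_visited=80)))  # -> 3
    ```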


  • Trust levels themselves are just Karma plus login/read tracking, aka extra steps.

    Trust Levels are acquired by reading posts and spending time on the platform, instead of receiving votes for posting. Therefore, it wouldn’t lead to low-quality content unless you choose to implement it that way.

    The Karma system is used more as a bragging right than to give any sort of moderation privilege to users.

    But in essence it is similar: you get useless points with one and moderation privileges with the other.

    If you are actually advocating that the Fediverse use Discourse’s service you have to be out of your mind.

    You are making things up just so you can call me crazy. I’m not advocating anything of the sort.