Hi all!
As many of you have noticed, many Lemmy.World communities introduced a bot: @MediaBiasFactChecker@lemmy.world. This bot was introduced because modding can be pretty tough work at times and we are all just volunteers with regular lives. It has been helpful and we would like to keep it around in one form or another.
The !news@lemmy.world mods want to give the community a chance to voice their thoughts on some potential changes to the MBFC bot. We have heard concerns that tend to fall into a few buckets. The most common concern we’ve heard is that the bot’s comment is too long. To address this, we’ve implemented a spoiler tag so that users need to click to see more information. We’ve also cut wording about donations that people argued made the bot feel like an ad.
Another common concern people have is with MBFC’s definition of “left” and “right,” which tend to be influenced by the American Overton window. Similarly, some have expressed that they feel MBFC’s process of rating reliability and credibility is opaque and/or subjective. To address this, we have discussed creating our own open source system of scoring news sources. We would essentially start with third-party ratings, including MBFC, and create an aggregate rating. We could also open a path for users to vote, so that any rating would reflect our instance’s opinions of a source. We would love to hear your thoughts on this, as well as suggestions for sources that rate news outlets’ bias, reliability, and/or credibility. Feel free to use this thread to share other constructive criticism about the bot too.
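For concreteness, here is a rough sketch of what that aggregation could look like. Everything here is hypothetical: the rater names, the idea of normalizing each rater's scale to 0–1, and the simple averaging are all assumptions for illustration, not how any actual bot works.

```python
# Hypothetical sketch: combine reliability scores from several third-party
# raters (plus community votes) into one aggregate score.
# Rater names and scores below are made up for illustration.

def aggregate_reliability(ratings):
    """ratings: dict mapping rater name -> score normalized to 0.0-1.0.

    Returns the plain average, or None if the source is unrated,
    so the bot can say "unknown" instead of guessing.
    """
    if not ratings:
        return None
    return sum(ratings.values()) / len(ratings)

scores = {"mbfc": 0.8, "ad_fontes": 0.6, "community_votes": 0.7}
print(round(aggregate_reliability(scores), 2))  # 0.7
```

A weighted average (e.g. weighting community votes differently from professional raters) would be a small extension of the same idea; the harder open question is how to normalize each rater's scale fairly in the first place.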
In literally every thread I’ve seen it post in, it gets downvoted to hell.
This bot was introduced because modding can be pretty tough work at times and we are all just volunteers with regular lives.
Then maybe it can be an internal thing only. Let people do their own critical thinking. I believe that if you’re on Lemmy, you can make an informed decision.
The news source of this post could not be identified. Please check the source yourself. Media Bias Fact Check | bot support
Okay, so maybe we don’t need a comment if it’s a meta post or a mod announcement. Thanks for your inadvertent feedback, bot!
It’s this uninvited commenting on the bot’s part that has me downvoting it. It’s presenting itself as an authority here. If a user in the comments called the bot to fact check something and the bot did a bad job, I’d just block the bot. I’d even be able to look over that user’s history to get an idea of the bot’s purpose. But this bot comes in and says “here’s the truth”, then spits out something I’d expect to see on Twitter’s current iteration.
If the problem you’re trying to solve is the reliability of the media being posted here, take the left/right bias call out and find a decent database on news source quality. Start the bot’s post out with resources for people to develop their own skill at spotting bad news content.
If the problem you’re trying to solve is the visibility of political bias in content posted here, so the downvote button isn’t acting as a proxy for that, then adding a function for the community to rate left/right lean like Rotten Tomatoes sounds interesting, so long as you take the reliability rating out of the bot. You can’t address both media reliability and political bias in one automated post. The NYT and NPR being too pearl-clutchy for my taste, and some outlet that exists only on Facebook having the same assumed credibility as the Associated Press, are wildly different issues.
It also does this with a bunch of weird little local newspapers and the like which I’ve never heard of, which is the one time I actually want it to be providing me with some kind of frame of reference for the source. MSNBC and the NYT, I feel like I already know what I think about them.
Yeah, it’s tricky because who reviews those small guys? Granted, most of them are probably owned by a giant like Gannett, but that doesn’t mean we can just apply a rating from one small Gannett-owned paper to another. We’d like there to be some way for users to share their feedback/ratings on those small guys. But then it’s also true that some people will create a news site and try to share links on here to promote their new website, and that’s typically just spam bots.
One problem I’ve noticed is that the bot doesn’t differentiate between news articles and opinion pieces. One of the most egregious examples is the NYT. Opinion pieces aren’t held to the same journalistic standards as news articles and shouldn’t be judged for bias and accuracy in the same way as news content.
I believe most major news organizations include the word “Opinion” in titles and URLs, so perhaps that could be something keyed off of to have the bot label these appropriately. I don’t expect you to judge the bias and accuracy of each opinion writer, but simply labeling them as “Opinion pieces are not required to meet accepted journalistic standards and bias is expected.” would go a long way.
This contributes significantly to the noise issue most people complain about.
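The "key off the word Opinion" suggestion above could be sketched roughly like this. This is only a heuristic, and the URL patterns here are assumptions about common outlet conventions, not a tested rule set:

```python
# Hypothetical sketch: flag likely opinion pieces by keying off "opinion"
# in the URL path or at the start of the title. Outlets vary, so this is
# a best-effort heuristic, not a guarantee.

def looks_like_opinion(url, title):
    """Return True if the article is probably an opinion piece."""
    return "/opinion" in url.lower() or title.lower().startswith("opinion")

print(looks_like_opinion(
    "https://www.nytimes.com/2024/01/01/opinion/example.html",
    "An Example Piece"))  # True
```

A bot could use a check like this only to add the suggested disclaimer, which keeps false positives low-stakes: mislabeling a news article as opinion just adds a caveat rather than suppressing anything.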
Thanks for this. As a mod of /c/news, I hadn’t really thought about that. We don’t allow opinion pieces, but this is very relevant if we roll out a new bot for all the communities that currently use the MBFC bot.
Hi. I have a suggestion:
Try to make it more clear that this is not a flawless rating (as that is impossible).
Ways to implement:
- Make sure the bot says something along the lines of “MBFC rates X news as Y” and not “X news is Y”.
- Make a caveat (collapsible) at the bottom that says something along the lines of “MBFC is not flawless. It has an American-centric bias and is not particularly clear on methodology, to the point where Wikipedia deems it unreliable; however, we think it is better to have this bot in place as a rough estimate, to discourage posting from bad sources.”
- If possible, add other sources, Like: “MBFC rates the Daily Beast as mostly reliable, Ad Fontes Media rates it as unreliable, and Wikipedia says it is of mixed reliability”
- Remove the left right ratings. We already have a reliability and quality rating, which is much more useful. The left-right rating is frankly poorly done and all over the place, and honestly doesn’t serve much purpose.
No problem. Specifically came to my attention about a week ago on this post where the bot reported on an opinion piece as if it was straight news.
BTW, I actually do appreciate the bot and think it’s doing about as well as it can given the technical limitations of the platform.
Interesting that people say that opinion pieces should not be held to the same standard. I personally see such pieces contribute to fake news going around. Shouldn’t a platform with reach be held accountable for wrong information, even when it hides behind an opinion piece?
Can you explain how a piece with a title like “Helldivers is awesome and fun” can be judged at all for factual accuracy?
The NYT ran an opinion recently where the author pretty clearly was using the NYT along with other outlets as part of a voter demobilization tactic in which the author lied about not voting. The NYT was skewered on twitter, and had to alter the opinion after the fact. It seems like some basic fact checking would have been useful in that situation. Or really, just any amount of critical thought on the part of the NYT in general.
This. Otherwise op-eds get a free pass to launder opinions the paper wants to publish, but can’t.
It’s not a question of “should” - an opinion piece is rhetoric, not reporting. You can fact check some of it sometimes but functionally can’t hold it to the same standards as a regular news article. I agree that this can sometimes lead to “alternative facts” and disingenuous arguments, but the only other option is to forbid the publication of them which is obviously an infringement of first amendment rights. It’s messy, and it can lead to people being misinformed, but it’s what we’re stuck with.
It has been helpful and we would like to keep it around in one form or another.
Bull fucking shit. The majority of feedback has been negative. I can’t recall a single person arguing in its favor, but I can think of many, myself included, arguing against it. I hope you can find my report of one particularly egregious example, because Lemmy doesn’t let me see a history of things I reported. I recall that MBFC rated a particular source poorly because they dared to use the word “genocide” to describe what’s going on in Gaza. Trusting one person, who clearly starts from an American point of view, and has a clearly biased view of world events, to be the arbiter of what is liberal or conservative, or factual or fictional, is actively harmful.
No community, neither reddit nor Lemmy nor any other, has suffered for lack of such a bot. I strongly recommend removing it. Non-credible sources, misinformation, and propaganda are already prohibited under rule 8. If a particular source is so objectionable, it should be blacklisted entirely. And what is and is not acceptable should be determined in concert with the community, not unilaterally.
Just as a point of clarification, there is certainly not a community consensus among the feedback.
While you are absolutely correct in stating that there are vocal members of the community opposed to it in any form, there is also a significant portion of the community that would prefer to keep it or modify how it works. The mod team will be taking all of these perspectives into account. We hope that you will be respectful of community members with whom you disagree.
I haven’t seen any strong arguments for keeping it up.
I will start by saying that I feel like we are trying to address the criticism in your first paragraph with these changes. That being said, thanks for your feedback. I particularly like the comment you shared under the “edit,” because I hadn’t seen that sentiment shared before (not saying nobody else had that issue, just appreciating you for contributing that and challenging me to think more about how we execute things).
I also would like it to not add to the comment count. I am now getting inured to comment counts of “1”.
I generally like the bot and its intentions, but its ratings clash with my own perception too often.
Yes! The mods starting out the discussion with their preferred outcome is so incredibly telling. This is a tool to reinforce the mods’ bias, deliberately or not.
I’m not sure what to do here. On my mobile device the compacted media bias fact check post still takes up 50% of my phone screen.
How about a post tag, if we get a tagging system in Lemmy, instead of a whole long comment?
Maybe the bot could just post a one line summary with a link to more information?
Thanks for the feedback. Can you elaborate a bit about the 50% of your screen thing? Is it the text itself, or is the issue that the app provides links at the bottom of the comment? I’m thinking of my experience on Voyager, where the links are summarized at the bottom of each comment, which does lead to a decent amount of screen being taken up. Would it be better if there weren’t any links?
yep I’m using Voyager on my iPhone. Maybe a super short summary without links. People could open the bot’s profile and look at the bot’s posts (not comments) if they want to dig deeper to understand a source.
Interesting, so you think the bot should make posts too? Like, a post for each source with a summary of relevant info? Just making sure I understand what you mean
Yeah. It’s an idea for a way to create a user repository within Lemmy that could be edited by the bot as needed. I’m sure there are better ways.
I think this tool, while probably well-intended, only adds to the polarization problem of the world.
Can you elaborate? Like, do you think the bot would be better if it didn’t label things as “left” or “right” (ie: remove the bias rating) or do you think the reliability/credibility ratings have the same issue?
While I think it’s important to have some sort of media bias understanding, I dislike the bot being the first (and sometimes only) comment on a post. Maybe it should be reserved only for posts that are garnering attention and that the bot has a definitive media bias answer for (the “no results” comments are just damn annoying to see).
It also has the knock-on effect of boosting the post higher in whichever sorting algorithm users are using. So it often feels artificially controlled whenever something has 100+ upvotes and fewer than 10 comments, knowing the first comment is always a bot. Like, would it be fair for me to have 10 bots that comment factual information on posts I personally like, just to boost their visibility?
The bot is basically loud as fuck in a way that disrupts the comment feed.
Imagine how comments should create and add to a conversation. Imagine how various Lemmy clients present or serve that conversation…
Now imagine how a double dropdown big as fuck post says “fuck you” to that conversation.
Just please consider how the form of your shit can be just as imposing as the content, which I do actually appreciate.
Yet somehow your posts always have me thinking “shut the fuck up” which seems antithetical to building a community.
I think it should be removed
You don’t need every post to have a comment basically saying “this source is ok”. Just post that the source is unreliable on posts with unreliable sources. The definition of what is left or right is so subjective these days, that it’s pretty useless. Just don’t bother.
I agree with that. Having a warning message when the source is known to be extremely biased and/or unreliable is probably a good thing, but it doesn’t need to be in every single thread.
If a source is that bad, it should be banned. I think bot comments on just some posts presents inconsistency.
How about just finally making the bot open source and let people comment or contribute there?
Removed by mod
The bot is pretty accurate and the comments are already pretty short. I feel like if people don’t like it they should just block it.
Removed by mod
Hi! 👋🏾
There’s no such thing as an objective left or right. It’s a relative scale. You shouldn’t have a bot calling things left or right at all.
Also don’t push Ground News. They already get plenty of press from their astroturfing.
Honestly, the first time I had heard of Ground News was in a discussion about implementing it with the bot. Do you have any thoughts on alternatives, or would you prefer that bit just be removed from the bot’s comment?
Someone else in this thread said to link to media literacy resources and I agree with them.
This. The bot is effectively just propaganda for the author’s biases.
Do you think aggregating ratings from multiple fact checkers would reduce that bias?
Hm… At some point a human will have to say “Yes, this response is correct.” to whatever the machine outputs. The output then takes on the bias of that human. (This is unavoidable, I’m just pointing it out.) If this is really not an effort in ideological propaganda, a solution could be for the bot to provide arguments rather than conclusions. Instead of telling me a source is “Left” or “Biased”, it could say: “I found this commentary/article/website/video discussing this source’s political leaning (or quality): Link 1 Link 2 Link 3”
Here you reduce bias by presenting information instead of conclusions, and then letting the reader come to their own conclusions based on that information. This is not only better for education, but also helps readers develop their critical thinking.
Instead of… You know, being told what to think about what by a bot.
No. The problem with your current bot isn’t that the website authors have a particular axe to grind, it’s that they’re just in a rush and a bit lazy.
This means that they tend to say news sites which acknowledge and correct their own mistakes have credibility problems, because it’s right there - the news sites themselves acknowledged issues. Even though these are often the most credible sites, because they fix errors and care about being right.
Similarly the whole left-right thing is just half-assed and completely useless for anyone that doesn’t live in the US. While anyone that does live in the US probably already has an opinion about these US news sources.
Because these are lazy errors, lots of people will make similar mistakes, and aggregating ratings will amplify this, and let you pretend to be objective without fixing anything.
Keep in mind that if you base your judgements of left bias and right bias on the American Overton window, that window has been highly influenced by fascism over the last 10 years, and now your judgement is based on the normalisation of fascism, which your bot is implicitly accepting. That’s bad. If you’re going to characterise sources as left or right in any form, you need to pick a point that you personally define as center. And now your judgements are all going to implicitly push people towards that point. You could say that Karl Marx is the center of the political spectrum, or you could say Mussolini is. Both of those statements are equally valid, and they are as valid as what you are doing now. If you don’t want to push any set of biases, you need to stop calling sources left and right altogether.
Adding more biases doesn’t remove the initial bias.
Get rid of the bot