I know MediaBiasFactCheck is not the be-all and end-all of truth/bias in media, but I find it to be a useful resource.

It makes sense to downvote it in posts that have great discussion – let the content rise up so people can have discussions with humans, sure.

But sometimes I see it getting downvoted when it’s the only comment there. Which does nothing, unless a reader app has rules that automatically hide downvoted comments (but the reader would still be able to expand the comment anyway… so really, no difference).

What’s the point of downvoting? My only guess is that there are people who are salty about something it said about some source they like. Yet I don’t see anyone providing an alternative to MediaBiasFactCheck…

  • finley@lemm.ee · 4 months ago

    i’m not here to waste time trying to convince you of something about which you’ve clearly made up your mind, since others have shared plenty of facts, made great arguments, and all you do is keep shifting the goalposts.

    not to mention: it’s not for me to prove your claims-- that’s on you, and you haven’t. all i have claimed is that i’m satisfied, and the only proof you need of that is my word on the matter.

    so, once again, since you haven’t proven anything other than you disagree with it, i suggest you simply block it and move on with your life. you have no greater authority to decide what is or is not a “reliable source” than MBFC, but at least they show their work.

    • FuglyDuck@lemmy.world · 4 months ago

      Since others have shared plenty of facts, made great arguments, and all you do is keep shifting the goalposts.

      I shift the goalposts but am just repeating myself? Interesting.

      In any case… as for my “claims” perhaps I’ve missed something. Again. From their own methodology page:

      The primary aim of our methodology is to systematically evaluate the ideological leanings and factual accuracy of media and information outlets. This is achieved through a multi-faceted approach that incorporates both quantitative metrics and qualitative assessments in accordance with our rigorously defined criteria.

      Okay, so that’s the high-level sales pitch. Emphasis mine.

      Perhaps, just perhaps, I’ve missed where they spell out what those defined criteria are. Let’s keep reading.

      While the concept of bias is inherently subjective and lacks a universally accepted scientific formula, our methodology employs a series of objective indicators to approximate it. We utilize a visual representation—a yellow dot on a scale—to signify the extent of bias for each evaluated source. This scale is accompanied by a “Detailed Report” section which elaborates on the source’s characteristics and the basis for its bias rating.

      Our bias assessment encompasses various dimensions, including political orientation, factual integrity, and the utilization of credible, verifiable sources. It’s crucial to note that our bias scale is calibrated to the political spectrum of the United States, which may not align with the political landscapes of other nations.

      Objective indicators? What indicators? Where? For you or me to understand how they’re arriving at their analysis, we need to know what “objective indicators” they’re using. They’re not listed anywhere I can find. Perhaps I’ve missed them. I don’t think I have. But perhaps I have.

      Now, skipping down to the specific categories…

      The categories are as follows:

      • Biased Wording/Headlines- Does the source use loaded words to convey emotion to sway the reader. Do headlines match the story?
      • Factual/Sourcing- Does the source report factually and back up claims with well-sourced evidence.
      • Story Choices: Does the source report news from both sides, or do they only publish one side.
      • Political Affiliation: How strongly does the source endorse a particular political ideology? Who do the owners support or donate to?

      Alright, now we’re getting to the stuff I’m asking for! Maybe. Uh. Shit. Take just “Biased Wording/Headlines”: they have no list of common loaded words. For example, is “Deadly Wildfire” okay but “Deadly Attack” not? Both describe events in which people presumably died. What you, I, or anyone else perceives as “loaded” is going to be entirely different. You want to rigorously define criteria for bias? You’re going to have to at least provide examples, and not just on the individual ratings. Protip: the lack of strong or emotional language is also an indication of bias. For examples of that, watch the reports surrounding any cop who killed a subject; you’re almost certainly going to see the pro-cop news agencies shy away from language that evokes anger.

      Then they get into their “comprehensive” analysis:

      For a thorough evaluation, we review a minimum of 10 headlines and 5 news stories from each source. Our methodology employs a variety of search techniques to ensure a comprehensive understanding of the source’s political affiliation and ideological leanings. This process can be time-consuming or very simple, depending on the source.

      Yeah. Um. That’s not “comprehensive”. At all. MPR News (Minnesota Public Radio), just from today, just counting the articles that get highlighted, has 28 articles. From today. And that’s not even looking at the massive number of MPR/NPR-affiliated podcasts and such being pumped out, sometimes three times a day.

      Further, there’s no information on which articles are selected, which can have a profound impact on whether or not a source gets a passing grade for factualness. If you’re only checking ten articles out of literal thousands a year (or even a hundred out of thousands), how you select the articles to review matters enormously. Is it random? Is it by top rating? Are they cherry-picked? Top headlines from random dates?
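
      Just to illustrate how much the selection method can matter, here’s a toy simulation. Every number in it is invented (the outlet size, the error rates, the assumption that errors cluster in top headlines); none of it comes from MBFC:

```python
import random

# Toy model: a hypothetical outlet publishes 2,000 articles a year.
# Assume ~5% contain factual errors overall, but the errors cluster in
# the 200 "top headline" pieces (20% error rate there, ~3.3% elsewhere).
# All numbers are invented for illustration.
random.seed(42)

articles = []
for i in range(2000):
    top = i < 200
    error_prob = 0.20 if top else 0.033
    articles.append({"top_headline": top,
                     "has_error": random.random() < error_prob})

def error_rate(sample):
    """Fraction of the sampled articles that contain an error."""
    return sum(a["has_error"] for a in sample) / len(sample)

random_sample = random.sample(articles, 10)                   # random picks
top_sample = [a for a in articles if a["top_headline"]][:10]  # "top headlines"

print(f"true error rate:        {error_rate(articles):.1%}")
print(f"10 random articles:     {error_rate(random_sample):.1%}")
print(f"10 top-headline picks:  {error_rate(top_sample):.1%}")
```

      Ten random picks will land somewhere near the true rate on average; ten “top headlines” from a toy outlet like this one will grade it far more harshly. Selection cuts both ways, which is exactly why the method needs to be published.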

      And let’s draw attention to that last line: “This process can be time-consuming or very simple, depending on the source.” Meaning… the process itself varies based on the source. Even if there’s more to work with for a given source, the process should probably not be any more or less simple; the process should be the process. That’s the purpose of a methodology.

      Skipping the descriptions of their fact-check ratings… all I’m going to say here is that there’s no objective standard for what “consistent” or “often” means, or for what miss rate on factualness is acceptable. I will submit that, for example, VOA News should probably be given a low factual score based on this statement:

      A “Low” rating indicates the source is often unreliable and should be fact-checked for fake news, conspiracy theories, and propaganda.

      You know, considering VOA is literally a state media outlet whose entire purpose is to pump out propaganda; yet it’s given a “High” rating. But what do I know; they certainly weren’t forbidden from broadcasting inside US borders because of their propagandist nature.

      Their criteria for which fact-check services they use are useful:

      Our methodology incorporates findings from credible fact-checkers who are affiliated with the International Fact-Checking Network (IFCN). Only fact checks from the last five years are considered, and any corrected fact checks do not negatively impact the source’s rating.

      IFCN is good. The date restriction is good. Explaining how corrected fact checks affect things… is good. I would like to see a note on which fact-checkers they always use, or always use when relevant (for example, reviewing a French news service using, I dunno, a Taiwanese fact-checker seems kinda sketchy). Do they search all 115 current signatories and the other 54 that are in the renewal process? Do they search only those from the source’s home country? When do they elect to expand beyond that? Do they only use one service at all?

      I’d assume they use some sort of aggregator service to look for fact checks across all of them at once. Personally, my preferred choice would be an aggregation service combining all of them, searching for articles tagged as fact checks of the specific source (rather than searching for each article being reviewed), then organizing those by some sort of pass/mostly-pass/fail/epically-fail metric. But that’s just me.
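
      For what it’s worth, that bucketing idea is simple to sketch. Everything below is hypothetical: the verdict labels, the bucket mapping, and the toy “aggregator” results are stand-ins I made up, not IFCN’s or MBFC’s actual taxonomy:

```python
from collections import Counter

# Hypothetical mapping from fact-check verdicts to the buckets described
# above. These labels are invented; real fact-checkers use varied scales.
VERDICT_BUCKETS = {
    "true": "pass",
    "mostly true": "mostly-pass",
    "mostly false": "fail",
    "false": "fail",
    "fabricated": "epically-fail",
}

def bucket_fact_checks(fact_checks, outlet):
    """Tally one outlet's fact-check verdicts into pass/fail buckets."""
    return dict(Counter(
        VERDICT_BUCKETS[fc["verdict"]]
        for fc in fact_checks
        if fc["outlet"] == outlet and fc["verdict"] in VERDICT_BUCKETS
    ))

# Stand-in for results from an aggregator query tagged by outlet.
checks = [
    {"outlet": "Example Daily", "verdict": "true"},
    {"outlet": "Example Daily", "verdict": "mostly false"},
    {"outlet": "Example Daily", "verdict": "fabricated"},
    {"outlet": "Other News", "verdict": "true"},
]
print(bucket_fact_checks(checks, "Example Daily"))
# → {'pass': 1, 'fail': 1, 'epically-fail': 1}
```

      The point isn’t the code; it’s that a transparent methodology could publish exactly this kind of mapping and tally instead of leaving readers to guess.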

      TL;DR? My goalpost has always been that their methodology is opaque and doesn’t show that their method reasonably eliminates their own bias. That has never changed. They don’t describe what acceptable error rates for factualness are (never mind the severity of the error: reporting that a person wore a green shirt when they wore a blue shirt might be factually incorrect, but does it really matter if the story isn’t about what shirt they wore?). They don’t describe, even in brief detail, what “loaded” or “biased” headlines actually look like. And they describe a literal propaganda service as “Least Biased”.

      They cite NewsGuard as a competitor (I’m not sure about that, but they’re in the same space; from what I see on their website, they’re selling their service to different audiences, like brands looking to advertise on a specific site, etc.). Let’s look at NewsGuard’s methodology page. I’m not going to go into detail, but you see how it’s broken down? How specific it is? Each criterion is specifically listed, with the reasons for a site passing or failing that criterion, as well as express explanations of what things mean as you’re looking through it. Not “we judge on bias… which means that we look for biased words…”. For instance, one phrase you’ll see is “that a regular user would not likely see it on a daily basis”.

      Check their scoring process. They have a researcher (described as a trained journalist) research the website and make a report, then they write the article. That article is then put on pause for comment from the company in question, then reviewed by people (“at least one senior editor and Co-CEO”…) to check for factual accuracy and what have you. Only then is it published. I assume that MBFC has something similar, but that’s an assumption; nowhere do they describe their editorial process. For all we know, it really is just one guy in a cat suit working the one article, doing it his way, while the lady in the dog suit does it her way and the editorial staff are in a two-person horse suit searching for organic oats. I’d rather assume not, but again, that is an assumption on my part.

      • finley@lemm.ee · 4 months ago (edited)

        With all due respect: I’m not reading that.

        Ya know, I’ve had some great interactions with you here in the past, and generally we’re on the same page, but on this, we disagree. And I doubt we’re going to change each other’s minds, so I’m not really going to waste any more time on this discussion with you.

        And, I know this is me repeating myself, but i again suggest that you just block the bot and move on. It’s not worth the energy you’re putting into it over a disagreement.

        Peace, buddy