Note: this Lemmy post was originally titled “MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline” and linked to this article, which I cross-posted from this post in !fuck_ai@lemmy.world.

Someone pointed out that “Science, Public Health Policy and the Law”, the website which published this click-bait summary of the MIT study, is not a reputable publication deserving of traffic, so 16 hours after posting it I am editing this post (as well as the two other cross-posts I made of it) to link to MIT’s page about the study instead.

The actual paper is here and was previously posted on !fuck_ai@lemmy.world and other lemmy communities here.

Note that the study with its original title got far fewer upvotes than the click-bait summary did 🤡

  • Wojwo@lemmy.ml · 124 points · 1 month ago

    Does this also explain what happens with middle and upper management? As people have moved up the ranks during the course of their careers, I swear they get dumber.

    • ALoafOfBread@lemmy.ml · 70 points · 1 month ago

      That was my first reaction. Using LLMs is a lot like being a manager. You have to describe goals/tasks and delegate them, while usually not doing any of the tasks yourself.

      • sheogorath@lemmy.world · 17 points · 1 month ago

        Fuck, this is why I’ve been feeling dumber myself after getting promoted to more senior positions, where I only have to work at the architectural level and on the stuff the more junior staff can’t handle.

        With LLMs basically my job is still the same.

      • rebelsimile@sh.itjust.works · 3 points · 1 month ago

        Since stepping back from being a direct practitioner, I’ll say all my direct reports are “faster” in the programs we use at work than I am, but I’m still waaaaaaaaaay more efficient than all of them (their inefficiencies actually drive me crazy). But I’ve also taken up a lot of development work to keep my mind sharp. If I only had my team to manage and not my own personal projects, I could really see myself regressing a lot.

    • vacuumflower@lemmy.sdf.org · 24 points · 1 month ago

      My dad around 1993 designed a cipher better than RC4 (I know that’s not a high bar now, but it kinda was then), which passed an audit by the relevant service.

      My dad around 2003 was still sharp; he’d explain interesting mathematical problems to me and my sister, and notice similar patterns and interesting things in real life.

      My dad around 2005 was promoted to a management position and was already becoming kinda dumber.

      My dad around 2010 was a fucking idiot; you’d have thought he was mentally impaired.

      My dad around 2015 apparently went to a fortuneteller to “heal me from autism”.

      So yeah. I think it’s a bit like what happens to elderly people when they retire. Everything needs to be trained, and real tasks give you a feeling of being alive; giving orders and sitting through endless could-have-been-an-email meetings makes you both dumb and depressed.

    • socphoenix@midwest.social · 15 points · 1 month ago

      I’d expect similar, at least. When one doesn’t keep up to date on new information and lets their brain coast, it atrophies like any other muscle would from disuse.

  • DownToClown@lemmy.world · 89 points · 1 month ago

    The obvious AI-generated image and the generic name of the journal made me think there was something off about this website/article, and sure enough: the writer of this article is on X claiming that COVID-19 vaccines are not fit for humans and that there’s a clear link between vaccines and autism.

    Neat.

    • Tad Lispy@europe.pub · 64 points · edited · 1 month ago

      Thanks for the warning. Here’s the link to the original study, so we don’t have to drive traffic to that guy’s website.

      https://arxiv.org/abs/2506.08872

      I haven’t got time to read it, and now I wonder whether it was represented accurately in the article.

    • Arthur Besse@lemmy.ml (OP) · 20 points · edited · 1 month ago

      Thanks for pointing this out. Looking closer, I see that “journal” is definitely not something I want to be sending traffic to, for a whole bunch of reasons - besides being anti-vax, they’re also anti-trans, and they’re gold bugs… and they’re asking tough questions like “do viruses exist” 🤡

      I edited the post to link to MIT instead, and added a note in the post body explaining why.

  • QuadDamage@kbin.earth · 51 points · 1 month ago

    Microsoft reported the same findings earlier this year; spooky to see a more academic institution report the same results. https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf Abstract, for those too lazy to click:

    The rise of Generative AI (GenAI) in knowledge workflows raises questions about its impact on critical thinking skills and practices. We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using GenAI, and 2) when and why GenAI affects their effort to do so. Participants shared 936 first-hand examples of using GenAI in work tasks. Quantitatively, when considering both task- and user-specific factors, a user’s task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks. Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.

  • Korkki@lemmy.ml · 28 points · 1 month ago

    You write an essay with AI, your learning suffers.

    One of those papers that are basically “water is wet, researchers discover”.

    • masterofn001@lemmy.ca · 7 points · 1 month ago

      To date, after having gooned once (ongoing since September 2023), my core executive functions, my cognitive abilities and my behaviors have not suffered in the least. In fact, potato.

  • suddenlyme@lemmy.zip · 23 points · 1 month ago

    It’s so disturbing, especially the bit about your brain activity not returning to normal afterwards. And they’re teaching kids to use it in elementary schools.

    • hisao@ani.social · 12 points · 1 month ago

      I think they meant it doesn’t return to non-AI-user levels when you do the same task on your own immediately afterwards. But if you keep doing the task on your own for some time, I’d expect it to return to those levels rather fast.

      • xthexder@l.sw0.com · 3 points · 1 month ago

        That’s probably true, but it sure can be hard to motivate yourself to do things on your own when that AI dice roll is right there to give you an immediate dopamine hit. I’m starting to see things like vibecoding becoming as addictive as gambling.
        Personally, I don’t use AI because I see all the subtle ways it’s wrong when programming. The more I pay attention to things like AI search results, the more it seems there’s almost always something misrepresented or subtly incorrect in the output; for any topic I’m not already fluent in, I likely won’t notice those errors until they’re already causing issues.

        • hisao@ani.social · 2 points · 1 month ago

          This “dopamine hit” isn’t a permanent source of happiness; repeatedly clicking the “randomize” button isn’t going to keep you constantly high. After three, maybe five hits, you start noticing a common pattern that gets old really fast. To make it better, you need to come up with ways to declare different structures, establish rulesets and checklists, and make some unique pieces at certain checkpoints yourself, while letting the LLM fill in all the boilerplate around them. That’s more effort, but it also produces more rewarding results.

          I like to think about it this way: the LLM produces the best, most generic thing possible for the prompt. Then I look at it, consider which parts I want to be less generic, and reprompt. In programming or scripting, I’m okay with the “best generic thing” that solves my problem. If I were writing novels, maybe it’s usable for a kind of top-down writing where you start with a high-level structure and clarify it step by step down to the lowest level. You can use AI to write around this structure, and if something is too boring or generic, it’s again simply a matter of refining that part of the structure and expanding it into multiple more detailed pieces.

  • lechekaflan@lemmy.world · 16 points · 1 month ago

    cognitive decline.

    Another reason for refusing those so-called tools… it could turn one into another tool.

    • surph_ninja@lemmy.world · 4 points · 1 month ago

      It’s a clickbait title. Using AI doesn’t actually cause cognitive decline. They’re saying using AI isn’t as engaging for your brain as the manual work, and then broadly linking that to the widely understood concept that you need to engage your brain to stay sharp. Not exactly groundbreaking.

      • mika_mika@lemmy.world · 2 points · 30 days ago

        Sir this is Lemmy & I’m afraid I have to downvote you for defending AI which is always bad. /s

    • morto@piefed.social · 16 points · 1 month ago

      You’re not a dinosaur. Making people feel old and out of touch is exactly one of the strategies big tech uses to shove their stuff onto people.

      • sidelove@lemmy.world · 19 points · 1 month ago

        Not only that but *broad gestures at society and the state of the world post-Internet*

        • sqgl@sh.itjust.works · 6 points · 1 month ago

          All of us geeks were evangelical about the Internet in the ’90s. It is very humbling.

    • QuadDamage@kbin.earth · 16 points · 1 month ago

      The people the paper talks about are the masses who think LLMs are “intelligent” and outsource their frontal lobes to Silicon Valley datacenters because it seems easier. People who see LLMs as tools are much less (if at all) affected by this; if anything, it’s a trap for people who already have weaker critical thinking skills in the first place and want GPUs to think for them.

    • the_q@lemmy.zip · 8 points · 1 month ago

      You don’t think it’s odd that you use AI and here you are defending it?

      • Ganbat@lemmy.dbzer0.com · 15 points · 1 month ago

        Realistically speaking, why would anyone think it’s odd to defend something they use and/or enjoy? That doesn’t really point to anything abnormal.

          • Artisian@lemmy.world · 0 points · 1 month ago

            I wouldn’t want to apply that to my favorite policy interventions.

            You’re probably right that ranked choice voting won’t unlock utopia, and your favorite flavor of communism probably leads to the worst endless meetings. But we don’t have to like it.

      • Angelusz@lemmy.world · 8 points · 1 month ago

        It is not. As with all things, LLMs have their use. Unfortunately, they are slightly overhyped and the tech is very resource hungry, contributing to environmental and societal problems in at least the USA, probably everywhere to at least some extent.

        The hope is, of course, that the same tech will help alleviate those problems in turn. Time will tell who’s right.

        • Korkki@lemmy.ml · 3 points · 1 month ago

          Haven’t you heard? Wall Street needs AI to be “good”, or 75% of the tech companies plus Nvidia take a nosedive and we get another ’08 recession.

      • QuadDamage@kbin.earth · 5 points · 1 month ago

        …you are in a technology community? They’re barely defending anything anyway; it’s just a reasonable take noting that people said the same things about earlier technologies.