• TheAlbatross

    I think AI can take far fewer jobs than people will try to replace with it; that’s kind of the issue.

    • rockerface 🇺🇦@lemm.ee

      High-skilled jobs will just start using AI as a tool to automate routine work (or have already started, in some cases). The most efficient use of the AIs we have now is to pair them with a human anyway.

      • Dabundis@lemmy.world

        The worry is about how much damage people in decision-making positions are likely to do when they think they can save money by cutting more paid positions.

        • marcos@lemmy.world

          Companies will save so much money once they decide to replace their CEOs with AIs…

          • Drigo@sopuli.xyz

            I’ve never understood this. How could the CEO be replaced? Who would be controlling the AI? Wouldn’t that person just be the new CEO? I have so many questions…

            • marcos@lemmy.world

              If you are seriously trying to understand how to do it… well, you can’t. Current AIs can’t fully replace anybody, and it’s an open question whether they can partially replace (i.e. improve the productivity of) anybody to any impactful extent.

              • Jolteon@lemmy.zip

                Depending on how loosely you define AI, current AIs are great at replacing warehouse workers and jobs that rely heavily on routine and involve little to no innovation or critical thinking.

          • Denjin@lemmings.world
            Fire all staff
            Receive billion dollar check
            Walk away before it all collapses
            Repeat

            Look, I already got the algorithm written right here!
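
            A rough sketch of that “algorithm”, purely for illustration (TypeScript; every name here is a made-up placeholder, not a real API):

            ```typescript
            // Joke sketch only: the "CEO replacement" loop described above.
            type Company = { name: string };

            const fireAllStaff = (c: Company) => console.log(`${c.name}: all staff fired`);
            const receiveBillionDollarCheck = () => console.log("bonus collected");
            const walkAwayBeforeCollapse = (c: Company) => console.log(`${c.name}: walking away`);
            const nextCompany = (n: number): Company => ({ name: `Corp ${n}` });

            function ceoBot(start: Company, rounds: number): void {
              let company = start;
              for (let i = 0; i < rounds; i++) {
                fireAllStaff(company);             // step 1
                receiveBillionDollarCheck();       // step 2
                walkAwayBeforeCollapse(company);   // step 3
                company = nextCompany(i + 1);      // step 4: repeat
              }
            }

            ceoBot({ name: "ExampleCorp" }, 3);
            ```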

      • greenskye@lemm.ee

        The problem with humans reviewing AI output is that humans are pretty shit at QA. Our brains are literally built to ignore small mistakes. Digging through the output of an AI that’s right 95% of the time is nightmare fuel for human brains. If your task needs more accuracy, it’s probably better to just have the human do it all rather than have them review the AI’s work.
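
        To put rough, made-up numbers on it, here’s a back-of-envelope sketch (TypeScript; the 95% figure is from above, the reviewer catch rate is purely an assumption):

        ```typescript
        // Assumed numbers: the AI is right 95% of the time; a skimming human
        // reviewer only catches about half of the mistakes that do appear.
        const items = 10_000;          // pieces of AI output reviewed
        const aiErrorRate = 0.05;      // 5% of them are wrong
        const reviewerCatchRate = 0.5; // assumption: half the errors get noticed

        const aiErrors = items * aiErrorRate;                    // 500 mistakes produced
        const missedErrors = aiErrors * (1 - reviewerCatchRate); // 250 still slip through

        console.log(`${aiErrors} AI mistakes, ${missedErrors} survive human review`);
        ```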

        • Buddahriffic@lemmy.world

          Then each QA human will be paired with a second AI that will catch those mistakes the human ignores. And another human will be hired to watch that AI and that human will get an AI assistant to catch their mistakes.

          Eventually they’ll need a rule that you can only communicate with the human/AI directly above you or below you in the chain to avoid meetings with entire countries of people.

      • UnderpantsWeevil@lemmy.world

        Should note that a lot of the Microsoft Recall project revolves around continuously capturing human interactions on the computer in real time, with the hope of training a GPT-5 model that can do basic office tasks automagically.

        Will it work? To some degree, maybe. It’ll definitely spit out some convincing-looking gibberish.

        But the promise is to increasingly automate away office and professional labor.

      • MeatsOfRage@lemmy.world

        “Take this code and give me Jest tests with 100% coverage. Don’t describe, don’t scaffold, full output.”

        Saves me hours.
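
        Something like the following is the shape of what that kind of prompt tends to hand back for a small helper (the clamp function and every test case here are made up, purely to illustrate):

        ```typescript
        // Hypothetical helper the tests are written against.
        function clamp(value: number, min: number, max: number): number {
          return Math.min(Math.max(value, min), max);
        }

        // The kind of Jest suite the prompt above tends to produce.
        describe("clamp", () => {
          it("returns the value when it is inside the range", () => {
            expect(clamp(5, 0, 10)).toBe(5);
          });

          it("clamps below the minimum", () => {
            expect(clamp(-3, 0, 10)).toBe(0);
          });

          it("clamps above the maximum", () => {
            expect(clamp(42, 0, 10)).toBe(10);
          });

          it("handles min === max", () => {
            expect(clamp(7, 4, 4)).toBe(4);
          });
        });
        ```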

    • ZILtoid1991@lemmy.world

      Basically it goes the following way:

      • Company gets AI to do stuff.
      • Company fires its workforce.
      • AI isn’t up to the task and is often disliked by people; see its unpopularity in the arts.
      • Company has to rehire staff, first to try to salvage the AI’s output, then to just go back to the good old days of human creativity.

      AI isn’t magic, no matter how much techbros try to humanize the technology because NeuRAl nEtWOrKs.

      • sunbytes@lemmy.world

        How about:

        Company rehires a percentage of its workforce, with the lowered demand for those specific workers driving salaries down.

    • Leate_Wonceslace@lemmy.dbzer0.com

      Do you mean AI in general, just generative models, or LLMs in particular? I’m pretty thoroughly convinced that AI is a general solution to automation, while generative models are only a partial but very powerful solution.

      I think the larger issue is actually that displacement from the workforce causes hardship to those who have been displaced. If that were not the case, most people either wouldn’t care or would actively celebrate their jobs being lost to automation.