• kibiz0r@midwest.social · 52 points · 2 days ago

    Tim Harford mentioned this in his 2016 book “Messy”.

    They just wanna call it AI and make it sound like some mysterious intelligence we can’t comprehend.

    • frezik@midwest.social · 6 points · edited · 4 hours ago

      It sorta is.

      A key way that human intelligence works is to break a problem down into smaller components that can be solved individually. This is in part due to the limited computational ability of the human brain; there’s not enough there to tackle the complete problem.

      However, there’s no particular reason AI would need to be limited that way, and it often isn’t. Expert Go players see this in the AIs for that game: they tend to make all sorts of early moves that don’t seem to follow the usual logic, because they’ve laid out the complete game in their “head” and are going directly for the goal. At this point, Go is basically impossible for a human to win against the best AIs.

      This is a different kind of intelligence than we’re used to, but there’s no reason to discount it as invalid.
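      A toy illustration of that whole-game search (using Nim rather than Go, since Go can’t be brute-forced, and with function names of my own invention): exhaustive search picks opening moves that can look arbitrary to a human playing by local heuristics, yet they are exactly the winning ones.

```python
from functools import lru_cache

# Exhaustive game-tree search on normal-play Nim (last player to take wins).
# The search "sees" the whole game at once; for Nim, the move it finds is the
# one a human would only know via the non-obvious xor-of-pile-sizes rule.

@lru_cache(maxsize=None)
def wins(piles):
    """True if the player to move wins with perfect play from these pile sizes."""
    return any(
        not wins(tuple(sorted(piles[:i] + (piles[i] - k,) + piles[i + 1:])))
        for i in range(len(piles))
        for k in range(1, piles[i] + 1)
    )

def best_move(piles):
    """Return a (pile_index, take) pair that leaves the opponent losing, or None."""
    for i in range(len(piles)):
        for k in range(1, piles[i] + 1):
            rest = tuple(sorted(piles[:i] + (piles[i] - k,) + piles[i + 1:]))
            if not wins(rest):
                return i, k
    return None  # every move loses

print(best_move((3, 4, 5)))  # -> (0, 2): take 2 from the 3-pile, zeroing the xor
```

      Nothing in the code knows the xor rule; pure search over the complete game tree rediscovers it.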

      See the paper “Understanding Human Intelligence through Human Limitations”.

  • clucose@lemmy.ml · 92 points · 2 days ago

    “It is possible for AI to hallucinate elements that don’t work, at least for now. This requires some level of human oversight.”

    So, the same as LLMs and they got lucky.

    • ATDA@lemmy.world · 5 points · 14 hours ago

      It’s like putting a million monkeys in a writers’ room, but supercharged on meth and consuming insane resources.

      • john89@lemmy.ca · 3 points · 8 hours ago

        That monkey analogy is so far removed from reality, I think less of anyone who perpetuates it.

        A room full of monkeys banging on keyboards will always generate gibberish, because they’re fucking monkeys.

  • RedWeasel@lemmy.world · 43 points · 2 days ago

    This isn’t exactly new. A few years ago I heard about a case where an AI-designed chip had wires that shouldn’t have done anything, since they didn’t connect to anything, but if they were removed the chip stopped working correctly.

  • Lettuce eat lettuce@lemmy.ml · 37 points · 2 days ago

    “We are coming up with structures that are complex and look randomly shaped, and when connected with circuits, they create previously unachievable performance. Humans cannot really understand them, but they can work better.”

    Great, so we will eventually have black box chips running black box algorithms for corporations where every aspect of the tech is proprietary and hidden from view with zero significant oversight by actual people…

    The true cyber-dystopia.

    • KeenFlame@feddit.nu · 4 points · 8 hours ago

      Man, so you have personally vetted all the code your devices execute? It’s already true.

    • Doorbook@lemmy.world · 5 points · 1 day ago

      This has been going on in chess for a while as well. Computers can detect patterns that humans cannot, because they have a better memory and a larger knowledge base.

    • meliante@lemm.ee · 8 points · 2 days ago

      Well, that’s kind of like the human brain, isn’t it? You don’t really know how it does its thing, but it does it.

      • Lettuce eat lettuce@lemmy.ml · 13 points · edited · 2 days ago

        Nope, we have entire fields of study focused on the brain and cognition, with thousands of experts and decades of research and experimentation behind them. We understand a great deal about how our brains work and why we behave the way we do.

        Plus, your brain is not created and owned entirely by trillion-dollar megacorps whose primary incentive is to use it to increase profitability.

        • meliante@lemm.ee · 6 points · 1 day ago

          We also know how “AI” works and how it creates its outputs, in the same way we know the brain.

          Don’t try to equate having fields of study and experts with definitive knowledge of something; that’s fallacious.

          • Lettuce eat lettuce@lemmy.ml · 3 points · 1 day ago

            And yet, this AI expert stated that we don’t know why the AI designed the chip in specific ways. There’s a difference between understanding the rough mechanism of something and understanding why it happened.

            Imagine hiring an engineer to design something: they hand you a finished design but cannot explain what it is, how they actually designed it, how it works, or why they made the specific choices they did.

            I never made the false equivalency you claimed I did, and you also never addressed my second criticism, which is telling.

    • KeenFlame@feddit.nu · 1 point · 8 hours ago

      They are all of the same breed, and it’s an ongoing field of study. The megacorps have soiled the use of them, but they are still extremely strong support tools for some things, like detecting cancer on X-rays and stuff.

    • brlemworld@lemmy.world · 4 points · 1 day ago

      I want AI that takes a foreign language movie, and augments their face and mouth so it looks like they are speaking my language, and also changes their voice (not a voice over) to be in my language.

    • FourPacketsOfPeanuts@lemmy.world · 19 points · 2 days ago

      Read the article: it’s still ‘dreaming’ and spewing garbage; it’s just that in some iterations it’s gotten lucky. “Human oversight needed,” they say. The AI has no idea what it’s doing.

      • Flaqueman@sh.itjust.works · 17 points · 2 days ago

        Yeah I got that. But I still prefer “AI doing science under a scientist’s supervision” over “average Joe can now make a deepfake and publish it for millions to see and believe”

      • BrianTheeBiscuiteer@lemmy.world · 3 points · edited · 2 days ago

        I wonder how well it could work to use AI in developing an algorithm to generate chip designs. My annoyance with all of this stuff is how much people say, “Look! AI invented something new! It only took a few hours and 100x the resources!”

        AI is mainly the capitalist dream of a drinking bird toy keeping a nuclear reactor online and paying a layman slave wages to make sure the bird does its job (obligatory “Simpsons did it”).

        • FourPacketsOfPeanuts@lemmy.world · 1 point · 2 days ago

          Maybe, but remember that generative AI isn’t any kind of deductive or methodical reasoning. It’s literally “mash up the publicly available info and give a crowd-sourced version of what to add next”. This works for art because that kind of random harmony appeals to us aesthetically, and art is an area where people seek fewer constraints. But engineering is the opposite. Maybe it’s useful for getting engineers out of a rut and imagining new possibilities, but that’s it. Generative AI has no idea whether what it’s smushed together is garbage or randomly insightful.
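          A bigram sampler makes the “what to add next” idea concrete (a drastically simplified stand-in for a real model; the corpus and names here are made up): it only knows which word tended to follow which, never whether the output means anything.

```python
import random
from collections import Counter, defaultdict

# Toy "crowd-sourced next word" model: count which word follows which in a
# corpus, then sample successors from those counts. There is no notion of
# correctness, only of what the corpus tended to say next.
corpus = ("the chip works because the wires connect and "
          "the chip fails because the wires float").split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1           # e.g. follows["the"] == {"chip": 2, "wires": 2}

def generate(start, n, seed=0):
    """Emit up to n words after `start` by repeatedly sampling a likely successor."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:          # dead end: the last word never had a successor
            break
        words, counts = zip(*options.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the", 6))        # plausible-sounding, but nothing checks the meaning
```

          Real models condition on far longer contexts and use learned weights instead of raw counts, but the sampling step is the same kind of move: pick what usually comes next, not what is true.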

    • fl42v@lemmy.ml · 4 points · 2 days ago

      Idk, kinda the same, but instead of misinformation we get ICs that release a cloud of smoke in the shape of a cat when presented with a specific pattern of inputs (or smth equally batshit crazy).