• Dragon Rider (drag)@lemmy.nz

    The sweet release of death.

    Or, you know, we could devote serious resources to studying the nature of consciousness instead of pretending we already have all the answers, and use that knowledge to figure out how to treat AI ethically.

    Utilitarians believe ethics is about maximizing happiness. What if we could build AI farms with trillions of simulants doing heroin all the time with no ill effects?

    • VeganCheesecake

      End commercial usage of LLMs? Honestly, I’m fine with that; why not? We don’t have to agree on the reason.

      I’m not saying that better understanding the nature of consciousness wouldn’t be great, but there’s plenty of research that deserves much more funding, and that isn’t really an LLM problem; it’s a systemic one. And I haven’t seen any convincing evidence that current models are conscious, nor do I see how they could be, considering how they work.

      I feel like the last part is something the AI from the paperclip-maximizer thought experiment would do.