Former team members have either resigned or been absorbed into other research groups.

    • Bolt@lemmy.world · 6 months ago
      1. Misalignment is a huge problem in any black-box system, not just in AGIs.
      2. What would it look like for us to be close to AGI? I have doubts that we’re close, but it seems at least plausible.
        • errer@lemmy.world · 6 months ago

          I can’t believe I’m aligned with Meta here, but Yann LeCun is on the right side of history in releasing these models for free. Giving everyone the ability to be competitive with LLMs is a much better outcome than only someone like Sam Altman having the keys.

    • Echo Dot@feddit.uk · 6 months ago

      We’re also so incredibly far off from AGI that it’s just roleplay to pretend like it’s relevant.

      Oh, you know that for certain, do you? Well, that’s reassuring; please share your evidence.

      Are we sure AI isn’t already as intelligent as some humans? The bar isn’t really very high, is it?

        • Echo Dot@feddit.uk · 6 months ago

          Yeah, but you’re the one who made the claim. I’ve heard plenty of counterclaims from people in the industry saying the opposite, so who am I going to believe: some random on the internet, or people who are actually in the industry?