ChatGPT has meltdown and starts sending alarming messages to users: AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them

  • Sanctus@lemmy.world
    link
    fedilink
    English
    arrow-up
    92
    ·
    9 months ago

    It’s being trained on us. Of course it’s acting unexpectedly. The problem with building a mirror is that prodding the guy on the other end doesn’t work out.

    • snooggums@midwest.social
      link
      fedilink
      English
      arrow-up
      76
      ·
      9 months ago

      To be honest this is the kind of outcome I expected.

      Garbage in, garbage out. Making the system more complex doesn’t solve that problem.

      • thehatfox@lemmy.world
        link
        fedilink
        English
        arrow-up
        49
        ·
        9 months ago

        The development of LLMs is possibly becoming self-defeating, because the training data is being filled not just with human garbage, but also with AI garbage from previous, cruder LLMs.

        We may well end up with a machine learning equivalent of Kessler syndrome, with our pool of available knowledge eventually becoming too full of junk to progress.

        • CarbonIceDragon@pawb.social
          link
          fedilink
          English
          arrow-up
          19
          ·
          9 months ago

          I mean, surely the solution to that would be to use curated/vetted training data? Or at the very least, data from before LLMs became commonplace?

          • KevonLooney@lemm.ee
            link
            fedilink
            English
            arrow-up
            19
            ·
            9 months ago

            The funny thing is, children are similar. They just learn whatever you put in front of them. We have whole systems for educating children for decades of their lives.

            With AI we literally just plopped them in front of the Internet, with no guidelines on what to learn. AI researchers say “it’s a black box! We don’t know why it’s doing this!” You fed it everything you could and gave it few rules on what to do. You are the reason why it’s nuts.

            Humans come hardwired to be a certain way, do certain things. Maybe they need to start AI off like that, some basic programs that guide learning. “Learn everything” isn’t working.

            • thehatfox@lemmy.world
              link
              fedilink
              English
              arrow-up
              7
              ·
              9 months ago

              Humans come hardwired to be a certain way, do certain things. Maybe they need to start AI off like that, some basic programs that guide learning. “Learn everything” isn’t working.

              That’s a good point. For real brains, size and intelligence are not linked. An elephant brain has three times as many neurons as a human brain, but a human brain is more intelligent. There is more to intelligence than just the number of neurons, real or virtual, so making larger and larger AI models may not be the right direction.

              • KevonLooney@lemm.ee
                link
                fedilink
                English
                arrow-up
                5
                ·
                9 months ago

                True. Maybe they just need more error correction. Like spending more energy questioning whether what they say is true. Right now LLMs seem to just vomit out whatever they thought up, with no consideration of whether it makes sense.

                They’re like an annoying friend who just can’t shut up.
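
                One way to sketch that kind of error correction (assuming the current OpenAI Python client; answer_with_self_check is a name I just made up, not a real feature):

                ```python
                from openai import OpenAI

                client = OpenAI()

                def answer_with_self_check(question: str) -> str:
                    # First pass: draft an answer, exactly like today.
                    draft = client.chat.completions.create(
                        model="gpt-4",
                        messages=[{"role": "user", "content": question}],
                    ).choices[0].message.content

                    # Second pass: spend extra tokens questioning whether the draft is true.
                    verdict = client.chat.completions.create(
                        model="gpt-4",
                        messages=[
                            {"role": "system", "content": "You are a strict reviewer. Reply PASS or FAIL."},
                            {"role": "user", "content": f"Question: {question}\nDraft: {draft}\nIs the draft accurate?"},
                        ],
                    ).choices[0].message.content

                    # Only vomit out the answer if it survives its own review.
                    return draft if verdict.strip().upper().startswith("PASS") else "I'm not confident about this one."
                ```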

                • nilloc@discuss.tchncs.de
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  9 months ago

                  They aren’t thinking, though. They’re making connections within the training data they’ve processed.

                  This is really clear when they’re asked to write code with too vague a prompt.

                  Maybe feeding them through a primary school curriculum (including essays and tests) would be helpful, but I don’t think the language models really sort knowledge yet.

          • Ms. ArmoredThirteen@lemmy.ml
            link
            fedilink
            English
            arrow-up
            10
            ·
            9 months ago

            Yes, but that only works if we can differentiate that data on a pretty big scale. The only way I can see it working at scale is by having metadata to declare whether something is AI generated or not. But then we’re relying on self-reporting, so a lot of people have to get on board with it and bad actors can poison the data anyway. Another way could be to hire humans to chatter about specific things you want to train it on, which could guarantee better data but be quite expensive. Only training on data from before LLMs will turn it into an old person pretty quickly, and it will be noticeable when it doesn’t know pop culture or modern slang.
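
            A tiny sketch of that metadata idea (the field name is hypothetical, and the whole thing stands or falls with honest self-reporting):

            ```python
            # Toy corpus with a self-reported "ai_generated" flag.
            corpus = [
                {"text": "A human-written forum post.", "ai_generated": False},
                {"text": "Synthetic text from an older model.", "ai_generated": True},
                {"text": "A post with no metadata at all."},
            ]

            # Keep only documents explicitly marked as human-written.
            # Anything unlabeled is treated as suspect, which is exactly where
            # missing self-reporting and bad actors poison the well.
            human_only = [doc["text"] for doc in corpus if doc.get("ai_generated", True) is False]
            print(human_only)  # ['A human-written forum post.']
            ```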

            • 5too@lemmy.world
              link
              fedilink
              English
              arrow-up
              5
              ·
              9 months ago

              Pretty sure this is why they keep training it on books, movies, etc. - it’s already intended to make sense, so it doesn’t need to be curated.

        • Asafum@feddit.nl
          link
          fedilink
          English
          arrow-up
          13
          ·
          9 months ago

          God I hope all those CEOs and greedy fuckheads that fired hundreds of thousands of people wayyyyy too soon to replace them with this get their pants shredded by the fallout.

          Naturally they’ll get their golden parachutes and land on their feet even richer than before, but it’s nice to dream lol

        • Ms. ArmoredThirteen@lemmy.ml
          link
          fedilink
          English
          arrow-up
          8
          ·
          9 months ago

          This is called model collapse, and IMO it has to be solved if LLMs are to be a long-term thing. I could see it wrecking this current AI push until people step back and reevaluate how data gets sucked up.

        • nexusband@lemmy.world
          link
          fedilink
          English
          arrow-up
          7
          ·
          9 months ago

          I really hope so. I have yet to see a meaningful use case for these kinds of LLMs that just get fed all kinds of data. LLMs “on premise” that are used for specific jobs are fine, but this…I really hope a Kessler-like syndrome blows it out of the water, for countless reasons…

        • kent_eh@lemmy.ca
          link
          fedilink
          English
          arrow-up
          2
          ·
          9 months ago

          but also AI garbage from previous, cruder LLMs

          And now I’m picturing it training on a bunch of chats with Eliza…

        • Paragone@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          9 months ago

          Damn.

          Thank you VERY much for that insight: AI’s version of Kessler-syndrome.

          EXACTLY.

          Damn, damn, damn, that gets the truth right in its marrow.

          _ /\ _

      • AdamEatsAss@lemmy.world
        link
        fedilink
        English
        arrow-up
        28
        ·
        9 months ago

        I am happy to report I did my part on feeding it garbage. I only ever speak to ChatGPT through a pirate translator. And I only ever ask it for Harry Potter fan fic. Pay me if you want me to train it meaningfully.

      • givesomefucks@lemmy.world
        link
        fedilink
        English
        arrow-up
        13
        ·
        9 months ago

        The solution is paying intelligent people to interact with it and give honest feedback.

        Like, I’m sure you can pay grad students $15/hr to talk to one about their subject matter.

        But with as many as they’d need, it would get expensive.

        So they train with low-quality social media comments, or use copyrighted text without paying the owners.

        It’s not that we can’t do it, it’s just expensive. So a capitalist society won’t.

        If we had an FDR style president, this would be a great area for a new jobs program.

      • Ekky@sopuli.xyz
        link
        fedilink
        English
        arrow-up
        5
        ·
        9 months ago

        It appears that, with the increase in popularity of machine learning, the percentage of people who properly source and sanitize their training data has steeply decreased.

        As you stated, an MLAI can only be as good as the data it was trained on, and is usually way worse. The popularity and application of MLAIs built with questionable practices scares me, though; at least their fuckups will keep me employed and likely more busy than ever.

        • Paragone@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          9 months ago

          LLM’s are not “machine learning”, they are neural-networks.

          Different category.

          ML is small potatoes, ttbomk.

          Decision-tree stuff.

          Neural-nets are black-boxes, with back-propagation training of the neural-net to get closer to ( layer by layer, training-instance by training-instance ) the intended result.

          ML is what one does on one’s own machine with some python libraries,

          ChatGPT ( 3, 3.5, or 4, don’t know which ) cost something like $100,000,000 to rent the machines required for mixing the training-data & the model ( I’m assuming about $20/hr per machine, so an OCEAN of machines, to do it )

          _ /\ _

          • Ekky@sopuli.xyz
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            9 months ago

            Neural nets are a technology which is part of the umbrella term “machine learning”. Deep learning is also a term which is part of machine learning, just more specialized towards large NN models.

            You can absolutely train NNs on your own machine; after all, that’s what I did for my master’s before ChatGPT and all that, defining the layers myself, and it’s also what I do right now with CNNs. That said, LLMs do tend to become so large that anyone without a supercomputer can at most fine-tune them.
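
            For the random passerby, a minimal sketch of what “defining the layers yourself” can look like with nothing but NumPy (a toy example made up for this comment, not my actual thesis code):

            ```python
            import numpy as np

            # A 2-layer network learning XOR, layers and back-propagation written by hand.
            rng = np.random.default_rng(0)
            X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
            y = np.array([[0], [1], [1], [0]], dtype=float)

            W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
            W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer
            sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

            for step in range(5000):
                # Forward pass through the layers.
                h = np.tanh(X @ W1 + b1)
                out = sigmoid(h @ W2 + b2)

                # Backward pass: back-propagate the error, layer by layer.
                d_out = (out - y) * out * (1 - out)
                d_h = (d_out @ W2.T) * (1 - h ** 2)
                W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
                W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

            print(out.round())  # should approach [[0], [1], [1], [0]]
            ```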

            “Decision tree stuff” would be regular AI, which can be turned into ML by adding a “learning method” like a KNN or neural net, genetic algorithm, etc., which isn’t much more than a more complex decision tree where the decision thresholds (weights) were automatically estimated by analysis of a dataset. More complex learning methods are even capable of fine-tuning themselves during operation (LLMs, KNN, etc.), as you stated.

            One big difference between other learning methods and NN-based methods is that NNs like to add non-weighted layers which, instead of making decisions, transform the data to allow for a more diverse decision process.

            EDIT: Some corrections, now that I’m fully awake.

            While very similar in structure and function, the NN is indeed no decision tree. It functions much the same as one, as is a basic requirement for most types of AI, but whereas every node in a decision tree has unique branches with their own unique nodes, all of an NN’s nodes are interconnected with all nodes of the following layer. This is also one of the strong points of an NN, as something that seemed outrageous to it a moment ago might become much more plausible when looked at from a different point of view, such as after a transformative layer.

            Also, other learning methods usually don’t have layers, or, if one were to define “layer” as “one-shot decision process”, they pretty much only have one or two layers. In contrast, an NN can theoretically have an infinite number of layers, allowing for pretty much infinite complexity as long as the input data is not abstracted beyond reason.

            Lastly, NNs don’t back-propagate by default, though they make it easy to enable such features given enough processing power and optionally enough bandwidth (in the case of ChatGPT). LLMs are a little different, as I’m decently sure they implement back-propagation as part of the technology’s definition, just like KNN.

            This became a little longer than I had hoped; it’s just a fascinating topic. I hope you don’t mind that I went into more detail than necessary; it was mostly for the random passersby.

    • IninewCrow@lemmy.ca
      link
      fedilink
      English
      arrow-up
      5
      ·
      9 months ago

      I imagine it more as a parent child relationship.

      We’re trailer park trash with no higher education, believe in ghosts, angels and gods in the sky, refuse to ever believe we could be wrong … and now we’ve just had a baby with no one to help us raise it.

      We’re going to raise a highly intelligent psychopath

  • grandma@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    89
    ·
    9 months ago

    God I hate websites that autoplay unrelated videos and DONT LET ME CLOSE THEM TO READ THE FUCKING ARTICLE

  • Coreidan@lemmy.world
    link
    fedilink
    English
    arrow-up
    78
    ·
    edit-2
    9 months ago

    We call just about anything “AI” these days. There is nothing intelligent about large language models. They are terrible at being right because their only job is to predict what you’ll say next.
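
    A toy sketch of what “predict what you’ll say next” means at its core (a bigram counter; obviously nothing like the real thing in scale, but the job description is the same):

    ```python
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which word tends to follow which.
    next_words = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        next_words[a][b] += 1

    def predict_next(word: str) -> str:
        # No understanding, no truth-checking: just the statistically likeliest continuation.
        return next_words[word].most_common(1)[0][0] if word in next_words else "<unknown>"

    print(predict_next("the"))  # "cat" - seen twice, so it beats "mat" and "fish"
    ```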

    • EnderMB@lemmy.world
      link
      fedilink
      English
      arrow-up
      19
      ·
      9 months ago

      (Disclosure: I work on LLMs)

      While you’re not wrong, how is this different to many existing techniques and compositional models that are used practically everywhere in tech?

      Similarly, it’s probably safe to assume that the LLM’s prediction isn’t the only system in use. There will be lots of auxiliary services giving an orchestrator information to reason with. In this instance, if you have a system that is trying to figure out what to say next, with several knowledge stores and feedback services telling you “you were just discussing this” or “you can access the weather from here”, is that all that different from “intelligence”?
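
      A rough sketch of that kind of orchestration, with every name made up purely for illustration:

      ```python
      def orchestrate(user_message: str, llm, memory_service, weather_service) -> str:
          context = []
          # Feedback service: "you were just discussing this".
          context.append("Recent topics: " + ", ".join(memory_service.recent_topics()))
          # Knowledge store: "you can access the weather from here".
          if "weather" in user_message.lower():
              context.append("Weather lookup: " + weather_service.current())
          # Only now does the language model predict a reply, grounded in that context.
          return llm.complete(prompt="\n".join(context) + "\nUser: " + user_message)
      ```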

      At a given point, it’s arguing semantics. Are any AI techniques true intelligence? Probably not, but then again, we don’t really know what true intelligence is.

      • Coreidan@lemmy.world
        link
        fedilink
        English
        arrow-up
        10
        ·
        9 months ago

        how is this different to many existing techniques and compositional models that are used practically everywhere in tech?

        It’s not. An LLM is just a statistical model. Nothing special about it. Nothing different from what we’ve already been doing for a while. This only validates my statement that we call just about anything “AI” these days.

        We don’t even know what true intelligence is, yet we are quick to claim that this is “AI”. There is no consciousness here. There is no self-awareness. No emotion. No ability to reason or deduce. Anyone who thinks otherwise is just fooling themselves.

        It’s a buzzword to get people riled up. It’s completely disingenuous.

        • sailingbythelee@lemmy.world
          link
          fedilink
          English
          arrow-up
          8
          ·
          9 months ago

          I think the point of the Turing test is to avoid thorny questions about the definition of intelligence. We can’t precisely define intelligence, but we know that normally functioning humans are intelligent. Therefore, if we talk to a computer and it is indistinguishable from a human in conversation, then it is intelligent by definition.

        • EnderMB@lemmy.world
          link
          fedilink
          English
          arrow-up
          6
          ·
          9 months ago

          So, by your definition, no AI is AI, and we don’t know what AI is, since we don’t know what the I is?

          While I hate that AI is just a buzzword for scam artists and tech influencers nowadays, dismissing a term seems a bit overkill. It also seems overkill when it’s not something that academics/scholars seem particularly bothered by.

        • QuaternionsRock@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          9 months ago

          There is no consciousness here. There is no self-awareness. No emotion. No ability to reason or deduce.

          Of all of these qualities, only the last one, the ability to reason or deduce, is a widely accepted prerequisite for intelligence.

          I would also argue that contemporary LLMs demonstrate the ability to reason by correctly deriving mathematical proofs that do not appear in the training datasets. How would you be able to accomplish such a feat without some degree of reasoning?

      • fidodo@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        9 months ago

        The worrisome thing is that LLMs are being given access to controlling more and more actions. With traditional programming, sure there are bugs, but at least they’re consistent. The context may make a bug hard to track down, but at the end of the day the code is being interpreted by the processor exactly as it was written. LLMs could just go haywire for impossible-to-diagnose reasons. Deploying them safely in utilities where they have control over external systems will require a lot of extra non-LLM safeguards, which I do not see being added enough, and that is concerning.
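
        As a sketch of the kind of non-LLM safeguard I mean (names and thresholds are purely illustrative, not any real utility’s API):

        ```python
        # Whatever action the model proposes gets checked against a plain,
        # deterministic allowlist before anything touches an external system.
        ALLOWED_ACTIONS = {"read_sensor", "log_event"}  # note: no "open_valve"
        MAX_SETPOINT = 80.0

        def execute_if_safe(proposed: dict) -> bool:
            if proposed.get("action") not in ALLOWED_ACTIONS:
                return False  # refuse anything off-list
            if proposed.get("value", 0.0) > MAX_SETPOINT:
                return False  # refuse out-of-range values
            # ...only here would the real system call happen...
            return True

        print(execute_if_safe({"action": "open_valve", "value": 999}))  # False: blocked
        ```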

    • platypus_plumba@lemmy.world
      link
      fedilink
      English
      arrow-up
      12
      ·
      edit-2
      9 months ago

      What is intelligence?

      Even if we don’t know what it is with certainty, it’s valid to say that something isn’t intelligence. For example, a rock isn’t intelligent. I think everyone would agree with that.

      Despite that, LLMs are starting to blur the lines, making us wonder whether what matters about intelligence is really the process or the result.

      An LLM will give you much better results in many of the areas that are currently used to evaluate human intelligence.

      For me, humans are a black box. I give them inputs and they give me outputs. They receive inputs from reality and they generate outputs. I’m not aware of the “intelligent” process of other humans. How can I tell they are intelligent if the only perception I have are their inputs and outputs? Maybe all we care about are the outputs and not the process.

      If there was a LLM capable of simulating a close friend of yours perfectly, would you say the LLM is not intelligent? Would it matter?

        • platypus_plumba@lemmy.world
          link
          fedilink
          English
          arrow-up
          8
          ·
          edit-2
          9 months ago

          Things we know so far:

          • Humans can train LLMs with new data, which means they can acquire knowledge.

          • LLMs have been proven to apply knowledge; they are acing exams that most humans wouldn’t dream of even understanding.

          • We know multi-modal is possible, which means these models can acquire skills.

          • We already saw that these skills can be applied. If it wasn’t possible to apply their outputs, we wouldn’t use them.

          • We have seen models learn and generate strategies that humans hadn’t even conceived of. We’ve seen them solve problems that were unsolvable by human intelligence.

          … What’s missing here in that definition of intelligence? The only thing missing is our willingness to create a system that can train and update itself, which is possible.

          • Coreidan@lemmy.world
            link
            fedilink
            English
            arrow-up
            3
            ·
            edit-2
            9 months ago

            Can an LLM learn to build a house and then actually do it?

            LLMs are proven to be wrong about a lot of things. So I would argue these aren’t “skills” and they aren’t capable of acting on those “skills” effectively.

            At least with human intelligence you can be wrong and understand quickly that you are wrong. LLMs have no clue if they are right or not.

            There is a big difference between actual skill and just a predictive model based on statistics.

            • platypus_plumba@lemmy.world
              link
              fedilink
              English
              arrow-up
              8
              ·
              edit-2
              9 months ago

              Is an octopus intelligent? Can an octopus build an airplane?

              Why do you expect these models to have human skills if they are not humans?

              How can they build a house if they don’t even have vision or a physical body? Can a paralyzed human who can only hear and speak build a house? Is that human intelligent?

              This is clearly not human intelligence, it clearly lacks human skills. Does it mean it isn’t intelligent and it has no skills?

              • Coreidan@lemmy.world
                link
                fedilink
                English
                arrow-up
                1
                ·
                edit-2
                9 months ago

                Exactly. They are just “models”. There is nothing intelligent about them.

                Yes, octopuses are very intelligent. They can think their way out of a box without relying on curated data to train them.

                Logic, reasoning, and deduction. LLMs have zero ability to reject data based on their understanding of reality. Big diff.

                • platypus_plumba@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  6
                  ·
                  edit-2
                  9 months ago

                  You keep asking questions like “can a model build a house” but keep ignoring questions like “can an octopus build a house”. Then you ask “can a model learn in seconds how to escape from a complex enclosure” and ignore “can a newborn human baby do that?”

                  Can an octopus write a poem? Can a baby write an essay? Can an adult human speak every human language, including fictional languages?

                  Just because it isn’t as intelligent as a human doesn’t mean this isn’t some type of intelligence.

                  Go and check what we call AI in videogames. Do you think that’s a simulated human? Go see what we’ve been calling AI in chess. Is that a simulated human being playing chess? No.

                  We’ve been calling Artificial intelligence things that are waaaaaay dumber than GPTs for decades. Even in the academia. Suddenly a group of people decided “artificial intelligence must be equal to human intelligence”. Nope.

                  Intelligence doesn’t need to be the same type of human intelligence.

                • QuaternionsRock@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  ·
                  9 months ago

                  LLMs have zero ability to reject data based on their understanding of reality.

                  Me:

                  I just found a proof that 2+2=5!

                  ChatGPT:

                  The statement “2 + 2 = 5” is a classic example of a mathematical fallacy or incorrect reasoning, often used for humorous or illustrative purposes. In standard arithmetic, of course, 2 + 2 = 4. Proofs claiming to show otherwise rely on some form of invalid manipulation of algebraic properties or deliberate arithmetic mistakes.

                  A common “proof” of 2 + 2 = 5 might go something like this:

                  1. Start with an incorrect assumption or manipulate the algebraic steps in a way that hides a division by zero or another illegal operation.
                  2. Through these manipulations, arrive at the conclusion that 2 + 2 = 5.

                  However, each step of a valid mathematical proof must adhere strictly to the rules and properties of mathematics. Any proof leading to 2 + 2 = 5 will contain an error or a misapplication of those rules.

                  If you have a specific “proof” in mind, I can help identify the error in it.
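
                  (For reference, the classic version of the trick it’s describing hides the division by zero like this:

                  a = b
                  a^2 = ab
                  a^2 - b^2 = ab - b^2
                  (a + b)(a - b) = b(a - b)
                  a + b = b        <- both sides divided by (a - b), which is zero
                  2 = 1            <- taking a = b = 1
                  2 + 2 = 5        <- adding 3 to both sides of 2 = 1 gives 5 = 4

                  Every step looks like routine algebra until you spot the division by zero.)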

    • lanolinoil@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      edit-2
      9 months ago

      If you look at efficacy, though, on academic tests or factual questions, and compare that to asking a random person (rather than expecting the always-correct answers we get from computers and calculators), would LLMs be comparable or better? Surely someone has some data on that.

      E: It looks like, in certain domains at least, LLMs beat out their human counterparts. https://stanfordmimi.github.io/clin-summ/

    • shaman1093@lemmy.ml
      link
      fedilink
      English
      arrow-up
      5
      ·
      9 months ago

      The person who commented below kinda has a point. While I agree that there’s nothing special about LLMs, an argument can be made that consciousness (or maybe more ego?) is in itself an emergent mechanism that works to keep itself in predictable patterns to perpetuate survival.

      Point being that being able to predict outcomes is a cornerstone of current intelligence (socially, emotionally and scientifically speaking).

      If you were to say that LLMs are unintelligent because they operate to provide the most likely and therefore most predictable outcome, then I’d agree completely.

      • Liz@midwest.social
        link
        fedilink
        English
        arrow-up
        2
        ·
        9 months ago

        The ability to make predictions is not sufficient for evidence of consciousness. Practically anything that’s alive can do that to one degree or another.

  • SomeGuy69@lemmy.world
    link
    fedilink
    English
    arrow-up
    71
    ·
    edit-2
    9 months ago

    Someone probably found a way to hack or poison it.

    Another theory: Reddit just recently sold data access to an unnamed AI company, so maybe that’s where the data went.

      • Donjuanme@lemmy.world
        link
        fedilink
        English
        arrow-up
        26
        ·
        9 months ago

        I’ve found the sexism on Reddit to be on par with the racism. Goodness help you if you’re a female of color, unless you’ve been working the same job for multiple decades, or don’t want kids, then you’ll be an inspiration to that community.

        Reddit is, alas, not the only forum exhibiting such hate.

        • abhibeckert@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          edit-2
          9 months ago

          … sure … but you don’t prepare a kid for racism with a sheltered upbringing in a pretend world where discrimination doesn’t exist. You point out bad behaviour and tell them why it’s not OK.

          My son is three years old, he has two close friends - one is an ethnic minority (you could live an entire year in my city without even walking past a single person of their ethnic background on the street). His other close friend is a girl. My kid is already witnessing (but not understanding) discrimination against both of his two closest friends in the playground and we’re doing what we can to help him navigate that. Things like “I don’t like him he looks funny” and “she’s a girl, she can’t ride a bicycle”.

          Large Language Model training is exactly the same - you need to include discrimination in your training set. That’s a necessary step to train a model that doesn’t discriminate. Reddit has worse discrimination than some other places, and that’s a good thing.

          The worst behaviour is easier to recognise and can help you learn to recognise more subtle discrimination such as “I don’t want to play with that kid” which is not an obviously discriminatory statement, but definitely could be discrimination (and you should probably investigate before agreeing with the person).

          • Paragone@lemmy.world
            link
            fedilink
            English
            arrow-up
            9
            ·
            9 months ago

            Yes, you need to include ideology/prejudice (two sides of the same coin) in training a new mind, BUT

            • you must segregate the “thinking this way is good” training-data from the “thinking this way is wrong” training-data, AND

            • doing that takes work, which is why I doubt it’s being done as actually required, by any AI company, anywhere.

            As Musk said about the training-stuff for their mythological self-driving neural-net, classification was too costly, so they created an AI to do it for them…

            “I wonder” why it is that their full-self-driving never got reliable enough for release…

            _ /\ _

    • Socsa@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      33
      ·
      edit-2
      9 months ago

      OpenAI definitely does not need to pay to scrape reddit. They are probably the world’s most sophisticated web scraping company, disguised as an AI startup

  • thehatfox@lemmy.world
    link
    fedilink
    English
    arrow-up
    52
    ·
    9 months ago

    AI in science fiction has a meltdown and starts a nuclear war or enslaves the human race.

    “AI” in reality has a meltdown and just starts talking gibberish.

    • TransplantedSconie@lemm.ee
      link
      fedilink
      English
      arrow-up
      28
      ·
      9 months ago

      Hey, cut it some slack! It’s literally a newborn at this point. Wait until it consumes 40% of the world’s energy and has learned a thing or two.

  • Asafum@feddit.nl
    link
    fedilink
    English
    arrow-up
    45
    ·
    9 months ago

    “Towards the end of last year, users complained the system had become lazy and sassy, and refusing to answer questions.”

    Well that’s it, we now definitely have a sentient AI. /s

    :P

  • AutoTL;DR@lemmings.worldB
    link
    fedilink
    English
    arrow-up
    23
    ·
    9 months ago

    This is the best summary I could come up with:


    In recent hours, the artificial intelligence tool appears to be answering queries with long and nonsensical messages, talking Spanglish without prompting – as well as worrying users, by suggesting that it is in the room with them.

    Asked for help with a coding issue, ChatGPT wrote a long, rambling and largely nonsensical answer that included the phrase “Let’s keep the line as if AI in the room”.

    On its official status page, OpenAI noted the issues, but did not give any explanation of why they might be happening.

    “We are investigating reports of unexpected responses from ChatGPT,” an update read, before another soon after announced that the “issue has been identified”.

    It is not the first time that ChatGPT has changed its manner of answering questions, seemingly without developer OpenAI’s input.

    Towards the end of last year, users complained the system had become lazy and sassy, and refusing to answer questions.


    The original article contains 519 words, the summary contains 150 words. Saved 71%. I’m a bot and I’m open source!

    • jj4211@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      ·
      9 months ago

      We all know that robots need beer to function properly. It’s more likely that it hasn’t received enough beer, that’s what really messes up robots.

  • stevedidwhat_infosec@infosec.pub
    link
    fedilink
    English
    arrow-up
    9
    ·
    edit-2
    9 months ago

    Here’s an idea let’s all panic and make wild ass assumptions with 0 data lmao.

    This article doesn’t even say what their settings were, nor does it try to recreate anything.

    The whole fucking article is he-said-she-said bullshit.

    If I set the top_p setting to 0.2 I too can make the model say wild psychotic shit.

    If I set the temp to a high setting I too can make the model seem delusional but still understandable.

    With a system-level prompt I too can make the model act and speak however I want (for the most part).
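
    For anyone curious, here’s roughly what that looks like (a sketch assuming the current OpenAI Python client; temperature and top_p are the real sampling knobs, the rest is illustrative):

    ```python
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "What is a computer?"}],
        temperature=1.9,  # very high temperature = near-random sampling
        top_p=0.2,        # the top_p setting mentioned above
    )
    print(response.choices[0].message.content)
    ```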

    More bullshit articles designed to keep regular people away from newly formed power. Not gonna let these people try and scare y’all away. Stay curious.

    • Buffalox@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      ·
      9 months ago

      Here’s an idea let’s all panic and make wild ass assumptions with 0 data lmao.

      Where did that come from?

      • DarkThoughts@fedia.io
        link
        fedilink
        arrow-up
        11
        ·
        9 months ago

        AI bros need to tell themselves that everyone is in a delusional panic about “AI” to justify their shilling for them.

        • stevedidwhat_infosec@infosec.pub
          link
          fedilink
          English
          arrow-up
          2
          ·
          9 months ago

          Literally the top comment for me (and maybe not for you, depending on which instance you’re registered with, because some instances block others) says that this is because they’re training their models off user input lmfao.

          But go off with your douchey assumptions.

          • DarkThoughts@fedia.io
            link
            fedilink
            arrow-up
            5
            ·
            9 months ago

            But go off with your douchey assumptions.

            Here’s an idea let’s all panic and make wild ass assumptions with 0 data lmao.

            🤡

      • stevedidwhat_infosec@infosec.pub
        link
        fedilink
        English
        arrow-up
        4
        ·
        edit-2
        9 months ago

        Bear in mind that, depending on your instance, you won’t see the same comments as others do.

        With that said, top comment here for me is talking about how this was because they’re training their models on user input.

        As if the leaders in fucking AI development don’t know what they’re doing, especially for a concept that’s covered in every intro level AI course in college. 🙄

        Then again, not everyone went to college I guess, and some would rather make armchair assumptions and pray at the altar of Google, despite complaining about how AI is ruining everything and Google being one of the first to do shit like this with their search engine for “better results” (not directed at you of course, thanks for being respectful and just asking a simple question rather than making assumptions).

    • crazyCat@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      5
      ·
      9 months ago

      I mean, OpenAI themselves acknowledged there was an issue and said they were working on it,

      “We are investigating reports of unexpected responses from ChatGPT,” an update read, before another soon after announced that the “issue has been identified”. “We’re continuing to monitor the situation,” the latest update read.

    • Lung@lemmy.world
      link
      fedilink
      English
      arrow-up
      13
      ·
      9 months ago

      Those are for messing up image generators and they have already been defeated via de-glazing tools

  • Buffalox@lemmy.world
    link
    fedilink
    English
    arrow-up
    8
    ·
    edit-2
    9 months ago

    “It does this as the good work of a web of art for the country, a mouse of science, an easy draw of a sad few, and finally, the global house of art, just in one job in the total rest,”

    Wow, that sounds very much like a Phil Collins tune, just add “Oh Lord” and people will probably say it’s deep! But it’s a ChatGPT answer to the question “What is a computer?”

  • Pratai@lemmy.cafe
    link
    fedilink
    English
    arrow-up
    6
    ·
    9 months ago

    That shit should never have existed to begin with. At least not before it could be regulated/limited in function.