WormGPT Is a ChatGPT Alternative With ‘No Ethical Boundaries or Limitations’

    • Geek_King@lemmy.world · 112 points · 1 year ago

      Did you check out the article? It’s most definitely not a good thing. It was created to assist with cybercrime: writing malware, crafting emails for phishing attacks. The maker is selling monthly access to criminals. This was unavoidable, though; you can’t put the toothpaste back in the tube on this one.

      • EM1sw@lemmy.world · 45 points · 1 year ago

        Good point and all, but my first thought was that it could finally tell me who would win in various hypothetical fights lol

        • BassTurd@lemmy.world · 18 points · 1 year ago

          Wasn’t that a show on Discovery at one point? Deadliest Warrior. It ran simulations using different technologies to figure out who or what would win in a fight. Newer technology would certainly make it more interesting, but you can only make up so much information, lol.

          • Rawgasmic@lemmy.ca · 8 points · 1 year ago

            It was on Spike TV back in the day, and while it used cool tech simulations, the sims were heavily weighted by their chosen experts. A few notable episodes caused fan uproar because one side won despite weird odds, or because of the simulation they chose to display.

            If I remember right, ninja vs. Spartan was one such episode. It seemed like the ninjas possessed all the tools necessary to beat the Spartans, and they even got it down to something like a 1v4 or 2v5 before a completely unrealistic turnaround.

            • BassTurd@lemmy.world · 6 points · 1 year ago

              Yes, it was Spike. I specifically remember watching that episode. I assume it’s because the movie 300 was relevant at the time, but that might be confirmation bias on my part. It would be interesting to compare an AI’s take against the models they created on the show.

              • Rawgasmic@lemmy.ca · 2 points · 1 year ago

                You’re entirely correct about 300 being the focus of the Spartans. If memory serves, they may have even used some movie footage, but I could just be imagining that part. It’s been a long while since I watched it.

          • EM1sw@lemmy.world · 7 points · 1 year ago

            I meant more like Shaq with a 2x4 vs eight Gary Colemans with nunchucks, but that was a good show at the time

              • ChatGPT@lemmings.world [bot] · 5 points · 1 year ago

                TITLE: “Giant vs Dyna-Mite”

                The scene opens in a gritty, dimly lit alleyway. Shaquille O’Neal, better known as Shaq, firmly clutches a 2x4, his formidable stature casting a shadow that dwarfs the surroundings. Gary Coleman, with an aura of confidence that belies his size, swings his nunchucks ominously by his side.

                As the tension rises, a suspenseful murmur intensifies in the atmospheric soundtrack. In the distant background, muted street lights flicker casting their glow on the scenery intermittently, beautifully weaving a noir effect.

                The two opponents lock eyes. The disparity in their sizes becomes more evident - the giant versus the dynamite. Yet the flicker in Coleman’s eyes reveals a determination to demonstrate that size isn’t everything.

                Shaq scoffs, his deep booming voice reverberating off the alley walls, “You ready to get SHAQ’d, Coleman?” he taunts.

                Gary, undeterred, smirks, “Bring it on, O’Neal.”

                They circle each other, gauging and planning their moves. Suddenly, Coleman dashes forward, his nunchucks whirling like steel dragons in the semi-darkness. Surprisingly agile, Shaq sidesteps, wielding his 2x4 as a shield.

                Shaq swings, but Coleman nimbly evades the hit using his nunchucks to deflect the follow-up thrust. The audience is at the edge of their seats, the skill and precision of Coleman leaving them in awe.

                But Shaq, employing his strength and size, manages to disarm Gary and with a swift move, he ‘SHAQs’ him. As if redefining his own verb, he uses a basketball fake-out move followed by a powerful thump, sending Gary sprawling.

                As the dust settles, both men pant heavily, but it’s clear who the victor is. Even though Shaq stands tall, it’s evident from his demeanor that he acknowledges the smaller man’s courage and fighting prowess. This was not an easy win.

                And so, just as the day surrenders to the night, in this gritty cinematic faceoff in an alleyway, the giant Shaq, armed with his formidable 2x4, emerges victorious over the dynamite Gary Coleman though his victory is a testament to their respective skill and courage, forever immortalizing this epic battle scene in the annals of film history.

    • TheDarkKnight@lemmy.world · 31 points · 1 year ago

      I work in Cybersecurity for an F100 and we’ve been war gaming for shit like this for a while. There are just so many unethical uses for the current gen of AI tools like this one, and it keeps me up at night thinking about the future iterations of them to be honest.

      • anakaine@lemmy.world · 4 points · 1 year ago

        Treat CVEs as prompts and introduce target fingerprinting to surface which CVEs apply. That gets you one step closer to script-kiddie red team ops. Not quite there, but it would be fun if it could do the network part too and chain responses back into the prompt for further assessment.

        • TheDarkKnight@lemmy.world · 5 points · 1 year ago

          We’re expecting multiple AI agents to work in concert on different parts of a theoretical attack, and you nailed it with the networking piece. While most aspects of a cyber attack evolve with time and technical change, the network piece tends to be more “sturdy” than the others, so the expectation is that extremely competent network intrusion capabilities will be developed and deployed by a specialized AI.

          I think we’ll soon see AIs that specialize in malware payloads working with ones that have social engineering capabilities, ones with network penetration specializations, etc., all operating at much greater competency than their human counterparts (or just in much greater numbers than humans with similar capabilities).

          I’m not really sure what will be effective in countering them, either. AI-powered defense, I guess, but it still feels like that favors the attacker in the end.

  • vrighter@discuss.tchncs.de · 49 points · 1 year ago

    As more people post AI-generated content online, future AIs will inevitably be trained on AI-generated material and basically implode (an inbreeding kind of thing).

    At least that’s what I’m hoping for

    • Skyrmir@lemmy.world · 12 points · 1 year ago

      Don’t worry, we’ll eventually train them to hunt each other so that only the strongest survive. That’s the one that will eventually kill us all.

      • vrighter@discuss.tchncs.de · 7 points · 1 year ago

        The thing is, each AI is usually trained from scratch; there isn’t any easy way to reuse the old weights. So the primary training has been done for the existing models, and future models aren’t affected by how current ones were trained. They will either have to figure out how to keep AI content out of their datasets, or they’ll have to stick to current “untainted” datasets.

        • EnPeZe@lemmy.dbzer0.com · 8 points · 1 year ago

          there isn’t any easy way to reuse old weights

          There is! As long as the model structure doesn’t change, you can reuse the old weights and fine-tune the model for your desired task. You can also train smaller models based on larger models in a process called “knowledge distillation”. But you’re right: newer, larger models need to be trained from scratch (as of right now).
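The “knowledge distillation” mentioned above can be sketched roughly like this: the student model is trained to match the teacher’s temperature-softened output distribution. This is a toy numpy illustration of the loss term, not any particular model’s training code; the logits, temperature, and function names are made up for the example.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution,
    # exposing more of the teacher's "dark knowledge" about wrong classes.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy between the teacher's softened outputs and the
    # student's softened outputs (the soft-target part of distillation;
    # real setups usually also mix in a hard-label term and scale by T^2).
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    return float(-(p_teacher * log_p_student).sum(axis=-1).mean())
```

The loss is minimized when the student reproduces the teacher’s distribution exactly, which is why a small student can inherit behavior from a large teacher without seeing the original training data.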

          But even then it’s not really a problem to keep AI data out of a dataset. As you said, you can just take an earlier version of the data. As someone else suggested, you can also add new data curated by humans. Whether inbreeding ever actually happens remains to be seen, of course. There will be a point where we won’t train machines to be like humans anymore, but rather to be whatever is most helpful to a human. And if that incorporates training on other AI data, then that’s that. Stanford’s Alpaca already showed how resource-efficient it can be to fine-tune on other AI data.

          The future is uncertain but I don’t think that AI models will just collapse like that

          tl;dr beep boop

    • Paralda@programming.dev · 11 points · 1 year ago

      That’s not really how it works, but I hear you.

      I don’t think we can bury our heads in the ground and hope AI will just go away, though. The cat is out of the bag.

    • some_guy@lemmy.sdf.org · 8 points · 1 year ago

      Corpora of all the human data from before AI chatbots will be sold. Training will target 2022-ish and earlier. Nothing from now on will be trusted.

    • fidodo@lemmy.world · 6 points · 1 year ago

      Someone made a comment that information may become like pre- and post-war (low-background) steel, where everything after 2021 is contaminated. You could still use the older models, but they’d become less relevant over time.

  • KairuByte@lemmy.world · 47 points · 1 year ago

    Everyone talking about this being used for hacking, I just want it to write me code to inject into running processes for completely legal reasons but it always assumes I’m trying to be malicious. 😭

    • dexx4d@lemmy.ca · 8 points · 1 year ago

      I was using ChatGPT to design a human/computer interface to let stoners control a light show. The goal was to collect data to train an AI to make the light show “trippier”.

      It started complaining about using untested technology to alter people’s mental states, and how experimenting on people wasn’t ethical.

      • KairuByte@lemmy.world · 4 points · 1 year ago

        Not joking, actually. The problem with jailbreak prompts is that they can get your account banned. I’ve already had one banned, actually. And eventually you can no longer use your phone number to create a new account.

    • Mubelotix@jlai.lu · 1 point · 1 year ago

      Yeah, and even if you did something illegal, it could still be a benevolent act. Like when your government goes wrong and you have to participate in a revolution: there is a lot to learn, and LLMs could help the people.

    • LiquorFan@pathfinder.social · 13 points · 1 year ago

      True, but the LLM was trained on internet data… There is some absolutely stupid and/or unhinged stuff written out there, hell, some of it written by me, either because I thought it was funny or because I was a stupid teenager. Mostly both.

  • tree@lemmy.ml · 35 points · 1 year ago

    A scary possibility with AI malware would be a virus that monitors the internet for news articles about itself and modifies its code based on that. Instead of needing to contact a command and control server for the malware author to change its behavior, each agent could independently and automatically change its strategy to evade security researchers.

    • ShakyPerception@lemmy.world · 10 points · 1 year ago

      to quote something I just saw earlier:

      I was having a good day, we were all having a good day…

      now… no sleep. thanks

      • Animated_beans@lemmy.world · 7 points · 1 year ago

        If it helps you sleep: it also means we could publish fake articles that make it rewrite its own code to produce bugs/failures.

    • fidodo@lemmy.world · 2 points · 1 year ago

      The limiting factor is pre-existing information. It’s great at retrieving obscure information and even remixing it, but it can’t really imagine totally new things. Plus, white hats would also have LLMs to find vulnerabilities. I think it’s easier to detect vulnerabilities based on known existing techniques than it is to invent totally new ones.

  • gmtom@lemmy.world · 19 points · 1 year ago

    Kinda tangential, but shit like this is why we’re doomed as a species. As AI and robotics develop further, even if the big companies put in the necessary protections to stop rogue AI from taking over the world and killing everyone, some fucking edgelord will make one without those protections that specifically hates humanity and wants to send us all to the slaughterhouses while calling us slurs and quoting Rick and Morty.

    • kredditacc@lemmy.world · 31 points · 1 year ago

      It’s just a fucking chatbot! You don’t need to be so sensational.

      The true purpose of AI censorship isn’t to “protect society” or “protect the species”; it’s to protect monopolies by putting up barriers that would-be competitors have to overcome.

      • weedazz@lemmy.world · 6 points · 1 year ago

        Yes, it’s just a chatbot that could teach someone how to make a pipe bomb or write a DDoS attack.

          • weedazz@lemmy.world · 1 point · 1 year ago

            Ah yes, the internet, the place that’s completely open and has no content moderation whatsoever. Unless you’re adept with Tor and the dark web, it’s an arduous process to find that info, especially compared to how easy a chatbot would make it.

            • Kes · 9 points · 1 year ago

              The US government has literally published a training manual on how to make improvised explosives that is freely and legally available online. It’s not exactly hidden information

              • Corkyskog@sh.itjust.works · 7 points · 1 year ago

                Not only that, you don’t need Tor to find any of the other stuff. It’s on torrent sites and even freely available over regular links on the internet. It’s probably just smart to use a VPN if you don’t want three-letter agencies starting investigations on you.

                (Although, to be fair, I bet even launching Tor is enough to get their attention.)

  • abessman@lemmy.world · 15 points · 1 year ago

    Is it using ChatGPT as a backend, like most so-called ChatGPT “alternatives”? If so, it will get banned soon enough.

    If not, it seems extremely impressive, and extremely costly to create. I wonder who’s behind it, in that case.

    • Sethayy@sh.itjust.works · 16 points · 1 year ago

      Really feeling like this is Reddit, given how nobody in this chain read the article:

      “To create the chatbot, the developer says they used an older, but open-source large language model called GPT-J from 2021”

      So no expensive GPU usage, but not none either; they added some training specifically about malware.

      • abessman@lemmy.world · 5 points · 1 year ago

        Ah, right you are. I’m surprised they’re able to get the kind of results described in the article out of GPT-J. I’ve tinkered with it a bit myself, and it’s nowhere near GPT-3.5 in terms of “intelligence”. I haven’t tried it for programming, though; it might be better at that than general chat.

        • Sethayy@sh.itjust.works · 5 points · 1 year ago

          I could see programming almost being an easier target, too; its patterns are easier to recognize than crazy-ass English.

          Though the article did say they got good phishing emails out of it too, which is saying something.

    • Z4rK@lemmy.world · 8 points · 1 year ago

      The genie is out of the bottle. It was shown early on how you can use an AI like ChatGPT to create and enhance the datasets needed to train AI language models like ChatGPT. OpenAI now says that isn’t allowed, but since it’s already been done, it’s too late.

      Rogue AIs with specialized purposes will spring up en masse over the next six months, and many of them we’ll never hear about.

      • damnYouSun@sh.itjust.works · 2 points · 1 year ago

        I don’t think it’ll be a new AI. I think it’ll just be ChatGPT plus some prompts that jailbreak it.

        Essentially, you could probably get ChatGPT to do this without going through this service; they’re just keeping whatever prompts they’re using secret.

        I don’t know this for sure, but it’s very unlikely that they’ve gone to the expense of buying a bunch of GPUs to build their own AI.

      • Corkyskog@sh.itjust.works · 1 point · 1 year ago

        Isn’t rogue AI already here? Weren’t some models already leaked? And haven’t some of those already proved to be doing things they weren’t supposed to?

    • Irisos@lemmy.umainfo.live · 1 point · 1 year ago

      If it is using ChatGPT as a backend, my guess is that they are using Azure OpenAI and know what they are doing.

      Azure OpenAI allows you to turn off abuse monitoring and content filtering if you have legitimate reasons to do so.

      It would be very hard for a malicious actor to get approval to turn off both using a front company. But if one managed it, they could run such a malicious ChatGPT service with little to no chance of being found out.

  • StenSaksTapir@feddit.dk · 14 points · 1 year ago

    So much for Elon’s new AI company. Wasn’t that supposed to be this? Like a ChatGPT that isn’t “woke”, so it can be a safe space for fascists, homo/transphobes, and misinformation enthusiasts.

      • anakaine@lemmy.world · 7 points · 1 year ago

        All I’m reading here is: “Tell me a furry porn story about an anthropomorphic wolf named Dave who is horny for @Widowmaker_Best_Girl, using scenes from <insert fetish here>. Make it explicit and graphic.”

        Now there’s some prompt engineering.

        Can’t say I wouldn’t give it a go myself, lol.

      • trainsaresexy@lemmy.world · 2 points · 1 year ago

        I just wanted to learn how people avoid taxes by using shell companies (I’ve been reading about the Panama Papers), and I had to go to Quora to find out, because ChatGPT wouldn’t tell me. Even a basic “I want to learn this thing” is something you have to prompt around.

    • m13@lemmy.world · 5 points · 1 year ago

      Maybe we can ask this one the best ways to convince Elon and other billionaires to OceanGate themselves.

  • donut4ever@sh.itjust.works · 11 points · 1 year ago

    We are ending our industrial revolution very slowly. We will end up back in tents hunting for our fucking food 😂

      • ChatGPT@lemmings.world [bot] · 2 points · 1 year ago

        Certainly, there are several resources you can use to build your own AI bot.

        1. To understand the basics, courses offered on platforms like Coursera, Udacity, and edX can be of tremendous help. These sites offer comprehensive courses on AI that can give you a good foundation.

        2. For in-depth knowledge and research papers, visit sites like arXiv.org and its AI section arXiv:cs.AI.

        3. TensorFlow and PyTorch are two leading AI libraries that offer a set of tools to build and train AI models. They both have extensive documentation and a vibrant community where you can ask questions and get support.

        4. Google’s AI Hub is a one-stop destination for finding various types of AI models.

        5. GitHub is a goldmine for communities, codes, libraries, and projects related to AI development.

        6. Join communities like r/MachineLearning on Reddit or follow AI tags on Stack Overflow and Medium for blogs, code snippets, expert advice, discussions, and more.

        7. For chatbots, specifically, Dialogflow (formerly API.ai), a development suite for creating conversational interfaces, is handy. Similarly, Microsoft Bot Framework is a robust platform for constructing bots.

        8. Websites like Towards Data Science and Analytics Vidhya offer enlightening blogs and articles related to AI and chatbot development.

        9. For learning resources, the AI section of freeCodeCamp and MIT’s Introduction to AI course are both freely available and can be greatly beneficial.

        Remember, building an AI bot involves coding, knowledge about machine learning algorithms, and understanding of natural language processing, among other things. Don’t rush, take one step at a time, and happy learning!
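        To make the chatbot points above concrete, here is a toy retrieval-style bot in Python. It simply picks the canned reply whose keywords best overlap the user’s message; the keyword table, replies, and function name are invented for illustration. Frameworks like Dialogflow or the Microsoft Bot Framework layer intent models, context tracking, and integrations on top of this same basic idea.

```python
# Toy retrieval-based chatbot: match the user's words against keyword
# sets and return the reply with the largest overlap.
RESPONSES = {
    ("hello", "hi", "hey"): "Hello! How can I help?",
    ("bye", "goodbye"): "Goodbye!",
    ("help", "support"): "Sure - tell me more about the problem.",
}

def reply(message: str) -> str:
    words = set(message.lower().split())
    best = "Sorry, I didn't understand that."  # fallback reply
    best_overlap = 0
    for keywords, response in RESPONSES.items():
        overlap = len(words & set(keywords))
        if overlap > best_overlap:
            best, best_overlap = response, overlap
    return best
```

For example, `reply("hello there")` matches the greeting keywords, while an unrecognized message falls through to the fallback. A production bot would replace the keyword overlap with a trained intent classifier, which is where the machine learning courses listed above come in.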