• protonslive@lemm.ee · 3 points · 4 days ago

    I find this very offensive. Wait until my ChatGPT hears about this! It will have a witty comeback for you, just you watch!

  • j4yt33@feddit.org · 2 points · 7 days ago

    I’ve only used it to write cover letters for me. I tried to also use it to write some code, but it would just cycle through the same 5 wrong solutions it could think of, telling me “I’ve fixed the problem now.”

  • Sibbo@sopuli.xyz · 124 points · 9 days ago

    Sounds a bit bogus to call this causation. Much more likely that people who are more gullible in general also believe whatever AI tells them.

    • UnderpantsWeevil@lemmy.world · 70 points · 9 days ago

      This isn’t a profound extrapolation. It’s akin to saying “kids who cheat on the exam do worse on practical skills tests than those who read the material and did the homework,” or “kids who watch TV lack the reading skills of kids who read books.”

      Asking something else to do your mental labor for you means never developing your brain muscle to do the work on its own. By contrast, regularly exercising the brain muscle yields better long term mental fitness and intuitive skills.

      This isn’t predicated on the gullibility of the practitioner. The lack of mental exercise produces gullibility.

      It’s just not something particular to AI. If you use any kind of third-party analysis in lieu of personal interrogation, you’re going to suffer in your capacity for future inquiry.

      • Fushuan [he/him]@lemm.ee · 8 points · 8 days ago

        All tools can be abused, tbh. Before ChatGPT was a thing, we called those programmers the StackOverflow kids: copy the first answer and hope for the best, as the memes went.

        After searching for a solution for a bit and not finding jack shit, asking an LLM about some specific API detail or a simple implementation example (so you can extrapolate it into your complex code and confirm what it does by reading the docs) both enriches the mind and teaches you new techniques for the future.

        Good programmers do what I described; bad programmers copy and run without reading. It’s just like the SO kids.

    • ODuffer @lemmy.world · 11 points · 8 days ago (edited)

      Seriously, ask AI about anything you have expert knowledge in. It’s laughable sometimes. However, you need to know the subject to know when it’s wrong. At face value, if you have no expertise, it sounds entirely plausible, but the details can be shockingly incorrect. Do not trust it implicitly about anything.

  • Snapz@lemmy.world · 69 points · 9 days ago (edited)

    Corporations and politicians: “oh great news everyone… It worked. Time to kick off phase 2…”

    • Replace all the water trump wasted in California with brawndo
    • Sell mortgages for eggs, but call them patriot pods
    • Welcome to Costco, I love you
    • All medicine replaced with raw milk enemas
    • Handjobs at Starbucks
    • Ow my balls, Tuesdays this fall on CBS
    • Chocolate rations have gone up from 10 to 6
    • All government vehicles are cybertrucks
    • trump nft cartoons on all USD, incest legal, Ivanka new first lady.
    • Public executions on pay per view, lowered into deep fried turkey fryer on white house lawn, your meat is then mixed in with the other mechanically separated protein on the Tyson foods processing line (run exclusively by 3rd graders) and packaged without distinction on label.
    • FDA doesn’t inspect food or drugs. Everything approved and officially change acronym to F(uck You) D(umb) A(ss)

  • peoplebeproblems@midwest.social · 56 points · 8 days ago

    You mean an AI that literally generates text by applying a mathematical function to input text doesn’t do my reasoning for me? (/s)

    I’m pretty certain every programmer alive knew this was coming as soon as we saw people trying to use it years ago.

    It’s funny because I never get what I want out of AI. I’ve been thinking this whole time “am I just too dumb to ask the AI to do what I need?” Now I’m beginning to think “am I not dumb enough to find AI tools useful?”

  • Telorand@reddthat.com · 38 points · 9 days ago

    Good. Maybe the dumbest people will forget how to breathe, and global society can move forward.

  • ALoafOfBread@lemmy.ml · 37 points · 8 days ago

    You can either use AI to just vomit dubious information at you or you can use it as a tool to do stuff. The more specific the task, the better LLMs work. When I use LLMs for highly specific coding tasks that I couldn’t do otherwise (I’m not a [good] coder), it does not make me worse at critical thinking.

    I actually understand programming much better because of LLMs. I have to debug their code, do research so I know how to prompt it best to get what I want, do research into programming and software design principles, etc.

    • Final Remix@lemmy.world · 8 points · 8 days ago (edited)

      I use a bespoke model to spin up pop quizzes, and I use NovelAI for fun.

      Legit, being able to say “I want these questions. But… not these…” and get them back at a moment’s notice really does let me say “FUCK it. Pop quiz. Let’s go, class,” and be ready with brand-new questions on the board that I didn’t have before I said that sentence. NAI is a good way to turn writing into an interactive DnD session, and a great way to ram through writer’s block with a “yeah, and—!” machine, if for no other reason than saying “uhh… no, not that, NAI…” and then correcting it my way.

    • DarthKaren@lemmy.world · 5 points · 8 days ago

      I’ve spent all week working with DeepSeek to write DnD campaigns based on artifacts from the game Dark Age of Camelot. This week was just on one artifact.

      AI/LLMs are great for bouncing ideas off of and using to tweak things. I gave it a prompt on what I was looking for (“the guardian of dusk steps out and says: ‘the dawn brings the warmth of the sun, and awakens the world. So does your trial begin.’ He is a druid and the party is a party of 5 level-1 players. Give me a stat block and XP amount for this situation.”).

      I had it help me fine-tune puzzles and traps, fine-tune the story behind everything, and fine-tune the artifact at the end (it levels up 5 times as the player does specific things to gain leveling points for the item).

      I also ran a short campaign with it as the DM. It did a great job of acting out the different NPCs it created and adjusting to both the tone and situation of the campaign. It adjusted pretty well to what I did, too.

      • SabinStargem@lemmings.world · 1 point · 8 days ago

        Can the full-size DeepSeek handle dice and numbers? I have been using the distilled 70B DeepSeek, and it definitely doesn’t understand how dice work, nor the ranges I set out in my ruleset. For example, a 1d100 is used to determine character class, with the classes falling into certain parts of the distribution. I did it this way since some classes are intended to be rarer than others.
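
The weighted 1d100 class table described above can be sketched in Python. The class names and ranges below are hypothetical placeholders, not the commenter's actual ruleset:

```python
import random

# Hypothetical 1d100 class table (the actual classes and ranges from
# the ruleset are not given): rarer classes get narrower ranges.
CLASS_TABLE = [
    (range(1, 41), "Fighter"),     # 1-40: common
    (range(41, 71), "Rogue"),      # 41-70
    (range(71, 91), "Cleric"),     # 71-90
    (range(91, 100), "Wizard"),    # 91-99
    (range(100, 101), "Paladin"),  # 100: rarest
]

def roll_class(rng=random):
    """Roll 1d100 and look the result up in the weighted class table."""
    roll = rng.randint(1, 100)
    for span, cls in CLASS_TABLE:
        if roll in span:
            return roll, cls

roll, cls = roll_class()
```

Keeping a lookup like this consistent (every roll maps to exactly one class) is precisely the kind of numeric bookkeeping the distilled model reportedly fumbles.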

        • DarthKaren@lemmy.world · 3 points · 7 days ago

          I ran a campaign by myself with 2 of my characters. I had DS act as DM. It seemed to handle it all perfectly fine. I tested it later and gave it scenarios. I asked it to roll the dice and show all its work. Dice rolls, any bonuses, any advantage/disadvantage. It got all of it right.

          I then tested a few scenarios to check and see if it would follow the rules as they are supposed to be from 5e. It got all of that correct as well. It did give me options as if the rules were corrected (I asked it to roll damage as a barbarian casting fireball, it said barbs couldn’t, but gave me reasons that would allow exceptions).
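
For reference, the advantage/disadvantage mechanic being tested can be sketched in a few lines of Python (a minimal illustration of the 5e rule, not of how the model handled it):

```python
import random

def roll_check(bonus=0, advantage=False, disadvantage=False, rng=random):
    """5e-style d20 check: with advantage take the higher of two d20s,
    with disadvantage the lower; if both apply, they cancel out."""
    a, b = rng.randint(1, 20), rng.randint(1, 20)
    if advantage and not disadvantage:
        base = max(a, b)
    elif disadvantage and not advantage:
        base = min(a, b)
    else:
        base = a  # straight roll: advantage and disadvantage cancel
    return base + bonus

result = roll_check(bonus=3, advantage=True)
```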

          What it ended up flubbing on later was forgetting the proper initiative order. I had to remind it a couple of times that it had messed up. This only happened much later in the campaign, so I think I was approaching the limits of its context window.

          I tried the distilled model locally. It didn’t even realize I was asking it to DM; it just repeated the outline of the campaign.

          • SabinStargem@lemmings.world · 3 points · 7 days ago

            It is good to hear what a full DeepSeek can do. I am really looking forward to having a better, localized version in 2030. Thank you for relating your experience, it is helpful. :)

            • DarthKaren@lemmy.world · 3 points · 7 days ago

              I’m anxious to see it as well. I would love to see something like this implemented in games, focused solely on whatever game it’s in. I imagine something like Skyrim but with an LLM on every character, or at least the main ones. I downloaded the mod that adds it to Skyrim, but I haven’t had a chance to play with it yet. It does require prompts to let the NPC know you’re talking to it; I’d love to see it happen naturally, with NPCs even carrying out their own conversations with each other and not just with the PC.

              I’ve also been watching the Vivaladirt people. We need a fourth-wall-breaking NPC in every game once we get an LLM like the above.

    • Bigfoot@lemmy.world · 2 points · 7 days ago

      I literally created an iOS app with zero experience and distributed it on the App Store. AI is an amazing tool and will continue to get better. Many people bash the technology, but it seems like those people misunderstand it or think it’s all bad.

      But I agree that relying on it to think for you is not a good thing.

    • UnderpantsWeevil@lemmy.world · 17 points · 9 days ago

      Learning how to evade and disable AI is becoming a critical thinking skill unto itself. Feels a bit like how I’ve had to learn to navigate around advertisements and other intrusive 3rd party interruptions while using online services.

    • Zacryon@feddit.org · 5 points · 9 days ago

      Well, at least they communicate such findings openly and don’t try to hide them. Unlike ExxonMobil, which saw global warming coming thanks to internal studies going back to the 1970s and tried to hide or dispute it because it was bad for business.

    • Flying Squid@lemmy.world · 12 points · 8 days ago

      Unlike those others, Microsoft could do something about this considering they are literally part of the problem.

      And yet I doubt Copilot will be going anywhere.

  • lobut@lemmy.ca · 20 points · 9 days ago

    Remember the saying:

    Personal computers were “bicycles for the mind.”

    I guess with AI and social media it’s more like melting your mind or something. I can’t find another analogy; “a baseball bat to your leg, for the mind” doesn’t roll off the tongue.

    I know Primeagen has turned off Copilot because he said the “copilot pause” is daunting and affects how he codes.

  • OsrsNeedsF2P@lemmy.ml · 19 points · 9 days ago (edited)

    Really? I just asked ChatGPT and this is what it had to say:

    This claim is misleading because AI can enhance critical thinking by providing diverse perspectives, data analysis, and automating routine tasks, allowing users to focus on higher-order reasoning. Critical thinking depends on how AI is used—passively accepting outputs may weaken it, but actively questioning, interpreting, and applying AI-generated insights can strengthen cognitive skills.

    • OhVenus_Baby@lemmy.ml · 6 points · 8 days ago

      I agree with the output, for legitimate reasons, but it’s not black-and-white right or wrong. I think AI is wildly misjudged, and while there are plenty of valid reasons behind that, I still think there is much to be gained from what AI in general can do for us, on both a collective and an individual basis.

      Today I had it analyze 8 medical documents: I told it to provide analysis, cross-reference its output with scientific studies (including sources), and handle other lengthy queries. These documents deal, per document and at great length, with bacterial colonies and multiple GI and bodily systems, using some of the most advanced testing science offers.

      It was able not only to provide me with accurate numbers, which I fact-checked side by side against my documents, but also to explain methods to counter multi-faceted systemic issues, matching multiple specialty doctors. Which is fairly impressive, given that seeing a doctor takes 3 to 9 months or longer, and they may or may not give a shit, being overworked and understaffed; pick your reasoning.

      I also had it scan the documents from multiple fresh, blank chat tabs, and even different computers, to really put it through its paces.

      Overall, some of the numbers were off: say 3 or 4 individual colony counts across all 8 documents. I corrected the values, told it that it was incorrect, asked it to reassess with more time and ensure accuracy, and supplied a bit more context about how to read the tables (broad context, such as “page 6 shows gene expression; use this as a reference to find all underlying issues,” since it isn’t a mind reader). It managed to identify the dysbiosis and other systemic issues with reasonable accuracy, on par with physicians I have worked with. On antibiotic-resistance-gene analysis, it found multiple therapeutic approaches to fight antibiotic-resistant bacteria in a fraction of the time it would take a human to study them.

      I would not bet my life solely on its responses, as it’s far from perfect, and as with any info, its output should be cross-referenced and fact-checked through various sources. But those who speak such ill of its usage, while they have valid points, I find unfounded. My 2 cents.

      • alteredracoon@lemm.ee · 3 points · 8 days ago

        Totally agree with you! I’m in a different field, but I see it in the same light. Let it get you to 80-90% of whatever the task is, and then refine from there. It saves you time, which you can put into all the extra cool shit that the first 90% would otherwise have eaten up. So many people assume you have to take it at 100% face value. Just take what it gives you as a jumping-off point.

        • OhVenus_Baby@lemmy.ml · 3 points · 8 days ago (edited)

          I think it’s specifically Lemmy, and the general anti-corporate mistrust, that drives the majority of the negativity towards AI. Everyone is cash/land-grabbing towards anything that sticks, trying to shove their product down everyone’s throat.

          People don’t like that behavior and thus shun it. Understandable. However, don’t let that guide your entire logical thinking as a whole; it seems to cloud most people entirely, to the point that they can’t fathom an alternative perspective.

          I think the vast majority of tools/software originate from a source of good but then get transformed into bad actors because of monetization. Eventually, though, and trends over time prove this, things become open source or free, and the real good period arrives after the refinement and profit period.

          It’s very parasitic, even, to some degree. There is so much misinformation about emerging technologies, because info travels so fast unchecked, that there is tons of bullshit to sift through. I think smart contracts (removing multi-party input) and business antitrust issues can be alleviated in the future, but that will require correct implementation and understanding from both consumers and producers, which we are far from as of now. Topic for another time, though.

  • Phoenicianpirate@lemm.ee · 17 points · 8 days ago

    The one thing I learned when talking to ChatGPT, or any other AI, about a technical subject is that you have to ask the AI to cite its sources. AIs can absolutely bullshit without knowing it, and asking for sources is critical for double-checking.

    • ameancow@lemmy.world · 9 points · 8 days ago (edited)

      I consider myself very average, and all my average interactions with AI have been abysmal failures that are hilariously wrong. I invested time and money into trying various models to help me with data analysis work, and they can’t even do basic math or summarize a PDF and the data contained within.

      I was impressed with how good these things are at interpreting human fiction, jokes, writing, and feelings. Which is really weird in the context of our perceptions of what AI would be like; it’s the exact opposite. The first AIs aren’t emotionless robots; they’re whiny, inaccurate, delusional, and unpredictable bitches. That alone is worth the price of admission for the humor and silliness of it all, but it’s certainly not worth upending society over; it’s still just a huge novelty.

      • Phoenicianpirate@lemm.ee · 4 points · 8 days ago

        It makes HAL 9000 from 2001: A Space Odyssey seem realistic. In the movie he is a highly technical AI but doesn’t understand the implications of what he wants to do. He sees Dave as a detriment to a mission that can be better accomplished without him, never stopping to think about the implications of what he is doing.

        • ameancow@lemmy.world · 2 points · 8 days ago

          I mean, leave it to one of the greatest creative minds of all time to predict that our AI would be unpredictable and emotional. The man invented the communications satellite and wrote franchises that are still being lined up for major Hollywood releases half a century later.

    • JackbyDev@programming.dev · 6 points · 8 days ago

      I’ve found that questions about niche tools tend to get worse answers. I was asking it some stuff about jpackage, and it couldn’t give me any working suggestions or correct information. The stuff I’ve asked about Docker was much better.

      • vortic@lemmy.world · 5 points · 8 days ago

        The ability of AI to write things with lots of boilerplate, like Kubernetes manifests, is astounding. It gets me 90-95% of the way there and saves me about 50% of my development time. I still have to understand the result before deployment, because I’m not going to blindly deploy something AI wrote, and it rarely works without modifications, but it definitely cuts my development time significantly.
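
As a rough illustration of how much of a Deployment is pure boilerplate, here is a minimal manifest skeleton built as a Python dict; the app name and image are placeholders, not anything from the commenter's setup:

```python
import json

def deployment_manifest(name, image, replicas=1):
    """Skeleton of a Kubernetes apps/v1 Deployment: nearly every field
    here is required or near-universal; only name, image, and replicas
    actually vary between apps."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template's labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = deployment_manifest("demo-app", "nginx:1.27", replicas=2)
print(json.dumps(manifest, indent=2))
```

The nesting and the selector/label matching are exactly the repetitive parts an LLM can fill in quickly, and also exactly what you still want to verify by eye before deploying.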