• mynameisigglepiggle@lemmy.world · 5 hours ago

    I know there is a lot of hate for AI here, but it has its uses.

    However. I still haven’t found a use for Gemini. What a massive steaming pile of shit LLM Google has cooked up. It is utterly useless gutter trash that nobody could possibly love.

    Pull your finger out, Google. What the fuck are you doing, you stupid pieces of dog shit?

    • Buglefingers@lemmy.world · 1 hour ago

      One of the features I thought it could be wicked useful for is printer clarity: when scanning or copying bad or old papers, handwritten pages, etc., it could clean up, sharpen, or turn handwriting into print. That would actually be a good use for it IMO. Of course it’ll fuck shit up sometimes, but it’ll probably be better than nothing at all.
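      For the “clean up” part at least, even classic non-AI image processing gets you some of the way. A rough sketch with OpenCV’s adaptive thresholding (file names are placeholders, and the handwriting-to-print step would still need OCR or an actual model on top):

```python
import cv2

# Non-AI baseline for cleaning up a murky or yellowed scan:
# grayscale, light denoise, then adaptive thresholding so text
# survives uneven lighting. File names are placeholders.
page = cv2.imread("old_scan.png", cv2.IMREAD_GRAYSCALE)
page = cv2.medianBlur(page, 3)           # knock out speckle noise
cleaned = cv2.adaptiveThreshold(
    page,
    255,                                 # white value for background pixels
    cv2.ADAPTIVE_THRESH_GAUSSIAN_C,      # threshold chosen per neighbourhood
    cv2.THRESH_BINARY,
    31,                                  # neighbourhood size in pixels (odd)
    10,                                  # bias subtracted from the local mean
)
cv2.imwrite("old_scan_cleaned.png", cleaned)
```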

      • Acters@lemmy.world · 1 hour ago

        My phone is starting to take better document pictures than my printer, especially if I hold it steady at the right height. I bet I could use a 3D printer to make a stand to hold it for me. Heck, I could buy a giant bucket or rack or whatever and some bright, high-CRI lights to push image clarity to ungodly levels. And the Samsung camera app, with its preloaded AI features that will likely improve over the next 5 years, will only get better. Printers and scanners are a complete PITA.

    • rumba@lemmy.zip · 3 hours ago

      Even Llama gives better results than Gemini. They’re just perpetually behind everyone else. It’s like they took GPT-3, tried to bolt their search results onto it, and then someone forgot to make those search results not suck, even for their own internal tool.

  • MirthfulAlembic@lemmy.world · 18 hours ago

    I mean, if you’re googling that without even providing a model number, I can excuse the AI choosing to show it. It’s not a mind reader.

      • Zorsith · 15 hours ago

        At one point they were packing a shitload of USB ports onto the I/O panel; 5 stacks of 4 ports wouldn’t surprise me.

    • dingus@lemmy.world · 14 hours ago

      Yeah, I never get these strange AI results.

      Except the other day I wanted to convert some units, and the AI result was having a fucking stroke for some reason. The numbers did not make sense at all. I’d never seen it do that before, but alas, I did not take a screenshot.

      • jj4211@lemmy.world · 52 minutes ago

        Usually I’ll see something mild or something niche get wildly messed up.

        I think a few times I managed to get a query from one of these posts in, but I think they’re monitoring for viral bad queries and very quickly massage them one way or another so they stop giving the ridiculous answer. For example, a fair number of times the AI overview just seemed to be disabled for queries I found in these sorts of posts.

        You also have to contend with the reality that people can trivially fake these, and if the AI isn’t weird enough, they’ll inject the weirdness themselves to make their content more interesting.

      • kadup@lemmy.world · 13 hours ago

        Those LLMs can’t handle numbers; they have zero concept of what a number is. They can pull some definitions, and they can sort of get very basic arithmetic to work in a limited domain based on syntax rules, but they will mess up most calculations. ChatGPT tries to work around this by recognizing that a prompt is math-related, passing it to a more conventional Wolfram Alpha-style solver, and then using the language model to format the reply into something more appealing. Even that approach often fails, though, because if the AI gets confused for any reason it feeds moronic data to the math solver.
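        A minimal sketch of that routing idea, assuming a toy answer() wrapper: the detection regex, the safe evaluator, and the formatting step are all stand-ins for illustration, not anything OpenAI has documented.

```python
import ast
import operator
import re

# Whitelist of arithmetic operators the "solver" will accept.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def _solve(node):
    """Deterministically evaluate a parsed arithmetic expression."""
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_solve(node.left), _solve(node.right))
    if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_solve(node.operand))
    raise ValueError("not plain arithmetic")

def answer(prompt: str) -> str:
    # Step 1: crude "does this look like math?" detection. If this
    # extraction step grabs the wrong expression, the solver gets
    # garbage in, which is the failure mode described above.
    match = re.search(r"\d[\d\.\s\+\-\*/\(\)]*", prompt)
    if match:
        try:
            expr = ast.parse(match.group().strip(), mode="eval").body
            result = _solve(expr)
            # Step 2: a real system would let the LLM reword this;
            # here we just format it directly.
            return f"That works out to {result}."
        except (ValueError, SyntaxError, ZeroDivisionError):
            pass
    # Step 3: non-math prompts go to the plain language model.
    return "(handed to the LLM as ordinary text)"

print(answer("what is 12.5 * 8 - 3"))  # That works out to 97.0.
```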

          • ImplyingImplications@lemmy.ca · 12 hours ago

            LLMs don’t verify that their output is true. Math is a domain where verifying truth is easy: ask an LLM how many Rs are in “strawberry” and it’s plain to see whether the answer is correct. Ask an LLM for a summary of Colombian history and it’s not as apparent. Ask an LLM for a poem about a tomato and there really isn’t a wrong answer.
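            A toy illustration of that asymmetry: the strawberry claim can be checked mechanically in one line, while the history summary can’t. The check_claim helper is just a made-up name for this example.

```python
def check_claim(word: str, letter: str, claimed_count: int) -> bool:
    """Verify an LLM's claimed letter count against trivial ground truth."""
    return word.lower().count(letter.lower()) == claimed_count

# "strawberry" really has three Rs, so a model that says 2 fails the check.
print(check_claim("strawberry", "r", 3))  # True
print(check_claim("strawberry", "r", 2))  # False
```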

      • synae[he/him]@lemmy.sdf.org · 13 hours ago

        The “sauce vs dressing” one worked for me when I first heard about it, but in the following days it refused to give an AI answer, and now it has a “reasonable” AI answer.

        The original, if you haven’t seen it:

        [screenshot of the original AI answer not included]