• The Rabbit R1 AI box is essentially an Android app on a locked-down $200 device, running AOSP without Google Play.
  • Rabbit Inc. is unhappy about details of its tech stack being public, threatening action against unauthorized emulators.
  • AOSP is a logical choice for mobile hardware as it provides essential functionalities without the need for Google Play.
  • De_Narm@lemmy.world · 7 months ago

    Why are there AI boxes popping up everywhere? They are useless. How many times do we need to repeat that LLMs are trained to give convincing answers, not correct ones? I’ve gained nothing from asking this glorified e-waste something and then pulling out my phone to verify it.

    • cron@feddit.de · 7 months ago

      What I don’t get is why anyone would want to buy a new gadget for some AI features. Just develop a nice app and let people run it on their phones.

      • no banana@lemmy.world · 7 months ago

        That’s exactly why, though: they can monetize hardware. They can’t monetize something a free app already does.

        • knotthatone@lemmy.one · 7 months ago

          Plenty of free apps get monetized just fine. They just have to offer something people want to use that they can slather ads all over. The AI doo-dads haven’t shown they’re useful. I’m guessing the dedicated hardware strategy got them more upfront funding from stupid venture capital than an app would have, but they still haven’t answered why anybody should buy these. Just postponing the inevitable.

      • dimeslime@lemmy.ca · 7 months ago

        It’s a shortcut for experience, but you lose a lot of the tools you get with experience. If I were early in my career, I’d be very hesitant to rely on it; it’s a fragile ecosystem right now that might disappear, in the same way that you want to avoid tying your skills to a single company’s product. In my workflow it slows me down, because the answers I get are often average or wrong; it’s never “I’d never have thought of doing it that way!” levels of amazing.

      • Bahnd Rollard@lemmy.world · 7 months ago

        You used the right tool for the job, and it saved you hours of work. General AI is still a long way off, and people expecting the current models to behave like one are foolish.

        Are they useless? For writing code, no. For most other tasks, yes, or worse, since they will be confidently wrong about what you ask them.

        • Semi-Hemi-Demigod@kbin.social · 7 months ago

          I think the reason they’re useful for writing code is that there’s a third party - the parser or compiler - that checks their work. I’ve used LLMs to write code as well, and they didn’t always give me something that worked, but I was easily able to catch the errors.
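          To make that concrete (a hypothetical illustration, not code from the thread): the parser can reject a broken suggestion before it ever runs. A minimal sketch in Python, assuming a made-up snippet of LLM output:

```python
import ast

# Hypothetical LLM-suggested snippet with a dangling operator; the parser
# acts as the impartial "third party" that catches the mistake.
generated = "def add_tax(price):\n    return price +\n"

try:
    ast.parse(generated)
    parses = True
except SyntaxError:
    parses = False

# The broken suggestion is rejected before it can reach production.
print("valid syntax?", parses)
```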

        • TrickDacy@lemmy.world · 7 months ago

          Are they useless?

          Only if you believe most Lemmy commenters. They are convinced you can only use them to write highly shitty and broken code and nothing else.

          • Bahnd Rollard@lemmy.world · 7 months ago

            This is my experience with LLMs: I have gotten them to write code that can at best be used as a scaffold. I personally do not find much use for them, as you functionally have to proofread everything they do. All it does is change the workload from a creative process to a review process.

            • TrickDacy@lemmy.world · 7 months ago

              I don’t agree. Just a couple of days ago I went to write a function to do something sort of confusing to think about. From the name of the function alone, Copilot suggested its entire contents, and they worked fine. I consider this removing a bit of drudgery from my day, as this function was a small part of the problem I needed to solve. It actually allowed me to stay more focused on the bigger picture, which I consider the creative part. If I were a painter and my brush suddenly did certain techniques better, I’d feel more able to be creative, not less.
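              As a hedged illustration of that workflow (a hypothetical helper, not the commenter’s actual code): a descriptive name and docstring often make the body almost mechanical to fill in, which is exactly the kind of drudgery a completion tool can absorb:

```python
# Hypothetical example: a small, well-named helper whose body a completion
# tool could plausibly suggest from the signature and docstring alone.
def group_events_by_day(events):
    """Group (ISO-timestamp, name) pairs by calendar day."""
    grouped = {}
    for timestamp, name in events:
        day = timestamp.split("T")[0]  # '2024-05-01T09:30' -> '2024-05-01'
        grouped.setdefault(day, []).append(name)
    return grouped

events = [("2024-05-01T09:30", "standup"),
          ("2024-05-01T14:00", "review"),
          ("2024-05-02T10:00", "planning")]
print(group_events_by_day(events))
# {'2024-05-01': ['standup', 'review'], '2024-05-02': ['planning']}
```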

            • TrickDacy@lemmy.world · 7 months ago

              So you want me to go into one of my codebases, remember what came from copilot and then paste it here? Lol no

                  • best_username_ever@sh.itjust.works · 7 months ago

                    You say it’s magical but never post proof. That’s all I need to think it’s shit. No need to debate it for hours. Come back when you entice us with something other than the billionth useless REST API that seems to give a hard-on to all the AI bros out there.

      • AIhasUse@lemmy.world · 7 months ago

        There’s no sense trying to explain to people like this. Their eyes glaze over when they hear Autogen, agents, CrewAI, RAG, Opus… To them, generative AI is nothing more than the free version of ChatGPT from a year ago. They haven’t kept up with the advancements, so they argue from a point in the distant past. The future will be hitting them upside the head soon enough, and they will be the ones complaining that nobody told them what was coming.

          • AIhasUse@lemmy.world · 7 months ago

            Downvotes from a few uneducated people mean nothing. The tools are already there. You are free to use them and think about this for yourself. I’m not even talking about what will be here in the future; there is some really great stuff right now. Even if doing some very simple setup is too daunting for you, you can just watch people on YouTube doing it to see what is available. People in this thread have literally already told you what to type into your search box.

            In the early 90s, people exactly like you would go on and on about how stupid the computer bros were for thinking anyone would ever use this new stupid “internet” thing. You do you; it is totally fine to think that because a handful of uneducated, vocal people on the internet agree with you, technology has mysteriously frozen for the first time in history.

        • GluWu@lemm.ee · 7 months ago

          They aren’t trying to have a conversation; they’re trying to convince themselves that the things they don’t understand are bad and that they’re making the right choice by not using it. They’ll be the boomers that needed millennials to send emails for them. Been through that, so I just pretend I don’t understand AI. I feel bad for the zoomers and gen alphas that will be running AI and futilely trying to explain how easy it is. It’s been a solid 150 years of extremely rapid invention and innovation of disruptive technology. But THIS is the one that actually won’t be disruptive.

            • AIhasUse@lemmy.world · 7 months ago

              Tell me about when you used Llama 3 with Autogen locally, and how in the world you managed to pay a large company to use disproportionate amounts of energy for it. You clearly have no idea what is going on at the edge of this tech. You think that because you made an OpenAI account, you now know everything that’s going on. You sound like an AOL user in the ’90s who thinks the internet has no real use.

                • AIhasUse@lemmy.world · 7 months ago

                  You’re just saying that you will only taste free garbage wine, and nobody can convince you that expensive wine could ever taste good. That’s fine; you’ll just be surprised when the good wine gets cheap enough for you to afford, or free. Your unwillingness to taste it has nothing to do with what already exists. In this case, it’s especially naive since you could just go watch videos of people tasting actually good wine.

              • best_username_ever@sh.itjust.works · 7 months ago

                It also has one of those weird return-None combinations. And I don’t get why it insists on using try/except all the time. Last but not least, it should have been a single script with subcommands using argparse; that way you could refactor most of the code.

                Also: a weird license, overly complicated code, not handling HTTPS properly, passwords in ENV variables, not handling errors, and a strange retry mechanism (copy-pasted, I guess).

                It’s like a bad hack written in a hurry, or something a junior would write. Something that should never be used in production. My other gripe is that OP didn’t learn anything and wasted his time. Next time he’ll do the same thing again and won’t improve. That’s fine if he’s doing it alone, but in a company I would have to fix all of this, and it’s really annoying.
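                The single-script-with-subcommands layout suggested above might look like this (a minimal sketch with hypothetical fetch/upload commands, not the code under review):

```python
import argparse

def build_parser():
    # One entry point with subcommands, so shared concerns (auth, retries,
    # error handling) live in one place instead of being copy-pasted
    # across separate scripts.
    parser = argparse.ArgumentParser(prog="tool")
    sub = parser.add_subparsers(dest="command", required=True)

    fetch = sub.add_parser("fetch", help="download a resource")
    fetch.add_argument("url")

    upload = sub.add_parser("upload", help="upload a file")
    upload.add_argument("path")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args.command)
```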

      • sudo42@lemmy.world · 7 months ago

        Who’s going to tell them that “QA” just ran the code through the same AI model and it came back “Looks good”?

        :-)

      • knotthatone@lemmy.one · 7 months ago

        I don’t think LLMs are useless, but I do think little SoC boxes running a single application that will vaguely improve your life with loosely defined AI features are useless.

    • Blackmist@feddit.uk · 7 months ago

      Because money: both from tech-hungry but not very savvy consumers, and from the inevitable advertisers who will pay for the opportunity to have their names ejected from these boxes as part of a perfectly natural conversation.

    • Blue_Morpho@lemmy.world · 7 months ago

      I think it’s a delayed development reaction to Amazon Alexa from four years ago. Alexa came out, and voice assistants were everywhere. Someone wanted to cash in on the hype, but consumer product development takes a really long time.

      So the product is finally finished (a mobile Alexa), and they label it AI to hype it, as well as to make it work without the hard work of parsing Wikipedia for good answers.

      • AIhasUse@lemmy.world · 7 months ago

        Alexa is a fundamentally different architecture from the LLMs of today. There is no way that anyone with even a basic understanding of modern computing would say something like this.

        • Blue_Morpho@lemmy.world · 7 months ago

          Alexa is a fundamentally different architecture from the LLMs of today.

          Which is why I explicitly said they used AI (an LLM) instead of the harder-to-implement but more accurate Alexa method.

          Maybe actually read the entire post before being an ass.

    • TrickDacy@lemmy.world · 7 months ago

      I have now heard of my first “ai box”. I’m on Lemmy most days. Not sure how it’s an epidemic…

      • De_Narm@lemmy.world · 7 months ago

        I haven’t seen many of them here, but I use other media too. E.g., not long ago there was a lot of coverage of the “Humane AI Pin”, which was utter garbage and even more expensive.

    • BaroqueInMind@lemmy.one · 7 months ago

      There is a fuck ton of money laundering coming from China nowadays, and they invest millions in any stupid tech-bro idea to dump their illegal cash.

    • OneOrTheOtherDontAskMe@lemmy.world · 7 months ago

      I just started diving into the space yesterday, from the local-model side. And I can say that while there are definitely problems with garbage spewing, some of these models are getting really, really good at really specific things.

      A biomedical model I saw was lauded for its consistency in pulling relevant data from medical notes for the sake of patient care instructions, important risk factors, fall risk level, etc.

      So although I agree they still give well-phrased garbage in big general cases (and GPT-4 seems to be much more ‘savvy’), the specific use cases are getting much better and I’m stoked to see how that continues.