• Smorty [she/her] · 20 points · 1 month ago

    apparently not. it seems they were referring to the official DeepSeek app for your phone, which just talks to their servers. actually running the model on your phone is super cool! I'm going to try that out now - with the small 1.5B model

      • Smorty [she/her] · 13 points · 1 month ago

        i know! i'm already running a small Llama model on the phone, and yeah, it manages about 2 tokens per second and makes the phone lag like crazy… but it works!

        currently i'm doing this with Termux and Ollama, but if there's a better FOSS way to run it, i'd be totally happy to use that instead <3
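        (in case it helps anyone: a rough sketch of talking to the local Ollama API from Python, assuming `ollama serve` is already running in Termux and you've pulled a small model like deepseek-r1:1.5b - the setup commands and model name are just examples, not a polished recipe)

        ```python
        # assumed setup inside Termux (package availability may vary):
        #   pkg install ollama            # or build from source if it's not in your repo
        #   ollama serve &                # starts the local API on port 11434
        #   ollama pull deepseek-r1:1.5b
        import requests

        resp = requests.post(
            "http://localhost:11434/api/generate",   # Ollama's default local endpoint
            json={"model": "deepseek-r1:1.5b", "prompt": "hi!", "stream": False},
            timeout=600,                             # even small models are slow on a phone
        )
        print(resp.json()["response"])
        ```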

        • gandalf_der_12te · 3 points · 1 month ago (edited)

          i think Termux is probably already the best way to go; it gives you Linux-like flexibility. but yeah, wiring it up properly, with a nice graphical user interface, would be nice.

          Edit: now that i think about it, running it on a rented server might be better, because then you can access the chat log from everywhere, and it doesn't drain your battery so much. But then you need to rent a server, so, i don't know.
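          (rough sketch of how the remote setup could look: Ollama has no built-in authentication, so instead of exposing its port to the internet you'd probably tunnel it over SSH; the server hostname below is a placeholder)

          ```python
          # on the server:  ollama serve    (listens on localhost:11434 by default)
          # on the phone:   ssh -N -L 11434:localhost:11434 you@your-server.example
          #                 (forwards the server's Ollama port to the phone over SSH)
          import requests

          resp = requests.post(
              "http://localhost:11434/api/generate",   # goes through the SSH tunnel to the server
              json={"model": "deepseek-r1:8b", "prompt": "hello from my phone", "stream": False},
          )
          print(resp.json()["response"])
          ```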

          Edit again: Actually, somebody should hook up the DeepSeek chatbot to Matrix chat, so you can message it directly through your favorite messaging protocol/app.
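          (a very rough sketch of what such a Matrix bridge could look like, using the matrix-nio library plus a local Ollama instance; homeserver, bot account and model name are placeholders, and a real bot would want proper error handling)

          ```python
          import asyncio
          import aiohttp
          from nio import AsyncClient, MatrixRoom, RoomMessageText

          HOMESERVER = "https://matrix.example.org"      # placeholder homeserver
          BOT_USER = "@deepseek-bot:example.org"         # placeholder bot account
          BOT_PASSWORD = "change-me"                     # placeholder
          OLLAMA_URL = "http://localhost:11434/api/generate"

          async def ask_model(prompt: str) -> str:
              """Forward the message to a local Ollama instance and return its reply."""
              payload = {"model": "deepseek-r1:8b", "prompt": prompt, "stream": False}
              async with aiohttp.ClientSession() as session:
                  async with session.post(OLLAMA_URL, json=payload) as resp:
                      data = await resp.json()
                      return data.get("response", "(no response)")

          async def main() -> None:
              client = AsyncClient(HOMESERVER, BOT_USER)
              await client.login(BOT_PASSWORD)
              await client.sync(timeout=30000)           # initial sync so we only answer new messages

              async def on_message(room: MatrixRoom, event: RoomMessageText) -> None:
                  if event.sender == client.user_id:     # don't reply to our own messages
                      return
                  answer = await ask_model(event.body)
                  await client.room_send(
                      room_id=room.room_id,
                      message_type="m.room.message",
                      content={"msgtype": "m.text", "body": answer},
                  )

              client.add_event_callback(on_message, RoomMessageText)
              await client.sync_forever(timeout=30000)   # keep listening for new messages

          asyncio.run(main())
          ```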


          Edit again (one hour later): i tried setting up the DeepSeek 8B model on my rented server, but it doesn't have enough RAM. i tried adding swap space, but it wouldn't let me; it turns out you can't easily add swap inside a container, since swap is managed by the host kernel rather than the container. too tired to explore further. whatever.
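          (back-of-the-envelope numbers for why it doesn't fit - the overhead term is a guess, treat this as a rough estimate only)

          ```python
          def estimated_ram_gb(params_billion: float, bits_per_weight: float,
                               overhead_gb: float = 1.5) -> float:
              """Very rough RAM estimate: quantized weights plus runtime/KV-cache overhead."""
              weights_gb = params_billion * bits_per_weight / 8   # 1B params at 8 bits ~ 1 GB
              return weights_gb + overhead_gb

          print(estimated_ram_gb(8, 4.5))   # 4-bit-ish quant: ~6 GB -> too much for a small VPS
          print(estimated_ram_gb(8, 16))    # fp16: ~17.5 GB -> hopeless without a bigger box
          ```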

          • Smorty [she/her] · 2 points · 1 month ago

            big sad :(

            wish it were nice and easy to do stuff like this - yeah, hosting it somewhere is probably best for your money and your phone.

            • gandalf_der_12te · 1 point · 1 month ago

              actually i think it kind of is nice and easy to do, i'm just too lazy/cheap to rent a server with 8 GB of RAM, even though it would only cost about $15/month.

              • Smorty [she/her] · 2 points · 1 month ago

                it would also be super slow, you usually want a GPU for LLM inference… but you already know this, you are Gandalf der Zwölfte after all <3
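                (rough intuition for why: single-user decoding is mostly memory-bandwidth bound, since every generated token needs a pass over all the weights - the numbers below are illustrative guesses, not benchmarks)

                ```python
                def rough_tokens_per_sec(model_size_gb: float, mem_bandwidth_gb_per_s: float) -> float:
                    """Generating one token streams (roughly) all weights through the processor once,
                    so throughput ~ memory bandwidth / model size."""
                    return mem_bandwidth_gb_per_s / model_size_gb

                print(rough_tokens_per_sec(1.0, 8))     # small model on a throttled phone CPU: a few tok/s
                print(rough_tokens_per_sec(5.0, 400))   # 8B quantized model in GPU VRAM: ~80 tok/s
                ```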