• Lazycog@sopuli.xyz · 1 day ago

    Quick answer: don’t give sensitive or private info to any LLM that isn’t open-source and running locally.

      • frightful_hobgoblin@lemmy.ml · 1 day ago

        ‘locally-running’ means it runs on your own computer and will work without an internet connection

        anything you access using the internet is not ‘locally-running’

        The comment means don’t send information over the internet that you don’t want to share.

        • Lazycog@sopuli.xyz · 1 day ago

          Thanks @frightful_hobgoblin@lemmy.ml for filling OP in! I want to add a few things in case OP is unaware of more than just what you explained:

          LLM = large language model, one of the types of AI. Examples: ChatGPT, DeepSeek, Meta’s LLaMA

          Open-Source: the program code of the AI is available to look at, in its entirety

          If you are not sure if you understand these terms and what frightful_hobgoblin said, then just assume whatever AI you are using is going to share your chat with the company behind it.

          • Pup Biru@aussie.zone · 7 hours ago

            open source is also very tricky with LLMs: i’d argue if you can’t recreate it from scratch, it’s not open source… DeepSeek does not publish all the data necessary to recreate it from scratch: it’s open weights (the model itself can be downloaded and run) but not open source… i’d classify it as free (as in beer) software; not open source

            • Lazycog@sopuli.xyz · 7 hours ago

              Excellent addition, I agree!

              That’s the criterion many FOSS catalogue repositories use: they won’t add any software that is not completely reproducible.

  • fxomt@lemmy.dbzer0.com · 1 day ago

    Anything that is not local AI cannot be trusted.

    Have you ever thought to yourself, where the fuck do these corporations get the funding to make me use such a service for free? By harvesting your data and selling it.

    From your other comment I saw you aren’t using a PC. I haven’t tested this myself, but you may be interested in ChatterUI (a local LLM app, Android only): https://github.com/Vali-98/ChatterUI

    Best of luck to you.

  • Xanza@lemm.ee · 1 day ago

    Depends on what you ask.

    Go ask it about NATO or Tiananmen Square and see what happens. The data model is heavily redacted, filtered, suppressed, biased…

    So if you ask it a question, it will always be pro-China/anti-America. It also changes responses on the fly to comply with Chinese law, which includes denying the Tiananmen Square massacre and other historic events, and it even goes as far as to imply, or outright say, that they never happened at all.

    So can the content be trusted? Not really.

    • erin (she/her) · 15 hours ago

      This is incorrect; it only applies if the model is not hosted locally. I host it myself and it has none of these restrictions. If you’re using it from their app or website, it’s hosted in China and must follow Chinese law.

      • Xanza@lemm.ee · 15 hours ago

        If you’re using it from their app or website it’s hosted in China and must follow Chinese law.

        This is literally what I’ve just said…

        It also changes responses on the fly to fit with Chinese law. You called what I said wrong, and then immediately reiterated exactly what I’d said…

        Why? What do you get out of it?

        • erin (she/her) · 8 hours ago

          I suppose if that line is a catch-all, sure. Your message didn’t make it clear that self-hosting removes Chinese bias and censorship. This is an important bit of information for OP’s question, and what I get out of it is a valid and important addition to the conversation. I genuinely don’t know why you’re being defensive. Being incorrect, or I suppose in this case, lacking nuance, isn’t a character flaw. I do it all the time.

      • erin (she/her) · 8 hours ago

        Refer to my other comments above. Self-hosting it removes censorship and bias. It’s only biased as long as it’s on Chinese servers and therefore following Chinese law.

      • Xanza@lemm.ee · 18 hours ago

        DeepSeek has some of the most syntactically correct and accurate English to Chinese translations I’ve ever seen–so it’s super useful for that.

  • frightful_hobgoblin@lemmy.ml · 1 day ago (edited)

    You can’t trust anything.

    You always have to use trustless software.

    ‘Trusting’ is privacy-by-policy.

    Trustlessness is privacy-by-design.

    DeepSeek’s models can be run trustlessly on your own machine, or they can be hosted on a server.
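    Running the open weights yourself is the trustless option. A minimal, hypothetical sketch, assuming the ollama CLI (https://ollama.com) is installed; the exact model tag is an assumption and may differ on your system:

```shell
# Hypothetical sketch: running DeepSeek's open weights entirely on your own
# machine with the ollama CLI (the model tag below is an assumption).
if command -v ollama >/dev/null 2>&1; then
    ollama pull deepseek-r1:7b                # fetch the weights once, while online
    ollama run deepseek-r1:7b "Hello, world"  # afterwards, inference runs locally
else
    echo "ollama is not installed; see https://ollama.com"
fi
```

    After the one-time `pull`, the `run` step needs no internet connection, which is what makes it “locally-running” in the sense discussed above.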


    Wait, were you talking about privacy or fact-checking? LLMs don’t stick to the truth.