• Lucidlethargy@sh.itjust.works · 5 months ago

    Local AI will be harvested - if not today, then as soon as tomorrow. I recommend not trusting any system like this with sensitive information… or, honestly, with most non-sensitive information either.

    • A1kmm@lemmy.amxl.com · 5 months ago

      When people say local AI, they mean things like the free/open-source Ollama (https://github.com/ollama/ollama/): you can read the source code and check that it doesn’t phone home, and you completely control when and whether you upgrade it. If you don’t like something in the code base, you can also fork it and maintain your own version. The actual models used with Ollama (e.g. Mistral, a popular one) are commonly distributed in the GGML-derived GGUF format, which doesn’t even carry executable code - only massive multi-dimensional arrays of numbers (tensors) that represent the parameters of the LLM.
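      As a minimal sketch of that last point, the snippet below peeks at the fixed header of a GGUF model file (the filename is a placeholder for whatever model you’ve downloaded): the header is just a magic number, a version, and two counts, and everything after the metadata block is raw tensor data, not code.

      ```python
      import struct

      # Placeholder path - point this at any .gguf model file you have pulled.
      MODEL_PATH = "mistral-7b.gguf"

      with open(MODEL_PATH, "rb") as f:
          magic = f.read(4)                                    # b"GGUF" for a valid file
          version, = struct.unpack("<I", f.read(4))            # format version (uint32)
          tensor_count, = struct.unpack("<Q", f.read(8))       # number of tensors (uint64)
          metadata_kv_count, = struct.unpack("<Q", f.read(8))  # metadata key/value pairs

      print(f"magic={magic!r} version={version}")
      print(f"tensors={tensor_count}, metadata entries={metadata_kv_count}")
      # Everything after the metadata block is arrays of numbers: parameters, not code.
      ```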

      Not trusting the output to be correct is reasonable. But in terms of trusting FOSS software not to spy on you, it’s no different from trusting any other FOSS software (e.g. the Linux kernel) not to spy on you. There is some risk of an xz-style supply-chain attack on a code base, but I don’t think that risk is materially different for ‘AI’ than for any other software.

    • Dizzy Devil Ducky@lemm.ee · 5 months ago

      If you connect it to the Internet then sure, it can easily be harvested by large companies. But you can host an offline AI on a device whose hardware you’ve made sure isn’t phoning home, and it’ll probably be fairly safe - provided you actually know what you’re doing, unlike an idiot like me.
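      As a hedged sketch of what keeping it local looks like in practice (assuming a stock Ollama install, which by default listens only on the loopback interface at 127.0.0.1:11434, and a model you’ve already pulled with e.g. `ollama pull mistral`), a prompt sent this way never leaves the machine:

      ```python
      import json
      import urllib.request

      # Ollama's local REST API endpoint; by default it binds to loopback
      # only, so this request never crosses the network boundary.
      URL = "http://127.0.0.1:11434/api/generate"

      payload = json.dumps({
          "model": "mistral",   # assumes `ollama pull mistral` has been run
          "prompt": "Why does local inference keep my data private?",
          "stream": False,      # ask for one complete JSON response
      }).encode()

      req = urllib.request.Request(
          URL, data=payload, headers={"Content-Type": "application/json"}
      )
      with urllib.request.urlopen(req) as resp:
          print(json.loads(resp.read())["response"])
      ```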