• bamboo@lemm.ee · 3 months ago

    The privacy and security issues with LLMs are mitigated by the majority of the processing happening on-device. Anything on-device, in my opinion, has zero privacy or security issues. Anything taking place on a server is a potential privacy issue, but Apple seems to be taking extraordinary measures to ensure privacy within its own systems, and ChatGPT, which doesn’t have the same protections, will be strictly opt-in, separate from Apple’s service. I see this as basically the best of the available options: maximizing privacy while retaining the more complex functionality.

    • LostWanderer@lemmynsfw.com · 3 months ago

      ChatGPT is a disaster in my opinion; it really soured me on LLMs. Despite your educated take on Apple Intelligence, I have a deep-seated mistrust of LLMs. Hopefully it does turn out fine in the case of Apple’s implementation, but I’m hesitant to be as optimistic. Only once this is out in the wild and has been rigorously tested and prodded like ChatGPT might my opinion on Apple Intelligence change.

      • bamboo@lemm.ee · 3 months ago

        Is the distrust in the quality of the output? If so, I think the main thing Apple has going for it is that it uses many fine-tuned models for context-constrained tasks. ChatGPT can be arbitrarily prompted and is expected to give good output for everything, sometimes at length. Being able to do that is… hard. Most of Apple’s applications are much, much narrower. Take the writing assistant, which rephrases at most a few paragraphs: the output is relatively short, and the model has to do exactly one task. Or Siri: the model has to take a command and then select one or more intents to call. It’s likely that choosing which intents to call and what arguments to provide are handled by separate models, each optimized for its own case (a rough sketch of that kind of pipeline is below). Errors are still very possible, but there are fewer chances for them to occur.

        I think part of Apple’s motivation for partnering with OpenAI specifically for certain complex Siri questions is that this is an area they aren’t comfortable putting Apple branding on, due to output-quality concerns; by routing it through a partner, they can pass the blame onto that partner. Someday, if LLMs are better understood and their output can be better controlled and verified for open-ended questions, Apple might dump OpenAI and advertise its in-house replacement as accurate and reliable in a way ChatGPT isn’t.
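        To make that concrete, here’s a minimal sketch in Swift of the kind of two-stage pipeline described above: a small classifier picks the intent, and a separate per-intent extractor fills in only that intent’s arguments. Everything here is hypothetical; the names and the rule-based stand-ins for the models are invented for illustration, not anything Apple has published.

        ```swift
        import Foundation

        // Hypothetical two-stage pipeline: a small classifier picks the intent,
        // then a per-intent extractor pulls out just that intent's arguments.
        // The rule-based functions below are stand-ins for fine-tuned models.

        enum Intent {
            case setTimer
            case sendMessage
            case playMusic
        }

        // Stage 1: map a free-form command to exactly one intent.
        func classifyIntent(_ command: String) -> Intent {
            let lowered = command.lowercased()
            if lowered.contains("timer") { return .setTimer }
            if lowered.contains("message") || lowered.contains("text") { return .sendMessage }
            return .playMusic
        }

        // Stage 2: each intent gets its own narrow argument extractor,
        // so no single model ever has to handle an open-ended task.
        func extractArguments(for intent: Intent, from command: String) -> [String: String] {
            switch intent {
            case .setTimer:
                // A model trained only on timer phrasings would go here;
                // this stand-in just grabs the first number in the command.
                let minutes = command
                    .components(separatedBy: CharacterSet.decimalDigits.inverted)
                    .first { !$0.isEmpty } ?? "5"
                return ["minutes": minutes]
            case .sendMessage:
                return ["body": command]
            case .playMusic:
                return ["query": command]
            }
        }

        let command = "set a timer for 10 minutes"
        let intent = classifyIntent(command)
        print(intent, extractArguments(for: intent, from: command))
        ```

        The point of the split is that each stage only ever sees one narrow task, which is what makes small, fine-tuned on-device models plausible in the first place.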