• Ganbat@lemmyonline.com
    7 months ago

    They weren’t there when I used ChatGPT just last night (I’m a pretty heavy user). No queries were made—they just appeared in my history, and most certainly aren’t from me (and I don’t think they’re from the same user either).

    This sounds more like a huge fuckup with the site, not the AI itself.

    Edit: A depressing amount of people commenting here obviously didn’t read the article…

    • Feathercrown@lemmy.world

      To be fair, the article headline is a straight-up lie. OpenAI leaked it by sending a user someone else’s chat history; ChatGPT didn’t leak anything.

      • GamingChairModel@lemmy.world

        The ChatGPT service leaked the data. Maybe that can be attributed to the OpenAI organization that owns and operates ChatGPT, too, but it’s not “a straight up lie” to say that ChatGPT leaked information, when ChatGPT is the name of both the service and the LLM that powers the interesting part of that service.

    • 𝚝𝚛𝚔@aussie.zone

      Edit: A depressing amount of people commenting here obviously didn’t read the article…

      Every time

      • HelloHotel@lemm.ee

        Stupid is too harsh. They could be as intelligent as you or me, but they’re fed propaganda/marketing, the thing is built to hide its rough edges, and the hype from the propaganda machine puts people in a hazy mindset where it’s hard to think.

        • doctorcrimson@lemmy.world

          I think the average person is not very smart, especially considering the USA, Russia, China, and India make up large parts of the world population. Now realize that half of everyone falls below the median intellect and is even dumber than that. The fact that propaganda and hype are highly effective in the first place is evidence of our lacking capabilities as a species.

      • stoly@lemmy.world

        I recently had a student graduate who told me that, before joining my team of computer lab managers, he thought technology just worked. I suspect that people think tech in general JUST GOES.

          • SparrowRanjitScaur@lemmy.world

            No need for personal attacks. Since you won’t define it, I will:

            The ability to acquire and apply knowledge and skills (from Oxford Languages)

            I would argue this applies to ChatGPT. Under the hood, ChatGPT is a neural network, and it is clearly capable of acquiring knowledge during training. ChatGPT is also clearly capable of applying that knowledge to produce answers to questions or novel solutions to problems.

            Based on this definition, I would argue that ChatGPT is intelligent. Whether ChatGPT is sentient or not is a very different question. I would argue not, but again, that depends on the definition of sentience.

  • HiramFromTheChi@lemmy.world

    It also literally says to not input sensitive data…

    This is one of the first things I flagged regarding LLMs, and later on they added the warning. But if people don’t care and are still gonna feed the machine everything regardless, then that’s a human problem.

      • MystikIncarnate@lemmy.ca

        People literally do this, though. I work in IT, and people have said this exact thing out loud, with others around who could clearly hear us.

        I’m like… I don’t want your password. I never want your password. I barely know what my password is. I use a password manager.

        IT should never need your password. Your boss and work shouldn’t need it. I can log in as you without it most of the time. I don’t, because I couldn’t give any less of a fuck what the hell you’re doing, but I can if I need to…

        If your IT person knows what they’re doing, most of the time for routine stuff, you shouldn’t really see them working, things just get fixed.

        Gah.

        • Wogi@lemmy.world

          Lmao, my IT guy asks for our passwords to certain things on an annual basis and stores them as plain text in a fucking email.

          The first time he did it I was like, “uhh, aren’t we not supposed to share that?” And he just insisted he needed it. Whatever; if he wants to log in to my Autodesk account, he’s free to. Not sure how much damage he could do.

          • MystikIncarnate@lemmy.ca

            That’s the problem, right there.

            Companies either don’t allow for IT oversight of accounts or charge more for accounts that can be overseen. Companies don’t want to pay the extra, if that’s even an option on the platform, so some passwords end up being fairly common knowledge among the IT staff.

            As for your computer login? No thanks. Microsoft has been built pretty much from the ground up to be administratable. I can get into your files, check what you’re running, extract data, modify your settings, adjust just about anything I want if I know what I’m doing. All without you realizing that I’ve done anything.

            Companies like Autodesk really don’t offer that kind of oversight to the administrator managing your access. I should be able to list the license you’ve been given, download whatever software that license is associated with, and purchase/apply new licensing, all from a central control panel for the company under my own administrative account for their site, whether or not I’m assigned any software/licensing myself. They don’t offer that, and it makes my job very complicated.

            In the event you brick your computer (or lose it, or destroy it, or something… Whether intentional or not), I sometimes need your password to go download your software and install it, then apply your license to it, so that it’s ready to go when you get your system back. You might lose any customizations, but you’ll at least have the tools to do the job.

            On the flip side, an example of good access is with Microsoft 365. If you’re having a problem finding an email, I can trace the message in the control panel, get its unique ID, grant myself full access to your mailbox, then switch mailboxes to yours while I’m still signed in as myself, find the message you accidentally moved into the drafts folder, and move it back to your inbox. Then I remove my access, and the message just appears in your inbox without you doing anything. I didn’t need to talk to you, I didn’t need your password… nothing. No interaction, just fixed.

            There’s hundreds of examples of both good and bad administrative access, and it varies dramatically depending on the software vendor. In a perfect world I would have tools like what I get from exchange online for all the software and tools you use. Fact is, most companies are just too lazy to do it, instead of paying the developers to do things well, they’d rather give the money to their shareholders and let us IT folks suffer. They don’t give a shit about us.

  • Fades@lemmy.world

    Why the fuck would you give any AI your password??? People are so goddamn stupid

    • HelloHotel@lemm.ee

      I absolutely agree. Use something like Ollama. Do keep in mind that it takes a lot of computing resources to run these models: roughly 5 GB of RAM and about a 3 GB download for the smaller uncensored models.
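      For anyone curious, here’s a minimal sketch of talking to a locally running Ollama daemon over its HTTP API. The endpoint and payload shape follow Ollama’s documented /api/generate route; the model name is just an example and assumes you’ve already pulled it with `ollama pull`:

```python
import json
import urllib.request

# Ollama's default local API endpoint (started by `ollama serve`).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Payload for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama daemon and return its reply.
    Requires the daemon to be running on this machine."""
    data = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL,
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# e.g. ask("llama2-uncensored", "Hello!")
```

      The point being: the request only ever goes to localhost, so nothing you type can end up in someone else’s chat history.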

      • LainTrain@lemmy.dbzer0.com

        It’s not great, but an old GTX GPU can be had cheaply if you look around refurb listings; as long as there’s a warranty, you’re gold. Stick it into a 10-year-old Xeon workstation off eBay and you can easily have a machine with 8 cores, 32 GB RAM, and a solid GPU for under $200.

        • HelloHotel@lemm.ee

          It’s the RAM requirement that stings right now. I believe I’ve got the specs, but I was told (or misremember) a 64 GB RAM requirement for a model.
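          For what it’s worth, a rough back-of-the-envelope (my own ballpark, not a vendor spec) suggests 64 GB only applies to big unquantized models:

```python
def model_ram_gb(params_billion: float, bits_per_weight: int,
                 overhead: float = 1.2) -> float:
    """Rough RAM estimate: parameter count x bytes per weight,
    plus ~20% overhead for the KV cache and runtime. Ballpark only."""
    weight_bytes = params_billion * 1e9 * (bits_per_weight / 8)
    return round(weight_bytes * overhead / 1e9, 1)

# A 7B model at 4-bit quantization needs only ~4 GB,
# while a 30B model at fp16 blows well past 64 GB (~72 GB).
```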

  • Chozo@kbin.social

    If you paste plaintext passwords into ChatGPT, the problem is not ChatGPT; the problem is you.

      • GBU_28@lemm.ee

        A huge value-add of ChatGPT is that you can have a running, contextual conversation. That requires memory.

        • Farid@startrek.website

          It doesn’t actually have memory in that sense. It can only remember things that are in the training data and within its limited context (4-32k tokens, depending on model). But when you send a message, ChatGPT does a semantic search of everything in the conversation and tries to fit the relevant parts inside the context, if there’s room.
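          A toy sketch of what that kind of retrieval might look like (the word-overlap “similarity” stands in for a real embedding search, and none of this reflects OpenAI’s actual implementation):

```python
# Toy sketch: pick the past messages most relevant to the new query
# and pack them into a fixed token budget, like fitting chat history
# into a model's limited context window.

def similarity(a: str, b: str) -> float:
    """Crude stand-in for semantic similarity: fraction of shared words."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def fit_context(history: list[str], query: str, budget: int) -> list[str]:
    """Greedily keep the most query-relevant messages that fit in
    a whitespace-token budget."""
    ranked = sorted(history, key=lambda m: similarity(m, query), reverse=True)
    picked, used = [], 0
    for msg in ranked:
        cost = len(msg.split())
        if used + cost <= budget:
            picked.append(msg)
            used += cost
    return picked

history = [
    "my cat is named whiskers",
    "the weather was nice today",
    "whiskers the cat likes tuna",
]
# Only the cat-related messages survive; the weather one is dropped.
relevant = fit_context(history, "what does my cat eat", budget=10)
```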

          • GBU_28@lemm.ee

            I’m familiar; it’s just easiest for the layman to think of the model as having “memory,” since historical search looks a lot like it at arm’s length.

        • GamingChairModel@lemmy.world

          All of these LLMs should have walls between individual users, though, so that the chat history of one user is never accessible to any other user. Applying some kind of restriction to LLM training and how chats are used is a conversation we can have, but the article and the example given describe a much, much simpler problem: a user checking his own chat history was able to see other users’ chats.

      • Ganbat@lemmyonline.com

        Did you read the article? It didn’t. Someone received someone else’s chat history appended to one of their own chats. No prompting, just appeared overnight.

          • Ganbat@lemmyonline.com

            Well, yeah, but the point is, ChatGPT didn’t “remember and then leak” anything, the web service exposed people’s chat history.

            • wildginger@lemmy.myserv.one

              Well, that depends. Do you mean GPT, the specific chunk of LLM code? Or do you mean GPT, the website and service?

              Because while the nitpicking details matter to the programmers fixing it, how much does that distinction matter to you or me, the laymen using the site?

  • lowleveldata@programming.dev

    ChatGPT doesn’t leak passwords. Chat histories are leaking, and one of them happens to contain a plaintext password. What’s up with the current trend of saying the AI did this and that when the AI really didn’t?

  • NedRyerson@lemmy.ml

    Who knew everyone had the same password as me? I always thought I was the only ‘hunter2’ out there!

      • Gordito@lemmy.world

        Me too! I see

        “Who knew everyone had the same password as me? I always thought I was the only ‘*******’ out there!”

        Lemmy rocks!

  • realharo@lemm.ee

    Not directly related, but you can disable chat history per device in the ChatGPT settings; that will also stop OpenAI from training on your inputs, at least according to them.

    • Xyphius@lemmy.ca

      you can go hunter2 my hunter2-ing hunter2.

      haha, does that look funny to you?

    • DreamButt@lemmy.world

      Back in the RuneScape days, people would do dumb password scams. My buddy was introducing me to the game; we were sitting in his parents’ garage while he played and showed me his high-level character. Anyway, he walks around the trading area and someone says something like “omg you can’t type your password backwards *****”. In total disbelief, he tries it out, instantly freaks out, logs out to reset his password, and fails because the password has already been changed.

      • JustUseMint@lemmy.world

        Absolutely. Host your own. Like the other person said, check out Hugging Face, and look into llama.cpp as well. Vicuna and Wizard uncensored (probably spelled that wrong) are worth trying.

      • BananaOnionJuice@lemmy.dbzer0.com

        I finally found some offline ones: jan.ai and koboldcpp. You download a GGUF model and run everything from your own PC; it just takes a lot of CPU and GPU to work acceptably. My setup can’t really manage much more than a 7B model.

    • zeluko@kbin.social

      To be fair, they are talking about the OpenAI end-user version, not the models themselves.
      It’s still sketchy to willingly send them your data and hope that, because you pay per request, it’s not getting tracked and saved.
      My company is deep into Microsoft, so we all get Bing Chat Enterprise.
      Microsoft says it doesn’t store anything and runs on separate systems… I guess with a company offer they’re more likely to put real protections in place, because a breach would mean real consequences
      (as opposed to a breach affecting end users, most of whom don’t care or would never go through the legal trouble).

    • cm0002@lemmy.world

      You could just watch what you input into it, lol. ChatGPT is a pretty good tool to have in the toolkit, and like any tool, there are warnings and cautions on its use.

      • TherouxSonfeir@lemm.ee

        It’s an amazing tool. I think it’s funny how many people fight it tooth and nail. I like to think they’re the kind of person who refused to use spell check or the touch-tone phone.

        • Ilovethebomb@lemm.ee

          I’m not against chat GPT or other AI, but I am thoroughly sick of hearing about it.

        • gravitas_deficiency@sh.itjust.works

          There are very valid philosophical and ethical reasons not to use it. We’re not just being luddites for the hell of it. In many cases, we’re engineers and scientists with interest, experience, or expertise in neural nets and LLMs ourselves, and we don’t like how fast and loose (in a lot of really, really important ways) all these big companies are playing it with the training datasets, nor how they’re actively disregarding any sort of legal or ethical responsibility around the technology writ large.

            • Feathercrown@lemmy.world

              Uh, no. Why would that be the case? Every technology has unique upsides and downsides and the downsides of this one are not being handled correctly and are in fact being exacerbated.