The Pentagon has its eye on the leading AI company, which this week softened its ban on military use.

  • Alto@kbin.social · 6 months ago

    So while this is obviously bad, did any of you actually think for a moment that this was stopping anything? If the military wants to use ChatGPT, they’re going to find a way whether or not OpenAI likes it. In their minds they may as well get paid for it.

  • AutoTL;DR@lemmings.world (bot) · 6 months ago

    This is the best summary I could come up with:


    OpenAI this week quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy, which seeks to dictate how powerful and immensely popular tools like ChatGPT can be used.

    “We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs,” OpenAI spokesperson Niko Felix said in an email to The Intercept.

    Suchman and Myers West both pointed to OpenAI’s close partnership with Microsoft, a major defense contractor, which has invested $13 billion in the LLM maker to date and resells the company’s software tools.

    The changes come as militaries around the world are eager to incorporate machine learning techniques to gain an advantage; the Pentagon is still tentatively exploring how it might use ChatGPT or other large-language models, a type of software tool that can rapidly and dextrously generate sophisticated text outputs.

    While some within U.S. military leadership have expressed concern about the tendency of LLMs to insert glaring factual errors or other distortions, as well as security risks that might come with using ChatGPT to analyze classified or otherwise sensitive data, the Pentagon remains generally eager to adopt artificial intelligence tools.

    Last year, Kimberly Sablon, the Pentagon’s principal director for trusted AI and autonomy, told a conference in Hawaii that “[t]here’s a lot of good there in terms of how we can utilize large-language models like [ChatGPT] to disrupt critical functions across the department.”


    The original article contains 1,196 words, the summary contains 254 words. Saved 79%. I’m a bot and I’m open source!

  • funkforager@sh.itjust.works · 6 months ago

    Remember when OpenAI was a nonprofit first and foremost, and we were supposed to trust they would make AI for good and not evil? Feels like it was only Thanksgiving…

    • Moira_Mayhem@lemmy.blahaj.zone · 6 months ago

      It seems to be a trend that any service that claims not to be evil is just waiting for the right moment to drop that pretense.

    • wooki@lemmynsfw.com · edited · 6 months ago

      I wouldn't be too worried; they've just made an overglorified word predictor and a blender of people's art.

          • pinkdrunkenelephants@lemmy.world · 6 months ago

            And that totally justifies having a robot that does it so efficiently it lets people deepfake shit that's hard to invalidate, robbing people of their ability to discern what is reality and what is not.

              • pinkdrunkenelephants@lemmy.world · 6 months ago

                Nope, not deepfakes that convincing.

                Keep lying to yourself though. Keep convincing yourself it’s worthwhile to destroy the world you claim to love just so you can keep your shiny new toy. Keep trying to tell yourself it’s not going to harm everyone else around you and that you’re still a good person.

                • afraid_of_zombies@lemmy.world · 6 months ago

                  Right, all those people eating fucking horse dewormer were perfectly rational before.

                  Oh noes, AI is going to destroy us all.

            • wooki@lemmynsfw.com · edited · 6 months ago

              Again, not new; stop grandstanding it as a new effect. Media outlets have been doing this since the dawn of journalism. The scientific process was created to combat it, political standards help reduce it, and laws make it financially unattractive. The fact remains: it's not new.

              The only thing that is new is the financial gain from the hype of abusing the word AI, and the media not calling it out. But hey, here we are back at the start. It's not new.

              • pinkdrunkenelephants@lemmy.world · 6 months ago

                And that totally makes it okay for you to use an LLM to do so far more effectively and far more efficiently, destroying humanity's ability to discern reality.

    • Dave@lemmy.nz · 6 months ago

      I mean, there was all that drama where the board, formed to prevent this from happening, kicked out the CEO who was trying to do this stuff; then the board got booted out, replaced with a new board, and that CEO guy was brought back. So this was pretty much going to happen.

      • Sasha@lemmy.blahaj.zone · 6 months ago

        Effective altruism is just capitalism camouflage; it's also just really bad at being camouflage.

      • hoshikarakitaridia@sh.itjust.works · 6 months ago

        And some people pointed it out even back then. There were signs that the employees were very loyal to Altman, but Altman didn't address the board's safety concerns. So stuff like this was just a matter of time.

  • Fedizen@lemmy.world · 6 months ago

    I can’t wait until we find out AI trained on military secrets is leaking military secrets.

    • kromem@lemmy.world · 6 months ago

      That would count as harm and be disallowed by the current policy.

      But a military application using GPT to identify and filter misinformation would not count as harm: it would have been blocked by the previous policy's blanket prohibition on military use, yet it is allowed under the current policy.

      Of course, it gets murkier if the military application of identifying misinformation later ends up with a drone strike on the misinformer. In theory, they could submit a usage description of “identify misinformation,” which appears to do no harm, but then use the identifications to cause harm.

      Which is part of why a broad ban on military use may have been more prudent than a ban only on harmful military usage.
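
      To make that gap concrete, here's a toy sketch (purely hypothetical, with made-up field names; nothing to do with OpenAI's actual enforcement tooling) of why an intent-based rule is weaker than a categorical one: the check can only see the declared purpose, never the downstream use.

      ```python
      # Toy model of the two policies; purely illustrative, all names invented.

      def allowed_under_old_policy(use_case: dict) -> bool:
          # Old policy: any military use is prohibited outright.
          return not use_case["military"]

      def allowed_under_new_policy(use_case: dict) -> bool:
          # New policy: only uses whose *declared* purpose is harmful are
          # prohibited; actual downstream consequences are invisible here.
          return not use_case["declared_harm"]

      case = {
          "military": True,
          "declared_harm": False,  # submitted as "identify misinformation"
          "actual_harm": True,     # identifications later guide a strike
      }

      print(allowed_under_old_policy(case))  # False: the blanket ban catches it
      print(allowed_under_new_policy(case))  # True: the intent check waves it through
      ```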