The White House wants to ‘cryptographically verify’ videos of Joe Biden so viewers don’t mistake them for AI deepfakes::Biden’s AI advisor Ben Buchanan said a method of clearly verifying White House releases is “in the works.”

    • realharo@lemm.ee

      No, all you need for this is a digital signature and to publish the public key on an official government website. And maybe for platforms like YouTube and TikTok to integrate a signature check into their UI (e.g. flag any footage of candidates that was not signed with the government's private key as “unverified”).

      How would an NFT help in any way?

      • dejected_warp_core@lemmy.world

        I was being glib, but since NFTs are (typically) images whose ownership record is signed onto a blockchain, it loosely meets the criteria of “cryptographically signed image” in a way.

        In reality, you are correct.

  • FrostKing@lemmy.world

    Can someone try to explain, relatively simply, what cryptographic verification actually entails? I’ve never really looked into it.

    • 0xD@infosec.pub

      I'll be talking about digital signatures, which are the basis for such things. I assume a basic understanding of asymmetric cryptography and hashing.

      Basically, you hash the content you want to verify with a secure hashing function and encrypt the value with your private key. You can now append this encrypted value to the content or just release it alongside it.

      To verify the content, they can use your public key to decrypt your signature and recover the original hash value, then compare it to their own. To get their own, they just hash the content themselves with the same function.

      So by signing their videos with the White House's private key and publishing their public key somewhere, you can verify a video's authenticity like that.

      For a proper understanding check out DSA :)
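
      As a rough illustration, here is a minimal sketch of that sign-and-verify flow in Python. It assumes the third-party “cryptography” package and uses Ed25519 (a modern signature scheme rather than the encrypt-the-hash simplification above); the video bytes are a stand-in for a real file:

      ```python
      # Minimal sketch: sign content with a private key, verify with the public key.
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
      from cryptography.exceptions import InvalidSignature

      # Publisher side: generate a keypair once, publish the public key.
      private_key = Ed25519PrivateKey.generate()
      public_key = private_key.public_key()

      video_bytes = b"...contents of the released video file..."  # placeholder

      # Sign the content and release the signature alongside the video.
      signature = private_key.sign(video_bytes)

      # Verifier side: anyone with the published public key can check that the
      # bytes they downloaded are exactly what was signed.
      try:
          public_key.verify(signature, video_bytes)
          print("Signature valid: content is authentic and unaltered.")
      except InvalidSignature:
          print("Signature invalid: altered content or a different signer.")
      ```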

      • Natanael@slrpnk.net

        Only RSA uses a function equivalent to encryption when producing signatures, and only when used in one specific scheme. Every other algorithm has a unique signing function.

    • abhibeckert@lemmy.world

      Click the padlock in your browser, and you'll be able to see that this webpage (if you're using lemmy.world) was encrypted by a server that has been verified by Google Trust Services to be a server controlled by lemmy.world. In addition, your browser will remember that… and if you get a page from the same address that has been verified by a different provider, the browser (should) flag that and warn you it might not be legitimate.

      The idea is you’ll be able to view metadata on an image and see that it comes from a source that has been verified by a third party such as Google Trust Services.

      How it works, mathematically… well, look up “asymmetric cryptography and hashing”. It gets pretty complicated and there are a few different mathematical approaches. Basically though, the White House will have a key that they will not share with anyone, and only that key can be used to sign the metadata. Even Google Trust Services (or whatever provider you use) does not have the key.

      There's been a lot of effort to detect fake images, but that's really never going to work reliably. Proving an image is valid, however… that can be done with pretty good reliability. An attack would be at home in Mission Impossible: maybe you'd break into a White House photographer's home at night, put their finger on the fingerprint scanner of their laptop without waking them, then use their laptop to create the fake photo… delete all traces of evidence and GTFO. Oh, and everyone would know which photographer supposedly took the photo; ask them how they took that photo of Biden acting out of character, and the real photographer will immediately say they didn't take it.

    • Squizzy@lemmy.world

      Or, more likely, they will only discredit fake news and not verify genuine footage that reflects poorly on them. Like a hot mic catching him calling someone a jackass: the White House just says no comment.

  • drathvedro@lemm.ee

    I've been saying for a long time now that camera manufacturers should just put encryption circuits right inside the sensors. Of course that wouldn't protect against pointing the camera at a screen showing a deepfake, or against someone painstakingly dissolving the top layers and tracing out the private key manually, but it would be enough of a deterrent against forgery. And media production companies should actually put out all their stuff digitally signed. Like, come on, it's 2024 and we still don't have a way to find out if something was filmed or rendered, cut or edited, original or freebooted.
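
    As a purely hypothetical sketch of what per-frame signing inside a camera could look like (assuming a device-unique Ed25519 key burned in at manufacture; the Python “cryptography” package stands in for the sensor's signing circuit, and all names here are invented):

    ```python
    # Hypothetical per-frame attestation: each frame ships with a signature that
    # binds it to the device key and its position in the stream.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()  # in real hardware: fixed, non-extractable

    def sign_frame(frame_bytes: bytes, frame_index: int) -> bytes:
        # Hash the frame content together with its index, then sign the digest.
        digest = hashlib.sha256(frame_bytes + frame_index.to_bytes(8, "big")).digest()
        return device_key.sign(digest)

    frames = [b"frame-0-pixels", b"frame-1-pixels"]  # placeholders for sensor output
    signatures = [sign_frame(f, i) for i, f in enumerate(frames)]
    # Anyone holding the manufacturer-published public key could then verify
    # that each frame came out of this device, unmodified.
    ```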

      • drathvedro@lemm.ee

        Oh, they've actually been developing that! Thanks for the link, I was totally unaware of the C2PA thing. Looks like the ball has been very slowly rolling ever since 2019, but now that Google is on board (they joined just a couple of days ago), it might fairly soon be visible/usable by ordinary users.

        Mark my words, though, I’ll bet $100 that everyone’s going to screw it up miserably on their first couple of generations. Camera manufacturers are going to cheap out on electronics, allowing for data substitution somewhere in the pipeline. Every piece of editing software is going to be cracked at least a few times, allowing for fake edits. And production companies will most definitely leak their signing keys. Maybe even Intel/AMD could screw up again big time. But, maybe in a decade or two, given the pace, we’ll get a stable and secure enough solution to become the default, like SSL currently is.

    • hyperhopper@lemmy.world

      If you’ve been saying this for a long time please stop. This will solve nothing. It will be trivial to bypass for malicious actors and just hampers normal consumers.

      • drathvedro@lemm.ee

        You must be severely misunderstanding the idea. The idea is not to encrypt it in a way that it's only unlockable by a secret and hidden key, like DRM or cable TV does, but to do the reverse - to encrypt it with a key that is unlockable by a publicly available and widely shared key, where successful decryption acts as proof of content authenticity. If you don't care about authenticity, nothing is stopping you from spreading the decrypted version, so it shouldn't affect consumers one bit. And I wouldn't describe “Get a bunch of cameras, rip the sensors out, carefully and repeatedly strip the top layers off and scan using an electron microscope until you get to the encryption circuit, repeat enough times to collect enough scans undamaged by the stripping process to then manually piece them together and trace out the entire circuit, then spend a few weeks debugging it in a simulator to work out the encryption key” as “trivial”.

        • hyperhopper@lemmy.world

          I think you are misunderstanding things or don’t know shit about cryptography. Why the fuck are y even talking about publicly unlockable encryption, this is a use case for verification like a MAC signature, not any kind of encryption.

          And no, your process is wild. The actual answer is just replace the sensor input to the same encryption circuits. That is trivial if you own and have control over your own device. For your scheme to work, personal ownership rights would have to be severely hampered.

          • drathvedro@lemm.ee

            I think you are misunderstanding things or don’t know shit about cryptography. Why the fuck are y even talking about publicly unlockable encryption, this is a use case for verification like a MAC signature, not any kind of encryption.

            Calm down. I was just dumbing down public key cryptography for you

            The actual answer is just replace the sensor input to the same encryption circuits

            This will not work. The encryption circuit has to be right inside the CCD, otherwise it will be bypassed just like TPM before 2.0 - by tampering with the unencrypted connection between the sensor and the encryption chip.

            For your scheme to work, personal ownership rights would have to be severely hampered.

            You still don't understand. It does not hamper ownership rights or the right to repair, and you are free to not use it at all. All this achieves is basically camera manufacturers signing every frame with “Yep, this was filmed with one of our cameras”. You are free to view and even edit the footage as long as you don't care about that signature. It might not be useful for, say, a movie, but when looking for original, uncut and unedited footage, like, for example, a news report, this'll be a godsend.

            • Natanael@slrpnk.net

              Analog hole, just set up the camera in front of a sufficiently high resolution screen.

              You have to trust the person who owns the camera.

              • drathvedro@lemm.ee

                Yes, I've mentioned that in the initial comment, and, I gotta confess, I don't know shit about photography, but to me it sounds like a very non-trivial task to make such a shot appear legitimate.

          • Natanael@slrpnk.net

            A MAC is symmetric and can thus only be verified by you or somebody who you trust not to misuse or leak the key. Regular digital signatures are what's needed here.

            You can still use such a signing circuit but treat it as an attestation by the camera’s owner, not as independent proof of authenticity.
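
            A quick sketch of the difference being described, using only Python's standard library for the MAC side (the secret and message values are made up):

            ```python
            # MAC: the verifier needs the SAME secret, so anyone who can verify
            # could also forge tags. That makes it unsuitable as public proof.
            import hmac, hashlib

            secret = b"shared-secret-key"
            message = b"official video bytes"

            tag = hmac.new(secret, message, hashlib.sha256).digest()
            check = hmac.new(secret, message, hashlib.sha256).digest()
            assert hmac.compare_digest(tag, check)  # only possible with the secret

            # A digital signature (e.g. Ed25519 via the third-party "cryptography"
            # package) is instead verified with a public key that cannot be used
            # to create new signatures, which is what public attestation requires.
            ```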

            • hyperhopper@lemmy.world

              A MAC is symmetric and can thus only be verified by you or somebody who you trust to not misuse or leak the key.

              You sign them against a known public key, so anybody can verify them.

              Regular digital signatures are what's needed here

              You can still use such a signing circuit but treat it as an attestation by the camera’s owner, not as independent proof of authenticity.

              If it’s just the cameras owner attesting, then just have them sign it. No need for expensive complicated circuits and regulations forcing these into existence.

  • Flying Squid@lemmy.world

    I don’t blame them for wanting to, but this won’t work. Anyone who would be swayed by such a deepfake won’t believe the verification if it is offered.

      • Flying Squid@lemmy.world

        I honestly do not see the value here. Barring maybe a small minority, anyone who would believe a deepfake about Biden would probably also not believe the verification and anyone who wouldn’t would probably believe the administration when they said it was fake.

        The value of the technology in general? Sure. I can see it having practical applications. Just not in this case.

        • throw4w4y5@sh.itjust.works

          If a cryptographic claim/validation is provided, then anyone refuting the claims can be seen to be a bad-faith actor. Voters are one dimension of that problem, but mainstream media being able to validate election videos is super important both domestically and internationally, as the global community needs to see efforts being undertaken to preserve free and fair elections. This is especially true given the consequences if America's enemies are seen to have been able to steer the election.

        • Zink@programming.dev

          Sure, the grandparents that get all their news via Facebook might see a fake Biden video and eat it up like all the other hearsay they internalize.

          But, if they’re like my parents and have the local network news on half the damn time, at least the typical mainstream network news won’t be showing the forged videos. Maybe they’ll even report a fact check on it?!?

          And yeah, many of them will just take it as evidence that the mainstream media is part of the conspiracy. That’s a given.

        • Natanael@slrpnk.net

          It helps journalists, etc., when files have digital signatures verifying who is attesting to them. If the WH publishes its own public key for signing released media and more, then it's easy to verify whether or not you have originals.

          • jj4211@lemmy.world

            Problem is that broadly speaking, you would only sign the stuff you want to sign.

            Imagine you had a president that slapped a toddler, and there was a phone video of it from the parents. The white house isn’t about to sign that video, because why would they want to? Should the journalists discard it because it doesn’t carry the official White House blessing?

            It would limit the ability for someone to deepfake an official edit of a press briefing, but again, what if he says something damning and the “official” footage edits it out? Would the press discard their own recordings because they can't get them signed, and therefore treat them as not credible?

            That's the fundamental challenge in this sort of proposal: it only allows people to endorse what they would have wanted to endorse in the first place, and offers no mechanism to prove/disprove third-party sources, which are the only ones likely to carry negative impressions.

            • Natanael@slrpnk.net

              But then the journalists have to check if the source is trustworthy, as usual. Then they can add their own signature to help other papers check it

              • jj4211@lemmy.world

                To that extent, we already have that.

                I go to ‘https://cnn.com’, I have cryptographic verification that cnn attests to the videos served there. I go to youtube, and I have assurances that the uploader is authenticated as the name I see in the description.

                If I see a repost of material claimed to be from a reliable source, I can go chase that down if I care (and I often do).

            • AA5B@lemmy.world

              It's not a challenge, because this is only valid for photos and videos distributed by the White House, and they already wouldn't distribute that kind of footage.

              The challenge is that it would have to leave out all the photos and videos taken by journalists and spectators. That’s where the possible baby slapping would come out, and we would still have no idea whether to trust it

          • Flying Squid@lemmy.world

            I don’t even think that matters when Trump’s people are watching media that won’t verify it anyway.

            • EatATaco@lemm.ee

              The world is not black and white. There are not just Trump supporters and Biden supporters. I know it's hard to grasp, but there are tons of people in the toss-up category.

              You’re right that this probably won’t penetrate the deeply perverted world of trump cultists, but the wh doesn’t expect to win the brainwashed over. They are going for those people who could go one way or another.

              • Flying Squid@lemmy.world

                I find it hard to believe that there are too many people who truly can’t decide between Trump and Biden at this point. The media really wants a horse race here, but if your mind isn’t made up by this point, I think you’re unlikely to vote in the first place.

                I’ll be happy to be proven wrong and have this sway people who might vote for Trump to vote for Biden though.

                • EatATaco@lemm.ee

                  So, the race is basically already decided, but there is a conspiracy among the media and polling companies to make it look like the race is actually close and that there are undecideds. Of course, the only way to prove this wrong would be with polls, but we've conveniently already rejected that evidence. Very convenient.

    • ilinamorato@lemmy.world

      I don’t think that’s what this is for. I think this is for reasonable people, as well as for other governments.

      Besides, passwords can be phished or socially engineered, and some people use “abc123.” Does that mean we should get rid of password auth?

  • AutoTL;DR@lemmings.worldB

    This is the best summary I could come up with:


    The White House is increasingly aware that the American public needs a way to tell that statements from President Joe Biden and related information are real in the new age of easy-to-use generative AI.

    Big Tech players such as Meta, Google, Microsoft, and a range of startups have raced to release consumer-friendly AI tools, leading to a new wave of deepfakes — last month, an AI-generated robocall attempted to undermine voting efforts related to the 2024 presidential election using Biden’s voice.

    Yet, there is no end in sight for more sophisticated new generative-AI tools that make it easy for people with little to no technical know-how to create fake images, videos, and calls that seem authentic.

    Ben Buchanan, Biden’s Special Advisor for Artificial Intelligence, told Business Insider that the White House is working on a way to verify all of its official communications due to the rise in fake generative-AI content.

    While last year’s executive order on AI created an AI Safety Institute at the Department of Commerce tasked with creating standards for watermarking content to show provenance, the effort to verify White House communications is separate.

    Ultimately, the goal is to ensure that anyone who sees a video of Biden released by the White House can immediately tell it is authentic and unaltered by a third party.


    The original article contains 367 words, the summary contains 218 words. Saved 41%. I’m a bot and I’m open source!

  • CyberSeeker@discuss.tchncs.de

    Digital signature as a means of non repudiation is exactly the way this should be done. Any official docs or releases should be signed and easily verifiable by any public official.

    • Otter@lemmy.ca

      Would someone have a high-level overview or ELI5 of what this would look like, especially for the average user? Would we need special apps to verify it? How would it work for stuff posted to social media?

      linking an article is also ok :)

      • Pup Biru@aussie.zone

        it would potentially be associated with a law that states that you must not misrepresent a “verified” UI element like a check mark etc, and whilst they could technically add a verified mark wherever they like, the law would prevent that - at least for US companies

        it may work in the same way as hardware certifications - i believe that HDMI has a certification standard that cables and devices must be manufactured to certain specifications to bear the HDMI logo, and the HDMI logo is trademarked so using it without permission is illegal… it doesn’t stop cheap knock offs, but it means if you buy things in stores in most US-aligned countries that bear the HDMI mark, they’re going to work

        • Kairos@lemmy.today

          There’s already some kind of legal structure for what you’re talking about: trademark. It’s called “I’m Joe Biden and I approve this message.”

          If you’re talking about HDCP you can break that with an HDMI splitter so IDK.

          • Pup Biru@aussie.zone

            TLDR: trademark law yes, combined with a cryptographic signature in the video metadata… if a platform sees and verifies the signature, they are required to put the verified logo prominently around the video

            i’m not talking about HDCP no. i’m talking about the certification process for HDMI, USB, etc

            (random site that i know nothing about): https://www.pacroban.com/en-au/blogs/news/hdmi-certifications-what-they-mean-and-why-they-matter

            you’re right; that’s trademark law. basically you’re only allowed to put the HDMI logo on products that are certified as HDMI compatible, which has specifications on the manufacturing quality of cables etc

            in this case, you’d only be able to put the verified logo next to videos that are cryptographically signed in the metadata as originating from the whitehouse (or probably better, some federal election authority who signs any campaign videos as certified/legitimate: in australia we have the AEC - australian electoral commission - a federal body that runs our federal elections and investigates election issues, etc)

            now this of course wouldn’t work for sites outside of US control, but it would at least slow the flow of deepfakes on facebook, instagram, tiktok, the platform formerly known as twitter… assuming they implemented it, and assuming the govt enforced it

            • brbposting@sh.itjust.works

              Once an original video is cryptographically signed, could future uploads be automatically verified based on pixels plus audio? Could allow for commentary to clip the original.

              Might need some kind of minimum length restriction to prevent deceptive editing which simply (but carefully) scrambles original footage.

              • Pup Biru@aussie.zone

                not really… signing is only possible on exact copies (like byte exact; not even “the same image” but the same image, formatted the same, without being resized, etc)… there are things called perceptual hashes, and ways of checking if images are similar, but cryptography wouldn’t really help there
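
                to make the perceptual-hash idea concrete, here's a minimal sketch of an “average hash” style perceptual hash (assuming Pillow is installed and “frame.png” is a hypothetical input image). unlike SHA-256, small re-encodes or resizes usually leave this value unchanged, which is why it's useful for similarity checks but not as a security primitive:

                ```python
                # Tiny average-hash sketch: downscale, grayscale, threshold by mean.
                from PIL import Image

                def average_hash(path: str, size: int = 8) -> int:
                    img = Image.open(path).convert("L").resize((size, size))
                    pixels = list(img.getdata())
                    mean = sum(pixels) / len(pixels)
                    bits = 0
                    for i, p in enumerate(pixels):
                        if p > mean:
                            bits |= 1 << i  # one bit per above-average pixel
                    return bits

                print(hex(average_hash("frame.png")))  # hypothetical file
                ```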

          • Captain Aggravated@sh.itjust.works

            Relying on trademark law to combat deepfake disinformation campaigns has the same energy as “Murder is already illegal, we don’t need gun control.”

      • Starbuck@lemmy.world

        Adobe is actually one of the leading actors in this field, take a look at the Content Authenticity Initiative (https://contentauthenticity.org/)

        Like the other person said, it’s based on cryptographic hashing and signing. Basically the standard would embed metadata into the image.

      • AtHeartEngineer@lemmy.world

        The best way this could be handled is a green check mark near the video that you could click on, which would give you all the metadata of the video (location, time, source, etc) along with a digital signature (which would look like a random string of text) that you could click on, and your browser would show you the chain of trust: where the signature came from, that it's valid, probably the manufacturer of the equipment it was recorded on, etc.

        • Natanael@slrpnk.net

          Do not show a checkmark by default! This is why cryptographers kept telling browsers to de-emphasize the lock icon on TLS (HTTPS) websites. You want to display the claimed author, and whether or not you're able to verify the keypair's authenticity.

          • AtHeartEngineer@lemmy.world

            Fair point, I agree with this. There should probably be another icon in the browser that shows if all, some, or none of the media on a page has signatures that can be validated. Though that gets messy as well, because what is “media”? Things can be displayed in a web canvas or SVG that appears to be a regular image, when in reality it’s rendered on the fly.

            Security and cryptography UX is hard. Good point, thanks for bringing that up! Btw, this is kind of my field.

            • Natanael@slrpnk.net

              I run /r/crypto at reddit (not so active these days due to needing to keep it locked because of spam bots, but it’s not dead yet), usability issues like this are way too common

        • wizardbeard@lemmy.dbzer0.com

          The issue is making that green check mark hard to fake for bad actors. Https works because it is verified by the browser itself, outside the display area of the page. Unless all sites begin relying on a media player packed into the browser itself, if the verification even appears to be part of the webpage, it could be faked.

          • dejected_warp_core@lemmy.world

            The only thing that comes to mind is something that forces interactivity outside the browser display area; out of the reach of Javascript and CSS. Something that would work for both mobile and desktop would be a toolbar icon that is a target for drag-and-drop. Drag the movie or image to the “verify this” target, and you get a dialogue or notification outside the display area. As a bonus, it can double for verifying TLS on hyperlinks while we’re at it.

            Edit: a toolbar icon that’s draggable to the image/movie/link should also work the same. Probably easier for mobile users too.

            • Natanael@slrpnk.net

              If you set the download manager icon in the browser as permanently visible, then dragging it there could trigger the verification to also run if the metadata is detected, and to then also show whichever metadata it could verify.

          • brbposting@sh.itjust.works

            Hope verification gets built in to operating systems as compromised applications present a risk too.

            But I’m sure a crook would build a MAGA Verifier since you can’t trust liberal Apple/Microsoft technology.

      • General_Effort@lemmy.world

        For the average end-user, it would look like “https”. You would not have to know anything about the technical background. Your browser or other media player would display a little icon showing that the media is verified by some trusted institution and you could learn more with a click.

        In practice, I see some challenges. You could already go to the source via https, e.g. whitehouse.gov, and verify it that way. An additional benefit exists only if you can verify media that has been re-uploaded elsewhere. Now the user needs to check not just that the media was signed by someone (e.g. whitehouse.gov.ru), but that it was really signed by the right institution.

        • TheKingBee@lemmy.world

          As someone points out above, this just gives them the power to not authenticate real videos that make them look bad…

          • dejected_warp_core@lemmy.world

            I honestly feel strategies like this should be mitigated by technically savvy journalism, or even citizen journalism. 3rd parties can sign and redistribute media in the public domain, vouching for their origin. While that doesn’t cover all the unsigned copies in existence, it provides a foothold for more sophisticated verification mechanisms like a “tineye” style search for media origin.

          • General_Effort@lemmy.world

            Videos by third parties, like Trump’s pussy grabber clip, would obviously have to be signed by them. After having thought about it, I believe this is a non-starter.

            It just won’t be as good as https. Such a signing scheme only makes sense if the media is shared away from the original website. That means you can’t just take a quick look at the address bar to make sure you are not getting phished. That doesn’t work if it could be any news agency. You have to make sure that the signer is really a trusted agency and not some scammy lookalike. That takes too much care for casual use, which defeats the purpose.

            Also, news agencies don't have much of an incentive to allow sharing their media. Any cryptographic signature would only make sense for them if it directs users to their site, where they can make money. Maybe the potential for more clicks - basically a kind of clickable watermark on media - could make this take off.

      • Ð Greıt Þu̇mpkin@lemm.ee

        Probably you'd notice a bit of extra time posting for the signature to be added, but that's about it; the responsibility for verifying the signature would fall to the owners of the social media site. In the circumstances where someone asks for a verification, basically imagine it as a libel case on fast forward: you file a claim saying “I never said that”, they check signatures, they shrug and press the delete button, and they erase the post, crossposts, and (if it's really good) screencap posts and those crossposts of the thing you did not say but that is still being attributed falsely to your account or person.

        It basically gives absolute control of a person’s own image and voice to themself, unless a piece of media is provable to have been made with that person’s consent, or by that person themself, it can be wiped from the internet no trouble.

        Where it comes to second party posters, news agencies and such, it’d be more complicated but more or less the same, with the added step that a news agency may be required to provide some supporting evidence that what they said is not some kind of misrepresentation or such as the offended party filing the takedown might be trying to insist for the sake of their public image.

        Of course there could still be a YouTube “Stats for Nerds”-esque addin to the options tab on a given post that allows you to sign-check it against the account it’s attributing something to, and a verified account system could be developed that adds a layer of signing that specifically identifies a published account, like say for prominent news reporters/politicians/cultural leaders/celebrities, that get into their own feed so you can look at them or not depending on how ya be feelin’ that particular scroll session.

      • Cocodapuf@lemmy.world

        It needs some kind of handler, but we mostly have those in place. A web browser could be the handler for instance. A web browser has the green dot on the upper left, telling you a page is secure, that https is on and valid. This could work like that, the browser can verify the video and display a green or red dot in the corner, the user could just mouse over it/tap on it to see who it’s verified to be from. But it’s up to the user to mouse over it and check if it says whitehouse.gov or dr-evil-mwahahaha.biz

      • AbouBenAdhem@lemmy.world

        Depending on the implementation, there are two cryptographic functions that might be used (perhaps in conjunction):

        • Cryptographic hashes: An arbitrary amount of data (like a video file) is used to create a “hash”: a shorter, (effectively) unique text string. Anyone can run the file through the same function to see if it produces the same hash; if even a single bit of the file is changed, the hash will be completely different (there's a short sketch of this after the list).

        • Public key cryptography: A pair of keys are created, one of which can only encrypt data (but can’t decrypt its own output), and the other can only decrypt data that was encrypted by the matching key. Users (like the White House) can post their public key on their website; then if a subsequent message purporting to come from that user can be decrypted using that key, it proves it came from them.
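
        A tiny sketch of the hash property mentioned in the first bullet, using only Python's standard library (the byte strings stand in for video files):

        ```python
        # Flip a single byte and the SHA-256 digest changes completely.
        import hashlib

        original = b"video bytes ... Biden speech, official release"
        tampered = b"video bytes ... Biden speech, official releasf"  # one byte differs

        print(hashlib.sha256(original).hexdigest())
        print(hashlib.sha256(tampered).hexdigest())
        # The two digests share no meaningful resemblance, so any alteration is
        # obvious to anyone comparing against the published hash.
        ```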

        • Serinus@lemmy.world

          a shorter, (effectively) unique text string

          A note on this. There are other videos that will hash to the same value as a legitimate video. Finding one that is coherent is extraordinarily difficult. Maybe a state actor could do it?

          But for practical purposes, it'll do the job. Hell, if a doctored video with the same hash comes out, the White House could just say “no, we published this one,” and that alone would be remarkable.

          • AbouBenAdhem@lemmy.world

            Finding one that is coherent is extraordinarily difficult.

            You’d need to find one that was not just coherent, but that looked convincing and differed in a way that was useful to you—and that likely wouldn’t exist, even theoretically.

            • Natanael@slrpnk.net

              The pigeonhole principle says it does exist for any file substantially longer than the hash value length, but it's going to be hard to find.

            • ReveredOxygen@sh.itjust.works

              Even for a 4096-bit hash (which isn't used afaik, usually only 1024-bit is used (but this could be outdated)), you only need to change 4096 bits on average. For a still 1080p image, that's 1920x1080 pixels; if you change the least significant bit of each color channel, you get 6,220,800 bits you can change without anyone noticing. That means on average there are around 1,518 identical-looking variations of any image with a given 4096-bit hash. This goes down a lot when you factor in compression: those least significant bits aren't going to stay the same. But using a video brings it up by orders of magnitude: rather than one image, you can tweak colors in every frame. The difficulty doesn't come from existence, it comes from the fact that you'd need to check 2⁵¹² ≈ 10¹⁵⁴ different images to guarantee you'd find a match. Hashing each candidate takes time, so you'd have to run a supercomputer for an extremely long time to brute-force a hash collision.

              • Natanael@slrpnk.net

                Most hash functions are 256 bit (they’re symmetric functions, they don’t need more in most cases).

                There are arbitrary-length functions (called XOFs instead of hashes) which are built similarly (used when you need to generate longer random-looking outputs).

                Other than that, yeah, math shows you don’t need to change more data in the file than the length of the hash function internal state or output length (whichever is less) to create a collision. The reason they’re still secure is because it’s still extremely difficult to reverse the function or bruteforce 2^256 possible inputs.

          • CyberSeeker@discuss.tchncs.de

            There are other videos that will hash to the same value

            This concept is known as ‘collision’ in cryptography. While technically true for weaker key sizes, there are entire fields of mathematics dedicated to provably ensuring collisions are cosmically unlikely. MD5 and SHA-1 have a small enough key space for collisions to be intentionally generated in a reasonable timeframe, which is why they have been deprecated for several years.

            To my knowledge, SHA-2 with sufficiently large key size (2048) is still okay within the scope of modern computing, but beyond that, you’ll want to use Dilithium or Kyber CRYSTALS for quantum resistance.

            • Natanael@slrpnk.net

              SHA family and MD5 do not have keys. SHA1 and MD5 are insecure due to structural weaknesses in the algorithm.

              Also, 2048 bits apply to RSA asymmetric keypairs, but SHA1 is 160 bits with similarly sized internal state and SHA256 is as the name says 256 bits.

              ECC is a public key algorithm which can have 256 bit keys.

              Dilithium is indeed a post quantum digital signature algorithm, which would replace ECC and RSA. But you’d use it WITH a SHA256 hash (or SHA3).

    • Pup Biru@aussie.zone

      i wouldn’t say signature exactly, because that ensures that a video hasn’t been altered in any way: not re-encoded, resized, cropped, trimmed, etc… platforms almost always do some of these things to videos, even if it’s not noticeable to the end-user

      there are perceptual hashes, but i’m not sure if they work in a way that covers all those things or if they’re secure hashes. i would assume not

      perhaps platforms would read the metadata in a video for a signature and have to serve the video entirely unaltered if it’s there?

      • thantik@lemmy.world

        You don’t need to bother with cryptographically verifying downstream videos, only the source video needs to be able to be cryptographically verified. That way you have an unedited, untampered cut that can be verified to be factually accurate to the broadcast.

        The White House could serve the video themselves if they so wanted to. Just use something similar to PGP for signature validation and voila. Studios can still do all the editing, cutting, etc - it shouldn’t be up to the end user to do the footwork on this, just for the studios to provide a kind of ‘chain of custody’ - they can point to the original verification video for anyone to compare to; in order to make sure alterations are things such as simple cuts, and not anything more than that.

        • Pup Biru@aussie.zone

          you don’t even need to cryptographically verify in that case because you already have a trusted authority: the whitehouse… if the video is on the whitehouse website, it’s trusted with no cryptography needed

          the technical solutions only come into play when you’re trying to modify the video and still accurately show that it’s sourced from something verifiable

          heck you could even have a standard where if a video adds a signature to itself, editing software will add the signature of the original, a canonical immutable link to the file, and timestamps for any cuts to the video… that way you (and by you i mean anyone; likely hidden from the user) can load up a video and be able to link to the canonical version to verify

          in this case, verification using ML would actually be much easier because you (servers) just download the canonical video, cut it as per the metadata, and compare what’s there to what’s in the current video
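
          to make that concrete, here's a purely hypothetical sketch of what such an edit manifest could look like (every field name and URL below is invented for illustration only):

          ```python
          # Hypothetical edit manifest carried in the clip's metadata.
          edit_manifest = {
              "canonical_source": "https://www.whitehouse.gov/briefings/2024-02-09.mp4",
              "original_signature": "base64-encoded signature of the canonical file",
              "cuts": [
                  {"start": "00:01:12.000", "end": "00:01:45.500"},
                  {"start": "00:12:03.250", "end": "00:12:30.000"},
              ],
          }
          # A verifier (or the platform, hidden from the user) could fetch the
          # canonical file, apply the listed cuts, and compare the result
          # against the posted clip.
          ```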

      • AbouBenAdhem@lemmy.world

        Rather that using a hash of the video data, you could just include within the video the timestamp of when it was originally posted, encrypted with the White House’s private key.

          • AbouBenAdhem@lemmy.world

            It does if you can also verify the date of the file, because the modified file will be newer than the timestamp. An immutable record of when the file was first posted (on, say, YouTube) lets you verify which version is the source.

      • Natanael@slrpnk.net

        Apple’s scrapped on-device CSAM scanning was based on perceptual hashes.

        The first collision demo breaking them showed up in hours with images that looked glitched. After just a week the newest demos produced flawless images with collisions against known perceptual hash values.

        In theory you could create some ML-ish compact learning algorithm and use the compressed model as a perceptual hash, but I’m not convinced this can be secure enough unless it’s allowed to be large enough, as in some % of the original’s file size.

        • Pup Biru@aussie.zone

          you can definitely produce perceptual hashes that collide, but really you’re not just talking about a collision, you’re talking about a collision that’s also useful in subverting an election, AND that’s been generated using ML, which is something that’s still kinda shaky to start with

          • Natanael@slrpnk.net

            Perceptual hash collision generators can take arbitrary images and tweak them in invisible ways to make them collide with whichever hash value you want.

            • Pup Biru@aussie.zone

              from the comment above, it seems like it took a week for a single image/frame though… it’s possible sure but so is a collision in a regular hash function… at some point it just becomes too expensive to be worth it, AND the phash here isn’t being used as security because the security is that the original was posted on some source of truth site (eg the whitehouse)

              • Natanael@slrpnk.net

                No, it took a week to refine the attack algorithm, the collision generation itself is fast

                The point of perceptual hashes is to let you check if two things are similar enough after transformations like scaling and reencoding, so you can’t rely on that here

                • Pup Biru@aussie.zone

                  oh yup that’s a very fair point then! you certainly wouldn’t use it for security in that case, however there are a lot of ways to implement this that don’t rely on the security of the hash function, but just uses it (for example) to point to somewhere in a trusted source to manually validate that they’re the same

                  we already have the trust frameworks; that’s unnecessary… we just need to automatically validate (or at least provide automatic verifyability) that a video posted on some 3rd party - probably friendly or at least cooperative - platform represents reality

    • mods_are_assholes@lemmy.world

      Maybe deepfakes are enough of a scare that this becomes standard practice, and protects encryption from getting government backdoors.

        • mods_are_assholes@lemmy.world

          Hey, congresscritters didn’t give a shit about robocalls till they were the ones getting robocalled.

          We had a do not call list within a year and a half.

          That’s the secret, make it affect them personally.

          • Daft_ish@lemmy.world

            Doesn’t that prove that government officials lack empathy? We see it again and again but still we keep putting these unfeeling bastards in charge.

            • mods_are_assholes@lemmy.world

              Well sociopaths are really good at navigating power hierarchies and I’m not sure there is an ethical way of keeping them from holding office.

    • Muehe@lemmy.ml

      Cryptography ⊋ Blockchain

      A blockchain is cryptography, but not all cryptography is a blockchain.

    • TheGrandNagus@lemmy.world

      …no

      Think of generating an md5sum to verify that the file you downloaded online is what it should be and hasn’t been corrupted during the download process or replaced in a Man in the Middle attack.
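
      A minimal sketch of that kind of checksum verification in Python (the file name and expected value are placeholders; MD5 is shown only because it's the classic example, SHA-256 is the better choice today):

      ```python
      # Compute a file's MD5 the way an "md5sum"-style check works, then compare
      # against the value published by the source.
      import hashlib

      expected = "<md5 published by the source>"  # placeholder

      h = hashlib.md5()
      with open("download.iso", "rb") as f:  # hypothetical file name
          for chunk in iter(lambda: f.read(8192), b""):
              h.update(chunk)

      print("OK" if h.hexdigest() == expected else "File altered or corrupted")
      ```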

  • circuitfarmer@lemmy.world

    I’m sure they do. AI regulation probably would have helped with that. I feel like congress was busy with shit that doesn’t affect anything.

    • ours@lemmy.world

      I salute whoever has the challenge of explaining basic cryptography principles to Congress.

      • johnyrocket@feddit.ch

        Should probably start out with the colour mixing one. That was very helpful for me to figure out public key cryptography. The difficulty comes in when they feel like you are treating them like toddlers, so they start behaving more like toddlers. (Which they are 99% of the time)

        • wizardbeard@lemmy.dbzer0.com

          That’s why I feel like this idea is useless, even for the general population. Even with some sort of visual/audio-based hashing, so that the hash is independent of minor changes like video resolution which don’t change the content, and with major video sites implementing a way for the site to verify that the hash matches one from a trustworthy keyserver equivalent…

          The end result for anyone not downloading the videos and verifying it themselves is the equivalent of those old ”✅ safe ecommerce site, we swear" images. Any dedicated misinformation campaign will just fake it, and that will be enough for the people who would have believed the fake to begin with.

    • lemmyingly@lemm.ee

      I see no difference between creating a fake video/image with AI and Adobe’s packages. So to me this isn’t an AI problem, it’s a problem that should have been resolved a couple of decades ago.

  • Blackmist@feddit.uk

    Honestly I’d say that’s on the way for any video or photographic evidence.

    You’d need a device private key to sign with, probably internet connectivity for a timestamp from a third party.

    Could have lidar included as well so you can verify that it’s not pointing at a video source of something fake.

    Is there a cryptographically secure version of GPS too? Not sure if that’s even possible, and it’s the weekend so I’m done thinking.

    • SpaceCowboy@lemmy.ca

      It’s way more feasible to simply require social media sites to do the verification and display something like a blue check on verified videos.

      This is actually a really good idea. Sure there will still be deepfakes out there, but at least a deepfake that claims to be from a trusted source can be removed relatively easily.

      Theoretically a social media site could boost content that was verified over content that isn’t, but that would require social media sites to not be bad actors, which I don’t have a lot of hope in.

      • kautau@lemmy.world

        I agree that it’s a good idea. But the people most swayed by deepfakes of Biden are definitely the least concerned with whether their bogeyman, the “deep state”

    • Natanael@slrpnk.net

      Positioning using distance bounded challenge-response protocols with multiple beacons is possible, but none of the positioning satellite networks supports it. And you still can’t prove the photo was taken at the location, only that somebody was there.

  • Thirdborne@lemmy.world

    When it comes to misinformation, I always remember when I was a kid in the early 90s, another kid told me confidently that the USSR had landed on Mars, gathered rocks, filmed it and returned to Earth (it now occurs to me that this homeschooled kid was confusing the real moon landing). I remember knowing it was bullshit but not having a way to check the facts. The Internet solved that problem. Now, by God, the Internet has recreated the same problem.

  • recapitated@lemmy.world

    I’ve always thought that bank statements should require cryptographic signatures for ledger balances. Same with individual financial transactions, especially customer payments.

    Without this we’re pretty much at the mercy of trust with banks and payment card providers.

    I imagine there’s a lot of integrity requirements for financial transactions on the back end, but the consumer has no positive proof except easily forged statements.

    • Phoenixz@lemmy.ca

      Yeah but that would require banks to actually invest money to improve customer trust… Not something banks are very interested in, really. It’s easier and cheaper to just have the marketing department come up with some nonsense claim and advertise that instead.

  • ZombiFrancis@sh.itjust.works

    It would become quite easy to dismiss anything for not being cryptographically verified simply by not cryptographically verifying.

    I can see the benefit of having such verification but I also see how prone it might be to suppressing unpopular/unsanctioned journalism.

    Unless the proof is very clear and easy for the public to understand the new method of denial just becomes the old method of denial.

    • jabjoe@feddit.uk

      Once people get used to cryptographical signed videos, why only trust one source? If a news outlet is found signing a fake video, they will be in trouble. Loss of said trust if nothing else.

      We should get to the point we don’t trust unsigned videos.

      • ZombiFrancis@sh.itjust.works

        Not trusting unsigned videos is one thing, but will people be judging the signature or the content itself to determine if it is fake?

        Why only one source should be trusted is a salient point. If we are talking trust: it feels entirely plausible that an entity could use its trust (or power) to manufacture a signature.

        And for some it is all too relevant that an entity like the White House (or the gamut of others, past or present) has certainly presented false information as true to do things like invade countries.

        Trust is a much more flexible concept that is willing to be bent. And so cryptographic verification really has to demonstrate how and why something is fake to the general public. Otherwise it is just a big ‘trust me bro.’

        • jabjoe@feddit.uk

          You're right in that cryptographic verification can only prove someone signed the video. But that will mean nutters sharing “BBC videos” that don't have the BBC signature can basically be dismissed straight off. We are already in a soup of misinformation, so sourcing being cryptographically provable is a step forward. Whether you trust those sources or not is another matter, but at least you know if it's the true source or not. If a source abuses the trust it has, it loses that trust.

    • abhibeckert@lemmy.world

      It would be nice if none of this was necessary… but we don’t live in that world. There is a lot of straight up bullshit in the news these days especially when it comes to controversial topics (like the war in Gaza, or Covid).

      You could go a really long way by just giving all photographers the ability to sign their own work. If you know who took the photo, then you can make good decisions about whether to trust them or not.

      Random account on a social network shares a video of a presidential candidate giving a speech? Yeah maybe don’t trust that. Look for someone else who’s covered the same speech instead, obviously any real speech is going to be covered by every major news network.

      That doesn’t stop ordinary people from sharing presidential speeches on social networks. But it would make it much easier to identify fake content.

  • nutsack@lemmy.world

    the technology to do this has existed for decades and it’s crazy to me that people aren’t doing it all the time yet