But just as Glaze’s userbase is spiking, a bigger priority for the Glaze Project has emerged: protecting users from attacks that disable Glaze’s protections—including attack methods exposed in June by security researchers in Zurich, Switzerland. In a paper published on arXiv.org without peer review, the Zurich researchers, including Google DeepMind research scientist Nicholas Carlini, claimed that Glaze’s protections could be “easily bypassed, leaving artists vulnerable to style mimicry.”

  • FaceDeer@fedia.io · 5 months ago
    Setting aside the hypocrisy, there’s simply no “service” to DDoS here. There’s hardly even a tool. According to the article:

    Hönig told Ars that breaking Glaze was “simple.” His team found that “low-effort and ‘off-the-shelf’ techniques”—such as image upscaling, “using a different finetuning script” when training AI on new data, or “adding Gaussian noise to the images before training”—“are sufficient to create robust mimicry methods that significantly degrade existing protections.”

    So automatically running a couple of basic Photoshop tools on the image will do it.
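
    For a sense of how little is involved, here is a rough sketch of that kind of “off-the-shelf” preprocessing in Python with Pillow and NumPy. The upscale factor, noise level, and filenames are illustrative guesses on my part, not parameters from Hönig’s paper:

    ```python
    # Rough sketch of the preprocessing described above: upscale the image,
    # then add Gaussian noise before it is used for training. The specific
    # values (scale=2.0, sigma=8.0) are illustrative, not from the paper.
    import numpy as np
    from PIL import Image

    def strip_perturbations(in_path: str, out_path: str,
                            scale: float = 2.0, sigma: float = 8.0) -> None:
        img = Image.open(in_path).convert("RGB")

        # Upscale with a smooth resampling filter, which tends to wash out
        # the high-frequency perturbations tools like Glaze rely on.
        new_size = (int(img.width * scale), int(img.height * scale))
        img = img.resize(new_size, resample=Image.LANCZOS)

        # Add Gaussian noise, then clip back to the valid 0-255 pixel range.
        arr = np.asarray(img, dtype=np.float32)
        arr += np.random.normal(loc=0.0, scale=sigma, size=arr.shape)
        arr = np.clip(arr, 0, 255).astype(np.uint8)

        Image.fromarray(arr).save(out_path)

    # Hypothetical filenames, just to show usage.
    strip_perturbations("glazed.png", "cleaned.png")
    ```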

    I had to check the date on this article because I’m not sure why it’s suddenly news; these techniques for neutralizing Glaze have been mentioned since Glaze itself was first introduced. Maybe Hönig just formalized it?