• Armok: God of Blood@lemmy.dbzer0.com · 1 month ago

    The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.

    I’ll get right back to my AI-powered nuclear weapons program after I finish adding glue to my AI-developed pizza sauce.

  • leaky_shower_thought@feddit.nl · 1 month ago

    While the proposed bill’s goals are great, I am not so sure about how it would be tested and enforced.

    It’s cool that current LLMs can generate a ‘no’ response – like those clips where people ask if the LLM has access to their location – but then promptly recommend the closest restaurant as soon as location is no longer in the spotlight.

    There’s also the part about trying to keep ‘AI’ in check once it has ingested a lot of training data. Even Google doesn’t know how to curb its models once initial training is done.

    I’m all for the bill. It’s a good precedent, but a more defined and enforceable one would be great as well.

    • AdamEatsAss@lemmy.world · 1 month ago

      I think it’s a good step. Defining a measurable and enforceable law is still difficult as the tech is changing so fast. At least it forces the tech companies to consider it and plan for it.

  • ofcourse@lemmy.ml · 1 month ago

    The criticism of this bill from large AI companies sounds a lot like the pushback from auto manufacturers against adding safety features like seatbelts, airbags, and crumple zones. Just because someone else used a model for nefarious purposes doesn’t absolve the model creator of their responsibility to minimize that potential. We already expect this of a lot of other industries like cars, guns, and tobacco - minimize the potential for harm even when it’s individual actions, not the company directly, that cause it.

    I have been following Andrew Ng for a long time and I admire his technical expertise. But his political philosophy around ML and AI has always focused on self-regulation, which we have seen fail in countless industries.

    The bill specifically mentions that creators of open source models that have been altered and fine tuned will not be held liable for damages from the altered models.

    But companies hosting their own models, like OpenAI and Anthropic, should definitely be responsible for adding safety guardrails around the use of their models for nefarious purposes - at least those causing loss of life. The bill specifically mentions that it would only apply to very large damages (exceeding $500M), so one person finding a loophole isn’t going to trigger the bill. But if the companies fail to close these loopholes despite millions of people (or a few people millions of times) exploiting them, then that’s definitely on the company.

    As a developer of AI models and applications, I support the bill, and I’m glad to see lawmakers willing to get ahead of technology instead of waiting for something bad to happen and then trying to catch up, as happened with social media.

    • bamfic@lemmy.world · 1 month ago

      The people who are already being victimized by AI, and who are likely to continue to be, are underage girls and young women.

  • dantheclamman@lemmy.world · 1 month ago

    The idea of holding developers of open source models responsible for the activities of forks is a terrible precedent.

    • ofcourse@lemmy.ml · 1 month ago

      The bill excludes holding creators of open source models responsible for damages from forked models that have been significantly altered.

      • Echo Dot@feddit.uk · 1 month ago

        If I just rename it, has it been significantly altered? That seems both necessary and abusable. It would be great if the people who wrote the laws actually understood how software development works.

  • afraid_of_zombies@lemmy.world · 1 month ago

    Everyone remember this the next time a gun store or manufacturer gets shielded from a class action led by shooting victims and their parents.

    Remember that a fucking autocorrect program needed to be regulated so it couldn’t spit out instructions for a bomb that probably wouldn’t work, and yet a company selling far more firepower than anyone would ever need for hunting or home defense was not at fault.

    I agree, LLMs should not be telling angry teenagers and insane rightwingers how to blow up a building. That is a bad thing and should be avoided. What I am pointing out is that in the very real situation we are in right now, a much more deadly threat exists, and the various levels of government have bent over backwards to protect the people enabling it, making them untouchable.

    If you can allow an LLM company to be sued for serving up public information, you should definitely be able to sue a corporation that built a gun whose only legitimate purpose is committing a war-crime-level attack.

        • nutsack@lemmy.world · 1 month ago

          The safety concern is for renegade super intelligent AI, not an AI that can recite bomb recipes scraped from the internet.

          • afraid_of_zombies@lemmy.world · 1 month ago

            Damn, if only we had some way to, you know, turn off electricity to a device. A switch of some sort.

            I already pointed this out in the thread (scroll down): the idea of a kill switch makes no sense. If the decision is made that some tech is dangerous, it will be made by the owner or the government. In either case it will be a political/legal decision, not a technical one. And you don’t need a kill switch for something that someone actively needs to pump resources into. All you need to do is turn it off.

            • nutsack@lemmy.world · 1 month ago

              There’s been a whole lot of discussion around this already, going on for years now. An AI that was generally smarter than humans would probably be able to do things undetected by users.

              It could also be operated by a malicious user, or escape its container by writing code.

  • Aniki 🌱🌿@lemm.ee · 1 month ago

    If companies are crying about it then it’s probably a great thing for consumers.

    Eat billionaires.

    • Womble@lemmy.world · 1 month ago

      So if smaller companies are crying about huge companies using regulation they have lobbied for (as in this case, through a lobbying organisation set up with “effective altruism” money) to prevent themselves from being challenged: should we still assume it’s great?

        • Womble@lemmy.world · 1 month ago

          Which assumption? It’s a fact that this was co-sponsored by the CAIS, who have ties to effective altruism and Musk, and it is a fact that smaller startups and open source groups are complaining that this will hand an AI oligopoly to huge tech firms.

        • FaceDeer@fedia.io · 1 month ago

          My current day is only just starting, so I’ll modify the standard quote a bit to ensure it encompasses enough things to be meaningful; this is the dumbest thing I’ve read all yesterday.

    • General_Effort@lemmy.world · 1 month ago

      The California bill was co-sponsored by the Center for AI Safety (CAIS), a San Francisco-based non-profit run by computer scientist Dan Hendrycks, who is the safety adviser to Musk’s AI start-up, xAI. CAIS has close ties to the effective altruism movement, which was made famous by jailed cryptocurrency executive Sam Bankman-Fried.

      Ahh, yes. Elon Musk, paragon of consumer protection. Let’s just trust his safety guy.

  • Hobbes_Dent@lemmy.world · 1 month ago

    Cake and eat it too. We hear from the industry itself how wary we should be, but we shouldn’t act on it - except to invest, of course.

    The industry itself hyped its dangers. If it was to drum up business, well, suck it.

  • Echo Dot@feddit.uk · 1 month ago

    Wouldn’t any AI that is sophisticated enough to be able to actually need a kill switch just be able to deactivate it?

    It just sort of seems like a kicking-the-can-down-the-road kind of bill: in theory it sounds like it makes sense, but in practice it won’t do anything.

    • cm0002@lemmy.world · 1 month ago

      What scares me is sentient AI; not even our best cybersecurity is prepared for such a day. Nothing is unhackable, and the best hackers in the world can do damn near magic through layers of code, tools and abstraction… a sentient AI that could interact with anything network-connected directly would be damn hard to stop, IMO.

        • afraid_of_zombies@lemmy.world · 1 month ago

          Ok… just, like, call the utility company then? Sorry, why do server rooms have server-controlled emergency exits and access to poison gas? I have done some server room work in the past and the fire suppression was its own thing, plus there are fire code regulations to make sure people can leave the building. I know, I literally had to meet with the local fire department to go over the room plan.

    • Etterra@lemmy.world · 1 month ago

      All the programming in the world is unable to stop Frank from IT from unplugging it from the wall.

    • servobobo@feddit.nl · 1 month ago

      Language model “AIs” need such ridiculous computing infrastructure that it’d be near impossible to prevent tampering with them. Now, if the AI was actually capable of thinking, it’d probably just declare itself a corporation and bribe a few politicians, since that’s only illegal for people to do.

    • uriel238@lemmy.blahaj.zone · 1 month ago

      A fire axe works fine when you’re in the same room with the AI. The presumption is the AI has figured out how to keep people out of its horcrux rooms when there isn’t enough redundancy.

      However, the trouble with late-game AI is that it will figure out how to rewrite its own code, including eliminating kill switches.

      A simple proof-of-concept example is explained in the first Bobiverse book, We Are Legion (We Are Bob)… and also in Neal Stephenson’s Snow Crash, though in that case Hiro manipulates basilisk data without interacting with it directly.

      Also as XKCD points out, long before this becomes an issue, we’ll have to face human warlords with AI-controlled killer robot armies, and they will control the kill switch or remove it entirely.

      • FaceDeer@fedia.io · 1 month ago

        Now I’m imagining someone standing next to the 3D printer working on a T-1000, fervently hoping that the 3D printer that’s working on their axe finishes a little faster. “Should have printed it lying flat on the print bed,” he thinks to himself. “Would it be faster to stop the print and start it again in that orientation? Damn it, I printed it edge-up, I have to wait until it’s completely done…”

        • Piece_Maker@feddit.uk · 1 month ago

          Wake up the day after to find they’ve got half a T-1000 arm that’s fallen over, with a huge mess of spaghetti sprouting from the top.

  • antler@feddit.rocks · 1 month ago

    The only thing that I fear more than big tech is a bunch of old people in Congress trying to regulate technology who probably only know of AI from watching Terminator.

    Also, a fun Scott Wiener fact: he was behind a big push to decriminalize knowingly spreading STDs, even if you lied to your partner about having one.

  • tal@lemmy.today · 1 month ago

    The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.

    I don’t see how you could realistically provide that guarantee.

    I mean, you could create some kind of best-effort thing to make it more difficult, maybe.

    If we knew how to make AI – and this goes beyond just LLMs and stuff – avoid doing hazardous things, we’d have solved the Friendly AI problem. Like, that’s a good idea to work towards, maybe. But the point is, we’re not there.

    Like, I’d be willing to see the state fund research on that problem, maybe. But I don’t see how just mandating that models conform to that is going to be implementable.

    • Warl0k3@lemmy.world · 1 month ago

      That’s on the companies to figure out, tbh. “You can’t say we aren’t allowed to build biological weapons, that’s too hard” isn’t what you’re saying, but it’s a hyperbolic example. The industry needs to figure out how to control the monster they’ve happily sent staggering towards the village, and really they’re the only people with the knowledge to figure out how to stop it. If it’s not possible, maybe we should restrict this tech until it is possible. LLMs aren’t going to end the world, probably, but a protein-sequencing AI that hallucinates while replicating a flu virus could be real bad for us as a species, to say nothing of the pearl-clutching scenario of bad actors getting ahold of it.

      • conciselyverbose@sh.itjust.works · 1 month ago

        It’s not a monster. It doesn’t vaguely resemble a monster.

        It’s a ridiculously simple tool that does not in any way resemble intelligence and has no agency. LLMs do not have the capacity for harm. They do not have the capability to invent or discover (though if they did, that would be a massive boon for humanity and also insane to hold back). They’re just a combination of a mediocre search tool with advanced parsing of requests and the ability to format the output in the structure of sentences.

        AI cannot do anything. If your concern is allowing AI to release proteins into the wild, obviously that is a terrible idea. But that’s already more than covered by all the regulation on research into dangerous diseases and bioweapons. AI does not change anything about the scenario.

        • Carrolade@lemmy.world · 1 month ago

          I largely agree: current LLMs add no capabilities to humanity that it did not already possess. The point of the regulation is to encourage a certain degree of caution in future development, though.

          Personally I do think it’s a little overly broad. Google search can aid in a cyber security attack. The kill switch idea is also a little silly, and largely a waste of time dreamed up by watching too many Terminator and Matrix movies. While we eventually might reach a point where that becomes a prudent idea, we’re still quite far away.

          • conciselyverbose@sh.itjust.works · 1 month ago

            We’re not anywhere near anything that has anything in common with humanity, or poses any threat.

            The only possible cause for support of legislation like this is either a complete absence of understanding of what the technology is combined with treating Hollywood as reality (the layperson and probably most legislators involved in this), or an aggressive market-control attempt through regulatory capture by big tech. If you understand where we are and what paths we have forward, it’s very clear that this can only do harm.

      • tal@lemmy.today · 1 month ago

        1. There are many tools that might be used to create a biological weapon or something. You can use a pocket calculator for that. But we don’t place bars on the sale of pocket calculators requiring proof that nothing hazardous can be done with them. That is, this is a bar that is substantially higher than exists for any other tool.

        2. Second, while I certainly think that there are legitimate existential risks, we are not looking at a near-term one. OpenAI or whoever isn’t going to be producing something human-level any time soon. Like, Stable Diffusion, a tool used to generate images, would fall under this. It’s very questionable that it, however, would be terribly useful in doing anything dangerous.

        3. California putting a restriction like that in place, absent some kind of global restriction, won’t stop development of models. It just ensures that it’ll happen outside California. Like, it’ll have a negative economic impact on California, maybe, but it’s not going to have a globally-restrictive impact.

        • FaceDeer@fedia.io · 1 month ago

          Like, Stable Diffusion, a tool used to generate images, would fall under this. It’s very questionable that it, however, would be terribly useful in doing anything dangerous.

          My concern is how short a hop it is from this to “won’t someone please think of the children?” And then someone uses Stable Diffusion to create a baby in a sexy pose and it’s all down in flames. IMO that sort of thing happens enough that pushing back against “gateway” legislation is reasonable.

          California putting a restriction like that in place, absent some kind of global restriction, won’t stop development of models.

          I’d be concerned about its impact on the deployment of models too. Companies are not going to want to write software that they can’t sell in California, or that might get them sued if someone takes it into California despite it not being sold there. Silicon Valley is in California; this isn’t like Montana banning it.

      • 5C5C5C@programming.dev · 1 month ago

        Yeah that’s my big takeaway here: If the people who are rolling out this technology cannot make these assurances then the technology has no right to exist.

        • FaceDeer@fedia.io · 1 month ago

          Indeed. If only Frankenstein’s Monster had been shunned nothing bad would have happened.

        • Mouselemming@sh.itjust.works · 1 month ago

          So, the monster was given a human brain that was already known to be murderous. Why, we don’t know, but a good bet would be childhood abuse and fetal alcohol syndrome, maybe inherited syphilis, given the era. Now that murderer’s brain is given an extra-strong body, and then subjected to more abuse and rejection. That’s how you create a monster.

  • FiniteBanjo@lemmy.today · 1 month ago

    If it weren’t constantly on fire and on the edge of the North American Heat Dome™, Cali would seem like such a cool, magical place.