Argentina’s security forces have announced plans to use artificial intelligence to “predict future crimes” in a move experts have warned could threaten citizens’ rights.

The country’s far-right president Javier Milei this week created the Artificial Intelligence Applied to Security Unit, which the legislation says will use “machine-learning algorithms to analyse historical crime data to predict future crimes”. It is also expected to deploy facial recognition software to identify “wanted persons”, patrol social media, and analyse real-time security camera footage to detect suspicious activities.

While the ministry of security has said the new unit will help to “detect potential threats, identify movements of criminal groups or anticipate disturbances”, the Minority Report-esque resolution has sent alarm bells ringing among human rights organisations.

  • Asafum@feddit.nl · 2 months ago

    Oh this is going to work well!

    “Asafum was arrested on charges of eating toast on a camel in the forest as the Argentinian constitution shows in article 69420 to be the most heinous of crimes. Brought to you by GoogmetopenAIsandwitch GPT.”

  • Media Bias Fact Checker@lemmy.world [bot] · 2 months ago

    Media Bias Fact Check Credibility: Medium

    Name: The Guardian
    Bias: Left-Center
    Factual Reporting: Mixed
    Country: United Kingdom
    Full Report: https://mediabiasfactcheck.com/the-guardian/

    • OccamsTeapot@lemmy.world · 2 months ago

      The Guardian is “mixed” and yet Times of Israel is “high” for factual reporting. MBFC is trash.

      • mke@lemmy.world · 2 months ago

        Disappointing. Any reason to believe this might be a mistake or an outlier? I was just starting to seriously consider adding MBFC to the usual set of tools I depend on online.

        • OccamsTeapot@lemmy.world · 2 months ago

          I don’t have evidence of this, but I believe the owner/operator of the site is pro-Israel and this bleeds through into the ratings, which are not produced in any objective or repeatable fashion. It says Times of Israel has not failed any fact checks, but it clearly doesn’t investigate this in a systematic way. I personally reported one particularly egregious and obviously false headline some months back and never heard anything.

          It lists the fact checks the Guardian failed (totally fair), but overall I would say most similar websites rank them highly for factual content and for good reason.

          For stuff unrelated to Israel, I think MBFC is pretty solid, if a little unclear and opaque in its approach.

  • Deestan@lemmy.world · 2 months ago

    Tech guy here.

    This is a tech-flavored smokescreen to avoid responsibility for misapplied law enforcement.

    • Johnmannesca@lemmy.world · 2 months ago

      By definition, everyone has the potential for criminality, especially those applying and enforcing the law; as a matter of fact, not even the AI is above the law, unless that’s somehow changing. We need a lot of things on Earth first, like an IoT consortium for example, but an AI bill of rights in the US or EU should hopefully set a precedent for the rest of the world.

      • Deestan@lemmy.world · 2 months ago

        The AI is a pile of applied statistical models. The humans in charge of training it, testing it, and acting on its input have full control over, and responsibility for, anything that comes out of it. Personifying an AI system, or otherwise separating it from the will of its controllers, is dangerous as it erodes responsibility.

        Racist cops have used “I go where the crime is” as an excuse to basically hunt minorities for sport. Do not allow them to say “the AI model said this was efficient” and pretend it is not their own full and knowing bias directing them.

      • theneverfox@pawb.social · 2 months ago

        That’s not even the problem here… AI, big data, a consultant: it’s all just an excuse to point to when they do what they wanted to do anyway, which is profile “criminals” and harass them.

  • Mothra@mander.xyz · 2 months ago

    This sounds too surveillance-heavy for a self-proclaimed libertarian, and too flamboyant an economic investment for a guy who said to cut all unnecessary costs.

    • Frog@lemmy.ca · 2 months ago

      Quickly everyone, fill the data with predictions that the president will be a dictator and the country will be in ruin.

  • phoenixz@lemmy.ca · 2 months ago

    Anyone who has taken more than a 5-minute introductory course on AI knows that AI CANNOT be trusted. There are a lot of possibilities with AI and a lot of potentially great applications, but you can never blindly trust its outcomes.

    Secondly, while AI can give great (yet unreliable) answers to questions, we still have no idea how it arrives at those answers. This was true 30 years ago, and it remains true today. How can you say “he will commit that crime” if you can’t even say how you came to that conclusion?
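
    To make the opacity point concrete, here’s a minimal sketch (the data, features, and “risk” framing are all invented for illustration; it assumes NumPy and scikit-learn): a model trained on meaningless “historical” features still emits a confident-looking score, and nothing in its output says why.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Invented "historical" data: 500 people, 4 arbitrary features,
    # labels with no real signal at all.
    rng = np.random.default_rng(0)
    X = rng.random((500, 4))
    y = (rng.random(500) < 0.2).astype(int)

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Ask for a "crime risk" score for one new person.
    person = rng.random((1, 4))
    score = model.predict_proba(person)[0, 1]
    print(f"predicted 'risk': {score:.2f}")  # a confident-looking number

    # The only "explanation" on offer is 100 trees of thresholds over
    # unnamed features -- nothing a court, or the person, could examine.
    ```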

  • phoenixz@lemmy.ca · 2 months ago

    Oh look, the AI predicted that all my political opponents will commit crimes! Guess I’ll have to lock them up, then!

    • ours@lemmy.world · 2 months ago

      Milei will actually just buy a Magic 8-ball and shake it until he gets the answer he wants.

  • SlopppyEngineer@lemmy.world · 2 months ago

    That’s already been tried. In the end, the AI is just an electronic version of existing police biases.

    Police file more reports and make more arrests in poor neighborhoods because they patrol more there. The reports get used as training data, so the AI predicts more crime in poor areas. Those areas then get over-patrolled, the added tension leads to more recorded crime, and the system is celebrated for being correct.
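
    That loop is easy to reproduce in a toy simulation (all numbers invented; this sketches the mechanism, not any real system): two districts with identical true crime rates, where recorded reports scale with patrol presence and a naive model allocates next year’s patrols from this year’s reports.

    ```python
    # Two districts with IDENTICAL underlying crime; district 1 starts
    # out over-patrolled.
    TRUE_CRIME_RATE = [0.10, 0.10]
    patrols = [5, 15]

    for year in range(5):
        # Recorded reports scale with patrol presence, not true crime:
        # more officers on the street, more incidents written up.
        reports = [round(p * rate * 100)
                   for p, rate in zip(patrols, TRUE_CRIME_RATE)]

        # "Predictive model": next year's expected crime share is just
        # this year's share of reports.
        total = sum(reports)
        share = [r / total for r in reports]

        # Allocate next year's 20 patrols according to the prediction.
        patrols = [round(20 * s) for s in share]
        print(f"year {year}: reports={reports} -> patrols={patrols}")

    # Prints reports=[50, 150] -> patrols=[5, 15] every year: the initial
    # bias reproduces itself indefinitely, and the model looks "correct"
    # because it only ever sees data generated by its own allocation.
    ```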

    • Tryptaminev@lemm.ee · 2 months ago

      You make it sound like a bug instead of a feature. But for the capitalist ruling class it is working exactly as intended.