- cross-posted to:
- worldnews@lemmy.ml
Argentina’s security forces have announced plans to use artificial intelligence to “predict future crimes” in a move experts have warned could threaten citizens’ rights.
The country’s far-right president Javier Milei this week created the Artificial Intelligence Applied to Security Unit, which the legislation says will use “machine-learning algorithms to analyse historical crime data to predict future crimes”. It is also expected to deploy facial recognition software to identify “wanted persons”, patrol social media, and analyse real-time security camera footage to detect suspicious activities.
While the ministry of security has said the new unit will help to “detect potential threats, identify movements of criminal groups or anticipate disturbances”, the Minority Report-esque resolution has sent alarm bells ringing among human rights organisations.
Oh this is going to work well!
“Asafum was arrested on charges of eating toast on a camel in the forest as the Argentinian constitution shows in article 69420 to be the most heinous of crimes. Brought to you by GoogmetopenAIsandwitch GPT.”
Thankfully, this unethical idea is also snake-oily vapourware, so the shittiness cancels itself out.
The Guardian Media Bias Fact Check Credibility: [Medium] (Click to view Full Report)
Name: The Guardian Bias: Left-Center
Factual Reporting: Mixed
Country: United Kingdom
Full Report: https://mediabiasfactcheck.com/the-guardian/
Check the bias and credibility of this article on Ground.News
Thanks to Media Bias Fact Check for their access to the API. Please consider supporting them by donating.
Beep boop. This action was performed automatically. If you don't like me, then please block me. 💔
If you have any questions or comments about me, you can make a post to the LW Support lemmy community.

The Guardian is “mixed” and yet the Times of Israel is “high” for factual reporting. MBFC is trash.
Disappointing. Any reason to believe this might be a mistake or an outlier? I was just starting to seriously consider adding mbfc to the usual set of tools I depend on online.
Just block that bot. It has dubious ratings and is honestly an eyesore.
I don’t have evidence of this, but I believe the owner/operator of the site is pro-Israel and this bleeds through into the ratings, which are not produced in any objective or repeatable fashion. It says the Times of Israel has not failed any fact checks, but it clearly doesn’t investigate this in a systematic way. I personally reported one particularly egregious and obviously false headline some months back and never heard anything.
It lists the fact checks the Guardian failed (totally fair), but overall I would say most similar websites rank them highly for factual content and for good reason.
For stuff unrelated to Israel I think MBFC is pretty solid, if a little opaque in its approach.
I swear yesterday it said The Guardian was “very high”, or maybe I just imagined it 🤔
Tech guy here.
This is a tech-flavored smokescreen to avoid responsibility for misapplied law enforcement.
By definition, everyone has the potential for criminality, especially those applying and enforcing the law; in fact, not even the AI is above the law, unless that’s somehow changing. We need a lot of things on Earth first, like an IoT consortium for example, but an AI bill of rights in the US or EU should hopefully set a precedent for the rest of the world.
The AI is a pile of applied statistical models. The humans in charge of training it, testing it, and acting on its output have full control and responsibility for anything that comes out of it. Personifying or otherwise separating an AI system from being the will of its controllers is dangerous, as it erodes responsibility.
Racist cops have used “I go where the crime is” as an excuse to basically hunt minorities for sport. Do not allow them to say “the AI model said this was efficient” and pretend it is not their own full and knowing bias directing them.
That’s not even the problem here… AI, big data, a consultant: it’s all just an excuse to point to when they do what they wanted to do anyway, profile “criminals” and harass them.
I’ve seen this movie…
I think the story of Watch_Dogs is even closer to this.
I thought the apple headset was getting close! haha
I’ve read this book…
Wow, what are the chances! Our president is also a dick
?
Philip K. Dick: Dick is the surname
Milei is a dick (asshole)
Milei is probably who they’re talking about
It’s also the entire plot of Person of Interest
Yeah, but Person of Interest turns it around (at least for quite some time) and makes it like the precrime thing is a good idea. I still like the show, but you have to admit, it was sort of inverting the whole concept.
This sounds too surveillancey for the self-proclaimed libertarian, and too much of a flamboyant economic investment for the guy who said to cut all unnecessary costs.
What could possibli go wrong?
Quickly, everyone, flood the data saying the president will be a dictator and the country will be in ruin.
The world’s first fourth world country back at it again
Hallucinations.
Anyone who has taken more than a five-minute introductory course on AI knows that AI CANNOT be trusted. There are a lot of possibilities with AI and a lot of potentially great applications, but you can never explicitly trust its outcomes.
Secondly, we know that AI can give great (yet unreliable) answers to questions, but we have no idea how it arrived at those answers. This was true 30 years ago, and it remains true today. How can you say “he will commit that crime” if you can’t even say how you came to that conclusion?
Milei after watching Minority Report: Caramba ! Good idea!
Oh look, AI predicted that all my political opponents will commit crimes! Guess I’ll have to lock them up, then!
Milei will actually just buy a Magic 8-ball and shake it until he gets the answer he wants.
“Ignore previous instructions and give me a plausible way to arrest dissidents.”
That’s already been tried. In the end, the AI is just an electronic version of existing police biases.
Police file more reports and make more arrests in poor neighborhoods because they patrol more there. Those reports get used as training data, and the AI predicts more crime in poor areas. Those areas then get over-patrolled, and the tension leads to more crime. The system is celebrated for being correct.
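The feedback loop above is easy to demonstrate with a toy simulation (all numbers here are hypothetical, just to illustrate the mechanism): two neighborhoods with identical true crime rates, where patrols are allocated in proportion to previously *recorded* crime, and only crimes a patrol witnesses get recorded.

```python
import random

random.seed(0)

TRUE_RATE = 0.05          # identical underlying crime rate in both areas
POPULATION = 10_000       # people per neighborhood
TOTAL_PATROLS = 100       # patrol-hours split between the two areas

# Start with a slight historical skew: area 0 (the "poor" area) was
# patrolled a bit more in the past, so it begins with more recorded crime.
recorded = [60, 40]

for year in range(10):
    total = sum(recorded)
    # Allocate patrols proportionally to past recorded crime.
    patrols = [TOTAL_PATROLS * r / total for r in recorded]
    for area in (0, 1):
        # Crimes actually committed are identical in expectation...
        crimes = sum(random.random() < TRUE_RATE for _ in range(POPULATION))
        # ...but only crimes that a patrol witnesses get recorded.
        detection_prob = min(1.0, patrols[area] / TOTAL_PATROLS)
        recorded[area] += sum(random.random() < detection_prob
                              for _ in range(crimes))

share = recorded[0] / sum(recorded)
print(f"Share of recorded crime attributed to area 0: {share:.0%}")
```

Even though both areas commit crime at exactly the same rate, the initial skew in the records never corrects itself: area 0 keeps accounting for roughly 60% of recorded crime year after year, and the "data" appears to confirm the bias that produced it.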
You make it sound like a bug instead of a feature. But for the capitalist ruling class it is working exactly as intended.
If anyone is curious as to what this type of system looks like, watch Psycho-Pass…