Highlights: The White House issued draft rules today that would require federal agencies to evaluate and constantly monitor algorithms used in health care, law enforcement, and housing for potential discrimination or other harmful effects on human rights.
Once in effect, the rules could force changes in US government activity dependent on AI, such as the FBI’s use of face recognition technology, which has been criticized for not taking steps called for by Congress to protect civil liberties. The new rules would require government agencies to assess existing algorithms by August 2024 and stop using any that don’t comply.
This seems to me like an exception that would realistically only apply to the CIA, NSA, and sometimes the FBI. I doubt the Department of Housing and Urban Development will get a pass. Overall seems like a good change in a good direction.
The CIA and NSA are exactly who we don’t want using it though.
Agreed, but it’s at least a step forward, setting a precedent for AI in government use. I would love a perfect world where all bills passed were “all or nothing” legislation, but realistically this is a good start; citizens should then demand tighter oversight of national security agencies as the next issue to tackle.
“next issue to tackle”
It’s been the next issue to tackle since at least October 26th, 2001. They have no accountability. Adding these carve-outs is just making it harder to get accountability.
They’re exactly who will carry on using it, even if there weren’t any exemptions.
Given the “success” of Israel’s high-tech border fence, it seems like bureaucracies think tech will work better than actually, you know, resolving/preventing geopolitical problems with diplomacy and intelligence.
I worry these kinds of tech solutions become a predictable crutch. Assuming there is some kind of real necessity to these spy programs (debatable), it seems like reliance on data tech can become a weakness as soon as those intending harm understand how it works.
Like either of those agencies will let us know what they are doing in the first place.
At a certain level, there are no rules when they never have to tell anyone what they are doing.
Well that and customs/border patrol
I’d rather they not either, but don’t underestimate the harm that bad management of other organizations can do and has done.
I’m actually less worried about them.
Local police departments on the other hand, can arrest and get you sent to jail based on flimsy facial recognition, and it doesn’t even make the local news.
Algorithms that gerrymander voting district boundaries might be an early battleground.
The early battleground of 2010 when they started using RedMap.
“Realistically” baahahaba. Right.