The executive order comes after a series of non-binding agreements with AI companies.
The order has eight goals: create new standards for AI safety and security; protect privacy; advance equity and civil rights; stand up for consumers, patients, and students; support workers; promote innovation and competition; advance US leadership in AI technologies; and ensure the responsible and effective government use of the technology.
AI that is used to monitor cameras and identify our faces to track everywhere everyone goes: Why would that concern you? Do you have something to hide, citizen?
AI that might be used to generate agitprop, competing with conventional advertising: HOLY SHIT we need a new international treaty right away!
More and more of these “safety” proposals just serve to kill open source AI, letting a few megacorps decide what we can do with it, while staying advertising-friendly, of course. Freedom dies in the name of “safety”, especially when the technology is governed by people who have zero concept of how it works beyond a scary, ambiguous buzzword.
What’s the point if they’re non-binding? AI should be privacy friendly and open, otherwise we end up with some serious problems down the line.
The President has limited authority and cannot make laws unilaterally. For sensible AI regulations and laws we will certainly need Congress to do its job, and clearly they’re pretty damn bad at that.
Dress codes are more important.
We can joke about it all we want but the reason is that things like that, naming of post offices, etc. are basically not political and easy to pass quickly. Real legislation takes time.
Unfortunately this doesn’t seem to address the “takeoff” problem: the use of AI to build more-capable AI, the creation of autonomous AI systems that can develop self-protection drives (see Omohundro 2008), etc.
AI systems should not be allowed to control economic resources until alignment is solved. As it stands, if a major company were to turn over its management to an autonomous AI system, there’s a good chance that’s game over for humans – including the humans who made that decision.
The safety problems of autonomous AI systems able to (for instance) obtain their own resources or optimize their own code have been known since long before GPTs or deepfakes were a thing.
Unfortunately “AI safety” has largely been coopted to mean “stop humans from using deepfakes to bully or deceive other humans” rather than “stop fully-automated corporations from taking over the economy and running the planet with even less humane ethics even than human-run corporations do.”
(Think selfishness or greed are a problem today? Consider a megacorp run by an entity that literally has no other drives but to protect and expand itself, thinks billions of times faster than any human board of directors, and cannot die. Say what you like about Bill Gates, he at least seems to enjoy curing diseases.)
Plenty of unemployed AI ethics folks around to ask.
They’re unemployed for a reason. They’re a cult and not actually doing anything worthwhile.
Being pro privacy is now being part of a cult? Projecting much?
AI ethics people aren’t about privacy.
They’re running around pretending there is some imminent technological singularity that’s going to wipe out humanity and we have to stop it before it happens.
I have no issue with privacy, but AI has very little to do with privacy beyond “don’t let the government track you”.
but AI has very little to do with privacy beyond “don’t let the government track you”.
lol
AI doesn’t collect your data. Companies and governments do.
Good grief dude…
The National Institute of Standards and Technology (NIST) will be responsible for developing standards to “red team” AI models before public release, while the Department of Energy and Department of Homeland Security are directed to address the potential threat of AI to infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.
The rules will be developed by agencies with relevant expertise.
Those agencies don’t have relevant experience and this will largely be guided by shitty upper level breauricratic types.
breauricratic
I do not trust your assessment of their expertise.
Cheekiness aside, there are plenty of people with tons of tech expertise working in the federal apparatus. Let’s hope they’re put on this project.
From experience with their results in a similar field: no.
This is the best summary I could come up with:
President Joe Biden signed an executive order providing rules around generative AI, ahead of any legislation coming from lawmakers.
Several government agencies are tasked with creating standards to protect against the use of AI to engineer dangerous biological materials, establish best practices around content authentication, and build advanced cybersecurity programs.
The National Institute of Standards and Technology (NIST) will be responsible for developing standards to “red team” AI models before public release, while the Department of Energy and Department of Homeland Security are directed to address the potential threat of AI to infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.
Developers of large AI models like OpenAI's GPT and Meta's Llama 2 are required to share safety test results with the US government.
It also orders government agencies to provide guidance to landlords, federal benefits programs, and federal contractors on how to prevent AI from exacerbating discrimination.
These were later turned into a series of agreements between the White House and several AI players, including Meta, Google, OpenAI, Nvidia, and Adobe.
The original article contains 555 words, the summary contains 168 words. Saved 70%. I’m a bot and I’m open source!