President Biden issues executive order on generative AI that seems vague and toothless
- Today, President Biden issued an executive order related to the rapid growth of generative AI.
- The order attempts to put guardrails in place against national security risks, misinformation, and other problems common to AI systems.
- However, the order doesn't touch on significant issues such as IP theft, nor does it impose strict regulations.
Given how quickly generative AI has become a significant part of our lives, it's easy to think governments were caught with their collective pants down. There are almost no regulations on the books to limit the reach of AI, and most serious discussions about what to do with it only began in the past year.
Today, we have an executive order from the desk of President Joe Biden that attempts to address this. The order, which has not yet been released to the public, is an effort by the government to put up some guardrails around generative AI systems such as ChatGPT and Google Bard. In place of the complete order, the Biden administration has issued a fact sheet summarizing its major points. Check the previous link to see it for yourself.
While we won’t summarize the full sheet here, we want to highlight a few things we saw. Please understand that since this is based on the fact sheet and not the complete order, some nuance may have been lost:
- Unfortunately, because of the limits of an executive order, everything in it applies only to federal agencies and their dealings with other organizations. It doesn't restrict a private company working with other private companies, for example, which considerably weakens everything in the order.
- The best aspect of the order is that it requires developers working on AI systems that could pose threats to national security, economic security, or public health to notify the government during the model training phase. This makes perfect sense to us and should eventually be an easy regulation to codify outside of the executive order.
- The National Institute of Standards and Technology (NIST) and the Department of Homeland Security will work on standards and practices for "red teaming" AI services. "Red teaming" is when hackers attack a system or try to make it do something "bad" in a controlled environment, so that malicious exploitation of that system can be prevented. However, the fact sheet's language is vague here, so it's not clear how big a deal this is.
- Although many of the most prominent AI companies — including OpenAI and Google — have already voluntarily agreed to implement watermarking systems for generative AI content, the fact sheet suggests Biden’s order also pushes for this. Unfortunately, the order does not appear to mandate anything or offer systems that might work for this purpose; it just suggests it should be done.
- Although the order mentions concerns about user privacy, it doesn't mandate, or even suggest, any specific protections.
- Conspicuously missing from the fact sheet is any mention of intellectual property theft from AI scraping, copyright, data transparency, or preventing generative AI systems from creating copycat works that could be misconstrued as coming from an actual creator.
In brief, this order is a sweeping political gesture. It shows that the Biden administration understands that AI needs regulation but stops short of actually doing much about it. Of course, for proper regulation to happen, laws will need to pass through Congress, not come from executive orders off the president's desk. Given the tumultuous state of Congress at the moment, that isn't likely to happen soon, making this order all we've got for now.