To establish “new standards” for AI safety and security, U.S. President Joe Biden has issued an executive order (EO) requiring companies developing foundation AI models to notify the federal government and share the results of all safety tests before their models are released to the public.
Propelled by platforms like ChatGPT and the foundation AI models created by OpenAI, the rapidly expanding generative AI movement has sparked a global debate over the safeguards needed to curb the potential risks of granting too much power to machines. After the G7 leaders identified key issues for the so-called Hiroshima AI Process in May, the seven member nations have now agreed on guiding principles and a “voluntary” code of conduct for AI developers to follow.
This week, the United Kingdom is hosting a global summit on AI governance at Bletchley Park, where U.S. Vice President Kamala Harris is scheduled to speak, and the United Nations (UN) recently unveiled plans to establish a new committee to examine AI governance.
“Safe, secure, and trustworthy AI.”
Rather than pursuing legally binding measures, the Biden-Harris Administration has so far emphasized AI safety through “voluntary commitments” secured from major AI developers such as OpenAI, Google, Microsoft, Meta, and Amazon. However, that approach was always intended as a prelude to today’s executive order.
The order specifically states that developers of the “most powerful AI systems” must share their safety test results and related information with the U.S. government.
The directive notes that as AI develops, so will its ramifications for Americans’ safety and security; its stated goal is to “protect Americans from the potential risks of AI systems.”
While there is some room for interpretation, the order specifically targets any foundation model that could pose a risk to national security, economic security, or public health. This aligns the new AI safety and security standards with the Defense Production Act of 1950 and means they should apply to almost any foundation model under development.
The National Institute of Standards and Technology (NIST) is tasked with developing new standards “for extensive red-team testing” before deployment, and the order outlines plans for new tools and processes to ensure that AI is safe and trustworthy. This testing will span multiple domains; the Departments of Energy and Homeland Security, for example, will address AI-related threats to critical infrastructure.
The order also calls for several new guidelines and standards, including safeguards against the use of AI to engineer dangerous biological materials, protections against AI-enabled fraud and deception, and a cybersecurity program to develop AI tools that find and fix vulnerabilities in critical software.
Molars
Notably, the order does touch on fairness and civil rights, highlighting how AI can deepen prejudice and discrimination in healthcare, justice, and housing, as well as the risks AI poses around job displacement and workplace surveillance. But because much of the order consists of recommendations and guidelines, some may view it as lacking real teeth. For example, it pledges to promote fairness in the criminal justice system through the “development of best practices in employing AI for tasks such as sentencing, parole, probation, pretrial release, detention, risk assessments, surveillance, crime prediction, predictive policing, and forensic analysis, among other domains.”
And while the executive order does direct AI developers to build safety and security into their systems, how effectively it can be enforced without accompanying changes to the law remains unclear. Take data privacy: AI makes it far easier to extract and exploit individuals’ personal information at scale, something developers may be incentivized to do as part of their model training. Yet to protect Americans’ data, the executive order merely calls on Congress to pass “bipartisan data privacy legislation,” including increased federal funding for developing privacy-preserving AI techniques.
Even as Europe prepares to approve the first comprehensive AI rules, the rest of the world is still grappling with how to contain what is expected to be one of the biggest societal disruptions since the Industrial Revolution. How effective President Biden’s executive order proves in reining in companies like OpenAI, Google, Microsoft, and Meta remains to be seen.
[Source of Information: TechCrunch.com]