
The White House Puts New Guardrails on Government Use of AI

The US government issued new rules Thursday requiring more caution and transparency from federal agencies using artificial intelligence, saying they are needed to protect the public as AI rapidly advances. But the new policy also has provisions to encourage AI innovation in government agencies when the technology can be used for the public good.

The US hopes to emerge as an international leader with its new regime for government AI. Vice President Kamala Harris said during a news briefing ahead of the announcement that the administration plans for the policies to “serve as a model for global action.” She said that the US “will continue to call on all nations to follow our lead and put the public interest first when it comes to government use of AI.”

The new policy from the White House Office of Management and Budget will guide AI use across the federal government. It requires more transparency as to how the government uses AI and also calls for more development of the technology within federal agencies. The policy sees the administration trying to strike a balance between mitigating risks from deeper use of AI—the extent of which is not known—and using AI tools to address existential threats like climate change and disease.

The announcement adds to a string of moves by the Biden administration to embrace and restrain AI. In October, President Biden signed a sweeping executive order on AI that fosters expansion of AI tech by the government but also requires those who make large AI models to give the government information about their activities, in the interest of national security.

In November, the US joined the UK, China, and members of the EU in signing a declaration that acknowledged the dangers of rapid AI advances but also called for international collaboration. That same week, Harris revealed a nonbinding declaration on military use of AI, signed by 31 nations. It sets up rudimentary guardrails and calls for the deactivation of systems that engage in “unintended behavior.”

The new policy for US government use of AI announced Thursday asks agencies to take several steps to prevent unintended consequences of AI deployments. To start, agencies must verify that the AI tools they use do not put Americans at risk. For example, for the Department of Veterans Affairs to use AI in its hospitals, it must verify that the technology does not give racially biased diagnoses. Research has found that AI systems and other algorithms used to inform diagnosis or decide which patients receive care can reinforce historic patterns of discrimination.

If an agency cannot guarantee such safeguards, it must stop using the AI system or justify its continued use. US agencies face a December 1 deadline to comply with these new requirements.

The policy also asks for more transparency about government AI systems, requiring agencies to release government-owned AI models, data, and code, as long as the release of such information does not pose a threat to the public or government. Agencies must publicly report each year how they are using AI, the potential risks the systems pose, and how those risks are being mitigated.

The new rules also require federal agencies to beef up their AI expertise, mandating that each appoint a chief AI officer to oversee all AI used within that agency. It’s a role that focuses on promoting AI innovation while also watching for its dangers.

Officials say the changes will also remove some barriers to AI use in federal agencies, a move that may facilitate more responsible experimentation with AI. The technology has the potential to help agencies review damage following natural disasters, forecast extreme weather, map disease spread, and control air traffic.

Countries around the world are moving to regulate AI. The EU reached political agreement in December on its AI Act, a measure that governs the creation and use of AI technologies, and the European Parliament formally adopted it earlier this month. China, too, is working on comprehensive AI regulation.

