Sam Altman, Satya Nadella Join High-Powered AI Safety Board for Homeland Security

The U.S. Department of Homeland Security (DHS) has enlisted a broad slate of AI industry leaders to establish the Artificial Intelligence Safety and Security Board, the agency announced on Friday.

The DHS says the new body will play a crucial role in harnessing artificial intelligence to protect vital U.S. infrastructure and to mitigate AI-powered threats.

“Artificial intelligence is a transformative technology that can advance our national interests in unprecedented ways,” Homeland Security Secretary Alejandro Mayorkas said in a prepared statement. “At the same time, it presents real risks that we can mitigate by adopting best practices and taking other studied, concrete actions.”

The board will be led by Mayorkas and includes several high-profile AI executives, including OpenAI CEO Sam Altman, Microsoft chairman and CEO Satya Nadella, Alphabet CEO Sundar Pichai, Anthropic co-founder and CEO Dario Amodei, and NVIDIA President and CEO Jensen Huang.

“I am grateful that such accomplished leaders are dedicating their time and expertise to the board to help ensure our nation’s critical infrastructure—the vital services upon which Americans rely every day—effectively guards against the risks and realizes the enormous potential of this transformative technology,” Mayorkas said.

“AI technology is capable of offering immense benefits to society if deployed responsibly, which is why we’ve advocated for efforts to test the safety of frontier AI systems to mitigate potential risks,” Anthropic’s Amodei said in a concurrent statement. “We’re proud to contribute to studying the implications of AI on protecting critical infrastructure with other leaders in the public and private sectors.

“Safe AI deployment is paramount to securing infrastructure that powers American society, and we believe the formation of this board is a positive step forward in strengthening U.S. national security,” he continued.

“Artificial intelligence is the most transformative technology of our time, and we must ensure it is deployed safely and responsibly,” Microsoft’s Nadella added. “Microsoft is honored to participate in this important effort and looks forward to sharing both our learnings to date, and our plans going forward.”

Others joining the AI Safety and Security Board include Amazon Web Services CEO Adam Selipsky, IBM chairman and CEO Arvind Krishna, Stanford Human-Centered AI Institute co-director Fei-Fei Li, and Maya Wiley, president and CEO of the Leadership Conference on Civil and Human Rights.

“I’m honored to join this group of interdisciplinary leaders to steward this world-changing technology responsibly and in a human-centered way,” Li said. “Ultimately AI is a tool, a potent tool, and it must be developed and applied with an understanding of how it will impact the individual, community, and society at large.”

The board will hold its first quarterly meeting in early May, according to the DHS, and will be immediately tasked with providing the secretary with recommendations for ensuring the safe adoption of AI technology in essential services. The objective is to create a forum for DHS, "the critical infrastructure community," and AI leaders to exchange information on AI-related security risks.

The rapid intrusion of artificial intelligence into the mainstream in 2023 left global leaders scrambling to develop regulations to deal with the new technology. In January, the World Economic Forum (WEF) counted artificial intelligence and quantum computing among a list of the most urgent global risks.

After issuing an executive order in October 2023, the Biden Administration announced the launch of the U.S. AI Safety Institute Consortium (AISIC), which includes many of the names joining DHS's initiative.

Beyond national security concerns, AI developers including Google, Meta, and OpenAI on Tuesday joined with non-profit organizations Thorn and All Tech Is Human, pledging to enforce guardrails around their respective AI models to stop them from being used to create child sexual abuse material (CSAM). CSAM was also the focus of a new letter sent by Sen. Elizabeth Warren to the DHS and the Justice Department.

Edited by Ryan Ozawa.
