US Vice President Kamala Harris has told top Big Tech CEOs, including Microsoft Chairman and CEO Satya Nadella, Alphabet and Google CEO Sundar Pichai, and Sam Altman, CEO of OpenAI (of ChatGPT fame), that the private sector has an ethical, moral, and legal responsibility to ensure the safety and security of its products.
In her meeting with the CEOs of four US companies at the forefront of AI innovation, which US President Joe Biden also briefly joined, she stressed that in order to realise the benefits that might come from advances in AI, "it is imperative to mitigate both the current and potential risks AI poses to individuals, society, and national security".
“These include risks to safety, security, human and civil rights, privacy, jobs, and democratic values,” the White House said in a statement late on Thursday.
“Every company must comply with existing laws to protect the American people. I look forward to the follow through and follow up in the weeks to come,” Harris said.
She told the CEOs that advances in technology have always presented opportunities and risks, and generative AI is no different.
“AI is one of today’s most powerful technologies, with the potential to improve people’s lives and tackle some of society’s biggest challenges. At the same time, AI has the potential to dramatically increase threats to safety and security, infringe civil rights and privacy, and erode public trust and faith in democracy,” she stressed.
Meanwhile, the White House also announced more funding and policy guidance for developing responsible AI.
“We’re investing an additional $140 million to stand up seven new National AI Research Institutes. That will bring the total to 25 National AI Research Institutes across the country, with half a billion dollars of funding to support responsible innovation that advances the public good,” said the Biden administration.