Brad Smith, the president of Microsoft, has said that his biggest concern around artificial intelligence is deepfakes: realistic-looking but false content.
In a speech in Washington aimed at addressing the issue of how best to regulate AI, which went from wonky to widespread with the arrival of OpenAI’s ChatGPT, Smith called for steps to ensure that people know when a photo or video is real and when it is generated by AI, potentially for nefarious purposes.
“We’re going to have to address the issues around deepfakes. We’re going to have to address in particular what we worry about most: foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians,” he said.
“We need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI.”
Smith also called for licensing for the most critical forms of AI with “obligations to protect security, physical security, cybersecurity, national security”.
“We will need a new generation of export controls, at least the evolution of the export controls we have, to ensure that these models are not stolen or not used in ways that would violate the country’s export control requirements,” he said.
For weeks, lawmakers in Washington have struggled with what laws to pass to control AI even as companies large and small have raced to bring increasingly versatile AI to market.
Last week, Sam Altman, the CEO of OpenAI, the Microsoft-backed startup behind ChatGPT, told a Senate panel in his first appearance before Congress that the use of AI to interfere with election integrity is a “significant area of concern”, adding that it needs regulation.