By Elizabeth Culliford
(Reuters) - Social media platform Twitter (N:TWTR) on Monday unveiled its plan for handling deepfake videos and other manipulated media, and called for feedback from the public.
In the run-up to the U.S. presidential election in November 2020, social platforms have been under pressure to tackle the threat of manipulated media, including deepfakes, which use artificial intelligence to create realistic videos in which a person appears to say or do something they did not.
Twitter's new proposal, laid out in a blog post, said it might place a notice next to tweets sharing "synthetic or manipulated media," warn people before they like or share such tweets, or add a link to a news story showing why various sources think the media is synthetic or manipulated.
The company also said it might remove tweets with such media if they were misleading and could threaten physical safety or lead to other serious harm.
It proposed defining synthetic and manipulated media as any photo, audio or video that has been "significantly altered or fabricated in a way that intends to mislead people or changes its original meaning." This would cover both deepfakes and manually doctored "shallowfakes."
Twitter last year banned deepfakes in the context of intimate media: its policy prohibits images or videos that digitally superimpose an individual's face onto another person's nude body.
While there has not been a well-crafted deepfake video with major political consequences in the United States, the potential for manipulated video to cause turmoil was demonstrated in May by a clip of House Speaker Nancy Pelosi, manually slowed down to make her speech seem slurred.
After the Pelosi video, Facebook Inc (O:FB) Chief Executive Mark Zuckerberg was portrayed in a spoof video on Instagram in which he appears to say "whoever controls the data, controls the future." Facebook, which owns Instagram, did not take down the video.
In July, U.S. House of Representatives Intelligence Committee Chairman Adam Schiff wrote to the CEOs of Facebook, Twitter and Alphabet Inc's (O:GOOGL) Google asking for the companies' plans to handle the threat of deepfake images and videos ahead of the 2020 elections.
Twitter has opened its new proposal to public input until Nov. 27, through a survey and tweets using the hashtag #TwitterPolicyFeedback.
Last month, Amazon.com Inc's (O:AMZN) Amazon Web Services (AWS) said it would join Facebook and Microsoft Corp (O:MSFT) in their "Deepfake Detection Challenge," a contest to spur research into the area.