Twitter still isn’t sure how it should deal with the looming threat of deepfakes. So, it’s asking its users for help.
The company is classifying deepfakes as synthetic and manipulated media, which it defines as “any photo, audio, or video that has been significantly altered or fabricated in a way that intends to mislead people or changes its original meaning.”
Twitter detailed some ideas on how to deal with deepfakes, such as warning users before they retweet or like tweets containing deepfakes. The company also floated the idea of sharing news articles or other links from third-party sources explaining why the media is believed to be manipulated. The latter policy idea sounds reminiscent of Facebook’s fact-checking program.
The survey asks users for their opinions on the above policy proposals as well as their thoughts on other solutions. It takes about five minutes to complete and is open for responses until Nov. 27.
Deepfakes are videos and images that have been altered using artificial intelligence. Some have used the technology to create funny videos, inserting actors into movies they did not originally appear in. Bad actors, on the other hand, have used deepfakes to create nonconsensual pornographic videos by swapping a victim’s face with an adult entertainer’s.
A report from last year detailing the U.S. Defense Department’s efforts to counter deepfakes showed just how seriously nefarious uses of the technology are being treated. The deepfake video BuzzFeed created last year also helped sound the alarm on how exactly this technology could be used for political disinformation.
With the 2020 election season upon us, tech companies like Twitter are trying to get ahead of disinformation campaigns and avoid a repeat of the fake news that spread rampantly during the 2016 elections.