Deepfake Videos of Johnson and Corbyn and Future Democracy
Vignesh Subbaian (Author) | Published: Nov 13, 2019 13:58 IST
Two deepfake videos of Johnson and Corbyn show how such fabrications could wreak havoc on democracy.
People around the world, and in the UK especially, were confused by two videos in which Prime Minister Boris Johnson and his opponent Jeremy Corbyn each ask people to vote for the other. Only by the end of each video do viewers learn that it is a deepfake. Something similar could play spoilsport in the 2020 US elections.
Bill Posters, the man behind the two videos, says they were made to warn people about the threat of high-tech video manipulation, and to remind the government that it has yet to bring in any law to protect liberty and democracy, despite the Digital, Culture, Media, and Sport Committee's recommendations after the Cambridge Analytica revelations more than three years ago.
What do the two deepfake videos say?
The first video opens with Boris Johnson beginning a stump speech about Brexit. About 20 seconds in, he earnestly endorses the Labour leader of the opposition, Jeremy Corbyn, as the right person for the prime minister's chair. The second video shows Jeremy Corbyn endorsing Boris Johnson as the rightful candidate.
Future Advocacy has claimed responsibility for the deepfake videos, and Bill Posters is the same artist behind the recent deepfake videos of Mark Zuckerberg and Kim Kardashian, which went viral worldwide. Areeq Chowdhury, head of the think tank at Future Advocacy, said that deepfakes are a clear and present danger to democracy and society: they undermine trust in audiovisual content and can be used to fuel misinformation.
How are deepfake videos made?
Deepfakes get their name from deep learning, a sub-field of artificial intelligence used to create fake videos of a target individual. A computer is fed training data and a set of instructions in the form of algorithms, and it learns from those inputs to mimic the person's facial expressions, mannerisms, voice, and inflections. This enables anyone with enough video and audio of a target individual to fake a video and make them appear to say anything.
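A common face-swap architecture behind such videos pairs one shared encoder with a separate decoder per person: the encoder learns features common to both faces, and each decoder learns to reconstruct one specific face. The sketch below is a toy illustration of that training loop only; real systems use deep convolutional networks trained on thousands of face images, whereas here the "faces" are random 16-dimensional vectors, each network is a single linear layer, and all names, sizes, and data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT, LR = 16, 4, 0.01

# Stand-ins for face datasets of two people (synthetic, for illustration).
faces_a = rng.normal(size=(64, DIM))
faces_b = rng.normal(size=(64, DIM))

enc = rng.normal(scale=0.1, size=(DIM, LATENT))    # shared encoder
dec_a = rng.normal(scale=0.1, size=(LATENT, DIM))  # decoder for person A
dec_b = rng.normal(scale=0.1, size=(LATENT, DIM))  # decoder for person B

def loss(faces, dec):
    # Mean squared reconstruction error: encode, decode, compare.
    recon = faces @ enc @ dec
    return float(np.mean((recon - faces) ** 2))

init_loss = loss(faces_a, dec_a)

for step in range(500):
    # Alternate between the two people; the encoder is shared,
    # each decoder only ever sees its own person's faces.
    for faces, dec in ((faces_a, dec_a), (faces_b, dec_b)):
        z = faces @ enc          # encode into the shared latent space
        err = z @ dec - faces    # reconstruction error for this person
        # Plain gradient descent on the mean squared error.
        dec -= LR * (z.T @ err) / len(faces)
        enc -= LR * (faces.T @ (err @ dec.T)) / len(faces)

final_loss = loss(faces_a, dec_a)

# The "face swap": encode person A's face, decode with B's decoder,
# producing B's appearance driven by A's input.
swapped = faces_a @ enc @ dec_b
print(init_loss, final_loss)
```

After training, reconstruction error falls, and feeding person A's encoding through person B's decoder is the swap step that makes one person appear to say another's words.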