Video is becoming the most influential channel for passing information. Unfortunately, bad actors know this too and have been using technology to spread Deepfake videos. So do not be surprised if you have unknowingly watched fake videos on social platforms believing they were genuine. Sounds scary, right?
Well, thank goodness a team of AI researchers from the University at Albany may soon help rate videos for authenticity. They have developed a system to combat Deepfake videos by analyzing unique fingerprints in the digital file. With it, users will be able to identify videos whose digital fingerprints have been altered.
The Rise of Deepfakes
Deepfakes, to be precise, are fake videos created by artificial-intelligence algorithms powered by deep neural networks. What makes these creations frightening is that the videos are astonishingly good and nearly impossible to distinguish from real footage.
Long before people became well informed about Photoshop, countless marriages, relationships, and friendships suffered at the hands of faked and cloned images. Now, with Deepfakes on the rise, the fear is that they could cause even more damage, not only in relationships but in security as well.
From a societal point of view, political scientists, ethicists, and above all AI researchers are deeply concerned that this rising technology could be used to influence political outcomes or spread misinformation even more effectively than fake news.
For now, all eyes are on the newly created detection tool as the main hope for combating this potentially harmful technology.
When Deepfake Programs Rule
Figures like Elon Musk have been campaigning for AI regulation. Given that, can we call Deepfake programs an abusive use of AI technology? Arguably, yes. You can agree that if machines can create convincing content on their own, they have the potential to do even more damage if left unchecked.
The program can merge images from different sources into a video, as well as compose scenes that map one person's likeness onto another's. As a political weapon, the technology could be used to spread false information by making influential people appear to misinform their followers, or to say things that discredit their integrity and credibility.
How Deepfake Programs Operate
The system works by analyzing a great number of images of the target, shot from varying angles, in different scenes, and with a range of facial expressions. From these it learns the specific features that are native to, and define, the targeted person.
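As a rough illustration only (the article does not detail the actual Deepfake code), learning a target's face can be thought of as distilling feature vectors from many images into one representation. Everything below, including the `embed_face` stand-in, is a hypothetical sketch:

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    # Stand-in for a deep network's face encoder: a real system would use
    # a trained neural network, not a simple slice of pixel values.
    return image.reshape(-1)[:128].astype(float)

def build_identity_profile(images: list) -> np.ndarray:
    # Average per-image features over many angles, scenes, and expressions
    # to get a single representation of the target's face.
    embeddings = np.stack([embed_face(img) for img in images])
    return embeddings.mean(axis=0)

# Toy usage: 50 random 16x16 "photos" standing in for images of one person.
rng = np.random.default_rng(0)
photos = [rng.random((16, 16)) for _ in range(50)]
profile = build_identity_profile(photos)
print(profile.shape)  # (128,)
```

The averaging step is only a conceptual placeholder; real face-swap pipelines train neural networks on such image collections rather than averaging features directly.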
Fortunately, Deepfakes as of now still have their flaws, and that is what the researchers are exploiting. The fakes carry certain patterns that can be revealed by another (in this case smarter) algorithmic system, which is able to point out the altered digital fingerprint of a video. One such loophole is that people in Deepfake videos blink less often than a normal human would: the generating neural networks have yet to master eye movements, so they fail to reproduce them. At other times, the fake videos blink inconsistently with other facial expressions.
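To make the blink-rate idea concrete, here is a minimal sketch of my own (not the Albany team's code) that flags a video as suspicious when blinks per minute fall well below a typical human rate. The eye-openness time series and the thresholds are assumptions for illustration:

```python
def count_blinks(eye_openness, threshold=0.2):
    # A blink is a transition from open (above threshold) to closed (below).
    blinks = 0
    was_open = True
    for value in eye_openness:
        if was_open and value < threshold:
            blinks += 1
            was_open = False
        elif value >= threshold:
            was_open = True
    return blinks

def looks_fake(eye_openness, fps=30, min_blinks_per_minute=8):
    # Humans typically blink roughly 15-20 times per minute; far fewer
    # suggests the generator never learned to reproduce blinking.
    minutes = len(eye_openness) / fps / 60
    rate = count_blinks(eye_openness) / minutes
    return rate < min_blinks_per_minute

# Toy usage: 60 seconds of video with only two brief blinks.
series = [1.0] * 1800
for start in (300, 1200):
    for i in range(start, start + 5):
        series[i] = 0.05
print(looks_fake(series))  # True (2 blinks per minute is suspiciously low)
```

In practice the eye-openness signal would come from a facial-landmark tracker run on each frame; the detector described in the article also weighs other cues, so a single threshold like this is only the simplest possible version of the idea.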
Who Will Win The Battle
Professor Siwei Lyu, one of the research leaders at Albany involved in creating the detection tool, says that beyond picking up fake blinking patterns, the new system can also detect other subtle physiological signals humans send that the fakes miss, a reliable one being a normal breathing rate.
However, the fake makers are not standing still; they are likewise becoming more sophisticated, and the researchers have reported as much. They say that as soon as they revealed the blinking inconsistency in Deepfakes, the trainers of these systems were quick to correct their algorithms, and it has since become harder to use blinking as a marker for tracking fakes. Hopefully the work of Lyu and his team, which is being supported by the Media Forensics program, will keep the upper hand over video fakes.
Continue at: https://sanvada.com/2018/10/24/artificial-intelligence-learns-to-combat-deepfakes/
The text above is owned by the site linked above. This is only a small part of the article; for more, please follow the link.