Deepfakes and the Ways for Recognizing Them (Part 1)


We have all seen the simplest forms of face-changing filters on social media platforms. Advanced digital face-editing techniques have been used in the movie industry for many years. Sometimes they combine AI with physical effects, but given the pace of advancement in AI and digital tools, digital methods may fully replace physical ones in the near future.

 

Nowadays, various kinds of face swapping and digital face alteration are labeled as deepfakes. Some deepfakes are created merely by editing speech. This evolving technology has raised serious concerns in recent years. Today's most common examples put the faces of celebrities onto adult-film actors or put jokes into politicians' mouths, but the prospect of important figures or legal representatives appearing to speak for others, creating and spreading false news, is no laughing matter.

 

 

Some fake videos are easier to recognize than others. If a video is only available in low resolution, that may be because the maker is hiding defects by offering it at a lower resolution. Commonly, the face or some parts of it are blurred, and when the face changes direction quickly you can see flickering or smearing. In others, the eye movements or blinks look unnatural, or there are odd shadows around the face. These subtler cues may require specialized software to detect.

 

Source: https://www.justaskgemalto.com
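To make the blur and flicker cues described above more concrete, here is a rough heuristic sketch in Python using OpenCV. It is not a real deepfake detector: it simply flags frames that look unusually blurry (low Laplacian variance) or change abruptly from the previous frame, and both thresholds and the file name are illustrative assumptions.

```python
# Rough heuristic sketch, not a real deepfake detector: it only measures two of
# the visual cues discussed above (blurriness and sudden frame-to-frame change).
import cv2
import numpy as np

def scan_video(path, blur_threshold=50.0, flicker_threshold=40.0):
    """Flag frames that look unusually blurry or that change abruptly."""
    cap = cv2.VideoCapture(path)
    prev_gray = None
    frame_idx = 0
    suspicious = []

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Variance of the Laplacian: low values indicate a blurry frame.
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

        # Mean absolute difference to the previous frame: a crude flicker cue.
        flicker = 0.0
        if prev_gray is not None:
            flicker = float(np.mean(cv2.absdiff(gray, prev_gray)))

        if sharpness < blur_threshold or flicker > flicker_threshold:
            suspicious.append((frame_idx, sharpness, flicker))

        prev_gray = gray
        frame_idx += 1

    cap.release()
    return suspicious

if __name__ == "__main__":
    for idx, sharp, flick in scan_video("sample.mp4"):  # hypothetical file name
        print(f"frame {idx}: sharpness={sharp:.1f}, flicker={flick:.1f}")
```

In practice, dedicated detection tools look at much richer signals (eye blinking patterns, lighting consistency, compression artifacts), but this illustrates how even simple image statistics can surface frames worth a closer look.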

 

In "deepfake", the word "deep" comes from deep learning, a branch of AI that relies on neural networks: machine learning systems loosely inspired by the human brain. Deep learning is used to solve complex problems in fields ranging from data analytics to human-level control systems. Deepfake algorithms, which form the basis of some video-editing software, can help create fake images and videos that are not easily distinguishable from real ones.
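As a rough illustration of how such algorithms work, the classic face-swap approach trains one shared encoder with two decoders, one per identity; feeding person A's encoded face through person B's decoder renders B's face with A's expression. The PyTorch sketch below shows only that idea; the layer sizes and image resolution are illustrative assumptions, not taken from any particular tool.

```python
# Minimal PyTorch sketch of the shared-encoder / two-decoder idea behind
# classic face-swap deepfakes. Layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # would be trained to reconstruct faces of person A
decoder_b = Decoder()  # would be trained to reconstruct faces of person B

# Training reconstructs each person with their own decoder; the swap happens
# when A's encoding is pushed through B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)   # placeholder 64x64 face crop
swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

Because the encoder is shared, it learns pose and expression features common to both people, while each decoder learns one person's appearance; that separation is what makes the swap possible.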

 

To make a deepfake video, the more pictures of the target you have, the better. Obviously, collecting such material is easiest for celebrities, which is why celebrities and prominent political figures are the most vulnerable. These days, various software packages offer this capability, and users with different levels of computer skill can use them, although creating more realistic results takes considerably more effort and expertise.
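The data-gathering step mentioned above usually means harvesting many face crops of the target from existing footage. Below is a hedged sketch of that step using OpenCV's bundled Haar cascade face detector; real tools use stronger detectors and face alignment, and the video file name is a hypothetical placeholder.

```python
# Hedged sketch of harvesting face crops of a target from a video with
# OpenCV's bundled Haar cascade. Real pipelines use stronger detectors
# and face alignment, but the principle is the same.
import os
import cv2

def extract_faces(video_path, out_dir="faces", every_nth=10, size=(64, 64)):
    os.makedirs(out_dir, exist_ok=True)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    cap = cv2.VideoCapture(video_path)
    saved = frame_idx = 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % every_nth == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Detect faces in the sampled frame and save resized crops.
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
                crop = cv2.resize(frame[y:y + h, x:x + w], size)
                cv2.imwrite(os.path.join(out_dir, f"face_{saved:05d}.png"), crop)
                saved += 1
        frame_idx += 1

    cap.release()
    return saved

if __name__ == "__main__":
    print(extract_faces("interview.mp4"))  # hypothetical input file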

 

 

What we should keep in mind is that in the world of media we cannot trust everything we see. Publishers may do whatever it takes to make their content more engaging or even shocking. Videos might look real, but if you look more closely you may find signs revealing their fakeness. Therefore, we need to be more careful about sharing content on sensitive issues, in order to prevent the spread of fake news and misinformation.

 

 

In the next part of this article, we will look at the most recent advancements in the detection of deepfake videos.
