
How to spot deepfakes? Look at light reflection in the eyes

University at Buffalo computer scientists have developed a tool that automatically identifies deepfake photos by analyzing light reflections in the eyes.

The tool proved 94% effective in experiments described in a paper accepted at the IEEE International Conference on Acoustics, Speech and Signal Processing to be held in June in Toronto, Canada.

“The cornea is almost like a perfect semisphere and is very reflective,” says the paper’s lead author, Siwei Lyu, PhD, SUNY Empire Innovation Professor in the Department of Computer Science and Engineering. “So, anything that is coming to the eye with a light emitting from those sources will have an image on the cornea.

“The two eyes should have very similar reflective patterns because they’re seeing the same thing. It’s something that we typically don’t notice when we look at a face,” says Lyu, a multimedia and digital forensics expert who has testified before Congress.

The paper, “Exposing GAN-Generated Faces Using Inconsistent Corneal Specular Highlights,” is available on the open access repository arXiv.

Co-authors are Shu Hu, a third-year computer science PhD student and research assistant in the Media Forensic Lab at UB, and Yuezun Li, PhD, a former senior research scientist at UB who is now a lecturer at the Ocean University of China’s Center on Artificial Intelligence.

Tool maps face, examines tiny differences in eyes

When we look at something, the image of what we see is reflected in our eyes. In a real photo or video, the reflections on the eyes would generally appear to be the same shape and color.

However, most images generated by artificial intelligence – including generative adversarial network (GAN) images – fail to render these reflections accurately or consistently, possibly because the fake face is synthesized from many combined photos.

Lyu’s tool exploits this shortcoming by spotting tiny deviations in reflected light in the eyes of deepfake images.

To conduct the experiments, the research team obtained real images from Flickr Faces-HQ, as well as fake images from http://www.thispersondoesnotexist.com, a repository of AI-generated faces that look lifelike but are indeed fake. All images were portrait-like (real people and fake people looking directly into the camera with good lighting) and 1,024 by 1,024 pixels.

The tool works by mapping out each face. It then examines the eyes, followed by the eyeballs and lastly the light reflected in each eyeball. It compares, in fine detail, potential differences in shape, light intensity and other features of the reflected light.
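The comparison step described above can be sketched in a simplified form. The snippet below is a minimal illustration, not the authors' implementation: it assumes the face-mapping and eye-cropping steps have already been done, and it reduces the comparison to one hypothetical measure – the overlap (intersection over union) of the brightest pixels in each eye crop, where matched highlights score near 1.0 and mismatched ones near 0.0. The function names and the threshold are illustrative choices.

```python
import numpy as np

def highlight_mask(eye_gray, thresh=0.9):
    """Binary mask of the specular highlight: the brightest pixels in a
    grayscale eye crop (values in [0, 1])."""
    return eye_gray >= thresh * eye_gray.max()

def highlight_similarity(left_eye, right_eye):
    """Intersection-over-union of the two highlight masks. Consistent
    reflections (as in a real photo) give a score near 1.0; displaced or
    mismatched reflections give a score near 0.0."""
    m1, m2 = highlight_mask(left_eye), highlight_mask(right_eye)
    inter = np.logical_and(m1, m2).sum()
    union = np.logical_or(m1, m2).sum()
    return inter / union if union else 0.0

# Toy example: identical highlights vs. a displaced one.
real_l = np.zeros((32, 32)); real_l[10:14, 10:14] = 1.0
real_r = real_l.copy()                                   # same pattern
fake_r = np.zeros((32, 32)); fake_r[20:24, 5:9] = 1.0    # shifted highlight

print(highlight_similarity(real_l, real_r))  # 1.0
print(highlight_similarity(real_l, fake_r))  # 0.0
```

In practice the eye crops would first have to be aligned to a common coordinate frame, and the paper compares shape and intensity features beyond a single brightness threshold; this sketch only conveys the core idea that the two corneas should reflect the same scene.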

‘Deepfake-o-meter,’ and commitment to fight deepfakes

While promising, Lyu’s technique has limitations.

For one, the technique requires a reflected source of light in both eyes. Mismatched reflections can also be corrected by editing the image. Additionally, the technique looks only at the individual pixels reflected in the eyes – not the shape of the eye, the shapes within the eyes, or the nature of what’s reflected in the eyes.

Finally, the technique compares the reflections within both eyes. If the subject is missing an eye, or the eye is not visible, the technique fails.

Lyu, who has researched machine learning and computer vision projects for over 20 years, previously proved that deepfake videos tend to have inconsistent or nonexistent blink rates for the video subjects.

In addition to testifying before Congress, he assisted Facebook in 2020 with its deepfake detection global challenge, and he helped create the “Deepfake-o-meter,” an online resource to help the average person test to see if the video they’ve watched is, in fact, a deepfake.

He says identifying deepfakes is increasingly important, especially given a hyper-partisan world full of race- and gender-related tensions and the dangers of disinformation, particularly disinformation that incites violence.

“Unfortunately, a big chunk of these kinds of fake videos were created for pornographic purposes, and that (caused) a lot of … psychological damage to the victims,” Lyu says. “There’s also the potential political impact, the fake video showing politicians saying something or doing something that they’re not supposed to do. That’s bad.”
