AI-generated images are now omnipresent online. While some are absolutely ridiculous, dubbed “Boomer Art” by Gen Z and Gen Alpha, others can fool even people paying close attention. That matters most when it comes to deepfakes, fabricated images of real people.
Counteracting the spread of these images is extremely important, as they are used to spread misinformation, attack political adversaries, and violate people. A possible way to spot even the most realistic deepfakes comes from a technique used in astronomy, although astronomy itself borrowed it from the statistics of wealth inequality. It’s called the Gini coefficient.
It is not a silver bullet, but this method provides us with a basis, a plan of attack. This is an arms race. It can help now in 2024, but if we do not keep up, the deepfakes of 2028 will have overcome this issue.
Prof Kevin Pimbblet
Before we dive into the stats, it’s important to discuss the approach the researchers used. The machine learning algorithms that make fake images are still not good at representing reality completely. Until a few months ago, even the best ones couldn’t replicate a realistic hand, and they still struggle to render the reflections you see in human eyes correctly.
Both images seem real, but the ‘stars in their eyes’ reveal that only Scarlett Johansson is real. The person on the right doesn’t exist.
Image Credit: Adejumoke Owolabi
When a person is photographed, ambient light and nearby objects are reflected in their eyes. In a genuine photo, the reflection in one eye is consistent with the reflection in the other, and that consistency can be assessed statistically; AI-generated faces often fail to reproduce it. The researchers do note that the approach is not perfect, though, as the statistical analysis of the light distributions can lead to the occasional mistake.
“It is not a silver bullet. There could be false positives or false negatives. It’s not going to get everything. But this method provides us with a basis, a plan of attack. This is an arms race. This can help now in 2024, but if we do not keep up, the deepfakes of 2028 will have overcome this issue,” Professor Kevin Pimbblet, from the University of Hull, told IFLScience.
The Gini coefficient is best known as a measure of wealth inequality: if a country has a dramatic gap between the wealth of its rich and its poor, then that country – the United States, for example – will have a high Gini coefficient. More generally, the number captures the inequality among the values of a frequency distribution, which makes it applicable to all sorts of data, including when you want a computer to work out the morphology of a galaxy – basically, what it looks like.
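To make that concrete, here is a minimal sketch of how a Gini coefficient can be computed for any list of values, whether incomes in a country, pixel fluxes in a galaxy image, or pixel brightnesses in a cropped eye. It follows the standard formula used in galaxy-morphology work; it is an illustration of the statistic itself, not the researchers’ actual code.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a set of non-negative values (e.g. pixel brightnesses).

    Uses the standard form from the galaxy-morphology literature:
    G = 1 / (mean * n * (n - 1)) * sum_i (2i - n - 1) * x_i,
    with the values sorted in increasing order.
    """
    x = np.sort(np.abs(np.asarray(values, dtype=float).ravel()))
    n = x.size
    if n < 2 or x.mean() == 0:
        return 0.0
    ranks = np.arange(1, n + 1)  # 1, 2, ..., n
    return float(np.sum((2 * ranks - n - 1) * x) / (x.mean() * n * (n - 1)))

# Perfect equality (every value identical) gives G = 0 ...
print(gini([5, 5, 5, 5]))      # 0.0
# ... while concentrating everything in one value gives G = 1.
print(gini([0, 0, 0, 100]))    # 1.0
```

Applied to the pixels of an eye’s reflection, a value near 0 means the light is spread evenly, while a value near 1 means it is concentrated in a few bright pixels. The idea is that a genuine photograph should give similar values for both eyes, whereas a deepfake often does not.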
“This is a way to analyze the morphology of galaxies. Traditionally, morphology was judged by eye. I hope it is obvious that the human eyeball is a fantastic device in a physics sense, but we are biased,” Professor Pimbblet told IFLScience. “What we really want is an unbiased way to quantify galaxy morphology and preferably one that makes very few assumptions.”
The Gini coefficient and other astronomy methods used in the classification of galaxies were tried in this project, but according to the preliminary work, only the Gini coefficient has proven good enough at identifying the eyes of deepfakes.
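As a rough illustration of how such a check could look in code, the sketch below (reusing the gini function from the earlier snippet) scores the bright reflection pixels in each eye and flags the face if the two scores disagree too much. The highlight threshold and tolerance are made-up placeholders for illustration, not values from Owolabi and Pimbblet’s analysis.

```python
# Builds on the gini() function defined in the previous sketch.

def reflection_gini(eye_patch, highlight_threshold=0.8):
    """Gini coefficient of the bright 'reflection' pixels in a cropped eye.

    eye_patch: 2-D NumPy array of greyscale values scaled to [0, 1].
    highlight_threshold: illustrative cut separating specular highlights
    from the rest of the eye; the study's real pre-processing may differ.
    """
    highlights = eye_patch[eye_patch >= highlight_threshold]
    return gini(highlights) if highlights.size else 0.0

def eyes_look_consistent(left_eye, right_eye, tolerance=0.15):
    """Return True if the two eyes' reflection statistics roughly agree.

    tolerance is an illustrative made-up value, not a published threshold.
    A large disagreement is a hint, not proof, that the face may be AI-generated.
    """
    return abs(reflection_gini(left_eye) - reflection_gini(right_eye)) <= tolerance
```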
The current research is part of a master’s project by Pimbblet’s student Adejumoke Owolabi and was presented at the National Astronomy Meeting this week. Owolabi and Pimbblet are now planning to submit a paper with the findings.