Do you think you can pick Sailor Moon out of a crowd of anime characters? Many people probably could. It might take most people less than a second to spot their favorite animated character. But it’s a different story for facial recognition.
As you may have noticed from sites like Facebook and Google Photos, facial recognition systems have gotten much better at recognizing human faces over the years. Telling Doraemon apart from Bugs Bunny, though, has proven more difficult for machines.
Baidu’s streaming video platform is aiming to change that. iQiyi, often called the Netflix of China, has been researching the use of facial recognition for the (mostly) two-dimensional heroes of its shows. At the end of June, a team of researchers from the streaming site and Beijing’s Beihang University published a paper introducing an animated character facial recognition dataset called iCartoonFace.
The dataset contains over 400,000 images of more than 5,000 animated characters. It includes not only human-like figures, but also animals and monsters.
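In practice, a labeled dataset like this is consumed as pairs of face images and character identities for training a classifier. The sketch below shows roughly what that looks like; the folder layout, the annotations.csv file, and its column format are hypothetical placeholders for illustration, not iCartoonFace’s actual structure.

```python
# Minimal sketch of loading a cartoon-face identification dataset.
# NOTE: the folder layout and "annotations.csv" schema below are
# hypothetical, not the actual iCartoonFace format.
import csv
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset


class CartoonFaceDataset(Dataset):
    """Yields (face image, character-ID label) pairs for training a classifier."""

    def __init__(self, root, transform=None):
        self.root = Path(root)
        self.transform = transform
        self.samples = []  # list of (image path, integer character id)
        # Assumed annotation file: one "filename,character_id" row per face crop.
        with open(self.root / "annotations.csv", newline="") as f:
            for filename, char_id in csv.reader(f):
                self.samples.append((self.root / "images" / filename, int(char_id)))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        image = Image.open(path).convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        return image, label
```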
But telling animated characters apart isn’t as simple as it might sound. With their varied colors and textures, the process is actually more complex for machines than differentiating human faces. And while most characters have human-like features, those features are often exaggerated.
Part of the challenge is that a character can look different depending on the show or universe. Different artists also bring their own flair to a character. Consider Ash and Pikachu in the film Pokémon: Mewtwo Strikes Back—Evolution, which was released on Netflix in February. The new film uses computer graphics, while the original anime from 23 years ago used hand-drawn animation.
Researchers have tried to create similar face detection systems before. Despite these efforts, animated character recognition is still in its infancy, iQiyi said.
92% accuracy
But progress is still being made. During a competition last month involving several Chinese universities, teams from Zhejiang University and Sun Yat-sen University created a character recognition algorithm with 92% accuracy.
If facial recognition for animation is so challenging, though, why even bother?
iQiyi seems hopeful about the potential benefits. The company says the technology could be used for things like automatic editing, filming, advertisement recommendations and computer-aided modeling. iQiyi also currently offers it as a feature on the platform, allowing users to identify characters.
There are potentially billions of reasons for this kind of investment in animation-related tech. One estimate from Research and Markets puts the value of the global animation industry at USD 250 billion. The report estimates the industry in China was worth RMB 200 billion (USD 28.6 billion) last year. The animation industry is also booming as a result of the COVID-19 pandemic, as it’s one form of entertainment that can be produced remotely.
iQiyi has also been working on other artificial intelligence projects. It’s been developing machine learning programs that could help with editing footage or casting actors. The casting tool works by matching role descriptions with actors’ previous performances, but it’s only used for supporting roles right now.
Facial recognition could also help fans skip to specific scenes in which their favorite actors appear. Marketers, meanwhile, might find it useful for gauging how popular a performer is by calculating their time on screen.
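The screen-time idea boils down to counting how many sampled frames a recognized face appears in. Here is a minimal sketch under that assumption; the input structure, the sampling rate, and the character names are hypothetical, and the article does not describe how iQiyi would actually implement this.

```python
# Sketch: estimate per-character screen time from per-frame recognition output.
# Assumes a recognizer has already labeled each sampled frame; the input
# structure and sampling rate here are hypothetical, for illustration only.
from collections import Counter


def screen_time_seconds(detections_per_frame, frames_per_second=1.0):
    """detections_per_frame: list where each element is the set of character
    names recognized in one sampled frame. Returns seconds on screen per name."""
    counts = Counter()
    for names in detections_per_frame:
        counts.update(set(names))  # count each character at most once per frame
    return {name: n_frames / frames_per_second for name, n_frames in counts.items()}


# Example: three frames sampled at one frame per second.
frames = [{"Pikachu", "Ash"}, {"Pikachu"}, {"Ash"}]
print(screen_time_seconds(frames))  # {'Pikachu': 2.0, 'Ash': 2.0}
```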