Researchers at Chinese gaming giant NetEase published a paper detailing a new machine learning method that enables players to create in-game characters from a selfie, Synced reports.
Why it matters: The researchers present their work as a way to streamline the often laborious character customization process in contemporary RPGs.
- As game developers look to set their games apart through deeper personalization, the added immersion of translating one’s own face into an avatar could also prove valuable.
Details: Titled “Face-to-Parameter Translation for Game Character Auto-Creation” and posted September 3 to the preprint server arXiv, the paper describes how a deep generative network can translate a portrait photo into a game character rendered in the style of the target game engine.
- To improve accuracy and allow further manual customization, the method reconstructs a bone-driven 3D face model rather than the 3D face meshes produced by previous approaches (a rough code sketch of the overall pipeline follows below).
- In addition to photographs, the generator also works with sketches.
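At a high level, the paper’s approach trains a differentiable “imitator” network to mimic the game engine’s character renderer, then searches for the facial parameters whose rendered face best matches features extracted from the input selfie. The sketch below is not the authors’ code: the parameter count, network sizes, feature extractor, and loss are simplified placeholder assumptions meant only to illustrate the optimize-parameters-through-a-learned-renderer idea.

```python
# Illustrative sketch only: a learned "imitator" stands in for the game
# engine's renderer, and bone-driven facial parameters are optimized so the
# rendered face matches features of the input selfie. All sizes/losses are
# placeholder assumptions, not the paper's actual configuration.
import torch
import torch.nn as nn

N_PARAMS = 208   # assumed number of continuous facial parameters
IMG = 64         # reduced render resolution for this sketch


class Imitator(nn.Module):
    """Differentiable stand-in for the game engine's character renderer."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(N_PARAMS, 256 * 4 * 4)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),  # 4 -> 8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, params):
        x = self.fc(params).view(-1, 256, 4, 4)
        return self.deconv(x)


class FeatureExtractor(nn.Module):
    """Placeholder for a pretrained face-feature network (e.g. face recognition)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, img):
        return self.net(img)


def fit_parameters(selfie, imitator, extractor, steps=200, lr=0.05):
    """Gradient-search facial parameters whose rendered face matches the selfie."""
    raw = torch.zeros(1, N_PARAMS, requires_grad=True)   # optimized in logit space
    target = extractor(selfie).detach()
    opt = torch.optim.Adam([raw], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        params = torch.sigmoid(raw)            # keep parameters in (0, 1)
        rendered = imitator(params)            # differentiable "render"
        loss = nn.functional.l1_loss(extractor(rendered), target)
        loss.backward()
        opt.step()
    return torch.sigmoid(raw).detach()


if __name__ == "__main__":
    selfie = torch.rand(1, 3, IMG, IMG)        # stand-in for a preprocessed selfie
    best = fit_parameters(selfie, Imitator(), FeatureExtractor())
    print(best.shape)  # (1, N_PARAMS): values handed to the game's character creator
```

Because the imitator is differentiable, the parameter search can run end-to-end by gradient descent; in practice the imitator and the feature networks would be pretrained, and the recovered parameters drive the bone-based character model inside the engine.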
Context: Following the recent backlash against deepfake app Zao for its excessive data collection, it is unclear how wider audiences will respond to a big tech company soliciting this type of personal information.
- The technology has already been used over one million times by Chinese gamers.
- NetEase isn’t the first in the entertainment industry to explore the potential of artificial intelligence—in August, researchers at Baidu’s iQiyi created a facial recognition dataset based on anime characters.
– This article originally appeared on TechNode.