Do Virtual Humans Have Personalities?

Posted 2 years ago

The 2022 World Artificial Intelligence Conference carries the theme "Intelligent Connected World, Unbounded Origin", focusing on core Metaverse technologies such as virtual natives, digital twins, and spatial computing. The main venue for the opening ceremony featured a 360-degree stage designed to resemble an extended reality headset.

Walking into the exhibition hall, the most striking sight is a row of warm, polite virtual humans. Powered by voice interaction and image recognition, they are adept at receiving visitors: some excel at solving practical problems, diligent and meticulous; others excel at expression and communication, with lively personalities and a love of literature and art. At the entrance of the hall, avatars with distinct personalities welcomed guests, each in their own way. Their expressions are noticeably more natural, and their expression-control algorithms have clearly improved over previous years.

Virtual human technology is indeed maturing. Data show that China has more than 388,000 virtual human-related companies, nearly 70% of them established within the past year; the industry has entered an explosive growth period. However, the virtual human market still faces challenges, such as the difficulty of capturing micro-expressions and the difficulty of creating a truly personified virtual human.

Micro-Expression Rendering

Virtual human technology comprises four modules: character image, animation generation, voice generation, and audio-video synthesis and display. The most important aspect of the virtual human's image is the design of its facial expressions. Humans coordinate facial expressions and intonation through the brain to express emotion, but a virtual human must follow a preset script. When an interaction goes beyond the scripted content, its expressions no longer come across naturally, and problems appear in dubbing, mouth shape, and intonation.
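The four-module breakdown above can be sketched as a simple pipeline. This is only an illustration of how the stages hand data to one another; every class and function name here is a hypothetical placeholder, not the API of any real virtual human framework.

```python
from dataclasses import dataclass

# Illustrative sketch of the four-module virtual human pipeline.
# Real systems would plug in dedicated rendering and TTS engines here.

@dataclass
class Frame:
    image: bytes  # rendered character animation for this line
    audio: bytes  # synthesized speech for this line

def character_image(persona: str) -> str:
    """Character image module: pick an avatar design for the persona."""
    return f"avatar:{persona}"

def animation(avatar: str, text: str) -> bytes:
    """Animation generation module: facial/mouth motion for the text."""
    return f"{avatar}|motion|{text}".encode()

def voice(text: str) -> bytes:
    """Voice generation module: synthesize speech for the text."""
    return f"tts|{text}".encode()

def synthesize(image: bytes, audio: bytes) -> Frame:
    """Audio-video synthesis module: mux animation and speech for display."""
    return Frame(image=image, audio=audio)

def render_line(persona: str, text: str) -> Frame:
    """Run one scripted line through all four modules in order."""
    avatar = character_image(persona)
    return synthesize(animation(avatar, text), voice(text))
```

The point of the sketch is the strict ordering: image design feeds animation, animation and voice are produced separately, and synthesis only muxes their outputs.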

Therefore, how to infer the emotional orientation of the current dialogue from context and semantics, match the virtual human's expression and tone to the current scene, and carefully read the human's expression from the conversation context and images in order to render appropriate emotion has become a focus of virtual human research. In the past, face-capture technology was used to drive changes in the virtual human's face and mouth shape; the whole process was complicated and costly in time, manpower, and materials. Developers and content creators urgently need lower-cost, easier-to-use tools that lower the threshold and cost of virtual human production. Now, relying on AI technologies such as natural language processing and speech recognition, virtual humans can largely be taught to identify the user's emotional changes during a dialogue and render the correct emotional state in time.
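The idea of inferring emotional orientation from a dialogue turn and falling back to context can be shown with a toy sketch. A real system would use a trained NLP sentiment model; the keyword lists and expression names below are placeholder assumptions for illustration only.

```python
# Toy sketch: infer the emotional orientation of a dialogue turn and pick
# a matching expression for the virtual human to render.

POSITIVE = {"great", "thanks", "love", "happy"}
NEGATIVE = {"problem", "angry", "bad", "sorry"}

def infer_emotion(utterance: str, context: list[str]) -> str:
    """Score the current utterance; on a tie, fall back to recent context."""
    words = set(utterance.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score == 0 and context:
        return infer_emotion(context[-1], context[:-1])
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Hypothetical mapping from inferred emotion to a rendered expression.
EXPRESSIONS = {"positive": "smile", "negative": "concerned", "neutral": "attentive"}

def select_expression(utterance: str, context: list[str]) -> str:
    return EXPRESSIONS[infer_emotion(utterance, context)]
```

The fallback to context is the key step: a neutral-sounding reply like "okay" should inherit the emotional tone of the turns before it rather than resetting the face to a blank expression.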

In addition, because different application scenarios for virtual humans differ greatly, smooth scene transitions are not possible, and the ability to monetize derived IP is relatively weak. Whenever the application scenario changes, the virtual human model must be rebuilt, its image redesigned, and its physical movements retrained, at huge cost. At present, development costs, terminal costs, and experience costs are all high, limiting the immersive experience; functionally, AI interaction scenarios are narrow, and multi-turn dialogue inevitably runs into awkward dead ends; in marketing terms, virtual idols' monetization ability does not match the difficulty of their development, which still relies mainly on persona-driven marketing and lacks truly dazzling technological innovation.

Lack of Personality

Virtual humans are core members of the metaverse, which simulates the physical world on one hand and human society on the other. A virtual human is a virtual figure with a human shape, but what we actually expect is more than an imitation of human appearance, movement, speech, and behavior: we hope the virtual human has human-like perception and cognition, and can express a personality of its own. "A thousand people, a thousand faces" is easy to say; appearance can be shaped with animation technology, but personality requires massive amounts of personalized data for the virtual human to learn from via models, relying on algorithms such as reinforcement learning and incremental learning. Although virtual humans are already widely used in fields such as digital anchoring, virtual social networking, intelligent diagnosis and treatment, and ergonomics, they have no "personality" and still serve only as an expressive medium.
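To make the incremental learning idea concrete, here is a minimal sketch of a personality profile updated online as new personalized interaction data arrives. The trait names and the exponential-moving-average update rule are assumptions chosen for illustration, not a production personality model.

```python
# Minimal sketch of incremental ("online") learning of a personality profile:
# each observed trait value nudges the stored estimate toward it, so the
# persona adapts gradually as more personalized data streams in.

class PersonaModel:
    def __init__(self, lr: float = 0.1):
        self.lr = lr  # learning rate: how strongly a new observation pulls
        self.traits = {"warmth": 0.5, "liveliness": 0.5, "formality": 0.5}

    def update(self, observation: dict[str, float]) -> None:
        """Move each observed trait estimate a small step toward the new value."""
        for trait, value in observation.items():
            if trait in self.traits:
                old = self.traits[trait]
                self.traits[trait] = old + self.lr * (value - old)

model = PersonaModel()
for obs in [{"warmth": 1.0}, {"warmth": 1.0}, {"liveliness": 0.0}]:
    model.update(obs)
```

Unlike retraining from scratch, this style of update never needs the full history of interactions, which is what makes incremental learning attractive for a long-lived virtual human.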
