At NVIDIA GTC 2022, DeepBrain AI’s Chief Technology Officer (CTO), Kyung-Soo Chae, presented a session titled “Toward Real-Time Audiovisual Conversation with Artificial Humans.” The lip-sync video synthesis technology DeepBrain AI introduced during the event enables real-time video- and voice-based communication between AI and humans.
NVIDIA GTC, a global technology conference held online from March 21 to 24, drew companies from around the world. Participants took part in more than 500 live sessions covering data science, high-performance computing, deep learning AI, robotics, and other fields, showcasing their latest technological advances and discussing related agendas with attendees.
The new technology DeepBrain AI introduced can generate high-resolution lip-sync video at four times real-time speed through the company’s own artificial neural network architecture. DeepBrain AI also announced research results showing that applying NVIDIA’s deep learning inference optimization SDK cut synthesis time to one-third.
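The announcement does not describe DeepBrain AI’s pipeline in detail, but the SDK it refers to is presumably TensorRT, NVIDIA’s deep learning inference optimizer. The sketch below is a hypothetical illustration of that general workflow: a lip-sync generator exported to ONNX is compiled into a TensorRT engine with FP16 enabled. The model name, file paths, and build flags are assumptions for illustration, not DeepBrain AI’s actual code.

```python
# Hypothetical sketch: compiling an exported lip-sync generator (ONNX) into a
# TensorRT engine with FP16 enabled. Model name and paths are illustrative only.
import tensorrt as trt

ONNX_PATH = "lipsync_generator.onnx"    # hypothetical exported model
ENGINE_PATH = "lipsync_generator.plan"  # serialized TensorRT engine

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

# Parse the ONNX graph into a TensorRT network definition.
with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse ONNX model")

# Enable FP16 kernels where the GPU supports them; reduced precision is one
# common source of inference speedups on NVIDIA hardware.
config = builder.create_builder_config()
if builder.platform_has_fast_fp16:
    config.set_flag(trt.BuilderFlag.FP16)

# Build and save the optimized engine for low-latency inference at runtime.
serialized_engine = builder.build_serialized_network(network, config)
with open(ENGINE_PATH, "wb") as f:
    f.write(serialized_engine)
```

At serving time, the saved engine would be deserialized and run with TensorRT’s runtime API; the actual speedup depends on the model architecture, precision settings, and GPU.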
DeepBrain AI also took part in a panel discussion on “Digital Human and Convergent AI,” presenting its AI human technology and current business status. The session was followed by a real-time Q&A, during which the company walked through its technology use cases and future business plans.
DeepBrain AI CTO Kyung-Soo Chae said, “The area that takes the most time and money in implementing AI humans is video and voice synthesis. With its unique technology, DeepBrain AI has reduced high-quality video synthesis time to one-twelfth of real time.”
DeepBrain AI is regarded as one of the top three global companies in interactive AI. Built on its deep learning-based video synthesis and voice synthesis core technologies, the company’s AI Human solution has attracted attention for creating virtual humans capable of real-time two-way communication.