Microsoft Unveils VASA, a Neural Network That Creates Hyper-Realistic Videos from a Single Photo and Audio Track
April 18, 2024 in AI News by Sofía González
Microsoft has introduced a groundbreaking neural network named VASA, capable of producing strikingly realistic videos from a single static portrait image and an accompanying audio track. The technology not only synchronizes lip movements with the speech but also reproduces vivid facial expressions and natural head motion, greatly enhancing the realism of the generated content.
VASA takes a static portrait photo and an audio file containing speech, and renders animated sequences that mimic real human behavior with remarkable accuracy. The system supports real-time online generation, producing 512x512-pixel video at up to 40 frames per second with minimal starting latency.
While the technology is not yet available for public testing, Microsoft has published a detailed research paper with numerous examples on its website. The release marks another significant step in artificial intelligence, signaling a future in which distinguishing videos of real people from AI-generated ones may become increasingly difficult.
The development of VASA is particularly exciting for marketers, as it holds potential for producing high-quality user-generated-content (UGC) creatives. The current landscape lacks adequate tools for such applications, making VASA's introduction a highly anticipated development in digital marketing and beyond.