ByteDance unveils OmniHuman-1, a new AI model for realistic deepfake videos
ByteDance has introduced OmniHuman-1, an AI model that creates realistic deepfake videos from a single image and an audio clip. The model generates full-body animations that synchronize gestures and facial expressions with the audio, surpassing earlier deepfake systems in realism. The company released test videos, including AI-generated TED Talks and a talking Albert Einstein. OmniHuman-1 was trained on roughly 19,000 hours of human motion data, allowing it to produce clips of arbitrary length and adapt to a range of input signals. As deepfake technology advances, detection tools from companies like Google and Meta are struggling to keep pace. These tools aim to flag synthetic content, but misuse of deepfakes continues to raise concerns about harassment and fraud.