Adobe has taken the next step in the development of its products by integrating the Firefly Video Model, its generative neural network, into the Premiere Pro video editor. The tool, available in beta, lets users create videos from text prompts or static images, significantly streamlining the content creation process.
One of the most important features is Generative Extend, which lengthens a clip by a few seconds at the desired point, making it easier to patch over minor errors. The feature has limitations, however: it works only on 720p and 1080p footage and can extend video by a maximum of two seconds. Audio effects can be extended by up to 10 seconds, but this does not apply to spoken dialogue or music.
Adobe has also introduced two new tools for Firefly web users: Text-to-Video and Image-to-Video. These generate short clips of up to five seconds from a text description or an image. Although both tools are in limited beta, they open new possibilities for people working with video and animation, even if the output quality is not yet good enough to substitute for a full reshoot.
Adobe is actively improving its AI tools, promising faster content generation in the future and stating that the videos produced are “commercially safe” because Firefly is trained only on approved content.
Source: VG Times