Google's Motion Prompting Controls Video Generation Through Motion Trajectories

January 4, 2025

The Motion Prompting research from Google DeepMind controls video generation through motion trajectories. Its core objective is to design a generative system that uses motion information to produce dynamic video content in a more flexible and controllable way.

The approach trains trajectory-based conditional video generation models and uses point trajectories to represent motion (a minimal data-structure sketch follows this list):

  • This flexible motion representation can encode single or multiple point trajectories.
  • It can describe the motion of specific objects or of the entire scene.
  • It supports occluded and temporally sparse motion sequences.
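
To make the representation concrete, here is a minimal, illustrative sketch of what a point-trajectory motion prompt might look like in code. The names (MotionPrompt, single_point_drag) and the array layout are assumptions made for illustration, not the paper's actual data format.

```python
# A minimal sketch of a point-trajectory motion representation
# (names and shapes are illustrative, not the paper's actual format).
from dataclasses import dataclass

import numpy as np


@dataclass
class MotionPrompt:
    """A set of 2D point trajectories over a fixed number of video frames.

    tracks:  float array of shape (num_points, num_frames, 2) holding
             (x, y) pixel coordinates for each tracked point per frame.
    visible: bool array of shape (num_points, num_frames); False marks
             frames where a point is occluded or simply not specified,
             which is how sparse or partial motion can be expressed.
    """

    tracks: np.ndarray
    visible: np.ndarray

    def __post_init__(self) -> None:
        assert self.tracks.ndim == 3 and self.tracks.shape[-1] == 2
        assert self.visible.shape == self.tracks.shape[:2]


def single_point_drag(start_xy, end_xy, num_frames: int) -> MotionPrompt:
    """Build a one-point prompt that drags a single location along a line."""
    start = np.asarray(start_xy, dtype=np.float32)
    end = np.asarray(end_xy, dtype=np.float32)
    alphas = np.linspace(0.0, 1.0, num_frames, dtype=np.float32)[:, None]
    track = start[None, :] + alphas * (end - start)[None, :]
    return MotionPrompt(
        tracks=track[None, ...],                       # (1, num_frames, 2)
        visible=np.ones((1, num_frames), dtype=bool),  # visible in every frame
    )


# Example: drag a point from (100, 200) to (160, 200) over 24 frames.
prompt = single_point_drag((100, 200), (160, 200), num_frames=24)
print(prompt.tracks.shape, prompt.visible.shape)  # (1, 24, 2) (1, 24)
```

The visibility mask is what lets the same structure cover whole-scene dense tracks, a handful of object points, or a single dragged point with gaps.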

Method steps

  1. Train trajectory-based conditional video generation models.

  2. Use motion prompts to guide the trained model to generate the target behaviors (a rough workflow sketch follows these steps).
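
To make the two steps concrete, here is a rough, hypothetical sketch of how sparse trajectories could be rasterized into a per-frame conditioning signal for a trajectory-conditioned video generator. It reuses the MotionPrompt sketch above; rasterize_tracks and generator.sample are illustrative names, and the actual model architecture, trajectory encoding, and training objective are not specified in this summary.

```python
# A minimal sketch of the two-step workflow, with hypothetical names; the
# real conditioning layers, encodings, and losses are not described here.
import numpy as np


def rasterize_tracks(prompt, height: int, width: int) -> np.ndarray:
    """Render point trajectories into a per-frame conditioning volume.

    Returns an array of shape (num_frames, height, width) in which each
    visible point is splatted onto its (x, y) location for that frame.
    A real system would likely use richer encodings (e.g. Gaussian
    heatmaps or track-identity channels); one channel keeps the sketch simple.
    """
    num_points, num_frames, _ = prompt.tracks.shape
    cond = np.zeros((num_frames, height, width), dtype=np.float32)
    for p in range(num_points):
        for t in range(num_frames):
            if not prompt.visible[p, t]:
                continue  # occluded / unspecified frames contribute nothing
            x, y = prompt.tracks[p, t]
            xi, yi = int(round(float(x))), int(round(float(y)))
            if 0 <= yi < height and 0 <= xi < width:
                cond[t, yi, xi] = 1.0
    return cond


# Step 1 (training): a conditional video generator learns
#     video ~ G(first_frame, conditioning)
# from pairs of videos and trajectories tracked in them.
#
# Step 2 (inference): a user-specified motion prompt is rasterized the same
# way and fed to the trained model to steer the generated motion, e.g.:
#
#     cond = rasterize_tracks(single_point_drag((100, 200), (160, 200), 24),
#                             height=288, width=512)
#     video = generator.sample(first_frame, cond)   # hypothetical API
```

The key design point the sketch tries to capture is that training and inference share the same trajectory-to-conditioning pathway, so any motion a user can express as point tracks becomes a prompt the model can follow.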

Comparative analysis
