On Thursday, December 12, Meta announced the release of Meta Motivo, a new artificial intelligence model designed to control the movements of human-like digital agents. The model aims to enhance the Metaverse experience by enabling more lifelike avatar interactions.
Meta has invested tens of billions of dollars in AI, augmented reality, and other Metaverse technologies, spending that has pushed its 2024 capital expenditure forecast to a record $37 billion to $40 billion.
Committed to an open approach, Meta has been making many of its AI models freely available to developers, hoping this strategy will lead to better tools that enhance its services.
"This research has the potential to create fully embodied agents for the Metaverse, revolutionize character animation, democratize access to NPCs, and unlock new immersive experiences," Meta stated.
Meta Motivo addresses common challenges in digital avatar body control, allowing avatars to perform more natural, human-like movements.

Additionally, Meta unveiled a new language model called the Large Concept Model (LCM), which predicts high-level ideas or concepts rather than individual tokens. LCM represents these concepts as multilingual and multimodal sentence embeddings.
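To make that distinction concrete, here is a minimal sketch of concept-level prediction, where a model operates on whole-sentence embedding vectors instead of token IDs. The encoder and predictor below are toy stand-ins of my own, not Meta's published LCM architecture or API.

```python
import zlib
import numpy as np

# Hedged sketch: a token-level LM predicts the next token ID, while a
# concept-level model (in the spirit of LCM) predicts the embedding of
# the next *sentence*. All names and sizes here are assumptions.

EMBED_DIM = 1024  # assumed size of a multilingual sentence embedding

def embed_sentence(sentence: str) -> np.ndarray:
    """Stand-in for a multilingual/multimodal sentence encoder:
    maps a whole sentence to one fixed-size vector (a 'concept')."""
    seed = zlib.crc32(sentence.encode("utf-8"))
    return np.random.default_rng(seed).standard_normal(EMBED_DIM)

def predict_next_concept(history: list[str]) -> np.ndarray:
    """Toy concept predictor: average the embeddings of prior
    sentences. A real LCM would use a learned sequence model, and a
    decoder would map the predicted vector back into a sentence."""
    return np.stack([embed_sentence(s) for s in history]).mean(axis=0)

history = [
    "Meta released a new AI model.",
    "It controls the movements of digital agents.",
]
next_concept = predict_next_concept(history)
print(next_concept.shape)  # (1024,): one vector per predicted sentence,
                           # rather than one token ID at a time
```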
Meta has also introduced Video Seal, an AI tool that embeds invisible but traceable watermarks into videos to help protect and attribute content.
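For readers unfamiliar with invisible watermarking, the toy sketch below shows the general idea: a low-amplitude pseudorandom pattern keyed to a secret is added to each frame, then detected later by correlation. This is a generic illustration under my own assumptions, not Video Seal's actual algorithm or API.

```python
import numpy as np

KEY = 42          # secret key shared by the embedder and the detector
STRENGTH = 2.0    # perturbation amplitude; tiny next to 0-255 pixels

def keyed_pattern(shape: tuple[int, int], key: int = KEY) -> np.ndarray:
    """Pseudorandom +/-1 pattern derived from the secret key."""
    return np.random.default_rng(key).choice([-1.0, 1.0], size=shape)

def embed(frame: np.ndarray) -> np.ndarray:
    """Add the keyed pattern to one grayscale frame (float pixels)."""
    return frame + STRENGTH * keyed_pattern(frame.shape)

def detect(frame: np.ndarray) -> bool:
    """Correlate the mean-removed frame with the keyed pattern; a
    score near STRENGTH indicates the watermark is present."""
    centered = frame - frame.mean()
    score = float((centered * keyed_pattern(frame.shape)).mean())
    return score > STRENGTH / 2

frame = np.random.default_rng(0).uniform(0.0, 255.0, size=(256, 256))
print(detect(embed(frame)), detect(frame))  # expected: True False
```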