Unveiling CM3leon: Meta Showcases a More Efficient AI Image Generation Model
July 14, 2023
Meta is advancing its research into generative AI with its latest creation, CM3leon, a multimodal foundation model for text-to-image and image-to-text generation. Unlike most current text-to-image systems, which rely on diffusion models, CM3leon uses a token-based autoregressive model. Although such models are typically more computationally expensive, Meta claims CM3leon is more efficient than its diffusion-based counterparts. The model is trained on licensed images from Shutterstock, sidestepping the legal complications of scraping images from the web. It then goes through a pre-training phase and a supervised fine-tuning stage, resulting in high-quality image generation with efficient use of resources. CM3leon is currently a research project, and Meta has released no details about whether it will be made publicly available.
What does it mean?
- Generative AI models: Types of artificial intelligence models that can generate new content. They learn patterns in data and can produce new content that resembles the data they were trained on.
- Multimodal foundation model: A type of model that can understand and generate different types of data such as text, images, or sounds.
- Diffusion models: Types of AI models used for generating images from text. They start from random noise and refine it through many small steps until it matches the desired output.
- Token-based autoregressive model: A type of AI model that generates one part of a sequence at a time (like words in a sentence or tokens representing patches of an image), with each part conditioned on the parts generated before it.
- Computationally expensive: A term used in the field of computer science to describe programs or operations that require a lot of computational resources, like time or memory.
- Pre-training phase: A phase in machine learning where a model is initially taught to understand data by learning from a large dataset before it is fine-tuned with specific examples.
- Supervised fine-tuning stage: A stage in machine learning where the model is further trained on specific tasks using labelled data.
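To make the "token-based autoregressive model" entry above concrete, here is a minimal sketch of the sampling loop such models use: each new token is drawn from a distribution conditioned on everything generated so far. The tiny vocabulary and hard-coded transition probabilities are purely illustrative assumptions; a real model like CM3leon learns these distributions from data and operates over image tokens as well as text.

```python
import random

# Toy vocabulary; a real model would have tens of thousands of tokens.
VOCAB = ["<start>", "a", "red", "cat", "<end>"]

def next_token_probs(prefix):
    # Hypothetical hard-coded transitions for illustration only.
    # A trained model would compute these probabilities from the
    # full prefix using a neural network.
    transitions = {
        "<start>": {"a": 1.0},
        "a": {"red": 0.5, "cat": 0.5},
        "red": {"cat": 1.0},
        "cat": {"<end>": 1.0},
    }
    return transitions[prefix[-1]]

def generate(max_len=10, seed=0):
    rng = random.Random(seed)
    tokens = ["<start>"]
    # Autoregressive loop: sample one token at a time, each
    # conditioned on the sequence generated so far.
    while tokens[-1] != "<end>" and len(tokens) < max_len:
        probs = next_token_probs(tokens)
        choices, weights = zip(*probs.items())
        tokens.append(rng.choices(choices, weights=weights)[0])
    return tokens

print(generate())
```

The same one-token-at-a-time loop is what distinguishes this family of models from diffusion models, which instead refine an entire image simultaneously over many denoising steps.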
Does reading the news feel like drinking from the firehose?
Do you want more curation and in-depth content?
Then, perhaps, you'd like to subscribe to the Synthetic Work newsletter.
Many business leaders read Synthetic Work, including:
CEOs
CIOs
Chief Investment Officers
Chief People Officers
Chief Revenue Officers
CTOs
EVPs of Product
Managing Directors
VPs of Marketing
VPs of R&D
Board Members
and many other smart people.
They are turning the most transformative technology of our times into their biggest business opportunity ever.
What about you?