
Meta’s Make-a-Scene tool gives artists fine control over the output of the generative AI

Meta has announced a new tool that gives artists greater creative control over the kind of AI image generation found in platforms like OpenAI’s Dall-E 2 and Google’s Imagen.

Meta says that to realize AI’s potential, people should be able to shape and control the content a system generates in an intuitive, easy-to-use way.

Meta’s new AI research concept is called Make-A-Scene. Make-A-Scene is a multimodal generative AI method that puts creative control in the hands of people who use it by allowing them to describe and illustrate their vision through both text descriptions and freeform sketches. Like other generative AI models, Make-A-Scene learns the relationship between visuals and text by training on millions of example images.

Make-A-Scene can generate a scene layout from text-only prompts if the user chooses. Its combined freeform sketch and text inputs, however, enable a greater level of creative control over AI-generated images. By sketching a cat, for example, you can decide its size, which way it’s facing, and the shape and length of its tail.

Several artists have praised the tool and the control it offers. These include Sofia Crespo, Scott Eaton, Alexander Reben, and Refik Anadol — all of whom have experience using state-of-the-art generative AI.

The prototype tool is not just for people with a penchant for art. Meta believes it could help anyone better express themselves, including people without artistic skill sets.

At present, however, the tool is only available on a limited basis to Meta employees, who are testing it and providing feedback on their experience with Make-A-Scene. Meta aims to provide broader access to its research demos in the future, giving more people the opportunity to take control of their own creations and unlock entirely new forms of expression.

You can catch an oral presentation of the tool at this year’s ECCV conference, held in Tel Aviv on October 23–27, 2022, or read the accompanying paper.
