Article from: Art News
“Our model generates an image given a text input and an optional scene layout,” reads the report. “As demonstrated in our experiments, by conditioning over the scene layout, our method provides a new form of implicit controllability, improves structural consistency and quality, and adheres to human preference.”
“To realize AI’s potential to push creative expression forward, people should be able to shape and control the content a system generates,” reads the blog post. “It should be intuitive and easy to use so people can leverage whatever modes of expression work best for them.”
Researchers used human evaluators to assess images created by the AI program. “Each was shown two images generated by Make-A-Scene: one generated from only a text prompt, and one from both a sketch and a text prompt,” explains the post.
Evaluators judged the image generated from both a sketch and a text prompt to be better aligned with the text description 66.3 percent of the time, the post said. The sketch is optional, however; users who prefer not to draw can generate images from a text prompt alone.
“I was prompting ideas, mixing and matching different worlds,” noted Anadol. “You are literally dipping the brush in the mind of a machine and painting with machine consciousness.”
Meta program manager Andy Boyatzis, meanwhile, “used Make-A-Scene to generate art with his young children of ages two and four,” according to the post. “They used playful drawings to bring their ideas and imagination to life.”
Since the report’s release, Meta has increased the maximum resolution of Make-A-Scene’s output fourfold, to 2,048-by-2,048 pixels. The company also plans to provide open-access demos but has not yet announced a release date. Until then, those curious about the developing technology will have to wait until October, when the project will be discussed at the European Conference on Computer Vision in Tel Aviv, Israel.