Microsoft has started rolling out Bing’s Image Creator in preview in select markets, preparing the AI art generator for a wider rollout to Microsoft Edge later this month. In a blog post and related video, the company showed how Image Creator will work and explained the limits it will place on user-generated prompts.
Microsoft said last week that it would bring AI art to Bing and Edge, using the advanced DALL-E 2 model to generate images. Image Creator will be accessible from Bing.com, with a related version arriving in Edge shortly thereafter. The company showed Image Creator working in the Edge sidebar, which opens a small vertical column displaying search results and other information alongside some useful utilities. This is where you can access the new image creator.
In a video, Microsoft showed how users can write a prompt using conventional terms such as art styles. In the video below, Image Creator returns four small results in just a few seconds, though it is unclear whether this will be representative of overall performance. It’s also unclear whether there will be some sort of credit system or other counter to limit prompt generation.
Similarly, Microsoft also showed off Image Creator running in Edge.
Microsoft’s approach here is more social: the example shown is of a user conceptualizing a “dream house” using Image Creator’s content creation tools, then sharing it on social media. Again, the images appeared within seconds, and four were generated.
Microsoft’s blog post implied that the AI art generation tools, running on the Azure cloud, will work similarly to other services such as Midjourney or DreamStudio. “We’ve found that in general Image Creator works best when you type a description of something, with additional context like location or the art style you want to mimic, as opposed to a more limited description,” said Microsoft.
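To make that guidance concrete, here is a minimal, purely illustrative sketch of how a prompt might be assembled from a subject plus the extra context Microsoft recommends. The function name and parameters are invented for this example and are not part of Bing’s actual interface, which simply takes free-text prompts:

```python
# Illustrative only: Image Creator accepts free-text prompts.
# This helper just shows the recommended structure — a subject
# plus added context such as a location and an art style,
# rather than a bare, minimal description.
def build_prompt(subject, location=None, style=None):
    parts = [subject]
    if location:
        parts.append(f"in {location}")
    if style:
        parts.append(f"in the style of {style}")
    return ", ".join(parts)

print(build_prompt("a dream house", location="a pine forest", style="watercolor"))
# → a dream house, in a pine forest, in the style of watercolor
```

The point is simply that richer, more specific prompts tend to produce better results than a lone noun like “house.”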
Microsoft will also use AI to filter requests, applying the same kind of signals that help Microsoft Defender filter out problematic websites, for example. These blocklists and classifiers will be used to “reduce the risk of offensive prompts being used,” Microsoft said.
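Microsoft hasn’t published how these blocklists and classifiers work internally, but as a rough, hypothetical sketch (the terms, names, and logic below are invented for illustration, not Microsoft’s actual implementation), a simple blocklist check on incoming prompts might look like this:

```python
# Hypothetical illustration of a prompt blocklist — real systems
# pair lists like this with ML classifiers rather than relying on
# substring matching alone.
BLOCKLIST = {"terrorism", "hate speech"}  # example terms only

def is_allowed(prompt: str) -> bool:
    """Reject a prompt if it contains any blocklisted term."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKLIST)

print(is_allowed("a watercolor dream house"))  # → True
```

In practice the classifier side is what catches offensive prompts that a static word list would miss.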
Interestingly, Microsoft is also applying additional technology to correct biases found in AI image generation. (Microsoft hasn’t specified what this means, although anecdotally some generic prompts seem to favor results with certain skin colors.)
“We take our commitment to responsible AI seriously,” Microsoft said. “To help prevent the delivery of inappropriate results on the Designer and Image Creator app, we are working with our partner OpenAI, who developed DALL∙E 2, to take the necessary steps and will continue to evolve our approach. We will regularly take the feedback we have and share it with OpenAI to improve the model and apply it to our own mitigation work.”
Microsoft said its generated images will be governed by its Content Policy, which prohibits depictions of child sexual abuse, non-consensual intimate activity, suicide, terrorism, hate speech, and more.