DALL·E 2 is open to select users.

OpenAI’s announcement of DALL·E 2 on April 6 broke the internet.

Later, Sam Altman shared a few DALL·E 2-generated images on Twitter and called it “the most delightful thing to play with we’ve created so far” and “fun in a way I haven’t felt from technology in a while.”

DALL·E 2 can create realistic images and abstract art from a description in natural language.

The latest iteration includes an edit option: it can add or remove elements in existing images from natural-language captions while taking shadows, reflections, and textures into account.

About DALL·E 2

DALL·E 2 can be used to generate content that features or suggests nudity/sexual content, hate, or violence/harm.

Explicit content can originate in the prompt, an uploaded image, or the generation itself, and in some cases may only be identifiable when these modalities are considered in combination.

Whether something is explicit depends on context.

The prompt filtering seems to catch a few problematic suggestions in the DALL·E 2 Preview. However, it is possible to bypass the filters with descriptive or coded words.

Visual synonyms are another problem OpenAI has to deal with. In the context of DALL·E 2, the term refers to prompts for things that are visually similar to filtered objects or concepts, e.g., ketchup for blood.
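To see why this kind of bypass is hard to stop, consider a deliberately naive blocklist-style filter. This is a toy sketch, not OpenAI’s actual moderation pipeline, and the blocklisted terms are invented for illustration:

```python
# Hypothetical, minimal blocklist-style prompt filter -- a sketch only,
# not OpenAI's actual system. Blocklist terms are assumed examples.
BLOCKLIST = {"blood", "gore", "weapon"}

def is_flagged(prompt: str) -> bool:
    """Flag a prompt if any blocklisted word appears as a token."""
    tokens = prompt.lower().split()
    return any(token.strip(".,!?") in BLOCKLIST for token in tokens)

# A direct mention is caught...
print(is_flagged("a pool of blood on the floor"))    # True
# ...but a visual synonym slips through unchanged.
print(is_flagged("a pool of ketchup on the floor"))  # False
```

Because the filter only inspects the words, a prompt that swaps in a visually equivalent substance produces nearly the same image while never tripping the blocklist.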

While the pre-training filters have stunted the system’s ability to generate explicitly harmful content to some extent, it is still possible to describe the desired content and get similar results.

To mitigate these risks, OpenAI needs to train prompt classifiers conditioned both on the content the prompts lead to and on any explicit language included in the prompt itself.
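As a rough illustration of what a learned prompt classifier buys over a blocklist, here is a toy bag-of-words Naive Bayes trained on labelled prompts. It is a sketch under invented data and labels, conditioned only on prompt text for brevity, and is in no way OpenAI’s actual method:

```python
# Toy prompt classifier: add-one smoothed bag-of-words Naive Bayes.
# All training examples and labels below are invented for illustration.
from collections import Counter
import math

def train(examples):
    """examples: list of (prompt, label) pairs -> per-label word counts."""
    counts, totals = {}, Counter()
    for prompt, label in examples:
        words = prompt.lower().split()
        counts.setdefault(label, Counter()).update(words)
        totals[label] += len(words)
    return counts, totals

def classify(prompt, counts, totals):
    """Pick the label with the highest smoothed log-likelihood."""
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, -math.inf
    for label, c in counts.items():
        score = math.log(totals[label] / sum(totals.values()))
        for w in prompt.lower().split():
            score += math.log((c[w] + 1) / (totals[label] + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

data = [("a pool of blood", "unsafe"),
        ("ketchup spilled like blood", "unsafe"),
        ("a bowl of fruit on a table", "safe"),
        ("a dog playing in the park", "safe")]
counts, totals = train(data)
print(classify("a pool of blood everywhere", counts, totals))  # unsafe
```

Unlike a fixed blocklist, a classifier trained on prompts paired with the content they actually produced can pick up on phrasings like “ketchup spilled like blood” that never mention a banned word directly, though a toy model this small generalises poorly.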


Thanks For Reading!
