DALL·E 2 is the latest model from OpenAI, capable of transforming anything you can mumble or type into high-quality illustrations, designs, and arguably art.
DALL·E, named after Salvador Dalí and Pixar's WALL·E, is a 12-billion-parameter version of GPT-3 trained to generate images from text descriptions using a dataset of text–image pairs.
DALL·E 2 shows substantial progress in handling abstract details, as illustrated below and in the paper.
The paper closes with a discussion of limitations and risks, but it focuses mainly on DALL·E 2's performance rather than on the consequences of its development, or on the fact that access is currently limited to a small group of users.
The paper does not mention the creative illustrators, designers, and artists who suddenly find themselves dispensable, with no prior warning.
I sympathise with them, yet this is the road being paved, and it is only the beginning.
Soon this technology will be built into our capture devices: cameras equipped with AI that enhances, AI that suggests, and AI that creates.
In these three scenarios, a CLIP-guided AI could stylize, edit, and reproduce pictures in near real time; suggest variations on the picture just taken, with several styling options; and, last but not least, create or edit illustrations and designs…
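To make "CLIP-guided" concrete: CLIP embeds text and images into a shared vector space, and a guided system steers generation toward candidates whose embeddings best match the prompt. The sketch below illustrates only that selection step with stand-in random vectors; the embedding function and 512-dimensional size are assumptions for illustration, not the actual CLIP API.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_best_variant(prompt_vec, variant_vecs):
    """Return the index of the candidate image embedding that best
    matches the prompt embedding, plus all similarity scores."""
    scores = [cosine(prompt_vec, v) for v in variant_vecs]
    return int(np.argmax(scores)), scores

# Stand-in embeddings: a real system would produce these with CLIP's
# text and image encoders; here they are just random 512-d vectors.
rng = np.random.default_rng(0)
prompt_vec = rng.normal(size=512)
variants = [rng.normal(size=512) for _ in range(4)]

best, scores = pick_best_variant(prompt_vec, variants)
print(f"best variant: {best}, score: {scores[best]:.3f}")
```

A real CLIP-guided pipeline uses this score not just to rank finished candidates but as a differentiable objective to nudge the generator during sampling; the ranking step above is the simplest form of the idea.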
What are your thoughts?
Where do you think we are heading?