A couple of years ago, I became fascinated by the possibility of animating any motion picture in a style of my choice. Starting from AI post-processing and post-production, I was eager to deepen my research into next-frame prediction and, eventually, film production, no matter how computationally intensive that might be, even for a tiny GIF.
The few papers published in this area discussed beautiful theories that were challenging to implement even on dedicated parallel GPUs.
That changed very recently, when OpenAI brought wonderful contributions with CLIP, which connects text and images, packaged a couple of months ago into DALL-E. The text can span several words, sometimes describing complex concepts that DALL-E is capable of abstracting and translating into meaningful images; the same holds for Ryan Murdock's application of CLIP, which I used initially. For example, asking it to interpret "art will save the world" resulted in the following image of an artistic arm stretching towards Earth. And asking for a painting of Paris by Salvador Dalí returned the following.
With a few tweaks to the algorithm, and after many iterations over the parameters, seeding and re-seeding, I was able to make it visualize, or paint, a poem by William Blake. Without further ado:
Exciting times for AI and its creative capabilities, which may be disrupting the film industry sooner than ever.
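For the curious, the seeding and re-seeding loop I described can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the actual pipeline: `clip_similarity` and `generate` are stand-ins (a plain cosine similarity and a seeded random-vector generator) for CLIP's image/text encoders and a real image generator such as the one behind Ryan Murdock's application.

```python
import random

def clip_similarity(image_vec, text_vec):
    # Stand-in for CLIP's image-text score: cosine similarity between
    # embedding vectors. In the real pipeline, both vectors would come
    # from CLIP's image and text encoders.
    dot = sum(a * b for a, b in zip(image_vec, text_vec))
    norm_i = sum(a * a for a in image_vec) ** 0.5
    norm_t = sum(b * b for b in text_vec) ** 0.5
    return dot / (norm_i * norm_t)

def generate(seed, dim=8):
    # Stand-in for an image generator: maps a random seed to an
    # "image embedding". A real setup would render an actual image here.
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(dim)]

def best_seed(text_vec, seeds):
    # The re-seeding loop: generate from many seeds and keep whichever
    # output the (stand-in) CLIP scores as closest to the text prompt.
    scored = [(clip_similarity(generate(s, len(text_vec)), text_vec), s)
              for s in seeds]
    return max(scored)[1]
```

In practice, each seed here would correspond to one full generation run, and the score would decide which result to keep or to refine further.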