By Manuel Ruiz Dupont, consultant and trainer in real-time development processes at Pixelacademia
This article looks at the disruption AI will bring to the field of image production. Even if you are not in the business, you can easily follow it, because I illustrate my points with many visual examples (and I advise you to first watch the video of Mr. Villani’s speech in the Senate, in which he discusses the limits of AI).
What is AI?
There are several types of AI (ANI, AGI, ASI…), each with its own definition, but I will only deal with those that rely on a database managed by complex algorithms. There are also AIs without a pre-established database: the database is built at the moment the command is issued, or built up gradually (machine learning).
What do we do with AI?
Today, there are already many applications that work well with AI.
Creation of very high-quality images from text or a simple sketch
Image made with Google Colab (left) and Nvidia (right)
Ability to create music through a very simple interface or from text
Aiva interface, music creation software with AI
Automatic creation of a story (tales, in particular)
In the field of video games, the possibility of gameplay that is “premonitory” of player actions
The “neural state machine”, presented at SIGGRAPH 2019, is capable of learning and “predicting” the interactions between a character (avatar) and the scene from real-time motion-capture data.
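The actual system is a neural network, but the underlying idea of learning to predict the next interaction state from captured data can be illustrated with a toy sketch. Here is a frequency-based next-state predictor; everything in it (the class, the state names) is hypothetical and only illustrates the principle:

```python
from collections import Counter, defaultdict

class ToyStateMachine:
    """Toy stand-in for a learned state predictor: it counts observed
    state transitions in motion-capture-like sequences and predicts
    the most frequently observed successor."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def learn(self, sequence):
        # sequence: list of discrete states, e.g. ["walk", "reach", "sit"]
        for current, nxt in zip(sequence, sequence[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, state):
        # Return the most frequently observed next state, or None.
        counts = self.transitions.get(state)
        return counts.most_common(1)[0][0] if counts else None

predictor = ToyStateMachine()
predictor.learn(["walk", "reach", "sit", "idle"])
predictor.learn(["walk", "reach", "carry"])
print(predictor.predict("walk"))   # reach
```

The real model replaces the frequency table with a network that generalises to states it has never seen, which is what makes the SIGGRAPH demo impressive.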
There are also several fun applications on the market that rely exclusively on AI. Some are available for free on the Internet (Google Colab). They are usually programmed in Python, but you can use them without knowing how to program.
Seeing your face in place of an actor’s in a movie scene
Example of a deepfake with actor Tom Cruise (right)
The possibility of seeing your face in 50 years, or as a woman, a man, or of another origin
Image generated with the AI software Artbreeder
Added to this are the applications that are still in an embryonic stage of research.
Generation of 3D characters, with their animation, from a text prompt
HumanML3D, a forerunner of “textual” animation
Creation of 3D volumes from a single photo
And, of course, there are all the applications we would like AI to make possible today, such as automatic muscle rigging, or the creation of hyper-realistic garments that can be animated from a pattern or a simple photo, but for those we will have to wait a little longer.
AI in image production
Current AI software is not positioned as a solution to development problems. It took Unreal Engine more than 6 years, despite its huge resources, to build relatively acceptable interoperability with 3ds Max, Maya and Houdini, and it is still very weak with Nuke. Consequently, if we arbitrarily take 2022 as the year zero of AI, we will still have to wait before AI is actually applied in the image production chain. This will necessarily happen through plugins activated inside production software and, in parallel, through the birth of AI software that takes the needs of production software into account: AI will really take hold when it is accessible from the production software already installed.
Until now, if you worked in video games, you only had to master a few technical concepts (UVs, bones, normals, polygons, shaders) to produce, but the increase in machine power has created bridges with other sectors of activity (cinema in particular) that handle other concepts (fluids, hair, rendering, etc.). I believe that AI will initially deliver tools built on very powerful technical concepts yet easy to use thanks to simple interfaces, and that they will serve both the cinema and real-time professions.
Second, AI will also make it possible to quickly produce visual effects or gimmicks that used to take a long time to develop. These include the creation of graphic styles or the famous deepfakes (already possible, but the results are not yet precise or polished enough to be used in production).
Using various mathematical models for artistic style transfer: the Portland city skyline in the style of Van Gogh’s The Starry Night
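Style transfer of the kind shown above typically compares feature statistics through Gram matrices, following the approach introduced by Gatys et al. As a minimal NumPy sketch of that ingredient (the feature maps here are random placeholders, where a real system would take them from a convolutional network):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map:
    channel-to-channel correlations, which capture 'style'
    independently of where things are in the image."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

def style_loss(gen_features, style_features):
    """Mean squared difference between the two Gram matrices;
    minimising this pushes the generated image toward the style."""
    diff = gram_matrix(gen_features) - gram_matrix(style_features)
    return float(np.mean(diff ** 2))

rng = np.random.default_rng(0)
style = rng.normal(size=(8, 16, 16))     # placeholder feature map
print(style_loss(style, style))          # 0.0 -- identical statistics
```

In a full pipeline this loss is computed on several layers of a pretrained network and minimised by gradient descent on the generated image’s pixels.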
Is this the end of certain professions? Nobody stops to think about it, but Quixel, Kitbash3d and texture banks have already considerably reduced the human needs of a production; at the same time, visual production has skyrocketed and the need for people keeps growing.
For example, I tried to create a 3D animation with as little human intervention as possible.
This is how I did it:
- I created a text with AI software that generates poems
- I created an image with text-to-image AI software (pasting in the text produced by the poem generator)
- I created the “2D volume” with AI software
- I created the 3D volume with photogrammetry software
- I imported the volume into Maya and then created a cinematic with camera movement
- I ran the render through AI software to make the animation smoother
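Chained together, these steps form a pipeline in which each tool’s output feeds the next. As a sketch of that chain, with hypothetical stand-in functions (none of them corresponds to a real API; each real step is a separate piece of software):

```python
# Hypothetical stand-ins for the tool used at each step.
def generate_poem(prompt):   return f"poem about {prompt}"
def text_to_image(text):     return f"image of ({text})"
def image_to_views(image):   return [f"{image} view {i}" for i in range(3)]
def photogrammetry(views):   return f"mesh from {len(views)} views"
def render_cinematic(mesh):  return [f"frame {i} of {mesh}" for i in range(4)]
def ai_interpolate(frames):  return frames + ["interpolated frame"]

def pipeline(prompt):
    poem   = generate_poem(prompt)    # 1. AI-generated poem
    image  = text_to_image(poem)      # 2. AI image from the poem
    views  = image_to_views(image)    # 3. "2D volume": multiple views
    mesh   = photogrammetry(views)    # 4. 3D volume via photogrammetry
    frames = render_cinematic(mesh)   # 5. cinematic render (the Maya step)
    return ai_interpolate(frames)     # 6. AI smoothing of the animation

print(len(pipeline("the sea")))       # 5 frames after interpolation
```

The interesting design point is that human intervention is only needed at the joints, moving data from one tool to the next; the more those joints are automated, the closer we get to the hands-off workflow described above.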
What is the future of AI?
Currently, the biggest hurdle for AI is the time required to create the database and process it through algorithms.
If we manage to reduce this time, we can imagine AI helping us in real time, throughout development, to refine our creations or correct our mistakes. If we combine this real-time AI with other techniques that are also advancing fast (asset libraries, avatar creation, etc.), we can imagine creating movies (with dialogue and music) or cartoons without too much difficulty, from a more or less detailed graphic script, with the AI simply filling in the gaps from its databases. The creator of such a movie would only have to adjust certain details of the result, according to his vision, through a fairly simple interface. I know this may sound like science fiction at this point…
Of course, many steps will have to be taken before arriving at a viable solution, but the closer we get to spontaneous creation with very little human input, the more the question of the originality of the work will arise, because AI needs databases. And at present, many of them are built without the consent of the authors.
Can we already imagine the creation of new companies that would market databases of images of which they would be the authors? What if, in the end, one of the possible futures for development companies was simply the creation of images destined to constitute databases?
When to switch to AI?
It reminds me of the moment, 4 or 5 years ago, when real time began to be used in sectors other than video games. It was initially seen as the ultimate solution to many development problems. In some cases that was true, in others it was not; often, real time simply offered more creative comfort. For example, real time brings real added value to video mapping, and in cinema it offers greater comfort on set. In animation, it allows a more flexible development process, but the final render loses quality.
Today, AI is already present in the pre-production phases (concept artists use it). In the production phase it is clearly undeveloped, or anecdotal at best, but I think it will soon reach the last phase (post-production), in particular for colorimetric adjustments (DaVinci Resolve), because thousands of ready-to-use databases already exist there.
So when should you switch to AI? Should you wait until the software is finally ready, knowing that the next version will always be better? In the end, if you are careful and spend some time on R&D and technology watch, there is no bad choice. But in certain sectors the adoption of AI will be faster, especially photo and video restoration (and surely, already, music).
Restoration of photographs made with Photoshop and AI software (Stable Diffusion)
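AI restoration of this kind is essentially inpainting: regenerating damaged (masked) pixels from their surroundings. As a deliberately naive NumPy illustration of the principle, using simple neighbour averaging where a model like Stable Diffusion would apply a learned prior:

```python
import numpy as np

def naive_inpaint(image, mask, iterations=50):
    """Fill masked (damaged) pixels by repeatedly averaging their
    4-connected neighbours -- a crude diffusion standing in for the
    learned prior that a real inpainting model applies."""
    img = image.astype(float).copy()
    img[mask] = img[~mask].mean()          # initial guess for the hole
    for _ in range(iterations):
        neighbours = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                      np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4
        img[mask] = neighbours[mask]       # only damaged pixels change
    return img

image = np.full((8, 8), 100.0)             # uniform "photo"
mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 3:5] = True                      # a "scratch" in the middle
image[mask] = 0                            # damaged region
restored = naive_inpaint(image, mask)
print(round(restored[3, 3]))               # 100 -- filled from its surroundings
```

The averaging only recovers smooth content; the leap that diffusion models make is hallucinating plausible texture and detail inside the hole, which is why they can restore faces and fabrics rather than just flat areas.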
How to teach it and why?
Teaching AI to a graphic designer the way we teach Maya or Blender seems impossible to me because, at the end of the day, it all comes down to mathematics. But we are not going to ask a graphic designer to program an AI. On the other hand, we can explain the technical concepts AI relies on, which are a long way from UVs or fluids.
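One concrete example of such a concept: gradient descent, the mechanism by which most AI models learn, can be shown in a few lines of Python without heavy mathematics (a toy example, not tied to any particular software):

```python
# Gradient descent on f(x) = (x - 3)^2: the "model" learns the
# value 3 by repeatedly stepping against the slope.
def gradient(x):
    return 2 * (x - 3)        # derivative of (x - 3)^2

x = 0.0                       # starting guess
learning_rate = 0.1
for _ in range(100):
    x -= learning_rate * gradient(x)

print(round(x, 3))            # 3.0
```

A graphic designer who grasps this loop (guess, measure the error, nudge the guess) has the mental model needed to understand why training needs data, why it takes time, and why results improve gradually, without ever touching the underlying calculus.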
The truth is that for several years a clear trend has been emerging: companies are looking for profiles that master both aesthetics and technology.
New professions will appear and, with them, new profiles with qualities beyond aesthetics or technology: above all, a strong capacity for abstraction, so as to manipulate the technological concepts of AI without being a virtuoso in mathematics.
We must start teaching AI in schools now, and each school must work out how, because in the short term companies will be asking for profiles that are comfortable with AI concepts.
To sum up
I believe that, in the medium term, AI will be a true vector of change in our production chains, because development processes will evolve as it is implemented in the software we use. The teaching of AI will take its full place in video game schools, and we will see it appear in new courses.
Of course, new professions and companies will appear. In the very short term, the real obstacle is at the level of ethical and legal aspects. In fact, AI enables the creation of very high-quality content that may be ethically objectionable. Furthermore, at present, databases are probably created without the consent of the authors, which raises a legal problem.
If you are interested in the legal and ethical aspects of AI, I advise you to read the interview that Mr. Emad Mostaque, founder of Stability AI (the company behind the Stable Diffusion software), gave to The Times.
Manuel Ruiz Dupont
Consultant and trainer in real-time development processes at Pixelacademia