Talk

Reality as code - How close are we to generating humans and their environment
Conference (INTERMEDIATE level)
Room 7

I've done my fair share of automating computers and deploying processes. Many things have been said and written about that, and after reading the conference schedule I wasn't sure what to add. Lately, though, I've become intrigued by how computers are automating humans, that is, the creation of photorealistic humans and virtual environments, also known as 'synthetic media'.

I will take you on a tour across lip-syncing, face swapping, voice cloning, and capturing and 3D-modelling humans. How close are we to generating humans, and what does it mean for our society? From Hollywood VFX, through virtual production, to AI-generated humans via GAN models and deepfakes, I'd like to explain these topics from a technical engineering perspective and inspire people to explore this exciting new field.

Patrick Debois
Jedi BV

In order to understand current IT organizations, Patrick has made a habit of changing both his consultancy role and the domain in which he works: sometimes as a developer, manager, sysadmin, tester, and even as the customer. Today Patrick is a Distinguished Engineer at Showpad. He is also a co-author of “The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations” with Gene Kim, John Willis, and Jez Humble. He first presented concepts on Agile Infrastructure at Agile 2008 in Toronto, and in 2009 he organized the first DevOpsDays. Since then he has been promoting the notion of ‘devops’ to exchange ideas between development and operations groups and show how they can help each other achieve better business results. More recently he has been focusing on DevSecOps to make the world a safer place.

Generated Summary
WARNING: This summary was generated using GPT based on the transcript; as a result, spelling mistakes and, more importantly, hallucinations may be present.

Creating Digital Avatars with Technology
Green Screens, Lighting and Virtual Production
The presenter used multiple lights to make the green screen key more cleanly, together with an app to help edit the video. He also experimented with virtual production, a technique used in productions such as The Mandalorian, to make the background adapt to the presenter's movements.
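To make the green screen idea concrete, here is a minimal chroma-key sketch using OpenCV and NumPy; the file names and HSV thresholds are illustrative assumptions, not the presenter's actual setup. Even lighting matters because it keeps the green backdrop inside a narrow hue and saturation range that a simple mask can isolate.

```python
import cv2
import numpy as np

# Illustrative inputs: a frame shot in front of a green screen and a new background.
frame = cv2.imread("presenter_greenscreen.jpg")        # hypothetical file
background = cv2.imread("virtual_background.jpg")      # hypothetical file
background = cv2.resize(background, (frame.shape[1], frame.shape[0]))

# Work in HSV: with even lighting the green occupies a narrow hue band,
# so a simple range threshold is enough to build a key mask.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
lower_green = np.array([40, 60, 60])    # assumed thresholds; tune per setup
upper_green = np.array([80, 255, 255])
mask = cv2.inRange(hsv, lower_green, upper_green)

# Clean up small speckles, then composite: the new background where the mask
# detects green, the original frame everywhere else.
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
composite = np.where(mask[:, :, None] > 0, background, frame)

cv2.imwrite("composited_frame.jpg", composite)
```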
Creating a 3D Model of Oneself
A projector was used to create the background, and a 360° camera was used to scan the room. A VR headset was used to track the camera position, and a technique called Instant NeRF (neural radiance fields) was used to take six pictures and create a 3D image. Finally, a LiDAR scanner on a phone was used to scan the person and create a 3D model of them. Professional help may be needed for a detailed cleanup of the scan.
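Instant NeRF reconstructs a scene as a neural radiance field from a handful of photos. As a rough illustration of the underlying idea only, the NumPy sketch below shows the volume-rendering step such systems use: density and colour samples along a camera ray are alpha-composited into a single pixel colour. All numbers are made up, and this is not the Instant NeRF implementation itself.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite per-sample (density, colour) pairs along one ray,
    the way NeRF-style renderers turn a 3D field into a pixel colour."""
    # Opacity of each segment from its density and length.
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: how much light survives up to each sample.
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = transmittance * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Made-up samples along a single ray: 8 points, each with a density and an RGB colour.
densities = np.array([0.0, 0.1, 0.2, 2.5, 3.0, 0.5, 0.1, 0.0])
colors = np.tile(np.array([[0.8, 0.6, 0.5]]), (8, 1))   # roughly skin-toned
deltas = np.full(8, 0.1)                                 # spacing between samples

print(composite_ray(densities, colors, deltas))          # one RGB pixel value
```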
Making the Models Realistic
This part of the talk discusses the use of technology to create realistic 3D models of humans. With the help of AI and scanning, it is possible to create an avatar from only a few pictures, and then use puppeteering to record the movement of the model. To make the models even more realistic, inverse kinematics models are used to capture how bones are connected, so that when one part moves, the connected parts of the body move with it.
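As a tiny illustration of what an inverse-kinematics solver does, the sketch below computes the joint angles of a planar two-bone "arm" so that its hand reaches a target position; this textbook two-link solution is an assumption for illustration, not the rig used in the talk.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic inverse kinematics for a planar two-link arm.
    Returns (shoulder_angle, elbow_angle) that place the end effector at (x, y)."""
    d2 = x * x + y * y
    # Law of cosines for the elbow; clamp for numerical safety and unreachable targets.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    cos_elbow = max(-1.0, min(1.0, cos_elbow))
    elbow = math.acos(cos_elbow)
    # Shoulder angle = direction to the target minus the offset caused by the bent elbow.
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Example: upper arm 0.3 m, forearm 0.25 m, hand target at (0.4, 0.2).
print(two_link_ik(0.4, 0.2, 0.3, 0.25))
```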
Using Technology to Create Realistic Digital Avatars
This part of the talk covers the various methods of creating realistic digital avatars for use in films or on the internet: body motion tracking, facial expression tracking, and voice cloning. It describes the use of motion capture suits, cameras, and puppeteering to capture realistic movement, and the use of text-to-speech and voice cloning technologies to create realistic audio.
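As one hedged example of body motion tracking, the sketch below uses the open-source MediaPipe Pose solution to extract body landmarks from a webcam; the talk does not prescribe this particular library. The landmark stream it produces is the kind of data you would retarget onto an avatar rig.

```python
import cv2
import mediapipe as mp

# Webcam-based body tracking with the MediaPipe Pose solution API
# (a freely available library; an assumption for illustration).
mp_pose = mp.solutions.pose
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)
with mp_pose.Pose(min_detection_confidence=0.5, min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV delivers BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # These normalized landmarks are what would drive a digital avatar.
            mp_drawing.draw_landmarks(frame, results.pose_landmarks, mp_pose.POSE_CONNECTIONS)
        cv2.imshow("pose", frame)
        if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```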
Using Technology for Other Purposes
Text-to-speech technology is being used to automate parts of dialogue recording, such as when the rights to Darth Vader's voice were licensed because the actor could no longer sustain it. Overdub, a feature of the Descript editor, can be used to record audio and then generate the speaker's voice for newly typed text between recordings. Audio-to-lip technology can predict how lips should move based on the audio, and synthetic media, such as deepfakes, can be used to generate unique photos and videos. Deepfakes have been used to con people out of money, but they can also be used for more positive applications such as lip syncing and live translation. Deepfake technology has been used for a variety of purposes, from creating virtual bands and influencers to helping people grieve for lost relatives. It can also be used to enhance 3D models and create realistic images for games like FIFA.
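For voice cloning, one open-source option is Coqui TTS; the sketch below is a minimal zero-shot cloning example under the assumption of that library, with a hypothetical reference recording. It is not the tooling behind the examples mentioned in the talk.

```python
# A minimal voice-cloning sketch using the open-source Coqui TTS library
# (an assumption for illustration -- the talk does not prescribe this tool).
from TTS.api import TTS

# A multilingual model that supports zero-shot cloning from a short reference clip.
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts")

# 'my_voice_sample.wav' is a hypothetical few-second recording of the target voice.
tts.tts_to_file(
    text="This sentence was never actually recorded by the speaker.",
    speaker_wav="my_voice_sample.wav",
    language="en",
    file_path="cloned_output.wav",
)
```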
Open Source Technology
There is also software like DALL·E that can create images from text, and open-source software like Stable Diffusion that can be used to generate images on your own hardware. All of these technologies have the potential to create interesting and creative content. Open-source models have become more available, in contrast to OpenAI's offerings, which are not open source. They can be used to generate images from training datasets, and you can even search which images are in the dataset, for copyright reasons. They can be used for creative art and even medical images, for storyboarding and game engines, and even for creating realistic Lego models. They can also be used to create images from stories, and to view images from different angles.
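Below is a minimal text-to-image sketch with the open-source diffusers library and a Stable Diffusion checkpoint; the model id and prompt are illustrative choices rather than anything shown in the talk.

```python
# A minimal text-to-image sketch with the open-source diffusers library and a
# Stable Diffusion checkpoint (model id and prompt are illustrative choices).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")   # a GPU is assumed; CPU works but is much slower

image = pipe("photorealistic portrait of a person on a film set, studio lighting").images[0]
image.save("generated_portrait.png")
```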
New Job Prompt Engineering and Google
The new job of prompt engineering involves using text to, almost magically, generate images and videos. Instead of writing code, it is a way of expressing an intent and letting the computer generate the desired result. There are search engines, newsletters, and meetups to help with this process, and auto-completion and macros can be used to group images together. Google has also started to implement text-to-video capabilities, as well as creating static images from moving ones. 3D models can also be created from images and manipulated using text. This technology has the potential to create entire worlds from a single background image.
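As a small illustration of prompt engineering as "expressing an intent in text", the sketch below assembles a prompt and a negative prompt from reusable building blocks; all keyword choices are illustrative assumptions.

```python
# A small illustration of prompt engineering: the prompt is assembled from
# reusable building blocks (a macro of sorts) rather than written as code.
# All keyword choices here are illustrative.
SUBJECT = "portrait of a middle-aged engineer"
STYLE = ["photorealistic", "35mm film", "shallow depth of field"]
LIGHTING = ["soft studio lighting", "rim light"]
NEGATIVE = ["blurry", "extra fingers", "watermark"]

prompt = ", ".join([SUBJECT, *STYLE, *LIGHTING])
negative_prompt = ", ".join(NEGATIVE)

print("prompt:", prompt)
print("negative prompt:", negative_prompt)
# These strings would then be passed to an image model, e.g. the Stable Diffusion
# pipeline shown earlier: pipe(prompt, negative_prompt=negative_prompt).
```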
Conclusion
The talk discussed the use of video editing software to create a 360° scan, and how these tools are changing the way manual creative work is done. It also suggested using AI to make the process easier, and provided a list of books to learn more about it. Finally, the speaker announced plans to hold a conference in Belgium in March or April to explain the technology further. Overall, the talk provides an overview of how to create realistic digital avatars and realistic 3D models of humans, and suggests that open-source technology and the new job of prompt engineering can be used to generate interesting and creative content.
You can also ask questions about the complete talk using Devoxx Insights