NVIDIA Research has unveiled GET3D, a novel research project that can populate virtual worlds using AI. In the near future, it could be used to fill videogames, architectural projects, or CGI-heavy movies with life.
At the core of GET3D is a simple idea: an AI is trained on a huge dataset of 2D images of cars, houses, animals and so on. It can then create new 3D models. Even better: the 3D models are textured and therefore ready to use. Here is a preview of what can be achieved using GET3D:
The main advantage of GET3D over similar techniques is its ability to generate 3D models with fine geometric details: the small wheels on office chairs, the ears and horns of animals, or the wire spokes on the tires of motorbikes. Examples appear in the video above and in the additional material on the project webpage, where you can download the full scientific paper.
The textures are also quite convincing, and geometry and texture are well disentangled: the texture can be changed while keeping the same geometry, and the geometry can be changed while keeping the same texture.
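In practice, this disentanglement means the generator is conditioned on two separate latent codes, one for shape and one for appearance, and swapping one code leaves the other attribute untouched. The following is a minimal toy sketch of that idea; the generator and code names are hypothetical stand-ins, not GET3D's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_generator(z_geometry, z_texture):
    """Hypothetical stand-in for a disentangled generator: each output
    depends only on its own latent code."""
    geometry = np.tanh(z_geometry)  # stands in for the generated 3D mesh
    texture = np.tanh(z_texture)    # stands in for the surface texture
    return geometry, texture

# Two different geometry codes, one shared texture code.
z_geo_a, z_geo_b = rng.normal(size=8), rng.normal(size=8)
z_tex = rng.normal(size=8)

geo_a, tex_a = toy_generator(z_geo_a, z_tex)
geo_b, tex_b = toy_generator(z_geo_b, z_tex)

# Swapping only the geometry code changes the shape but not the texture.
assert not np.allclose(geo_a, geo_b)
assert np.allclose(tex_a, tex_b)
```

The same swap works in the other direction: fixing the geometry code while resampling the texture code re-skins the same shape.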
Last, but not least, GET3D allows for text-guided shape generation.
GET3D is trained on 2D images rendered from 3D shapes at different camera angles. In other words, the AI is not trained on raw meshes or real-world photographs, but on synthetic data. To create the results shown in this article, NVIDIA researchers used around 1 million images.
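The data-generation step above can be sketched very simply: take a 3D shape, sample a random camera angle, and project the shape to 2D, repeating until you have a large set of views. This is a bare orthographic sketch under those assumptions; the actual pipeline uses a full renderer with lighting and textures:

```python
import numpy as np

rng = np.random.default_rng(42)

def rotation_y(angle):
    """Rotation matrix around the vertical (y) axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def render_view(vertices, azimuth):
    """Rotate the shape to the sampled camera angle, then drop the depth
    coordinate: a crude orthographic 'image' of the vertices."""
    rotated = vertices @ rotation_y(azimuth).T
    return rotated[:, :2]

# A stand-in shape: the eight corners of a unit cube.
cube = np.array(
    [[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)],
    dtype=float,
)

# Many 2D views of the same shape from random camera angles.
views = [render_view(cube, rng.uniform(0.0, 2.0 * np.pi)) for _ in range(1000)]
assert all(v.shape == (8, 2) for v in views)
```

Scaling this up to thousands of shapes and around a thousand views each is roughly how a synthetic dataset on the order of 1 million images comes about.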
Research projects such as GET3D could be the beginning of a new way to populate 3D worlds, where artists would set the artistic direction and then train and guide an AI that generates assets for them, instead of creating each 3D model by hand. This could be used, for example, to generate crowds in CGI-heavy movies, for videogames featuring a virtually infinite number of unique cars, or for virtual worlds inhabited by CG creatures and animals.
This kind of technique is still in its infancy, and we should witness a lot of progress in the years to come in terms of quality and ease of use. For more information, do not hesitate to check out the official website for the project: you'll be able to download the full paper and watch more videos, and some code will be published in the days to come. "GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images" is a scientific paper by Jun Gao, Tianchang Shen, Zian Wang, Wenzheng Chen, Kangxue Yin, Daiqing Li, Or Litany, Zan Gojcic, and Sanja Fidler.