The 50th edition of SIGGRAPH kicked off this Sunday morning in Los Angeles: the largest annual conference on computer graphics and interactive techniques is back! We didn’t want to miss the event: 3DVF is therefore attending SIGGRAPH 2023 on-site as an official media partner.
Just like in previous years, a queue quickly formed as people came in to get their badges. This edition is hybrid (in Los Angeles and online), and official numbers are not yet available, but it's already quite clear that many people decided to come.
A few changes are worth noting at the Los Angeles Convention Center. The exhibitor space in the South Hall is gone: everything is now consolidated in a smaller area in the West Hall. In previous years, the venue was so spread out that it could take several minutes to move between two rooms; this year, everything has been concentrated in one section of the Convention Center. Only a few events will take place in the South Hall, such as the NVIDIA keynote featuring NVIDIA co-founder and CEO Jensen Huang. The NVIDIA team waved signs reading "888" throughout the morning, a reminder that the keynote will take place August 8 at 8 a.m. PT. It will be streamed online and is expected to cover topics such as AI, OpenUSD, and research. NVIDIA might also treat us to a few surprise announcements.
Neural Radiance Fields – NeRF
The first conference we attended this Sunday focused on NeRF, a technology that continues to drive advances in AI for video and imagery and will become an increasingly discussed subject in the future.
Neural Radiance Fields, or NeRF, is a technology based on neural networks. In a nutshell, NeRFs allow you to navigate within a scene, whether real or virtual, by interpolating viewpoints between input images or photos. This description might remind you of photogrammetry, but the underlying technique is very different, as are the possibilities.
NeRF handles scenes and objects with changing appearances depending on the point of view (e.g., reflections) or involving complex occlusions. And while NeRFs don’t rely on traditional 3D representations of a scene, it’s still possible to generate 3D meshes if needed (using techniques like marching cubes).
Behind the scenes (those averse to technical details can skip to the next paragraph, after the video), NeRF uses a volumetric representation of the scene defined in 5D (3 coordinates for spatial position and 2 for camera orientation). The idea is then to train a neural network that maps these 5D coordinates to a 4D output (R, G, B values, and volume density). Techniques from volumetric rendering are then used to calculate an image for a given viewpoint. Once the system is optimized (by reducing the error between the calculated rendering and reference images), any point of view can be generated.
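The volumetric rendering step mentioned above can be sketched in plain Python. This is a simplified illustration, not the authors' implementation: in a real NeRF, the densities and colors for each sample along a camera ray come from the neural network, whereas here they are simply passed in as lists.

```python
import math

def composite_ray(sigmas, colors, deltas):
    """Alpha-composite samples along one camera ray (NeRF-style volume rendering).

    sigmas: volume density at each sample point along the ray
    colors: (R, G, B) tuple predicted at each sample point
    deltas: distance between consecutive samples
    """
    pixel = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light that has not yet been absorbed
    for sigma, color, delta in zip(sigmas, colors, deltas):
        # Opacity contributed by this sample, from its density and thickness.
        alpha = 1.0 - math.exp(-sigma * delta)
        weight = transmittance * alpha
        pixel = [acc + weight * channel for acc, channel in zip(pixel, color)]
        # Light passing through to the samples behind this one.
        transmittance *= 1.0 - alpha
    return pixel
```

Running this over every ray of a virtual camera produces an image for that viewpoint; during training, the difference between this image and the reference photo is what drives the network's optimization.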
The video below will provide more details: it presents the publication NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis, unveiled in 2020, which serves as the foundation for Neural Radiance Fields.
NeRFs are a very active field of research. They have numerous applications, such as mixed reality, use cases similar to photogrammetry, and much more. Training NeRFs still takes some time, but this process should become faster and faster as new improvements are implemented.
These algorithms and new technologies were thus showcased at SIGGRAPH. The team behind Luma AI highlighted their technology: Luma AI offers tools to create your own NeRFs using videos or photos. An iOS application, a web version, and a plugin for Unreal Engine are available.
If you have a SIGGRAPH accreditation, the replay of this Sunday's conference is available online. We'll have the opportunity to talk about NeRFs again in the coming days.