At SIGGRAPH 2023, Shutterstock made several announcements: they want to explore the distribution and licensing of NeRFs, they partnered with NVIDIA to provide AI-generated HDRI environments, and they showcased their StemCell approach for 3D assets sold on Turbosquid.
We interviewed Dade Orgeron (VP, 3D Innovation – Shutterstock) to learn more about these announcements, and to discuss the ethical implications of generative AI and where NeRFs are headed.
3DVF: At SIGGRAPH 2023, Shutterstock and NVIDIA announced a partnership on generative AI. Basically, the idea is to provide a cloud-based tool that will allow users to create 360° HDRI 8K environments. They will then be able to use these environments for their 3D scenes. All of this relies on NVIDIA Picasso, a “foundry for building and deploying generative AI image, video, 3D assets and 360 HDRi”. In other words, a platform tailored for generative AI tools.
Why did you partner with NVIDIA?
I came to Shutterstock with the acquisition of Turbosquid. I was with Turbosquid for about 9 years before that. And we were good friends with NVIDIA even before the acquisition. We’ve been working closely with the Omniverse team, especially when they were just starting out. And we developed a couple of applications around Omniverse.
Once we were acquired by Shutterstock, this relationship did not stop. It became immediately obvious that a partnership would make sense, considering the dataset we have: something we control, something that is not out in the wild. The idea being to let NVIDIA use this content to train their AIs.
And during that process, we realized that there were several opportunities, not just training AI on 3D models to generate 3D models. Videos and pictures can be used as well: we have a massive library. And there are many ways to use our data; for example, 3D models can help an AI learn what a clean mesh should look like, what a production-ready asset should look like.
Besides, we shared the same goal: to empower creators, not to replace the artists.
3DVF: How does it work for contributors, in other words creators whose work is used to train AIs? Do they have a choice in this? And what do they get in return?
What we’ve done is set up a contributor fund. Basically, you have the opportunity to opt out if you don’t want your content to be used for training. This way we make sure that the artists are ok with any data going into the resulting pool of data.
And the artists are paid for their data being used, through this contributor fund. It’s a great opportunity for them to make additional money.
3DVF: Many artists are fearful of generative AI, the use of their work, what will be the impact of AI on their job, etc.
Yeah, and I think 3D artists are probably the most skilled artists who could be utilizing generative AI. We have a long way to go before artists really understand how these AI tools fit within their pipelines, their workflows. But what we’re actively trying to do, is make this a safe place for them to make money now, and also to be able to utilize those tools.
3DVF: At SIGGRAPH 2023, you also showcased StemCell v2 and an upcoming Universal 3D Asset standard. Can you tell us more about them?
They’re interconnected. Basically, StemCell is two things:
- a specification, so that our contributors build content in a very specific way. It’s based around best practices and approaches like PBR.
- our pipeline. Once an asset has been created to these specifications, we can convert the 3D content to pretty much any native experience that the end user wants, automatically. It removes a part of the process that was time-consuming, and that artists didn’t really want to handle. Besides, they don’t always do it well.
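The two halves of that description, a spec assets must meet plus an automated fan-out to whatever format each user needs, can be sketched in a few lines of Python. Everything here (the map names, the target formats, the function names) is purely illustrative and not Shutterstock's actual specification or API:

```python
# Hypothetical sketch of a StemCell-style workflow: first validate that an
# asset meets a PBR specification, then fan it out to the formats different
# end users expect. All names below are illustrative assumptions.
from dataclasses import dataclass, field

REQUIRED_PBR_MAPS = {"base_color", "metallic", "roughness", "normal"}
TARGET_FORMATS = ["usd", "fbx", "gltf", "obj"]  # example output targets

@dataclass
class Asset:
    name: str
    maps: set = field(default_factory=set)

def meets_spec(asset: Asset) -> bool:
    """An asset passes the (illustrative) spec if every PBR map is present."""
    return REQUIRED_PBR_MAPS <= asset.maps

def convert(asset: Asset) -> dict:
    """Stand-in for the automated pipeline: one output per target format."""
    if not meets_spec(asset):
        missing = REQUIRED_PBR_MAPS - asset.maps
        raise ValueError(f"{asset.name} is missing PBR maps: {sorted(missing)}")
    return {fmt: f"{asset.name}.{fmt}" for fmt in TARGET_FORMATS}

chair = Asset("office_chair", {"base_color", "metallic", "roughness", "normal"})
outputs = convert(chair)
```

The point of the sketch is the division of labor the interview describes: contributors only have to satisfy the spec once, and the conversion step, the part artists "don't always do well", is automated away.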
Ultimately, the goal is Universal Asset. The idea is to develop a standard for output. We want users to be able to stay in their DCC, and to be able to add content in a very seamless way. That’s where Universal Asset comes in. It’s not just about finding what you’re looking for and downloading it, it’s about staying in your workflow, and interacting with our library in a way that allows you to instantly, very easily add content to your project.
In a nutshell, a frictionless environment that really takes StemCell to the next level for creators. They don’t have to leave their DCC tool to download a file, convert it, and figure out whether it will work for them.
3DVF: Do you think that USD will be at the heart of Universal Asset?
That’s the vision. Right now, it’s really up to the tool creators, such as Autodesk, to integrate USD in a useful way. And they’re doing that. So we see that as a way to ease the pain of what we are trying to create, if you will. We’re not the ones having to create all the plugins, we just make sure all the content is standardized, and then it’s ready to be used.
We’re also really excited about OpenPBR, I think that has an interesting place for our dream of Universal Asset.
But whatever path tool creators are taking, we’re going to make sure we follow along.
3DVF: Let’s talk about NeRFs. You announced that you want to explore their distribution and licensing. Can you tell us more?
Yes, this is something we are really excited about. Content is king, and we’re all about licensing and content. We’re always trying to follow the latest trends of the industry.
Last year at SIGGRAPH, it was sort of the beginning of the excitement around NeRF technology, and we were very much aware of what was happening. A lot has happened in that field over the past year, and we realized that this was a tremendous opportunity to sort of unlock 3D content creation.
We have thousands of contributors around the world, who are perfectly comfortable with their cameras and drones and able to capture video anywhere you can imagine. So this is perfect for NeRF.
Which is why we developed a potential product called Virtual Locations. Basically, the idea is to combine NeRF technology with the talent of our videographers. We can then start to pool a massive collection of 3D content.
So this covers spaces, but also objects thanks to our partnership with RECON Labs’ 3Dpresso. They are doing some amazing work generating 3D objects with NeRF, and Luma AI is doing the same. And Luma AI and Volinga AI are doing some amazing things with environments and locations.
So the idea was, let’s bring all this great technology together, and let’s figure out how we can enable our video creators so that they can create 3D content.
We’ve been working closely with Luma and Volinga to understand what is needed to create these virtual locations. In other words, if you want to do a shoot in the Mojave Desert, what do you need? A medium, a wide, an aerial, a close-up… What are the things you should capture, and how do you package that, so that any consumer can then create their virtual location for XR production or whatever they are working on?
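That kind of "what do you need for the shoot" guidance is essentially a capture checklist. As a minimal sketch, assuming the four shot types mentioned above are the required coverage (the helper names are hypothetical, not part of any real product):

```python
# Illustrative capture checklist for a NeRF "virtual location" shoot, based on
# the shot types mentioned in the interview. Names here are assumptions.
REQUIRED_SHOTS = ("wide", "medium", "close-up", "aerial")

def missing_shots(captured):
    """Return the shot types still needed before the footage can be packaged."""
    taken = set(captured)
    return [shot for shot in REQUIRED_SHOTS if shot not in taken]

def ready_to_package(captured):
    """A location shoot is complete once every required shot type is covered."""
    return not missing_shots(captured)

shoot = ["wide", "aerial", "medium"]  # example footage from a videographer
```

For the example footage above, `missing_shots(shoot)` flags only the close-up as outstanding, which is the kind of feedback a best-practices guide for contributors would give.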
3DVF: So basically you’re going to provide videographers with information on best practices, as well as tips and tricks, rely on your partners to create NeRFs, and these NeRFs would then be available on Shutterstock.
Exactly. And we’ll continue to work with Luma AI, Volinga AI, 3Dpresso as they develop their products. We want to make sure that we’re using the latest technology and to give our partners the data they need from our users, contributors, to create amazing NeRFs and be able to use them.
So there’s a lot of sharing that’s going to be involved in this partnership.
3DVF: It’s quite interesting to learn that you want to rely on videographers to create these assets, not on people doing photogrammetry for example. It brings a new pool of artists to the table.
Exactly. The digital camera certainly helped bring Shutterstock to the forefront: everyone around the world could now create content, sell it and make money from it. And we were always kind of waiting for something similar in 3D, waiting for the “digital camera moment”. That’s what’s exciting about it. Combining simple tools that people already know how to use with AI technology makes possible something that was impossible before.
People often compare NeRFs to photogrammetry, and certainly that makes sense, but NeRF is so unique, and different in many ways. And so powerful in the way that it can reconstruct environments.
This is it. This is our digital camera moment.
3DVF: Which kind of customers would use these NeRF assets?
Right now, we are focusing on XR production customers. We want to provide them a library of environments that they can start using and experimenting with.
At the moment we’re in the R&D phase. We want to find out more about how they will use this product, where the technology is going, and what the technology needs to do in order for customers to achieve what they want on set.
XR is exciting, but it’s still hard, it’s very expensive and it requires a lot of processing power. So this is an opportunity to learn how to make that easier, to make sure we’re being mindful of the needs of these customers.
I’m sure there will be other customer types as well: game development and architecture, which are the main customers of Turbosquid. They’ll definitely want to utilize this technology too, so we’re going to work with them to figure out how we can make it usable for them as well.
Besides, let’s face it, the 3D customer has evolved. What used to be just gaming, VFX, architecture and design now also includes simulation researchers, medical researchers, and so on. We are seeing a wide diversity of users, and they’re insatiable: they need to build entire worlds. This driving force started our partnership with NVIDIA, but as we got deeper into it, it wasn’t just that. It’s also about the tools that will enable creators. So what we’re doing with Luma AI or Volinga on NeRFs, or the 360 HDRI tool you saw in the NVIDIA keynote, those are all things that will enable 3D artists, and people who aren’t exactly experts, to utilize 3D content.
3DVF: Thank you for all this information about Shutterstock’s strategy regarding AI, NeRFs and 3D assets. We’ll keep following these topics in the future.
And for our readers who want to keep up with AI and NeRFs, don’t forget to follow us on YouTube, Instagram, Facebook, X/Twitter and LinkedIn, so that you won’t miss our upcoming articles, videos and interviews.