Every spring, the VR/AR/XR industry meets in Laval, France, for the Laval Virtual exhibition. The event attracts companies and academics from all over Europe and beyond, from Japan and Korea to India and the USA.
We were on site to review the main developments of recent months, but also to discover lesser-known projects that deserve the spotlight. Follow the guide!
New technologies and companies at Laval Virtual 2024
Apple Vision Pro, Varjo XR-4: the latest XR headsets
At Laval Virtual 2024, we put the latest virtual and mixed reality headsets to the test, and we talked to other attendees and companies to get their feedback.
Regarding the Apple Vision Pro, we tend to see it as what the Microsoft HoloLens could have become: an effective mixed reality system with intuitive gesture controls. We particularly appreciated the high-definition display and the ability to select and then validate an action with your gaze and a hand movement… without even having to raise your hand, thanks to built-in cameras pointing downwards!
However, we can confirm the feedback from many users: the headset feels a bit heavy (the headband is very nice, but a strap over the top of the head would have helped). Above all, the tight fit makes it difficult to wear glasses. Apple openly states that this headset is not meant to be worn with glasses, and recommends purchasing dedicated corrective lenses if needed. Does this design choice really make sense? Maybe not: most headsets on the market let users keep their glasses on, as long as they are not too wide, so it is perplexing to see Apple go in the opposite direction.
The Varjo XR-4, for its part, is the new headset from Varjo. Resolution is 4K x 4K per eye, and the headset is optimized for XR with its pass-through cameras and LiDAR.
The product comes in three versions: a standard version, a second with improved pass-through, and a third tailored for the defense industry. Unlike the other two, this “XR-4 Secure Edition” is manufactured in Europe rather than China and does not require an internet connection. The price consequently jumps to about $20,000, compared to $4,000 for the base version.
The Varjo XR-4 offers excellent definition across the entire field of view. As always, we appreciate the mix of pass-through and LiDAR, the latter being used to enhance the experience: if you are looking at a virtual cockpit and your hand passes behind the dashboard, the dashboard is correctly drawn in front of your hand. It feels as if the 3D model were a hologram.
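Varjo does not publish the details of its compositing pipeline, but the general principle of depth-based occlusion is simple: for each pixel, compare the real-world depth reconstructed from the LiDAR with the depth of the virtual content, and keep whichever is closer to the viewer. Here is a minimal sketch of that idea in Python with NumPy (our own illustration, with hypothetical buffer names, not Varjo's code):

import numpy as np

def composite_mixed_reality(video_rgb, video_depth, virtual_rgb, virtual_depth):
    """Per-pixel occlusion between the real world and virtual content.

    video_rgb     : (H, W, 3) pass-through camera image
    video_depth   : (H, W)    real-world depth, e.g. reconstructed from LiDAR
    virtual_rgb   : (H, W, 3) rendered virtual scene
    virtual_depth : (H, W)    depth buffer of the virtual scene
    """
    # A virtual pixel is kept only where it is closer than the real world.
    virtual_wins = virtual_depth < video_depth
    return np.where(virtual_wins[..., None], virtual_rgb, video_rgb)

# Toy example: a real hand 0.8 m away reaching behind a virtual dashboard 0.6 m away.
h, w = 4, 4
hand_rgb = np.zeros((h, w, 3))
hand_depth = np.full((h, w), 0.8)        # the hand has passed behind...
dash_rgb = np.ones((h, w, 3))
dash_depth = np.full((h, w), 0.6)        # ...the virtual dashboard
frame = composite_mixed_reality(hand_rgb, hand_depth, dash_rgb, dash_depth)
# The dashboard is drawn in front of the hand, as described above; move the hand
# closer than 0.6 m and it correctly occludes the dashboard instead.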
But the biggest change concerns the business model: Varjo is saying goodbye to its subscription system, which let you buy the XR-3 at a discounted price in exchange for a yearly subscription… but turned your headset into a very expensive paperweight if you stopped paying. The company explains that this option was not popular with customers and was very time-consuming to manage.
Does this mean that XR-3 headsets under subscription will also automatically be freed from this constraint? Not quite. Varjo explains that the switch can be made on request, but according to an XR-3 customer we know, the process can in practice be very cumbersome or even impossible. We would have appreciated a less restrictive approach, such as unlocking all headsets with a simple update, without having to go through support.
Haply Robotics: an accessible haptic system, soon in the hands of artists?
Haptic systems are not new, but it is always interesting to learn about progress in this area. Haply Robotics provides this type of solution with the Inverse3, a device for manipulating elements in 3D space with force feedback: precision up to 0.1 mm, force output up to 10 newtons.
The Inverse3 has the advantage of being able to reproduce a wide range of sensations, not just hard surfaces. We can confirm, for example, that soft or “rubbery” surfaces are convincingly simulated, and for good reason: the medical sector (surgical training, dentistry) is Haply Robotics’ main target, so being able to simulate contact with human flesh and tissue is essential. Other target markets include remote control of robots for industry, and designers.
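Haply Robotics did not detail its rendering algorithms, but a classic way to make a virtual surface feel hard or soft is penalty-based haptic rendering: the deeper the tool tip penetrates the surface, the stronger the opposing force, with stiffness and damping constants defining the “material”. Below is a minimal sketch of that general technique in Python, with made-up values and function names (this is not Haply’s API):

import numpy as np

def contact_force(tool_pos, tool_vel, surface_height, stiffness, damping, max_force=10.0):
    """Penalty-based haptic rendering of a horizontal surface.

    tool_pos, tool_vel : 3D position (m) and velocity (m/s) of the haptic tip
    surface_height     : z coordinate of the virtual surface (m)
    stiffness          : N/m   -- high for bone, low for soft tissue
    damping            : N.s/m -- removes the "buzzing" on contact
    max_force          : clamp to the device limit (about 10 N on the Inverse3)
    """
    penetration = surface_height - tool_pos[2]
    if penetration <= 0.0:
        return np.zeros(3)                      # no contact, no force
    fz = stiffness * penetration - damping * tool_vel[2]
    fz = np.clip(fz, 0.0, max_force)            # never pull, never exceed the device limit
    return np.array([0.0, 0.0, fz])

# "Rubbery" tissue vs. a stiffer material, tool tip 2 mm below the surface, at rest:
soft = contact_force(np.array([0, 0, -0.002]), np.zeros(3), 0.0, stiffness=300.0, damping=2.0)
hard = contact_force(np.array([0, 0, -0.002]), np.zeros(3), 0.0, stiffness=3000.0, damping=5.0)
# soft -> 0.6 N, hard -> 6 N: the same penetration feels very different.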
A version aimed more at CG artists will be launched soon, at a more accessible price ($800 or less, according to Haply Robotics). The Montreal-based company is also interested in partnerships with software publishers, to ensure the compatibility of its solution.
XDE Physics: point clouds and 3D Gaussian Splatting
We were also able to talk with CEA LIST (the CEA’s interactive simulation laboratory), which showcased XDE Physics. This physics engine, available as a plugin for Unity, Unreal Engine, and NVIDIA Omniverse, can simulate collisions, doors, cables, and more. It can be used with massive point clouds from LiDAR scans, but also in other situations. We tried a VR project based on 3D Gaussian splatting. As a reminder, this technique, which stems from research work at INRIA, is a hot topic at the moment. Compared to photogrammetry, there is no mesh: the scene is represented only by 3D Gaussians. Without going into too much detail, this approach is particularly effective at representing reflections, anisotropic surfaces, and fine elements such as hair and foliage.
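For reference, the core of the representation introduced by the INRIA researchers (Kerbl et al., 2023) fits in two formulas; this is a summary of the published method, not of CEA LIST’s implementation. Each primitive is an anisotropic 3D Gaussian

G(x) = \exp\left(-\tfrac{1}{2}(x - \mu)^\top \Sigma^{-1} (x - \mu)\right), \qquad \Sigma = R\,S\,S^\top R^\top

where \mu is its center and the covariance \Sigma is built from a rotation R and per-axis scales S. Each Gaussian also carries an opacity \alpha and a view-dependent color c encoded with spherical harmonics. After projection to screen space, the color of a pixel is obtained by blending the Gaussians sorted front to back:

C = \sum_i c_i \, \alpha_i \prod_{j<i} (1 - \alpha_j)

There is no surface anywhere in this pipeline, which is why shiny, fuzzy, or semi-transparent materials that defeat photogrammetry meshes come out so well.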
The CEA LIST team presented a project carried out under the aegis of the Centre des monuments nationaux: an interactive visualization of two rooms of the Hôtel de la Marine in Paris. Gaussian splatting proved very effective, with good rendering of the reflections and lighting effects of the scene, which was captured at night.
Moverse: markerless motion capture in real time
In the startup area, we met Moverse, a Greek company specialized in AI-based motion capture. The idea is to film a performer with commercially available cameras and let AI do the rest. The motion data can be exported in standard formats such as FBX, or streamed into game engines such as Unreal Engine and Unity.
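Moverse did not detail its streaming protocol, so purely as an illustration of what “streaming into a game engine” typically involves, here is a hypothetical sketch in Python: a receiver that listens for JSON skeleton packets over UDP and yields per-joint rotations to whatever retargeting step comes next. The port number and packet layout are our own assumptions, not Moverse’s API.

import json
import socket

def listen_for_skeletons(port=9000):
    """Receive hypothetical mocap packets: one JSON object per frame,
    e.g. {"frame": 120, "joints": {"hips": [x, y, z, w], ...}} with
    per-joint rotations given as quaternions."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        data, _addr = sock.recvfrom(65536)
        frame = json.loads(data.decode("utf-8"))
        yield frame["frame"], frame["joints"]

# A consumer would apply each rotation to the matching bone of a rigged
# character, exactly as an FBX import would, but frame by frame:
# for frame_id, joints in listen_for_skeletons():
#     for bone, quaternion in joints.items():
#         apply_rotation(bone, quaternion)   # engine-specific, left abstract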
Moverse is subscription-based. The Pro version allows you to export raw data, while the Advanced version provides better support and automatic data cleaning.
Capture typically requires 3 to a dozen cameras, depending on the size of the area in which the talent moves. For the moment, only one person can be captured at a time, but this limitation should be lifted.
Moverse specified that any HD camera with PoE (Power over Ethernet) is supported, but the company can also provide specific models for a turnkey experience.
For the time being, Moverse mainly targets the XR sector, but the company also wants to address the video game and film industries.
CLARTE: 3D Gaussian splatting in a few minutes
We also chatted with CLARTE, whose booth highlight was a tech demo featuring 3D Gaussian splatting. The group, specialized in AR/VR/AI consulting, is exploring this technology for various partners, including the automotive sector.
CLARTE let visitors create their own portrait with 3D Gaussian splatting; the twist is that the reconstruction only took a few minutes. The capture was done with a dozen webcams, and the computation was performed on site. Compromises are obviously necessary for such a quick result, but this impressive demo makes you wonder whether we might one day achieve interactive or even real-time 3D Gaussian splatting.
You could then view your 3D Gaussian splatting self inside a VR demo that also featured voice recognition and text-to-speech via ChatGPT: while the data was being processed, you could chat with a virtual extraterrestrial, and the 3D Gaussian splatting reconstruction was updated a few times during the conversation.
Dimensify: text-to-3D generative AI
We also met the team behind Dimensify, a service for creating 3D models using generative AI, developed by Eurosys Informatics GmbH. A beta version will be launched soon.
As always with generative AI, the question that comes to mind is that of the dataset used to train the tool. Dimensify confirmed having used Objaverse, a dataset of annotated 3D models. This is an issue: as we have explained on 3DVF, Objaverse (whether in its original version or as Objaverse-XL, an extended version containing even more 3D models) contains data of more than dubious origin. One of its main sources is 3D models, tags, and descriptions scraped from Sketchfab, which is forbidden by Sketchfab’s terms of use.
Moreover, some 3D models shared under a “Creative Commons license” on Sketchfab are not actually free of copyright. For example, people have shared 3D models ripped from Nintendo video games under a CC license; they have no right to do so, and you cannot use these models without risking legal repercussions.
Dimensify told us they are aware of this problem, while asserting that it was not necessarily an issue: the intention is to provide studios with the means to use their own data, which would allow them to generate models in the style of the game they are developing. Dimensify also promises that any data sent to them will be kept safe and not reused for other customers.
There is clearly a market for “prompt to 3D” tools such as Dimensify, for example to populate 3D scenes or to create prototypes. But it is a bit perplexing to hear a company pledging to respect the copyright of its future customers minutes after admitting that it knowingly chose to disregard copyright infringement issues in order to accelerate the development of its product.
This stance is, unfortunately, far from being an isolated case in the generative AI industry.