I had the opportunity to talk with Richard Kerris, vice president of the Omniverse development platform at NVIDIA. In the interview, I asked Kerris the questions on every investor’s mind, including pointed questions about where real revenue growth will come from, how large the addressable market really is, and what CEO Jensen Huang meant when he said that the “Omniverse or the Metaverse is going to be a new economy that is larger than our current economy.”
The Metaverse is particularly challenging for investors: the opportunity is enormous, yet getting the timing right and choosing which companies will participate in the burgeoning virtual economy will not be easy.
Here are the main points we discuss in the video:
- Why the Metaverse Can Exceed Our Current Economy
- How Universal Scene Description provides the portal to the Metaverse
- The Importance of Ray-Tracing and the RTX Platform
- Industrial Virtual Worlds
Watch the Nvidia Omniverse interview with Beth Kindig and Richard Kerris:
Candid Interview with Nvidia: Can the Metaverse Drive Real Revenue Growth
Last year, Nvidia CEO Jensen Huang said, “Omniverse or the Metaverse is going to be a new economy that is larger than our current economy.” Many investors look for large addressable markets, so to hear a CEO state that a technology could surpass the size of our current economy is a statement to pay close attention to. In the interview, I asked Kerris what Huang meant, and Kerris’ answer was quite simple: the Metaverse will exceed our current economy because it will be many times larger than the internet.
Here is a direct quote:
“[The Metaverse] is going to be many times bigger than the web because of what a virtual world can do for business, for education, for medical, for all sorts of things including entertainment; we’ve just begun to scratch the surface of these possibilities […] You’ve probably heard the term digital twin. One example is what it’s going to do to revolutionize the industrial market, design and manufacturing. Well, a digital twin is a true-to-reality twin in synthetic worlds of what happens in the physical world. We are seeing this transform these things because when you can make decisions in that synthetic world before you commit to it in the physical world, you have a lot of cost savings.” –Richard Kerris
According to Kerris, in simple terms, the Metaverse is virtual worlds. Rather than a Ready Player One-style experience where you are strapped into heavy equipment, the Metaverse will offer 3D experiences and environments that replace our current digital experiences.
The Real Value of 3D Virtual Worlds
Industrial 3D environments are ramping up especially fast because 3D virtual worlds reduce costs, are safer for employees and allow for more iteration on designs. For example, BMW manufactures 2.5 million cars per year, and the company has built a digital twin of its factory to reduce downtime when it has to change its process for a new model. In the physical world, a new model disrupted production, whereas now the company can rehearse the change in a synthetic world to eliminate errors and downtime. The new approach lets BMW view its entire factory in simulation mode with photorealistic detail. Another advantage is that data is available immediately and changes can be made in the planning stage itself, which saves time and money.
Another example of the value of virtual worlds is route planning. Factories can perform route planning in a virtual warehouse in Nvidia’s Omniverse, optimizing orders for automated order-picking scenarios. Essentially, warehouses are able to reoptimize factory floor operations as new orders come in or as robots go offline. This optimization training can also save quite a bit of money.
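To make the picking-route idea concrete, here is a minimal, illustrative sketch of one common heuristic for ordering picks, a greedy nearest-neighbor route. This is generic Python, not Omniverse code; the function name and coordinate representation are my own assumptions for illustration.

```python
def nearest_neighbor_route(start, picks):
    """Greedy nearest-neighbor ordering of warehouse pick locations.

    start: (x, y) position of the picker or robot.
    picks: list of (x, y) pick locations to visit.
    Returns the picks in greedy visiting order (a heuristic, not optimal).
    """
    route = []
    remaining = list(picks)
    current = start
    while remaining:
        # Pick the closest remaining location (squared distance is enough).
        nxt = min(remaining,
                  key=lambda p: (p[0] - current[0]) ** 2 + (p[1] - current[1]) ** 2)
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route


# Example: from the dock at (0, 0), visit three shelves.
print(nearest_neighbor_route((0, 0), [(5, 5), (1, 0), (2, 0)]))
# → [(1, 0), (2, 0), (5, 5)]
```

Real systems re-run this kind of optimization continuously as orders arrive or robots go offline, which is exactly the re-planning loop described above, only with far more sophisticated solvers.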
Universal Scene Description (USD) Provides the Portal to the Metaverse
The portal to the Metaverse is Pixar’s Universal Scene Description (USD). USD is hailed as one of the most important tools for building the Metaverse because it allows for a persistent experience, making the Metaverse more connected and consistent. Metaverse experts tend to reference the internet as a baseline for how the Metaverse will become widely adopted. In this Web 3.0 analogy, Pixar’s USD functions like the connective HTML of 3D worlds.
“In the early days [of the web], once we started connecting things, we didn’t realize just how big it could be … one of the things that happened at that time was HTML. HTML allowed for a consistent plumbing or consistent connective tissue between the websites so you didn’t have to have a specific browser or a specific extension installed. Now, we go from one website to another and the videos play the same, the text is great, the images, etcetera. Much of the same kind of thing is going to happen with these virtual worlds because once you can teleport from one virtual world to another virtual world, and the experience of the worlds is about the content instead of the lighting and the materials, we will see a tremendous opportunity take place.” –Richard Kerris, on Universal Scene Description (USD)
With a persistent experience, the Metaverse can rival Web 1.0 and Web 2.0, as it allows virtual worlds to break out of siloed experiences. The open-source USD framework allows for the exchange of 3D data and has become a widely supported baseline for developers and creators.
Nvidia’s Omniverse supports the USD plugin as a portal to access Nvidia’s rendering and graphics products. Apple, Autodesk, and many other companies are also using the framework to create a persistent experience for developers and creators. For example, the iPhone’s LiDAR scanner allows you to scan objects into USD. The scans are saved as compressed USDZ files that can be used across other platforms.
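To make the “HTML of 3D worlds” analogy concrete, this is roughly what a minimal scene looks like in USD’s human-readable `.usda` text format. It is an illustrative fragment of my own, not taken from the interview; the prim names are arbitrary.

```usda
#usda 1.0
(
    defaultPrim = "Root"
)

def Xform "Root"
{
    def Sphere "Ball"
    {
        double radius = 0.5
        color3f[] primvars:displayColor = [(0.8, 0.1, 0.1)]
    }
}
```

Like HTML, the format describes structure and properties declaratively, so any USD-aware application can open the same scene and render the same sphere.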
How Real-Time Ray Tracing Catalyzed Virtual Worlds
Nvidia was originally a hardware company that transitioned to also become a software company, which uniquely positioned the company to deliver ray-tracing to the Metaverse market. Ray-tracing allows for the simulation of light and physics to render graphics, resulting in a much more realistic and complex virtual world. After commercially releasing real-time ray tracing on a chip, Nvidia was then able to leverage RTX GPUs and ray tracing extensions for the renderer in their software simulation platform Omniverse.
“There was the big boom moment that took place for all of these things to happen. The first part of that was RTX, which is real-time ray tracing on a chip. It was really a milestone that will be forever remembered. Ray-tracing allows for photorealistic rendering to happen in real time. That means that the worlds we are talking about look more and more indistinguishable from the real world […] It was such a moment at Nvidia when that happened. I remember the enthusiasm of the computer graphics community. For those of us who have been in [the industry] for awhile, we dreamed of this 30 years ago. We knew it was that impactful of a moment.” -Richard Kerris
Ray-tracing renders 3D models through the physics of light. By physically simulating light, effects such as subsurface scattering and diffusion produce images close to what a human retina would perceive.
Companies like Pixar have long done 3D rendering and computer-generated imagery for film, television and gaming; Nvidia improves on this process by adding real-time capability to ray tracing through the RTX platform. Formerly, it could take weeks to render an animation only a few seconds long, whereas the RTX platform collapses that time. The RTX platform also offers better path tracing and denoising, which was first used for gaming and now lends itself well to the virtual worlds of the Metaverse.
More on the RTX Platform
The RTX platform provides APIs and SDKs, running on the Turing GPU architecture, for building applications with ray tracing and AI-enhanced graphics. More than 200 games and applications use RTX, including Minecraft, Fortnite and Cyberpunk 2077. According to Nvidia’s previous earnings call, an estimated 25% of the installed base had adopted RTX GPUs. This follows an overhaul of GPU gaming architecture across Nvidia’s Turing and Ampere RTX generations, which allows an AI rendering technique called Deep Learning Super Sampling (DLSS) to be combined with ray tracing. The success in gaming will only help the proficiency of real-time ray tracing and AI-enabled super-resolution capabilities for the Metaverse and 3D virtual worlds.
Industrial Virtual Worlds
Autonomous driving is one of the more compelling use cases for the Metaverse, given how many times autonomous vehicle systems have been deployed on the roads only to be pulled from the market after accidents. Meanwhile, training models in the physical world (or analog world, as Kerris puts it) can take a very long time.
“If we go back to the concept of a digital twin, a digital twin doesn’t have to be a factory, it can be a city. We have companies like Ericsson who have built digital twins of cities for antenna propagation for the 5G network […] we’ve been using digital twins in that environment to have a synthetic world that these cars can be trained on that is indistinguishable from the real world, it’s exactly the way things are laid out, but you can have many cars being trained in that world at the same time. What’s most important is you can throw all kinds of predicaments at it so that it learns […] that those cars in the analog world may never have encountered […] there’s a much higher degree of confidence in how a [synthetically trained vehicle] will respond.” –Richard Kerris
The Isaac Sim toolkit is a robotics simulation and synthetic data generation tool that helps increase accuracy for robots. It supports an SDK and Robot Operating System (ROS) framework packages for developing robotics AI and navigation applications. This improves AI-based computer vision by improving the data sets used to train robots, giving them a better understanding of their surroundings. The end result is fewer accidents and less human intervention.
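The synthetic-data idea itself can be sketched generically: randomize scene parameters and emit labeled samples, so a model sees far more (and rarer) situations than real-world collection would allow. The sketch below is plain Python of my own devising to illustrate the concept, not the Isaac Sim API; all names and parameter ranges are assumptions.

```python
import random

def synthetic_samples(n, seed=0):
    """Generate n labeled synthetic 'scenes' via simple domain randomization.

    Each sample randomizes object position and lighting, then attaches a
    ground-truth label for free, the key advantage of synthetic data:
    annotations come from the simulator itself, not human labelers.
    """
    rng = random.Random(seed)  # fixed seed makes datasets reproducible
    samples = []
    for _ in range(n):
        samples.append({
            "object_x": rng.uniform(-1.0, 1.0),
            "object_y": rng.uniform(-1.0, 1.0),
            "light_intensity": rng.uniform(0.2, 1.0),
            "label": "obstacle" if rng.random() < 0.5 else "clear",
        })
    return samples


dataset = synthetic_samples(1000)
print(len(dataset), dataset[0].keys())
```

A real simulator like Isaac Sim renders full sensor data (camera, lidar) for each randomized scene, but the pipeline shape, randomize, render, auto-label, is the same.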
It’s understandable if investors are skeptical of the Metaverse, as the technology is essentially in the early-adopter stage. My conversation with Nvidia helps break down the walls around the virtual economy and how big it can get, partly because the internet as we know it is ready for disruption if we are to enable a new depth of digital connection.
Additionally, industries are able to reduce errors and increase productivity by using synthetic virtual worlds for training models and robots. Digital twins for factories, cities and other virtual assets can be used to get a product right the first time it is deployed.
Nvidia is a company that is not standing still. Last August, I predicted Nvidia would surpass Apple to become the world’s most valuable company. The Omniverse is one of many reasons I believe this prediction is still on track to come true.
Beth Kindig and I/O Fund currently own shares of NVDA. This is not financial advice. Please consult with your financial advisor regarding any stocks you buy.
Please note: The I/O Fund conducts research and draws conclusions for the Fund’s positions. We then share that information with our readers. This is not a guarantee of a stock’s performance. Please consult your personal financial advisor before buying any stock in the companies mentioned in this analysis.