Nvidia Omniverse to support scientific digital twins

Nvidia has announced several significant advances and partnerships to extend the Omniverse into scientific applications running on high-performance computing (HPC) systems. The goal is to support scientific digital twins that bridge the data silos that currently exist across different apps, models, instruments and user experiences. This work expands on Nvidia’s progress in building out the Omniverse for entertainment, industry, infrastructure, robotics, self-driving cars and medicine.

The Omniverse platform uses special-purpose connectors to translate and align 3D data from dozens of formats and applications on the fly. Changes made in one tool, application or sensor are reflected in the other tools and views that look at the same building, factory, road or human body from different perspectives.

Scientists are already using it to model fusion reactors, cellular interactions and planetary systems. Today, they spend a lot of time translating data between tools and then manually tweaking the data representation, model configuration and 3D rendering engines to see the results. Nvidia wants to use the Universal Scene Description (USD) format as an intermediate data tier to automate this process.

Dion Harris, Nvidia’s lead product manager of accelerated computing, explained: “The USD format allows us to have a single standard by which you can represent all those different data types in a single complex model. You could go in and somehow build an API specifically for a certain type of data, but that process would not be scalable and extendable to other use cases or other sorts of data regimes.”
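To make that idea concrete, here is a minimal sketch of how two different data types could live on one USD stage, using the open-source pxr Python bindings. The prim paths and the temperatureC attribute are illustrative assumptions, not an official Nvidia schema.

```python
# Minimal sketch: expressing heterogeneous data as prims on a single USD
# stage via the open-source pxr bindings. Prim paths and attribute names
# are illustrative assumptions, not an official Nvidia schema.
from pxr import Sdf, Usd, UsdGeom

stage = Usd.Stage.CreateNew("facility_twin.usda")

# A geometric asset, as it might arrive from a CAD exporter.
reactor = UsdGeom.Xform.Define(stage, "/Facility/Reactor")

# A sensor reading stored as a custom attribute on its own prim, so
# simulation and visualization tools can read it from the same stage.
sensor = stage.DefinePrim("/Facility/Reactor/TempSensor01")
temp = sensor.CreateAttribute("temperatureC", Sdf.ValueTypeNames.Float)
temp.Set(312.5)

stage.GetRootLayer().Save()
```

Because every tool reads and writes the same stage, a change made by one application shows up for all the others without bespoke pairwise converters.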

Here are the major updates:

  • Nvidia Omniverse now connects to scientific computing visualization tools on systems powered by Nvidia A100 and H100 Tensor Core GPUs. 
  • Supports larger scientific and industrial digital twins using Nvidia OVX and Omniverse Cloud.
  • Enhances Holoscan to support scientific use cases in addition to medical ones. New APIs for C++ and Python will make it easier for researchers to build sensor data processing workflows for Holoscan (see the sketch after this list).
  • Added connections to Kitware’s ParaView for visualization, Nvidia IndeX for volumetric rendering, Nvidia Modulus for Physics-ML, and Neural VDB for large-scale sparse volumetric representation. 
  • MetroX-3 extends the range of the Nvidia Quantum-2 InfiniBand platform up to 25 miles (about 40 km). This will make it easier to connect scientific instruments spread across a large facility or campus.
  • Nvidia BlueField-3 DPUs will help orchestrate data management at the edge.
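The Holoscan item above deserves a concrete illustration. Below is a hedged sketch of a minimal sensor-processing pipeline using the Holoscan SDK’s Python API; the operator names and the fabricated sensor payload are my own assumptions, and the exact module layout can vary across SDK versions.

```python
# Hedged sketch: a two-operator Holoscan pipeline. holoscan.core and
# holoscan.conditions follow the public Holoscan SDK Python API; operator
# names and the fake sensor payload are illustrative assumptions.
from holoscan.core import Application, Operator, OperatorSpec
from holoscan.conditions import CountCondition


class SensorSourceOp(Operator):
    """Emits a fake sensor sample each tick; stands in for an instrument feed."""

    def setup(self, spec: OperatorSpec):
        spec.output("out")

    def compute(self, op_input, op_output, context):
        op_output.emit({"reading": 42.0}, "out")


class ProcessOp(Operator):
    """Receives samples and applies a trivial transform."""

    def setup(self, spec: OperatorSpec):
        spec.input("in")

    def compute(self, op_input, op_output, context):
        sample = op_input.receive("in")
        print("processed reading:", sample["reading"] * 2)


class SensorApp(Application):
    def compose(self):
        # Run the source five times, then let the pipeline drain and stop.
        src = SensorSourceOp(self, CountCondition(self, 5), name="source")
        proc = ProcessOp(self, name="process")
        self.add_flow(src, proc)  # connect source output to processor input


if __name__ == "__main__":
    SensorApp().run()
```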

Building bigger twins

Processing latency is one of the biggest challenges in building Omniverse workflows that span many tools and applications. It is one thing to translate between a few file formats or tools; creating live connections between many requires serious computing horsepower. The Nvidia A100 and H100 GPUs could help reduce the latency of running larger models, and support for Nvidia OVX and Omniverse Cloud will help enterprises scale composable digital twins across more building blocks.

Nvidia created a demo showing how these new capabilities can simulate more aspects of data centers. Earlier this year, the company reported on work to simulate data center network hardware and software. Now it can bring together engineering designs from tools like Autodesk Revit, PTC Creo and Trimble SketchUp so they can be shared across different engineering teams. These can be combined with port maps in Patch Manager for planning cabling and physical connectivity within the data center. Then Cadence 6SigmaDCX can analyze heat flows, and Nvidia Modulus can create faster surrogate models to run what-if analyses in real time.

Nvidia is also partnering with Lockheed Martin on a project for the National Oceanic and Atmospheric Administration (NOAA). They plan to use the Omniverse as part of an Earth observation digital twin that monitors the environment and gathers data from ground stations, satellites and sensors into one model. This could help improve our understanding of glacial melting, model climate impacts, assess drought risks and help prevent wildfires.

The digital twin will use Lockheed’s OpenRosetta3D to store data, apply artificial intelligence (AI) and build connectors to various tools and apps, with the USD format standardizing how data is represented and shared across the system. Nvidia Nucleus will translate between native data formats and USD, then deliver the results to Lockheed’s Agatha 3D viewer, built on Unity, to visualize data from multiple sensors and models.
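For a sense of how downstream tools might consume that shared data, here is a sketch of opening a USD stage hosted on a Nucleus server from Python. The omniverse:// URL scheme resolves only where Nvidia’s Omniverse USD resolver plugin is installed, and the server address and stage path below are hypothetical.

```python
# Sketch: reading a shared USD stage from a Nucleus server. Requires
# Nvidia's Omniverse USD resolver plugin for the omniverse:// scheme;
# the server address and stage path are hypothetical.
from pxr import Usd

stage = Usd.Stage.Open(
    "omniverse://nucleus.example.org/Projects/earth_twin/stage.usd"
)

# Every viewer or model walks the same scene graph, regardless of which
# native tool authored each layer.
for prim in stage.Traverse():
    print(prim.GetPath())
```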

Harris believes these enhancements will usher in a new era in which digital twins evolve from passively reflecting the world to actively shaping it. A two-way connection will leverage IoT, AI and the cloud to issue commands to equipment in the field. For example, Nvidia is working with Lockheed Martin on using digital twins to help direct satellites to focus on areas at increased risk of forest fires.

“We are just scratching the surface of digital twins,” Harris said.

