10 major NVIDIA breakthroughs: 6G networks, robotaxis, quantum, and more
From open models and IGX Thor at the edge to BlueField-4 DPUs, DOE AI supercomputers and NVQLink for quantum, NVIDIA recently mapped an end-to-end AI stack across networks, robots, and research.
NVIDIA used its Washington, D.C. GTC event to roll out a sweeping stack of hardware, software and partnerships that push AI deeper into telecom, robotics, science and transportation.
Below is a rundown of these NVIDIA announcements:
1) Nokia partnership: 5G-Advanced to 6G
NVIDIA and Nokia announced a strategic tie-up to commercialize AI-RAN products on NVIDIA platforms, alongside a $1B NVIDIA investment in Nokia. The plan centers on Aerial RAN Computer (ARC-Pro), a 6G-ready, accelerated computing design that unifies radio and AI workloads and promises a software-upgrade path from today’s 5G-Advanced to 6G.
T-Mobile US will begin testing AI-RAN technologies in 2026, with Dell supplying PowerEdge servers. The companies pitch the effort as a practical route to distributed edge inference, better spectral and energy efficiency, and networks ready for AI-native devices and services.
2) AI physics: Faster engineering sims
NVIDIA detailed PhysicsNeMo and the DoMINO NIM microservice for AI-driven physics, claiming up to 500× faster computational engineering by combining GPU acceleration with AI-initialized simulations.
Examples include using pretrained models to start fluid simulations closer to convergence and then validating with CUDA-X-accelerated solvers (e.g., Ansys Fluent).
Adopters span aerospace and automotive, with mentions of datasets generated on platforms like Cadence Millennium M2000 to train surrogates and explore designs interactively. The impact: speedier iteration cycles, lower compute costs per design, and more real-time workflows in complex mechanical engineering.
3) Digital twins for fusion
In energy research, NVIDIA and General Atomics (with support from UC San Diego, Argonne and NERSC) unveiled a high-fidelity digital twin of the DIII-D fusion facility, built in NVIDIA Omniverse and accelerated on NVIDIA GPUs.
AI surrogate models trained on decades of plasma data will be used to predict and help control plasma behavior in seconds, versus weeks for traditional simulations. The team argued this enables safer “what-if” exploration and faster tuning before real experiments, potentially shortening the path to commercially viable fusion.
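A surrogate model works by learning the input-output map of an expensive simulation offline, then answering "what-if" queries in milliseconds. The sketch below uses a cheap polynomial fit of a one-knob toy function purely to illustrate the pattern (real plasma surrogates are neural networks trained on decades of experimental data):

```python
import numpy as np
from numpy.polynomial import Polynomial

def expensive_sim(x):
    """Stand-in for a costly physics simulation: smooth response to one control knob."""
    return np.sin(2 * x) * np.exp(-0.3 * x)

# Offline: run the expensive simulation at a modest number of design points.
x_train = np.linspace(0.0, 4.0, 25)
y_train = expensive_sim(x_train)

# Fit a cheap surrogate (degree-9 polynomial with automatic domain rescaling).
surrogate = Polynomial.fit(x_train, y_train, deg=9)

# Online: sweep hundreds of what-if scenarios instantly instead of re-running the solver.
x_query = np.linspace(0.0, 4.0, 200)
max_err = np.max(np.abs(surrogate(x_query) - expensive_sim(x_query)))
print(f"max surrogate error: {max_err:.2e}")
```

The trade-off is the same one the DIII-D team faces at far larger scale: the surrogate is only trusted inside the regime covered by its training data, which is why it guides experiments rather than replacing them.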
4) Aerial goes open source
NVIDIA said its Aerial software—covering CUDA-accelerated RAN, Aerial Omniverse Digital Twin (AODT) and a new Aerial Framework—will be open-sourced under Apache 2.0, with initial code expected on GitHub starting December 2025 (AODT targeted for March 2026).
The goal is to let researchers and vendors prototype AI-RAN components quickly on NVIDIA platforms (including DGX Spark), test neural PHY features (e.g., advanced channel estimation), and even expose secure RAN data to third-party “dApps” for real-time optimization.
5) IGX Thor: Real-time “physical AI”
For robotics and healthcare, NVIDIA introduced IGX Thor, an industrial-grade platform powered by Blackwell GPUs and designed for deterministic, real-time AI.
NVIDIA claimed up to 8× the iGPU AI compute versus IGX Orin, 5,581 FP4 TFLOPS aggregate AI compute (iGPU+dGPU), 400 GbE connectivity, a 10-year lifecycle and integration with Isaac, Metropolis, Holoscan and NIM microservices.
Early adopters include Hitachi Rail, Maven Robotics and Diligent Robotics; CMR Surgical is reportedly evaluating it for image-guided procedures. Availability is slated via T5000/T7000 systems and developer kits.
6) DOE supercomputers with Oracle
NVIDIA, Oracle and the U.S. Department of Energy announced Solstice (100,000 Blackwell GPUs) and Equinox (10,000 Blackwell GPUs), two AI systems to be installed at Argonne National Laboratory.
The companies position them as the DOE's largest AI supercomputers, aimed at agentic-AI workflows for open science across healthcare, materials and more, using NVIDIA libraries such as Megatron-Core and TensorRT.
Equinox is targeted for H1 2026 availability; both systems will be connected by NVIDIA networking with a claimed combined 2,200 exaFLOPS of AI performance.
7) Robotaxi scale-up: Uber partnership
NVIDIA and Uber outlined plans to scale a Level-4-ready ride-hailing network starting in 2027, targeting up to 100,000 autonomous vehicles over time. The technical backbone is DRIVE AGX Hyperion 10 (a sensors-plus-compute reference design), with DRIVE AGX Thor SoCs and NVIDIA's AV software.
The partnership also includes a joint AI data factory built on the Cosmos platform for ingesting and curating multimodal driving data. Automakers and ecosystem partners named include Stellantis, Lucid, Mercedes-Benz, and trucking players like Aurora and Volvo Autonomous Solutions. NVIDIA also introduced the Halos Certified Program for physical-AI safety certification.
8) Open models & datasets
NVIDIA released families of open-weight models and datasets across language (Nemotron), physical AI (Cosmos), robotics (Isaac GR00T) and biomedical (Clara) via Hugging Face and other hubs, with NIM microservices for deployment.
Highlights include Nemotron Nano 3 for efficient reasoning, Nemotron Safety Guard for multilingual moderation, Cosmos Predict/Transfer 2.5 for world simulation and photorealistic data, Isaac GR00T N1.6 for humanoid control, and Clara models for RNA/protein design and radiology reasoning. NVIDIA also spotlighted a 1,700-hour multimodal driving dataset for AV foundation models.
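NIM microservices expose these models behind an OpenAI-style HTTP API, so deployment reduces to posting JSON to a chat-completions endpoint. A minimal stdlib-only sketch of building such a request (the localhost URL and model name are illustrative placeholders, not exact catalog identifiers):

```python
import json
import urllib.request

# Assumed local NIM endpoint; NIM serves an OpenAI-compatible chat-completions route.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model, user_text, max_tokens=256):
    """Assemble an OpenAI-style chat-completions request for a NIM microservice."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
        "max_tokens": max_tokens,
    }
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        NIM_URL, data=data, headers={"Content-Type": "application/json"}
    )

# Model name below is a hypothetical stand-in for a Nemotron catalog entry.
req = build_chat_request("nvidia/nemotron-nano", "Summarize the incident report.")
print(req.full_url, json.loads(req.data)["model"])
```

In practice the request would be sent with `urllib.request.urlopen(req)` (or any HTTP client) against a running NIM container; the same payload shape works across the Nemotron, Cosmos and Clara NIMs because they share the endpoint convention.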
9) BlueField-4 DPUs & ConnectX-9
On the data center side, NVIDIA launched BlueField-4 DPUs with 800Gb/s throughput and ConnectX-9 SuperNICs, positioned as foundational to “AI factory” architectures.
NVIDIA said BlueField-4 combines Grace CPU + ConnectX-9 to deliver 6× compute vs BlueField-3 and support AI factories up to 4× larger, with DOCA microservices for multi-tenant networking, storage acceleration, and runtime security.
Named ecosystem partners include Cisco, Dell, HPE, Lenovo, VAST Data, WEKA, and cybersecurity vendors such as Check Point and Palo Alto Networks. Early availability is tied to Vera Rubin platforms in 2026.
10) NVQLink: Bridging quantum and GPU
NVIDIA introduced NVQLink, an open interconnect to tightly couple quantum processing units (QPUs) with GPU clusters for hybrid quantum-classical computing.
The company said 17 quantum builders, five controller vendors and nine U.S. national labs (including Brookhaven, Los Alamos, Oak Ridge, Sandia and Berkeley Lab) will contribute to or use NVQLink.
The aim is low-latency control and error-correction loops that today require close coupling to classical supercomputers, especially for chemistry/materials workloads. Access will be integrated through CUDA-Q for application development.
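The feedback loop in question is easy to picture with a classical toy: the QPU streams error syndromes, and a classical decoder must map each syndrome to a correction before the qubits decohere. Below, a 3-bit repetition code stands in for a real quantum error-correcting code (this is a conceptual sketch of the decode-and-correct pattern, not the NVQLink API):

```python
import random

def measure_syndrome(bits):
    """Parity checks of a 3-bit repetition code: (b0 XOR b1, b1 XOR b2)."""
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def decode(syndrome):
    """Map a syndrome to the index of the bit to flip (None if no error detected)."""
    return {(1, 0): 0, (1, 1): 1, (0, 1): 2, (0, 0): None}[syndrome]

rng = random.Random(42)
failures = 0
for _ in range(1000):
    bits = [0, 0, 0]                              # encoded logical 0
    bits[rng.randrange(3)] ^= rng.random() < 0.3  # maybe inject one bit flip
    s = measure_syndrome(bits)                    # "QPU" reports syndromes...
    fix = decode(s)                               # ...the classical decoder reacts...
    if fix is not None:
        bits[fix] ^= 1                            # ...and the correction is applied
    if bits != [0, 0, 0]:
        failures += 1

print(failures)  # 0: every single-bit error is corrected
```

The catch, and NVQLink's stated reason to exist, is that in real hardware this round trip must complete in microseconds, which is why the decoder needs a tightly coupled GPU rather than a distant supercomputer.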
This slate of announcements sketches an end-to-end NVIDIA playbook: open models and toolchains for developers, new silicon and interconnects for “AI factory” data centers, edge-class robotics platforms for regulated environments, and telecom/transport partnerships that move AI from the lab to live networks and streets.


