NVIDIA Nemotron 3 Super 120B A12B (Reasoning)
Description
Nemotron 3 Super is a hybrid Mamba-Attention Mixture-of-Experts model with 120B total and 12B active parameters, optimized for agentic reasoning, coding, planning, tool calling, and long-context analysis. It introduces LatentMoE, which projects tokens into a compressed latent space for expert routing and thereby fits 4x more experts at the same inference cost; Multi-Token Prediction for native speculative decoding (up to 3x faster generation); and native NVFP4 pretraining on Blackwell. The hybrid architecture interleaves Mamba-2 layers for linear-time sequence processing with strategically placed Transformer attention layers that act as global anchors, supporting a 1M-token context window. The model was pre-trained on 25 trillion tokens and post-trained with multi-environment RL across 21 configurations using NeMo Gym/RL, totaling 1.2 million rollouts. It achieves up to 5x higher throughput than the previous Nemotron Super generation and 2.2x higher throughput than GPT-OSS-120B while maintaining comparable accuracy.
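The LatentMoE idea, reduced to its essentials: down-project each token into a compressed latent space, route and run the experts at that narrower width, then project back. A minimal PyTorch sketch follows; the module structure, dimensions, and top-k routing here are illustrative assumptions, not NVIDIA's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentMoESketch(nn.Module):
    """Illustrative latent-space MoE layer (assumed structure, not Nemotron's).

    Experts operate at d_latent instead of d_model, so each expert is far
    cheaper -- the headroom that lets a model pack in more experts
    (Nemotron claims 4x) at the same inference cost.
    """

    def __init__(self, d_model=1024, d_latent=256, n_experts=64, top_k=2):
        super().__init__()
        self.down = nn.Linear(d_model, d_latent)      # compress tokens
        self.up = nn.Linear(d_latent, d_model)        # restore model width
        self.router = nn.Linear(d_latent, n_experts)  # route in latent space
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_latent, 4 * d_latent),
                          nn.GELU(),
                          nn.Linear(4 * d_latent, d_latent))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                             # x: (tokens, d_model)
        z = self.down(x)                              # (tokens, d_latent)
        weights, idx = self.router(z).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)          # normalize chosen gates
        out = torch.zeros_like(z)
        for slot in range(self.top_k):                # dense loops for clarity;
            for e, expert in enumerate(self.experts): # real kernels batch by expert
                hit = idx[:, slot] == e
                if hit.any():
                    out[hit] += weights[hit, slot, None] * expert(z[hit])
        return self.up(out)

y = LatentMoESketch()(torch.randn(8, 1024))           # y.shape == (8, 1024)
```

With d_latent at a quarter of d_model, each latent expert's MLP costs roughly 1/16 the FLOPs of a full-width expert, which is the kind of headroom that makes the "4x more experts at the same cost" arithmetic plausible.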
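Multi-Token Prediction gives the model its own draft heads: it proposes several future tokens cheaply, then the main forward pass verifies them, keeping the longest agreeing run. A generic self-speculative step under greedy verification is sketched below; `draft_k` and `greedy_next` are hypothetical callables standing in for the MTP heads and the main model, not a real Nemotron API.

```python
from typing import Callable, List

def speculative_step(
    draft_k: Callable[[List[int]], List[int]],  # MTP heads: propose k tokens
    greedy_next: Callable[[List[int]], int],    # main model: next token
    prefix: List[int],
) -> List[int]:
    """One self-speculative decoding step with greedy verification.

    Accept draft tokens left to right while the main model agrees; at the
    first disagreement, emit the main model's token instead and stop. A
    real engine verifies all k positions in ONE batched forward pass --
    that single pass per multi-token step is where the (up to 3x)
    generation speedup comes from.
    """
    accepted: List[int] = []
    for tok in draft_k(prefix):
        truth = greedy_next(prefix + accepted)  # batched in practice
        if tok == truth:
            accepted.append(tok)                # free token: draft was right
        else:
            accepted.append(truth)              # correction, then stop
            break
    else:
        accepted.append(greedy_next(prefix + accepted))  # bonus (k+1)th token
    return accepted

# Toy check: the "model" counts upward; the draft gets 2 of 3 right.
demo_next = lambda seq: seq[-1] + 1
demo_draft = lambda seq: [seq[-1] + 1, seq[-1] + 2, seq[-1] + 99]
print(speculative_step(demo_draft, demo_next, [0]))  # -> [1, 2, 3]
```

Every step emits at least one verified token, so output matches ordinary greedy decoding exactly; the drafts only change how many tokens each pass yields.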
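The hybrid stack itself is mostly a layer-ordering decision: long runs of linear-time Mamba-2 blocks punctuated by occasional full-attention layers acting as global anchors. The sketch below assumes a fixed attention period purely for illustration; Nemotron's actual layer layout is not specified here.

```python
def hybrid_schedule(n_layers: int, attn_every: int = 6) -> list[str]:
    """Assumed interleave: one attention anchor every `attn_every` layers,
    Mamba-2 everywhere else (the period is chosen for illustration only)."""
    return ["attention" if (i + 1) % attn_every == 0 else "mamba2"
            for i in range(n_layers)]

print(hybrid_schedule(12))
# ['mamba2', 'mamba2', 'mamba2', 'mamba2', 'mamba2', 'attention',
#  'mamba2', 'mamba2', 'mamba2', 'mamba2', 'mamba2', 'attention']
```

Because Mamba-2 blocks carry constant-size recurrent state rather than a growing KV cache, most of the stack scales linearly with sequence length, which is what makes a 1M-token context window practical.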
Capability Radar
The Science axis uses a reasoning proxy when dedicated science benchmarks are unavailable.
Rankings
| Domain | Rank | Score | Source |
|---|---|---|---|
| Agents & Tools | 96 | 30.0 | LS |
| Code | 108 | 58.0 | AA |
| General | 102 | 66.0 | AA |
| Reasoning | 92 | 42.0 | LS |
| Science | 96 | 62.0 | AA |

Sources: LS = LLM Stats; AA = Artificial Analysis.
Benchmark Scores (LLM Stats)
Tracked categories (no scores captured): Agents, Biology, Code, Communication, Creativity, Finance, General, Language, Long Context, Math, Reasoning.
Pricing
Speed (LS internal units)
Available Providers
No provider data available.