NVIDIA Nemotron 3 Super 120B A12B (Reasoning)
Description
Nemotron 3 Super is a 120B total / 12B active parameter hybrid Mamba-Attention Mixture-of-Experts model optimized for agentic reasoning, coding, planning, tool calling, and long-context analysis. It introduces LatentMoE (projecting tokens into a compressed latent space for expert routing, enabling 4x more experts at the same inference cost), Multi-Token Prediction for native speculative decoding (up to 3x faster generation), and native NVFP4 pretraining on Blackwell. The hybrid architecture interleaves Mamba-2 layers for linear-time sequence processing with strategically placed Transformer attention layers as global anchors, supporting a 1M-token context window. The model was pre-trained on 25 trillion tokens and post-trained with multi-environment RL across 21 configurations using NeMo Gym/RL with 1.2 million rollouts. It achieves up to 5x higher throughput than the previous Nemotron Super and 2.2x higher throughput than GPT-OSS-120B while maintaining comparable accuracy.
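The LatentMoE idea described above (routing and running experts in a compressed latent space so that more experts fit in the same compute budget) can be sketched roughly as follows. This is a minimal illustrative sketch, not the published architecture: all dimensions, the top-k value, the gating function, and the expert FFN shape are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_latent = 64, 16   # hypothetical sizes; real dimensions are not public
n_experts, top_k = 8, 2      # illustrative expert count and routing fan-out

# Down/up projections bracket the MoE block: routing and expert FFNs operate
# on d_latent < d_model, which is the core LatentMoE cost saving.
W_down = rng.normal(0.0, 0.02, (d_model, d_latent))
W_up = rng.normal(0.0, 0.02, (d_latent, d_model))
W_router = rng.normal(0.0, 0.02, (d_latent, n_experts))
# Each expert is a tiny FFN acting on the latent vector.
experts = [rng.normal(0.0, 0.02, (d_latent, d_latent)) for _ in range(n_experts)]

def latent_moe(x):
    """x: (d_model,) token hidden state -> (d_model,) MoE output."""
    z = x @ W_down                        # compress token to latent space
    logits = z @ W_router                 # route in latent space (cheap)
    top = np.argsort(logits)[-top_k:]     # indices of the top-k experts
    w = np.exp(logits[top])
    gates = w / w.sum()                   # softmax gate over the winners only
    out = sum(g * np.tanh(z @ experts[e]) for g, e in zip(gates, top))
    return out @ W_up                     # project back to model width

y = latent_moe(rng.normal(size=d_model))
print(y.shape)  # (64,)
```

Because the router and experts see only the 16-dim latent vector instead of the full 64-dim hidden state, the per-token expert cost shrinks, which is how the design affords roughly 4x more experts at equal inference cost.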
Capability radar
Science uses a reasoning proxy when dedicated science benchmarks are not available.
Rankings
| Domain | Rank | Score | Source |
|---|---|---|---|
| Agents & Tools | 96 | 30.0 | LS |
| Code Ranking | 108 | 58.0 | AA |
| General Ranking | 102 | 66.0 | AA |
| Reasoning | 92 | 42.0 | LS |
| Science | 96 | 62.0 | AA |
Benchmark scores (LLM Stats)
Tracked categories: Agents, Biology, Code, Communication, Creativity, Finance, General, Language, Long Context, Math, Reasoning (per-category scores not included in this export).
AA evaluation indices
LLM Stats category scores
Pricing
Speed
(LS internal units)
Available providers
No provider data available.