NVIDIA Nemotron 3 Super 120B A12B (Reasoning)
Description
Nemotron 3 Super is a 120B total / 12B active parameter hybrid Mamba-Attention Mixture-of-Experts model optimized for agentic reasoning, coding, planning, tool calling, and long-context analysis. It introduces LatentMoE (projecting tokens into a compressed latent space for expert routing, enabling 4x more experts at the same inference cost), Multi-Token Prediction for native speculative decoding (up to 3x faster generation), and native NVFP4 pretraining on Blackwell. The hybrid architecture interleaves Mamba-2 layers for linear-time sequence processing with strategically placed Transformer attention layers as global anchors, supporting a 1M-token context window. Pre-trained on 25 trillion tokens and post-trained with multi-environment RL across 21 configurations using NeMo Gym/RL with 1.2 million rollouts. Achieves up to 5x higher throughput than previous Nemotron Super and 2.2x higher throughput than GPT-OSS-120B while maintaining comparable accuracy.
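The LatentMoE idea described above — compressing tokens into a smaller latent space before expert routing so that more experts fit the same inference budget — can be sketched as follows. This is a minimal pure-Python illustration, not NVIDIA's implementation: all dimensions, weight shapes, and the top-k softmax gating are illustrative assumptions.

```python
import math
import random

random.seed(0)

D_MODEL, D_LATENT = 16, 4   # hypothetical sizes; the real model is far larger
N_EXPERTS, TOP_K = 8, 2     # cheap latent experts allow a higher expert count

def rand_matrix(rows, cols):
    # Small random weights, scaled roughly like a standard init.
    return [[random.gauss(0, 1 / math.sqrt(cols)) for _ in range(cols)]
            for _ in range(rows)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

W_down = rand_matrix(D_LATENT, D_MODEL)   # shared projection into latent space
W_up   = rand_matrix(D_MODEL, D_LATENT)   # shared projection back out
router = rand_matrix(N_EXPERTS, D_LATENT) # routing scores computed in latent space
experts = [rand_matrix(D_LATENT, D_LATENT) for _ in range(N_EXPERTS)]

def latent_moe(token):
    z = matvec(W_down, token)             # compress: D_MODEL -> D_LATENT
    logits = matvec(router, z)
    top = sorted(range(N_EXPERTS), key=lambda i: logits[i], reverse=True)[:TOP_K]
    denom = sum(math.exp(logits[i]) for i in top)
    out = [0.0] * D_LATENT
    for i in top:
        gate = math.exp(logits[i]) / denom  # softmax over selected experts only
        y = matvec(experts[i], z)           # each expert works in the cheap latent space
        out = [o + gate * yi for o, yi in zip(out, y)]
    return matvec(W_up, out)              # expand back: D_LATENT -> D_MODEL

token = [random.gauss(0, 1) for _ in range(D_MODEL)]
print(len(latent_moe(token)))  # prints 16
```

Because the experts and router operate on the compressed latent vector rather than the full hidden state, each expert's compute and parameter cost shrinks, which is the lever behind the stated "4x more experts at the same inference cost".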
Capability Radar Chart
When no dedicated science benchmark is available, the Science score is estimated using reasoning capability as a proxy.
Leaderboard Ranking
Benchmark Scores (LLM Stats)
Categories: Agents, Biology, Code, Communication, Creativity, Finance, General, Language, Long Context, Math, Reasoning
AA Evaluation Index
LLM Stats Category Scores
Pricing
Speed
Available Providers
No provider data available yet (prices in LS internal pricing units).