NVIDIA Nemotron 3 Super 120B A12B (Reasoning)
Description
Nemotron 3 Super is a hybrid Mamba-Attention Mixture-of-Experts model with 120B total and 12B active parameters, optimized for agentic reasoning, coding, planning, tool calling, and long-context analysis. It introduces LatentMoE (projecting tokens into a compressed latent space for expert routing, enabling 4x more experts at the same inference cost), Multi-Token Prediction for native speculative decoding (up to 3x faster generation), and native NVFP4 pretraining on Blackwell. The hybrid architecture interleaves Mamba-2 layers for linear-time sequence processing with strategically placed Transformer attention layers as global anchors, supporting a 1M-token context window.

The model was pre-trained on 25 trillion tokens and post-trained with multi-environment RL across 21 configurations using NeMo Gym/RL with 1.2 million rollouts. It achieves up to 5x higher throughput than the previous Nemotron Super and 2.2x higher throughput than GPT-OSS-120B while maintaining comparable accuracy.
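The LatentMoE idea described above — down-projecting tokens into a compressed latent space, then doing both router scoring and expert computation there before projecting back — can be sketched as follows. This is a minimal illustration, not the published architecture: all dimensions, the top-2 routing, and the `tanh` expert nonlinearity are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (hypothetical; the real layer dimensions are not
# given in this card). The latent is 4x narrower than the model width,
# so expert matmuls cost ~16x less than full-width experts would.
D_MODEL = 64      # model hidden size
D_LATENT = 16     # compressed latent size
N_EXPERTS = 8     # experts operate entirely in the latent space
TOP_K = 2         # assumed top-2 routing

W_down = rng.standard_normal((D_MODEL, D_LATENT)) / np.sqrt(D_MODEL)
W_up = rng.standard_normal((D_LATENT, D_MODEL)) / np.sqrt(D_LATENT)
W_router = rng.standard_normal((D_LATENT, N_EXPERTS)) / np.sqrt(D_LATENT)
experts = [rng.standard_normal((D_LATENT, D_LATENT)) / np.sqrt(D_LATENT)
           for _ in range(N_EXPERTS)]

def latent_moe(x):
    """x: (tokens, D_MODEL) -> (tokens, D_MODEL)."""
    z = x @ W_down                              # compress to latent space
    logits = z @ W_router                       # route on the latent, not x
    top = np.argsort(logits, axis=-1)[:, -TOP_K:]
    out = np.zeros_like(z)
    for t in range(z.shape[0]):
        w = np.exp(logits[t, top[t]])
        w /= w.sum()                            # softmax over chosen experts
        for k, e in enumerate(top[t]):
            out[t] += w[k] * np.tanh(z[t] @ experts[e])
    return out @ W_up                           # project back to model width

x = rng.standard_normal((4, D_MODEL))
y = latent_moe(x)
print(y.shape)  # (4, 64)
```

Because the router and experts never see the full hidden width, the expert count can grow without growing per-token FLOPs proportionally — the mechanism behind the "4x more experts at the same inference cost" claim.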
Capability radar
Science uses a reasoning-based proxy when dedicated science benchmarks are unavailable.
Rankings
| Domain | Rank | Score | Source |
|---|---|---|---|
| Agents & Tools | 96 | 30.0 | LS |
| Code Ranking | 108 | 58.0 | AA |
| General Ranking | 102 | 66.0 | AA |
| Reasoning | 92 | 42.0 | LS |
| Science | 96 | 62.0 | AA |
Benchmark scores (LLM Stats)
[Chart: per-category LLM Stats scores — Agents, Biology, Code, Communication, Creativity, Finance, General, Language, Long Context, Math, Reasoning]
AA evaluation indices
LLM Stats category scores
Pricing
Speed (internal LS units)
Available providers
No provider data available.