NVIDIA Nemotron 3 Super 120B A12B (Reasoning)
Description
Nemotron 3 Super is a 120B total / 12B active parameter hybrid Mamba-Attention Mixture-of-Experts model optimized for agentic reasoning, coding, planning, tool calling, and long-context analysis. It introduces LatentMoE (projecting tokens into a compressed latent space for expert routing, enabling 4x more experts at the same inference cost), Multi-Token Prediction for native speculative decoding (up to 3x faster generation), and native NVFP4 pretraining on Blackwell. The hybrid architecture interleaves Mamba-2 layers for linear-time sequence processing with strategically placed Transformer attention layers as global anchors, supporting a 1M-token context window. It was pre-trained on 25 trillion tokens and post-trained with multi-environment RL across 21 configurations using NeMo Gym/RL with 1.2 million rollouts. The model achieves up to 5x higher throughput than the previous Nemotron Super and 2.2x higher throughput than GPT-OSS-120B while maintaining comparable accuracy.
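The LatentMoE idea described above can be illustrated with a minimal sketch: tokens are down-projected into a smaller latent space, routing and the expert MLPs operate in that latent space, and the result is projected back up. All dimensions, weights, and the `latent_moe` function below are illustrative assumptions, not NVIDIA's actual implementation.

```python
import numpy as np

# Hedged sketch of LatentMoE-style routing (illustrative only; the real
# architecture differs). Compressing tokens before routing shrinks per-expert
# cost, which is what allows more experts at the same inference budget.

rng = np.random.default_rng(0)

d_model, d_latent = 64, 16      # latent space 4x smaller (assumed ratio)
n_experts, top_k = 8, 2

W_down = rng.normal(size=(d_model, d_latent)) / np.sqrt(d_model)
W_up   = rng.normal(size=(d_latent, d_model)) / np.sqrt(d_latent)
router = rng.normal(size=(d_latent, n_experts))
experts = rng.normal(size=(n_experts, d_latent, d_latent)) / np.sqrt(d_latent)

def latent_moe(x):
    z = x @ W_down                                   # (T, d_latent): compress
    logits = z @ router                              # route in latent space
    top = np.argsort(logits, axis=-1)[:, -top_k:]    # top-k experts per token
    sel = np.take_along_axis(logits, top, axis=-1)   # softmax over selected
    w = np.exp(sel - sel.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    out = np.zeros_like(z)
    for t in range(z.shape[0]):
        for k in range(top_k):
            e = top[t, k]
            out[t] += w[t, k] * np.tanh(z[t] @ experts[e])
    return out @ W_up                                # back to model dimension

tokens = rng.normal(size=(4, d_model))
y = latent_moe(tokens)
print(y.shape)  # (4, 64)
```

The expert matrices here are `d_latent x d_latent` rather than `d_model x d_model`, which is the source of the claimed cost saving: routing and expert compute scale with the compressed dimension.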
Capability Radar
When no dedicated science benchmark is available, the Science score is estimated using a reasoning proxy.
Rankings
| Domain | Rank | Score | Source |
|---|---|---|---|
| Agents & Tools | 96 | 30.0 | LS |
| Code Ranking | 108 | 58.0 | AA |
| General Ranking | 102 | 66.0 | AA |
| Reasoning | 92 | 42.0 | LS |
| Science | 96 | 62.0 | AA |
Benchmark Scores (LLM Stats)
(Radar chart categories: Agents, Biology, Code, Communication, Creativity, Finance, General, Language, Long Context, Math, Reasoning)
AA Evaluation Index
LLM Stats Category Scores
Pricing
Speed
Available Providers
No provider data available (LS internal units).