Nemotron Super 120B

nvidia/nemotron-super-120b

NVIDIA's hybrid LatentMoE model (120B total parameters, 12B active). It combines Mamba-2, attention, and MoE layers in one architecture with a 1M-token context window, and uses Multi-Token Prediction for faster inference.

Context Length: 1M tokens
Max Output: 262K tokens
Input Price: $0.240 / 1M tokens
Output Price: $1.02 / 1M tokens

Modalities

text

Pricing Breakdown

Type      Rate
Input     $0.240 / 1M tokens
Output    $1.02 / 1M tokens
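The per-million-token rates above make cost estimation a one-line calculation. A minimal sketch (the rates come from the table; the token counts are hypothetical example values):

```python
# Rates from the pricing table above, in USD per 1M tokens.
INPUT_RATE = 0.240
OUTPUT_RATE = 1.02

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000) * INPUT_RATE \
         + (output_tokens / 1_000_000) * OUTPUT_RATE

# Example: a 10,000-token prompt with a 2,000-token completion.
print(f"${estimate_cost(10_000, 2_000):.5f}")  # → $0.00444
```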

Supported Parameters

temperature, max_tokens, top_p, tools, tool_choice, response_format, stop
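A sketch of a request body exercising these parameters. The parameter names come from the list above; all values, the message content, and the `get_weather` tool are hypothetical illustrations, not recommendations:

```python
import json

# Illustrative request body; every value here is an example, not a default.
payload = {
    "model": "nvidia/nemotron-super-120b",
    "messages": [
        {"role": "user", "content": "What is the weather in Paris?"}
    ],
    "temperature": 0.7,
    "top_p": 0.9,
    "max_tokens": 512,
    "stop": ["</answer>"],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
            },
        },
    }],
    "tool_choice": "auto",
}
print(json.dumps(payload, indent=2))
```

`response_format` is also supported, though it is typically used for structured output rather than combined with tool calls.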

API Usage Examples

cURL
curl https://api.therouter.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $THE_ROUTER_API_KEY" \
  -d '{
    "model": "nvidia/nemotron-super-120b",
    "messages": [
      {"role": "user", "content": "Summarize the key points from this input."}
    ]
  }'
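The same call maps directly onto any HTTP client. A minimal Python sketch using only the standard library, with the endpoint and model ID taken from this page (the `urlopen` call is left commented out since it needs a live `THE_ROUTER_API_KEY`, and the assumed response shape follows the OpenAI-compatible chat completions format):

```python
import json
import os
import urllib.request

API_URL = "https://api.therouter.ai/v1/chat/completions"

def build_request(prompt: str) -> urllib.request.Request:
    """Build the same POST request as the cURL example above."""
    body = json.dumps({
        "model": "nvidia/nemotron-super-120b",
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('THE_ROUTER_API_KEY', '')}",
        },
    )

req = build_request("Summarize the key points from this input.")
print(req.full_url)

# Uncomment to send (requires THE_ROUTER_API_KEY to be set):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```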