
GLM 4.6

Amazon Bedrock · glm · glm-4.6

Z.AI's GLM 4.6: an open-weight model for general-purpose and coding tasks, with tool use and long-context support.

Context Window: 128K
Input price / 1M tokens: Free
Output price / 1M tokens: Free
Cached input / 1M tokens: Free
Max Completion: 4K
Input Modalities: text
Output Modalities: text
Capabilities: Function calling · Chat · Streaming

Description

GLM 4.6 is an open-weight model from Z.AI suited to both general-purpose and coding tasks, with support for tool use (function calling) and long-context inputs up to 128K tokens.

Available Providers

AllToken can route requests to the providers below based on route priority and policy.

Provider | Context Length | Input Price | Output Price | Cached / M | Latency p50 | Throughput

Best For

General-purpose chat, code generation, tool use via function calling, and long-context workloads within the 128K context window.

How To Use This Model

Use the exact model ID shown below; this is the safest way to avoid call failures, variant mismatches, or incorrect routing assumptions.

curl https://api.alltoken.ai/v1/chat/completions \
  -H "Authorization: Bearer sk-your-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "glm-4.6",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
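The same request can be made from Python. This is a minimal sketch assuming the OpenAI-compatible `/v1/chat/completions` endpoint shown in the curl example above, using only the standard library:

```python
import json
import urllib.request

API_URL = "https://api.alltoken.ai/v1/chat/completions"

def build_request(api_key: str, user_message: str) -> urllib.request.Request:
    """Build the same chat-completion request as the curl example above."""
    payload = {
        "model": "glm-4.6",  # exact model ID from this page
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": "Bearer " + api_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("sk-your-key", "Hello!")
# resp = urllib.request.urlopen(req)  # uncomment (with a real key) to send
```

Separating payload construction from sending makes the request easy to inspect or log before it leaves your machine.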
Supported Parameters
temperature · top_p · max_tokens · tools
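A request body exercising each supported parameter might look like the sketch below. The `get_weather` tool schema is a hypothetical example for illustration, not something provided by the platform:

```python
import json

# Request body using every parameter listed above; the get_weather tool
# is a made-up example schema following the OpenAI-style tools format.
payload = {
    "model": "glm-4.6",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "temperature": 0.7,   # sampling temperature
    "top_p": 0.95,        # nucleus sampling cutoff
    "max_tokens": 1024,   # must stay within the 4K max completion limit
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

body = json.dumps(payload)
```

Note that `max_tokens` bounds the completion, so it should not exceed the model's 4K max completion limit.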
API Key Setup
Smart Routing

Let the platform choose the best provider path automatically.

Default Model

If a request does not specify a model, the key falls back to glm-4.6.

Forced Model

The key overrides whatever model incoming requests specify and always uses glm-4.6.