# Zhipu Adapter

Use the Zhipu adapter to connect to Zhipu AI's GLM series models.
The Zhipu adapter provides integration with the Zhipu AI API. Zhipu AI offers the GLM series of large language models, with an API that is fully compatible with the OpenAI format.
## Installation

```bash
pnpm add @amux.ai/llm-bridge @amux.ai/adapter-zhipu
```

## Basic Usage

```typescript
import { createBridge } from '@amux.ai/llm-bridge'
import { zhipuAdapter } from '@amux.ai/adapter-zhipu'

const bridge = createBridge({
  inbound: zhipuAdapter,
  outbound: zhipuAdapter,
  config: {
    apiKey: process.env.ZHIPU_API_KEY
  }
})

const response = await bridge.chat({
  model: 'glm-4.7',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is Amux?' }
  ]
})

console.log(response.choices[0].message.content)
```

## Supported Models
Zhipu AI offers various GLM models:
| Model | Description |
|---|---|
| glm-4.7 | Latest flagship model with best performance |
| glm-4.6 | High-performance model balancing capability and cost |
| glm-4.5 | Cost-effective model for everyday tasks |
| glm-4v | Multimodal model supporting visual understanding |
Zhipu AI models are continuously updated. Please refer to the official documentation for the latest model list.
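Since glm-4v accepts images alongside text, a multimodal request embeds an image as a content part. The shape below follows the OpenAI vision convention (`type: 'image_url'` parts); whether the adapter expects exactly these field names is an assumption — verify it against the Zhipu documentation.

```typescript
// Multimodal request sketch for glm-4v. The content-part shape follows the
// OpenAI vision format; confirm the exact fields against the Zhipu docs.
const visionRequest = {
  model: 'glm-4v',
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is in this image?' },
        { type: 'image_url', image_url: { url: 'https://example.com/photo.jpg' } }
      ]
    }
  ]
}

// Pass it through the bridge as usual:
// const response = await bridge.chat(visionRequest)
```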
## Key Features

### Basic Chat

```typescript
const response = await bridge.chat({
  model: 'glm-4.7',
  messages: [
    { role: 'user', content: 'Tell me about Zhipu AI' }
  ],
  temperature: 0.7,
  max_tokens: 1000
})
```

### Function Calling
```typescript
const response = await bridge.chat({
  model: 'glm-4.7',
  messages: [
    { role: 'user', content: 'What time is it in Beijing?' }
  ],
  tools: [{
    type: 'function',
    function: {
      name: 'get_current_time',
      description: 'Get the current time for a specified city',
      parameters: {
        type: 'object',
        properties: {
          city: { type: 'string', description: 'City name' }
        },
        required: ['city']
      }
    }
  }]
})
```

### Streaming
```typescript
const stream = bridge.chatStream({
  model: 'glm-4.7',
  messages: [
    { role: 'user', content: 'Tell me a story' }
  ],
  stream: true
})

for await (const chunk of stream) {
  if (chunk.choices[0]?.delta?.content) {
    process.stdout.write(chunk.choices[0].delta.content)
  }
}
```

## Configuration Options
```typescript
const bridge = createBridge({
  inbound: zhipuAdapter,
  outbound: zhipuAdapter,
  config: {
    apiKey: process.env.ZHIPU_API_KEY,
    baseURL: 'https://open.bigmodel.cn/api/paas', // Default value
    timeout: 60000 // Request timeout in milliseconds
  }
})
```

## Feature Support
| Feature | Supported | Notes |
|---|---|---|
| Chat Completion | ✅ | Fully supported |
| Streaming | ✅ | Fully supported |
| Function Calling | ✅ | Fully supported |
| Vision | ✅ | glm-4v supported |
| System Prompt | ✅ | Fully supported |
| JSON Mode | ✅ | Fully supported |
| Web Search | ✅ | Supported |
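JSON Mode in the table above can be requested with the OpenAI-style `response_format` parameter. That this parameter is forwarded unchanged by the adapter is an assumption; if the model does not return valid JSON, check the adapter's documentation for the supported option name.

```typescript
// JSON-mode request sketch. response_format follows the OpenAI convention;
// whether the adapter forwards it as-is is an assumption to verify.
const jsonRequest = {
  model: 'glm-4.7',
  messages: [
    { role: 'system', content: 'Reply with a single JSON object and nothing else.' },
    { role: 'user', content: 'List three GLM models as {"models": [...]}.' }
  ],
  response_format: { type: 'json_object' }
}

// const response = await bridge.chat(jsonRequest)
// const data = JSON.parse(response.choices[0].message.content)
```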
## Best Practices

### 1. Choose the Right Model for Your Needs

```typescript
// Use a cost-effective model for everyday tasks
const quickResponse = await bridge.chat({
  model: 'glm-4.5',
  messages: [{ role: 'user', content: 'Hello' }]
})

// Use the flagship model for complex tasks
const complexTask = await bridge.chat({
  model: 'glm-4.7',
  messages: [
    { role: 'user', content: 'Please analyze the performance issues in this code...' }
  ]
})
```

### 2. Use System Prompts to Optimize Output
```typescript
const response = await bridge.chat({
  model: 'glm-4.7',
  messages: [
    {
      role: 'system',
      content: 'You are a professional technical documentation writer. Please answer questions in concise, professional language.'
    },
    {
      role: 'user',
      content: 'What is a RESTful API?'
    }
  ]
})
```

### 3. Handle Multi-turn Conversations
```typescript
// Include the full conversation history so the model has context
const messages = [
  { role: 'user', content: 'First question' },
  { role: 'assistant', content: 'First answer' },
  { role: 'user', content: 'Follow-up question' }
]

const response = await bridge.chat({
  model: 'glm-4.7',
  messages
})
```

## Using the OpenAI Format
Because Zhipu AI's API is fully compatible with the OpenAI format, you can accept requests in the OpenAI format and route them to Zhipu:

```typescript
import { createBridge } from '@amux.ai/llm-bridge'
import { openaiAdapter } from '@amux.ai/adapter-openai'
import { zhipuAdapter } from '@amux.ai/adapter-zhipu'

const bridge = createBridge({
  inbound: openaiAdapter,
  outbound: zhipuAdapter,
  config: {
    apiKey: process.env.ZHIPU_API_KEY
  }
})

// Send requests using the OpenAI format
const response = await bridge.chat({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello' }]
})
```