# Quick Start

Build your first Amux bridge in 5 minutes.

This guide will walk you through creating your first bridge. You'll learn how to convert between OpenAI and Anthropic formats.
## Step 1: Install Packages

```shell
pnpm add @amux.ai/llm-bridge @amux.ai/adapter-openai @amux.ai/adapter-anthropic
```

## Step 2: Set Up API Keys

Create a `.env` file:

```
ANTHROPIC_API_KEY=your_key_here
```

Never commit your `.env` file to version control. Add it to `.gitignore`.
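Note that plain Node and tsx may not load `.env` automatically — Node 20.6+ can do it with `node --env-file=.env`, or you can use a loader such as dotenv. Either way, it helps to fail fast when a key is missing. A minimal sketch (the `requireEnv` helper is illustrative, not part of Amux):

```typescript
// Read a required environment variable, failing fast with a clear message
// if it is missing (assumes the .env file has already been loaded, e.g.
// via `node --env-file=.env` or the dotenv package).
function requireEnv(name: string): string {
  const value = process.env[name]
  if (!value) {
    throw new Error(`${name} is not set — check your .env file`)
  }
  return value
}

// Usage: const apiKey = requireEnv('ANTHROPIC_API_KEY')
```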
## Step 3: Create Your First Bridge

Create `bridge.ts`:
```typescript
import { createBridge } from '@amux.ai/llm-bridge'
import { openaiAdapter } from '@amux.ai/adapter-openai'
import { anthropicAdapter } from '@amux.ai/adapter-anthropic'

// Create a bridge: OpenAI format in → Anthropic API out
const bridge = createBridge({
  inbound: openaiAdapter,
  outbound: anthropicAdapter,
  config: {
    apiKey: process.env.ANTHROPIC_API_KEY
  }
})

// Send OpenAI-format request
const response = await bridge.chat({
  model: 'gpt-4',
  messages: [
    { role: 'user', content: 'Say hello!' }
  ]
})

console.log(response.choices[0].message.content)
```

What happens:
- You send a request in OpenAI format
- Amux converts it to Anthropic format
- Claude API is called
- Response is converted back to OpenAI format
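Conceptually, the inbound adapter parses the OpenAI-format request and the outbound adapter re-emits it in Anthropic's format. A rough sketch of that mapping (illustrative only — this is not Amux's actual adapter code, and the real adapters handle far more fields):

```typescript
// Illustrative sketch of the inbound → outbound conversion, NOT Amux's real
// adapter code: map an OpenAI-style chat request onto the rough shape of an
// Anthropic Messages API request.
interface OpenAIStyleRequest {
  model: string
  messages: { role: 'system' | 'user' | 'assistant'; content: string }[]
  max_tokens?: number
}

interface AnthropicStyleRequest {
  model: string
  system?: string
  messages: { role: 'user' | 'assistant'; content: string }[]
  max_tokens: number // required by the Anthropic Messages API
}

function toAnthropic(req: OpenAIStyleRequest): AnthropicStyleRequest {
  // Anthropic takes the system prompt as a top-level field, not a message.
  const system = req.messages.find((m) => m.role === 'system')?.content
  return {
    model: req.model, // in practice the adapter would also map model names
    system,
    messages: req.messages
      .filter((m) => m.role !== 'system')
      .map((m) => ({ role: m.role as 'user' | 'assistant', content: m.content })),
    max_tokens: req.max_tokens ?? 1024 // default is an assumption for the sketch
  }
}
```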
## Step 4: Run It

```shell
tsx bridge.ts
```

You should see a response from Claude, but in OpenAI format!
## Try Streaming

For a better user experience, use streaming:
```typescript
import { createBridge } from '@amux.ai/llm-bridge'
import { openaiAdapter } from '@amux.ai/adapter-openai'
import { anthropicAdapter } from '@amux.ai/adapter-anthropic'

const bridge = createBridge({
  inbound: openaiAdapter,
  outbound: anthropicAdapter,
  config: {
    apiKey: process.env.ANTHROPIC_API_KEY
  }
})

// Enable streaming
const stream = await bridge.chat({
  model: 'gpt-4',
  messages: [
    { role: 'user', content: 'Tell me a short story' }
  ],
  stream: true
})

// Process stream events
for await (const event of stream) {
  if (event.type === 'content') {
    process.stdout.write(event.content.delta)
  }
}
```

## Common Patterns
### Reverse Direction

Switch the adapters to go the other way:
```typescript
// Anthropic format in → OpenAI API out
const bridge = createBridge({
  inbound: anthropicAdapter,
  outbound: openaiAdapter,
  config: {
    apiKey: process.env.OPENAI_API_KEY
  }
})

const response = await bridge.chat({
  model: 'claude-3-5-sonnet-20241022',
  messages: [{ role: 'user', content: 'Hello!' }],
  max_tokens: 100
})

console.log(response.content[0].text)
```

### Tool Calling
Use tools with any provider:
```typescript
const response = await bridge.chat({
  model: 'gpt-4',
  messages: [
    { role: 'user', content: 'What is the weather in Beijing?' }
  ],
  tools: [{
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get the weather for a location',
      parameters: {
        type: 'object',
        properties: {
          location: { type: 'string' }
        },
        required: ['location']
      }
    }
  }]
})

// Check for tool calls
if (response.choices[0].message.toolCalls) {
  console.log('Tool calls:', response.choices[0].message.toolCalls)
}
```

### Error Handling
Always handle errors:
```typescript
import { LLMBridgeError } from '@amux.ai/llm-bridge'

try {
  const response = await bridge.chat(request)
} catch (error) {
  if (error instanceof LLMBridgeError) {
    console.error(`${error.type}: ${error.message}`)
    if (error.retryable) {
      // Safe to retry the request
    }
  }
}
```
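When an error is marked retryable, you can wrap the call in a small retry helper with backoff. This is an illustrative sketch — the helper and its backoff policy are not part of Amux; only the `retryable` flag comes from `LLMBridgeError`:

```typescript
// Retry an async call when the thrown error is marked retryable, with
// exponential backoff between attempts (illustrative helper, not part of
// @amux.ai/llm-bridge).
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 500
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn()
    } catch (error) {
      // Only retry errors flagged as retryable, and stop at the attempt limit.
      const retryable = (error as { retryable?: boolean })?.retryable === true
      if (!retryable || attempt >= maxAttempts - 1) throw error
      // Backoff: 500ms, 1s, 2s, ... (assumed policy — tune for your workload)
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt))
    }
  }
}

// Usage: const response = await withRetry(() => bridge.chat(request))
```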