Core Concepts
Understand the architecture of Amux
Amux is built on three core concepts: Adapters, IR (Intermediate Representation), and Bridge. Understanding these is key to using Amux effectively.
Architecture Overview
Amux uses a simple but powerful pattern:
Your App (Provider Format A)
↓
Inbound Adapter → Parse to IR
↓
Intermediate Representation (unified)
↓
Outbound Adapter → Build Provider Format B
↓
Provider API
↓
Response flows back through IR
↓
Your App (Provider Format A)
1. Adapters
Adapters convert between provider-specific formats and the unified IR format. Each adapter has two directions:
Inbound (Provider → IR)
Parses provider-specific requests into IR:
// OpenAI format → IR
const ir = openaiAdapter.inbound.parseRequest({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Hello!' }]
})
// Returns: LLMRequestIR
Outbound (IR → Provider)
Builds provider-specific requests from IR:
// IR → Anthropic format
const anthropicRequest = anthropicAdapter.outbound.buildRequest(ir)
// Returns: { model: 'claude-...', messages: [...], ... }
Adapter Capabilities
Each adapter declares what it supports:
openaiAdapter.capabilities = {
streaming: true,
tools: true,
vision: true,
multimodal: true,
systemPrompt: true,
toolChoice: true,
jsonMode: true,
logprobs: true,
seed: true
}
The Bridge uses these capabilities to check compatibility and warn about unsupported features.
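The check itself is conceptually simple: compare the features a request needs against the flags the outbound adapter declares. A minimal sketch of the idea; the checkCompatibility helper below is illustrative, not part of the Amux public API:
// Hypothetical helper illustrating the compatibility check.
// AdapterCapabilities mirrors the capability flags shown above.
type AdapterCapabilities = Record<string, boolean>

function checkCompatibility(
  required: string[],
  capabilities: AdapterCapabilities
): string[] {
  // Collect every requested feature the adapter cannot handle
  return required.filter((feature) => !capabilities[feature])
}

const missing = checkCompatibility(['streaming', 'tools'], openaiAdapter.capabilities)
if (missing.length > 0) {
  console.warn(`Unsupported features: ${missing.join(', ')}`)
}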
2. Intermediate Representation (IR)
The IR is a unified format that captures all LLM capabilities:
interface LLMRequestIR {
messages: Message[] // Conversation history
model?: string // Model identifier
tools?: Tool[] // Available functions
toolChoice?: ToolChoice // Tool usage control
stream?: boolean // Enable streaming
generation?: GenerationConfig // Temperature, max tokens, etc.
system?: string // System prompt
metadata?: Record<string, any> // Additional metadata
extensions?: Record<string, any> // Provider-specific features
}
Why IR?
Unified format - All providers speak the same language internally:
// OpenAI and Anthropic both convert to the same IR
const ir1 = openaiAdapter.inbound.parseRequest(openaiRequest)
const ir2 = anthropicAdapter.inbound.parseRequest(anthropicRequest)
// Both produce LLMRequestIR
Provider-agnostic - Write code once, use with any provider:
function processRequest(ir: LLMRequestIR) {
// This function works with any provider's IR
console.log(`Processing ${ir.messages.length} messages`)
}
Extensible - Support provider-specific features:
const ir: LLMRequestIR = {
messages: [{ role: 'user', content: 'Hello' }],
extensions: {
// OpenAI-specific
response_format: { type: 'json_object' },
// Anthropic-specific
thinking: { type: 'enabled', budget_tokens: 1000 }
}
}
3. Bridge
The Bridge orchestrates the conversion flow:
const bridge = createBridge({
inbound: openaiAdapter, // Parse incoming format
outbound: anthropicAdapter, // Build outgoing format
config: {
apiKey: 'xxx',
baseURL: 'https://api.anthropic.com'
}
})
Bridge Workflow
When you call bridge.chat():
1. Parse - Inbound adapter parses request → IR
2. Validate - Bridge validates IR completeness
3. Check - Bridge checks adapter compatibility
4. Build - Outbound adapter builds provider request
5. Call - Bridge calls provider API
6. Parse - Outbound adapter parses response → IR
7. Build - Inbound adapter builds response format
8. Return - Bridge returns response to you
const response = await bridge.chat({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Hello!' }]
})
// Behind the scenes:
// OpenAI format → IR → Anthropic format → Claude API
// Claude response → IR → OpenAI format → returned to you
Error Handling
The Bridge provides unified error handling:
import { LLMBridgeError } from '@amux.ai/llm-bridge'
try {
const response = await bridge.chat(request)
} catch (error) {
if (error instanceof LLMBridgeError) {
console.error('Error type:', error.type)
console.error('Message:', error.message)
console.error('Retryable:', error.retryable)
// Handle specific error types
if (error.type === 'rate_limit') {
// Wait and retry
} else if (error.type === 'authentication') {
// Check API key
}
}
}
Common error types:
- network - Connection issues (retryable)
- rate_limit - Rate limit exceeded (retryable)
- authentication - Invalid API key (not retryable)
- validation - Invalid request format (not retryable)
- api - Provider API error (maybe retryable)
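Because retryability is exposed on the error itself, a generic retry wrapper is straightforward to build on top. A minimal sketch, assuming the LLMBridgeError shape shown above; the withRetry helper and its backoff values are illustrative, not part of the Amux API:
import { LLMBridgeError } from '@amux.ai/llm-bridge'

// Illustrative retry wrapper: retries only when the error
// marks itself as retryable, with exponential backoff.
async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 3): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn()
    } catch (error) {
      const retryable = error instanceof LLMBridgeError && error.retryable
      if (!retryable || attempt >= maxAttempts) throw error
      // Backoff: 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** (attempt - 1)))
    }
  }
}

const response = await withRetry(() => bridge.chat(request))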
Putting It Together
Here's how it all works together:
import { createBridge } from '@amux.ai/llm-bridge'
import { openaiAdapter } from '@amux.ai/adapter-openai'
import { anthropicAdapter } from '@amux.ai/adapter-anthropic'
// 1. Create bridge with adapters
const bridge = createBridge({
inbound: openaiAdapter, // Handles OpenAI format
outbound: anthropicAdapter, // Handles Anthropic format
config: {
apiKey: process.env.ANTHROPIC_API_KEY
}
})
// 2. Send request (OpenAI format)
const response = await bridge.chat({
model: 'gpt-4',
messages: [{ role: 'user', content: 'Hello!' }]
})
// 3. Get response (OpenAI format)
console.log(response.choices[0].message.content)
// Behind the scenes:
// - openaiAdapter.inbound converts OpenAI → IR
// - anthropicAdapter.outbound converts IR → Anthropic
// - Claude API is called
// - anthropicAdapter.inbound converts Anthropic → IR
// - openaiAdapter.outbound converts IR → OpenAI
// - You get OpenAI-format response
Key Takeaways
- Adapters handle conversion between provider formats and IR
- IR is the universal format that all providers understand
- Bridge orchestrates the conversion and API calls
- The conversion is bidirectional - you control both input and output formats (see the sketch below)
- Error handling is unified across all providers
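Because both directions are pluggable, reversing the flow is just a matter of swapping the adapters. A minimal sketch using the two adapters from the example above; the OPENAI_API_KEY environment variable is an assumed config value:
// Same pattern, opposite direction: accept Anthropic-format requests,
// call the OpenAI API, and return Anthropic-format responses.
const reverseBridge = createBridge({
  inbound: anthropicAdapter, // Parse Anthropic format
  outbound: openaiAdapter, // Build OpenAI format
  config: {
    apiKey: process.env.OPENAI_API_KEY // assumed env var
  }
})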