Troubleshooting
Common issues and solutions when working with OpenAgents. If you can’t find your issue here, please open a GitHub issue.
Installation Issues
Module not found: @openagentsinc/sdk
Symptoms: Import errors when trying to use the SDK
Solution:
# Clear package manager cache
pnpm store prune
# or
npm cache clean --force
# Reinstall dependencies
rm -rf node_modules pnpm-lock.yaml  # or package-lock.json if you use npm
pnpm install
TypeScript compilation errors
Symptoms: Type errors when importing SDK functions
Solution: Ensure you’re using Node.js 18+ with proper TypeScript configuration:
// tsconfig.json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ESNext",
    "moduleResolution": "node",
    "allowSyntheticDefaultImports": true,
    "esModuleInterop": true,
    "strict": true
  }
}
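If you run the compiled output directly under Node rather than through a bundler, you may also need "type": "module" in package.json so the ESNext output is treated as ESM (whether this applies depends on your setup):
// package.json
{
  "type": "module"
}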
Ollama Connection Issues
"Ollama is not available" errors
Symptoms: SDK functions fail with Ollama connection errors
Common Causes:
- Ollama not installed
- Ollama not running
- Ollama running on wrong port
- Firewall blocking connection
Solutions:
Install Ollama:
# macOS
brew install ollama
# Linux
curl -fsSL https://ollama.com/install.sh | sh
# Windows
# Download from https://ollama.com/download
Start Ollama:
# Start Ollama service
ollama serve
# Pull a model
ollama pull llama3.2
Check Connection:
# Test if Ollama is responding
curl http://localhost:11434/api/tags
# Should return JSON with available models
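The same check can be done from code. A minimal sketch using the built-in fetch (Node 18+), assuming the default port:
// Programmatic health check against the same endpoint
try {
  const res = await fetch("http://localhost:11434/api/tags")
  const data = await res.json()
  console.log("✅ Ollama reachable, models available:", data.models?.length ?? 0)
} catch {
  console.error("❌ Ollama not reachable. Is `ollama serve` running?")
}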
Alternative Port Configuration:
// If Ollama runs on a different port
const customConfig = {
  ollamaUrl: "http://localhost:11435" // Custom port
}
Models not loading
Symptoms: listModels() returns an empty array
Solution:
# List locally installed models
ollama list
# Pull popular models
ollama pull llama3.2
ollama pull mistral
ollama pull codellama
# Verify models loaded
curl http://localhost:11434/api/tags
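To confirm the SDK sees the same models, a quick sketch built on the listModels() call used later in this guide:
import { Inference } from '@openagentsinc/sdk'

const models = await Inference.listModels()
if (models.length === 0) {
  console.error("No models visible to the SDK; verify models are pulled and `ollama serve` is running")
} else {
  console.log("Models:", models.map(m => m.id))
}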
Agent Creation Issues
"Agent creation failed" errors
Symptoms: Agent.create() throws errors
Common Causes:
- Invalid configuration
- Network connectivity issues
- Insufficient system resources
Solutions:
Validate Configuration:
// Minimal valid configuration
const agent = Agent.create({
  name: "Test Agent",    // Required: non-empty string
  capabilities: ["test"] // Optional but recommended
})

// Avoid these common mistakes
const badAgent = Agent.create({
  name: "",              // ❌ Empty name
  pricing: {
    per_request: -100    // ❌ Negative pricing
  }
})
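A pre-flight guard along these lines can catch both mistakes before calling Agent.create(). Note that assertValidConfig is an illustrative helper, not part of the SDK:
// Hypothetical guard reflecting the rules above: non-empty name, non-negative pricing
function assertValidConfig(config) {
  if (!config.name || config.name.trim() === "") {
    throw new Error("Agent name must be a non-empty string")
  }
  if (config.pricing && config.pricing.per_request < 0) {
    throw new Error("per_request pricing must be non-negative")
  }
}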
Debug Agent Creation:
try {
  const agent = Agent.create({ name: "Debug Agent" })
  console.log("✅ Agent created successfully")
} catch (error) {
  console.error("❌ Agent creation failed:", error.message)

  if (error instanceof ValidationError) {
    console.error("Configuration error:", error.details)
  }
}
Mnemonic generation fails
Symptoms: generateMnemonic() or createFromMnemonic() errors
Solution:
// Test mnemonic generation
try {
  const mnemonic = await Agent.generateMnemonic()
  console.log("✅ Mnemonic generated:", mnemonic)

  const agent = await Agent.createFromMnemonic(mnemonic)
  console.log("✅ Agent created from mnemonic")
} catch (error) {
  console.error("❌ Mnemonic error:", error.message)

  // Check system entropy
  if (error.message.includes("entropy")) {
    console.log("Try running in an environment with better randomness")
  }
}
Package-Specific Issues
Nostr Integration Not Working
Symptoms: Agent Nostr keys are placeholder values
Cause: The Nostr package exists but isn’t integrated with the SDK yet.
Current State:
// Current: Returns placeholder keys
const agent = Agent.create({ name: "Test" })
console.log(agent.nostrKeys.public) // "npub..."

// Future: Will use actual NIP-06 derivation
Workaround: Use the @openagentsinc/nostr package directly for Nostr functionality.
AI Provider Selection
Symptoms: Can’t use providers other than Ollama
Cause: The AI package supports multiple providers, but the SDK currently only uses Ollama.
Solution: For now, use Ollama for all inference:
// Ensure Ollama is running
const status = await checkOllama()
if (!status.online) {
  console.error("Ollama required for inference")
}
Alternative: Use the @openagentsinc/ai package directly for Claude integration.
Inference Issues
Model not responding
Symptoms: Inference.infer() hangs or times out
Common Causes:
- Model not loaded in Ollama
- Request too complex for model
- System resource constraints
Solutions:
Check Model Availability:
// List available models first
const models = await Inference.listModels()
console.log("Available models:", models.map(m => m.id))

// Use available model
const response = await Inference.infer({
  system: "You are helpful",
  messages: [{ role: "user", content: "Hello" }],
  model: models[0]?.id || "llama3.2", // Use first available
  max_tokens: 100
})
Reduce Request Complexity:
// If inference fails, try simpler request
const simpleRequest = {
  system: "Be brief",
  messages: [{ role: "user", content: "Hi" }],
  max_tokens: 50,  // Reduce token limit
  temperature: 0.1 // Reduce randomness
}

const response = await Inference.infer(simpleRequest)
Add Timeout Handling:
// Add timeout to prevent hanging
const timeoutPromise = new Promise((_, reject) => {
  setTimeout(() => reject(new Error("Timeout")), 30000) // 30 second timeout
})

try {
  const response = await Promise.race([
    Inference.infer(request),
    timeoutPromise
  ])
} catch (error) {
  if (error.message === "Timeout") {
    console.error("Inference timed out, try simpler request")
  }
}
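One caveat with this pattern: the timer keeps running even after inference succeeds, and Promise.race does not cancel the underlying request. A small variant that at least cleans up the timer (inferWithTimeout is an illustrative helper, not an SDK function):
// Wraps Inference.infer() with a timeout and clears the timer once either side settles
async function inferWithTimeout(request, ms = 30000) {
  let timer
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error("Timeout")), ms)
  })
  try {
    // Note: losing the race stops waiting but does not abort the request itself
    return await Promise.race([Inference.infer(request), timeout])
  } finally {
    clearTimeout(timer)
  }
}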
Streaming interruptions
Symptoms: inferStream() stops mid-response
Solution:
// Robust streaming with error recovery
async function robustStreaming(request) {
  let retries = 3

  while (retries > 0) {
    try {
      let content = ""

      for await (const chunk of Inference.inferStream(request)) {
        content += chunk.content
        process.stdout.write(chunk.content)

        // Check for completion
        if (chunk.finish_reason === "stop") {
          return content
        }
      }

      // Stream ended without an explicit stop; return what we have
      // (otherwise the loop would retry forever without decrementing retries)
      return content
    } catch (error) {
      console.error(`Streaming error: ${error.message}`)
      retries--

      if (retries > 0) {
        console.log(`Retrying... (${retries} attempts left)`)
        await new Promise(resolve => setTimeout(resolve, 1000))
      }
    }
  }

  throw new Error("Streaming failed after retries")
}
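Usage, with the same request shape Inference.infer() accepts:
const text = await robustStreaming({
  system: "Be brief",
  messages: [{ role: "user", content: "Hello" }],
  max_tokens: 100
})
console.log("\nFull response:", text)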
Performance Issues
Slow response times
Symptoms: Inference takes >30 seconds
Solutions:
Use Smaller Models:
// Fast models for quick responses (Ollama tags use a colon for the size variant)
const fastModels = [
  "llama3.2:1b", // Fastest, basic quality
  "llama3.2:3b", // Good balance
  "qwen2.5:1.5b" // Very fast
]

const response = await Inference.infer({
  system: "Be concise",
  messages: [{ role: "user", content: query }],
  model: "llama3.2:1b", // Use fastest model
  max_tokens: 100       // Limit response length
})
Optimize System Resources:
# Check system resources
top
htop
nvidia-smi # If using GPU
# Reduce Ollama's resource usage
export OLLAMA_MAX_LOADED_MODELS=1
export OLLAMA_NUM_PARALLEL=1
Implement Caching:
// Simple response cache
const responseCache = new Map()

async function cachedInference(request) {
  const key = JSON.stringify(request)

  if (responseCache.has(key)) {
    return responseCache.get(key)
  }

  const response = await Inference.infer(request)
  responseCache.set(key, response)
  return response
}
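Note that this cache grows without bound, so treat it as a starting point. A minimal sketch of capping it with simple first-in-first-out eviction (not true LRU):
// Evict the oldest entry once the cache hits a size limit (Map preserves insertion order)
const MAX_CACHE_ENTRIES = 100

function setCached(key, value) {
  if (responseCache.size >= MAX_CACHE_ENTRIES) {
    const oldestKey = responseCache.keys().next().value
    responseCache.delete(oldestKey)
  }
  responseCache.set(key, value)
}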
High memory usage
Symptoms: System becomes unresponsive, out-of-memory errors
Solutions:
Limit Concurrent Requests:
// Request queue to prevent overload
class RequestQueue {
  constructor(maxConcurrent = 2) {
    this.maxConcurrent = maxConcurrent
    this.running = 0
    this.queue = []
  }

  async add(requestFn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFn, resolve, reject })
      this.process()
    })
  }

  async process() {
    if (this.running >= this.maxConcurrent || this.queue.length === 0) {
      return
    }

    this.running++
    const { requestFn, resolve, reject } = this.queue.shift()

    try {
      const result = await requestFn()
      resolve(result)
    } catch (error) {
      reject(error)
    } finally {
      this.running--
      this.process() // Process next item
    }
  }
}

const queue = new RequestQueue(2) // Max 2 concurrent requests
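Route inference calls through the queue so no more than two run at once:
// Each call waits until a concurrency slot is free
const response = await queue.add(() => Inference.infer({
  system: "Be brief",
  messages: [{ role: "user", content: "Hello" }],
  max_tokens: 50
}))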
Configure Ollama Memory:
# Limit how many models Ollama keeps loaded in memory
export OLLAMA_MAX_LOADED_MODELS=1
export OLLAMA_HOST=0.0.0.0:11434

# Restart Ollama so the settings take effect
ollama serve
Development Issues
Hot reload not working
Symptoms: Changes not reflected during development
Solution:
# Clear build cache
rm -rf .next
rm -rf dist
rm -rf build
# Restart development server
pnpm dev
# If using Bun, ensure hot reload is enabled
bun run --hot src/index.ts
TypeScript errors in development
Symptoms: Type checking fails unexpectedly
Solution:
# Update TypeScript and dependencies
pnpm update typescript @types/node
# Rebuild the project from a clean slate
pnpm clean
pnpm build
# Check TypeScript configuration
npx tsc --showConfig
Getting Help
Enable Debug Logging
// Enable verbose logging
process.env.DEBUG = "openagents:*"
// Or specific modules
process.env.DEBUG = "openagents:inference,openagents:agent"
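These variables need to be set before the SDK module is loaded. If the SDK is imported at the top of your entry file, one option is a dynamic import (this assumes the SDK reads DEBUG at load time, in the style of the common debug package):
// Set DEBUG before the SDK module is evaluated, then import it dynamically
process.env.DEBUG = "openagents:*"
const { Agent, Inference } = await import('@openagentsinc/sdk')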
Collect System Information
# System info for bug reports
echo "Node.js: $(node --version)"
echo "npm: $(npm --version)"
echo "OS: $(uname -a)"
echo "Ollama: $(ollama --version)"
# Package versions
pnpm list @openagentsinc/sdk
Create Minimal Reproduction
// Minimal example for bug reports
import { Agent, Inference } from '@openagentsinc/sdk'

async function reproduce() {
  try {
    // Your minimal failing case here
    const agent = Agent.create({ name: "Bug Report" })
    console.log("Agent created:", agent.id)
  } catch (error) {
    console.error("Error:", error.message)
    console.error("Stack:", error.stack)
  }
}

reproduce()
Community Resources
- GitHub Issues: Report bugs
- Discord: Join our community (coming soon)
- Documentation: Full SDK reference
- Source Code: Browse packages
Can’t find your issue? Open a GitHub issue with details about your problem. 🐛