Install

go get github.com/lioarce01/chainforge

1. Create an agent

Use a provider shorthand to set both provider and model in one call:
package main

import (
    "context"
    "fmt"
    "os"

    chainforge "github.com/lioarce01/chainforge"
    "github.com/lioarce01/chainforge/pkg/tools/calculator"
    "github.com/lioarce01/chainforge/pkg/memory/inmemory"
)

func main() {
    agent, err := chainforge.NewAgent(
        chainforge.WithOpenAI(os.Getenv("OPENAI_API_KEY"), "gpt-4o-mini"),
        chainforge.WithSystemPrompt("You are a helpful assistant."),
        chainforge.WithTools(calculator.New()),
        chainforge.WithMemory(inmemory.New()),
    )
    if err != nil {
        panic(err)
    }

    ctx := context.Background()
    result, err := agent.Run(ctx, "session-1", "What is 2^10 + 144?")
    if err != nil {
        panic(err)
    }

    fmt.Println(result)
}
Provider shortcuts available out of the box:
Shorthand                    Provider
WithAnthropic(key, model)    Anthropic (Claude)
WithOpenAI(key, model)       OpenAI
WithGemini(key, model)       Google Gemini
WithOllama(url, model)       Ollama (local)
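For example, the same agent can target a local Ollama instance by swapping the shorthand (a sketch; the URL and model name are illustrative, not verified defaults):
agent, err := chainforge.NewAgent(
    chainforge.WithOllama("http://localhost:11434", "llama3.1"),
    chainforge.WithSystemPrompt("You are a helpful assistant."),
)
if err != nil {
    panic(err)
}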

2. Run it

OPENAI_API_KEY=sk-... go run main.go

3. Use any OpenAI-compatible provider

chainforge works with OpenRouter, Ollama, or any provider that speaks the OpenAI API. Swap the provider option in main.go:
chainforge.WithOpenAICompatible(
    os.Getenv("API_KEY"),
    os.Getenv("BASE_URL"),
    "openrouter",
    os.Getenv("MODEL"),
)
Then run with the provider's endpoint and model:
# OpenRouter
API_KEY=sk-or-... BASE_URL=https://openrouter.ai/api/v1 MODEL=openai/gpt-4o-mini go run main.go

4. Serve over HTTP

Turn any agent into an HTTP service with one line:
log.Fatal(chainforge.Serve(":8080", agent))
This exposes POST /v1/chat, POST /v1/chat/stream (SSE), and GET /healthz. It blocks until SIGINT/SIGTERM and shuts down gracefully.
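Once the server is up, the endpoints can be exercised with curl. The JSON field names below are assumptions for illustration, not the confirmed request schema:

# Health check
curl http://localhost:8080/healthz

# Chat (field names are assumptions; check the HTTP server docs for the schema)
curl -X POST http://localhost:8080/v1/chat \
  -H 'Content-Type: application/json' \
  -d '{"session_id": "session-1", "message": "Hello"}'

# Streaming chat over SSE
curl -N -X POST http://localhost:8080/v1/chat/stream \
  -H 'Content-Type: application/json' \
  -d '{"session_id": "session-1", "message": "Hello"}'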

5. Add middleware

agent, _ := chainforge.NewAgent(
    chainforge.WithOpenAI(os.Getenv("OPENAI_API_KEY"), "gpt-4o-mini"),
    chainforge.WithLogging(slog.Default()),  // structured logs per LLM call
    chainforge.WithTracing(),                // OTel spans (noop if not configured)
    chainforge.WithRetry(3),                 // exponential backoff on transient errors
    chainforge.WithRunTimeout(30*time.Second), // per-run deadline
)
For rate limiting, Prometheus metrics, or fallback providers, use ProviderBuilder:
import (
    "github.com/prometheus/client_golang/prometheus"
    chainforge "github.com/lioarce01/chainforge"
    "github.com/lioarce01/chainforge/pkg/providers/openai"
    "github.com/lioarce01/chainforge/pkg/providers/anthropic"
)

p := chainforge.NewProviderBuilder(openai.New(os.Getenv("OPENAI_API_KEY"))).
    WithRateLimit(10, 20).                              // 10 rps, burst 20
    WithFallback(anthropic.New(os.Getenv("ANTHROPIC_API_KEY"))).
    WithMetrics(prometheus.DefaultRegisterer).
    Build()

agent, _ := chainforge.NewAgent(chainforge.WithProvider(p), chainforge.WithModel("gpt-4o-mini"))

6. Get token usage

Run discards token counts. Use RunWithUsage when you need them:
result, usage, err := agent.RunWithUsage(ctx, "session-1", "Hello")
fmt.Printf("input=%d output=%d\n", usage.InputTokens, usage.OutputTokens)
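Token counts make it straightforward to track spend. A minimal sketch of turning usage into an estimated dollar cost (the per-million-token rates are hypothetical placeholders, not real prices):

package main

import "fmt"

// estimateCost converts token counts to dollars given per-million-token
// prices. The rates passed in below are placeholders; look up your
// provider's actual pricing.
func estimateCost(inputTokens, outputTokens int, inPerM, outPerM float64) float64 {
	return float64(inputTokens)/1e6*inPerM + float64(outputTokens)/1e6*outPerM
}

func main() {
	// e.g. usage.InputTokens=1200, usage.OutputTokens=300
	// at $0.15 / $0.60 per 1M tokens (illustrative rates)
	cost := estimateCost(1200, 300, 0.15, 0.60)
	fmt.Printf("$%.6f\n", cost) // $0.000360
}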

Next steps

Providers

Anthropic, OpenAI, Ollama, and OpenAI-compatible APIs.

Tools

Built-in tools, custom tools, and MCP servers.

MCP

Connect any MCP server with one line.

Orchestration

Sequential pipelines and parallel fan-out.

Observability

Structured logging and OpenTelemetry tracing.

Deployment

Docker, Kubernetes, Helm, and embedding the HTTP server.