Providers

Octrafic supports cloud and local AI providers. Choose one during the first launch, or switch at any time with octrafic --onboarding.

Cloud

| Provider | Get API key |
|---|---|
| Claude (Anthropic) | console.anthropic.com |
| OpenRouter | openrouter.ai/keys |
| OpenAI | platform.openai.com |
| Google Gemini | aistudio.google.com/apikey |

Local

No API key needed. Just run a model server locally.

| Provider | Default URL | Get started |
|---|---|---|
| Ollama | localhost:11434 | ollama.com |
| llama.cpp | localhost:8080 | github.com/ggml-org/llama.cpp |

Quick start with Ollama:

```bash
ollama pull llama3.1
octrafic --onboarding   # select Ollama
```
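Before onboarding, you can confirm the local server is actually reachable. A small sketch, assuming Ollama's default port from the table above and its standard /api/tags endpoint (which lists pulled models):

```shell
# Probe the default Ollama address; /api/tags returns the locally pulled models.
if curl -sf http://localhost:11434/api/tags >/dev/null 2>&1; then
  OLLAMA_STATUS=up
else
  OLLAMA_STATUS=down
fi
echo "ollama: $OLLAMA_STATUS"
```

If the status is down, start the server (for example with ollama serve) before selecting Ollama in onboarding.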

Custom (OpenAI-compatible)

Any API that follows the OpenAI chat-completions format is supported, which covers providers such as Groq, Mistral, DeepSeek, and Together AI.

Select Custom (OpenAI-compatible) during onboarding, then enter:

  • Base URL — the API root without /v1, e.g. https://api.groq.com/openai
  • API key — optional, leave empty if the endpoint requires no auth

Octrafic will fetch the model list automatically. If that fails (some providers don't expose /v1/models), you can type the model name manually.
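You can run the same probe by hand to check an endpoint before onboarding. A sketch assuming the standard OpenAI-style /v1/models route; the Groq base URL is just the example from above:

```shell
# Build the models endpoint from the base URL entered at onboarding.
BASE_URL="https://api.groq.com/openai"
MODELS_URL="${BASE_URL%/}/v1/models"   # tolerate a trailing slash on the base URL
echo "$MODELS_URL"
# With an API key:
# curl -s -H "Authorization: Bearer $YOUR_API_KEY" "$MODELS_URL"
```

If the request returns JSON with a list of models, Octrafic's automatic fetch should work too; an error here usually means you will need to enter the model name manually.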

Configuration

Settings are stored in ~/.octrafic/config.json. To switch provider interactively, run octrafic --onboarding.
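Re-running onboarding rewrites the settings file, so it can be worth snapshotting it first. A minimal sketch using only the path documented above:

```shell
# Back up ~/.octrafic/config.json before switching providers, if it exists.
CONFIG="$HOME/.octrafic/config.json"
if [ -f "$CONFIG" ]; then
  cp "$CONFIG" "$CONFIG.bak"
  BACKUP_RESULT=saved
else
  BACKUP_RESULT=missing
fi
echo "config backup: $BACKUP_RESULT"
```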

Environment Variables

For headless environments like CI/CD pipelines, you can bypass the interactive setup and config.json by providing environment variables:

| Variable | Description | Example |
|---|---|---|
| OCTRAFIC_PROVIDER | The LLM provider to use | claude, openai, gemini, ollama, custom |
| OCTRAFIC_API_KEY | Provider API key (if required) | sk-ant-api03... |
| OCTRAFIC_BASE_URL | Custom provider endpoint (needed for local & custom) | https://api.groq.com/openai |
| OCTRAFIC_MODEL | Specific model to request | claude-haiku-4.5 |

Example for a Custom (OpenAI-compatible) provider:

```bash
export OCTRAFIC_PROVIDER=custom
export OCTRAFIC_BASE_URL=https://api.groq.com/openai
export OCTRAFIC_API_KEY=gsk_your_groq_key
export OCTRAFIC_MODEL=llama3-70b-8192
```
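In CI it helps to fail fast when the headless configuration is incomplete. A sketch that checks the three variables a custom provider typically needs (OCTRAFIC_API_KEY is left out since some endpoints require no auth):

```shell
# Report which required variables are unset; an empty MISSING means the env is complete.
MISSING=""
for v in OCTRAFIC_PROVIDER OCTRAFIC_BASE_URL OCTRAFIC_MODEL; do
  eval "val=\${$v:-}"          # indirect lookup of the variable named in $v
  if [ -z "$val" ]; then
    MISSING="$MISSING $v"
  fi
done
if [ -n "$MISSING" ]; then
  echo "missing:$MISSING"
else
  echo "env ok"
fi
```

In a pipeline you would typically exit non-zero on the missing branch instead of just printing.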