OpenAI-Compatible API
This provider works with any API that follows OpenAI's chat completions format, including OpenAI itself, local model servers such as Ollama, and many third-party model APIs.
Setup
- Open Settings → AI Providers
- Find OpenAI-compatible API and click Configure
- Enter:
  - API URL: the base URL of the endpoint
  - API Key: your API key (or any string if the server doesn't require one)
  - Model: the model name to use
- Click Save
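Under the hood, these three settings combine into a standard chat completions request: the path `/chat/completions` is appended to the API URL, the key goes in a Bearer header, and the model name goes in the JSON body. A minimal sketch (the values are placeholders, and the request is built but deliberately not sent):

```python
import json
import urllib.request

# Placeholder values for illustration; use whatever you entered in Coeus.
api_url = "https://api.openai.com/v1"
api_key = "sk-..."  # placeholder, not a real key
model = "gpt-4o-mini"

# The client appends the standard chat completions path to the base URL.
endpoint = api_url.rstrip("/") + "/chat/completions"

payload = {
    "model": model,
    "messages": [{"role": "user", "content": "Hello"}],
}

request = urllib.request.Request(
    endpoint,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(request) would actually send it; omitted so the
# sketch runs without network access or a real key.
print(endpoint)  # https://api.openai.com/v1/chat/completions
```

Any server that accepts this request shape will work with the provider.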
Common setups
OpenAI
- API URL: `https://api.openai.com/v1`
- API Key: your OpenAI key from platform.openai.com
- Model: `gpt-4o` or `gpt-4o-mini`
Ollama (local)
Run models locally with no API key and no data leaving your machine.
- Install Ollama and pull a model: `ollama pull llama3.2`
- In Coeus:
  - API URL: `http://localhost:11434/v1`
  - API Key: `ollama` (any string works)
  - Model: `llama3.2`
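With Ollama running, you can sanity-check its OpenAI-compatible endpoint by listing the models you have pulled. A sketch assuming Ollama's default local port and its OpenAI-style `/models` route (the helper is only defined here, not called, so nothing requires a live server):

```python
import json
import urllib.request

# Ollama's OpenAI-compatible base URL on the default port.
base = "http://localhost:11434/v1"
models_url = base + "/models"

def list_models(url=models_url):
    """Fetch the names of models Ollama currently has pulled.

    Requires Ollama to be running locally; the API key can be any string.
    """
    req = urllib.request.Request(url, headers={"Authorization": "Bearer ollama"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [m["id"] for m in data.get("data", [])]

print(models_url)  # http://localhost:11434/v1/models
```

If `llama3.2` appears in the returned list, the Model field in Coeus is set correctly.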
Other providers
Any provider with an OpenAI-compatible endpoint works the same way: Groq, Together AI, Mistral (with their compatibility layer), etc. Check their docs for the base URL.
Choosing a model
For answering questions about your notes, a mid-size model (GPT-4o-mini, Llama 3.2, Mistral 7B) is fast, cheap, and accurate enough for most use. Reserve a larger model for long, multi-step questions.
Attachments
Image attachments work if the model supports vision (like GPT-4o). Models without vision support will ignore image content.
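When the model does support vision, OpenAI's chat format sends an image as a content part alongside the text in a single user message. A sketch of building such a message (the helper name and the inline base64 data URL are illustrative choices, not part of Coeus):

```python
import base64

def image_message(text, image_bytes, mime="image/png"):
    """Build a chat message pairing text with an inline image,
    using OpenAI's vision content-part format."""
    data_url = f"data:{mime};base64," + base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": data_url}},
        ],
    }

# Truncated bytes stand in for real image data here.
msg = image_message("What is in this image?", b"\x89PNG...")
print(msg["content"][0]["type"], msg["content"][1]["type"])  # text image_url
```

A text-only model receiving this message will simply not see the `image_url` part.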