LLM Providers
Configure AI providers for post-processing
Post-processing uses Large Language Models (LLMs) to format and enhance your transcriptions.
Supported Providers
- OpenAI: GPT-4o, GPT-4o-mini
- Anthropic: Claude 3.5 Sonnet, Claude 3 Haiku
- Google: Gemini Pro
- Mistral: Mistral Large
- OpenRouter: Access to 100+ models
- Cerebras: Fast inference
- Ollama: Local models (Llama, Mistral, etc.)
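Regardless of provider, a post-processing request generally wraps the raw transcript in a chat-style payload with a formatting instruction. The sketch below shows what such a payload could look like; the prompt wording and default model name are illustrative assumptions, not Speakly's actual internals.

```python
# Sketch of a chat-style formatting request, as a post-processing step
# might send to a cloud LLM provider. The system prompt and the default
# model name are illustrative assumptions.

def build_formatting_request(transcript: str, model: str = "gpt-4o-mini") -> dict:
    """Wrap a raw transcript in a chat-completion payload that asks the
    model to clean up punctuation and paragraphing without rewording."""
    return {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": (
                    "Fix punctuation, casing, and paragraph breaks in the "
                    "transcript. Do not change the wording."
                ),
            },
            {"role": "user", "content": transcript},
        ],
    }

request = build_formatting_request("so um this is a raw transcript")
print(request["model"])                   # gpt-4o-mini
print(request["messages"][1]["content"])  # so um this is a raw transcript
```

Swapping providers usually only changes the model name and the endpoint the payload is sent to; the transcript-plus-instruction shape stays the same.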
Using Ollama (Local)
For complete privacy, use Ollama to run LLMs locally:
- Install Ollama from ollama.ai
- Pull a model (e.g., `ollama pull llama3`)
- In Speakly, select Ollama as your LLM provider
- Choose your downloaded model
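The steps above can be sketched in code. Ollama serves a local HTTP API on `http://localhost:11434` by default; the snippet below builds a request body for its `/api/generate` endpoint and sends it. The prompt wording is an illustrative assumption, and the final call only works once Ollama is running with the model pulled.

```python
# Sketch: post-processing a transcript through a local Ollama server.
# Ollama listens on http://localhost:11434 by default; the prompt text
# below is an illustrative assumption.
import json
from urllib import request as urlreq

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_ollama_payload(transcript: str, model: str = "llama3") -> bytes:
    """JSON body for Ollama's /api/generate endpoint. Streaming is
    disabled so the reply arrives as a single JSON object."""
    return json.dumps({
        "model": model,
        "prompt": "Clean up this transcript:\n\n" + transcript,
        "stream": False,
    }).encode("utf-8")

def format_locally(transcript: str, model: str = "llama3") -> str:
    """Send the transcript to the local Ollama server and return the
    model's cleaned-up text."""
    req = urlreq.Request(
        OLLAMA_URL,
        data=build_ollama_payload(transcript, model),
        headers={"Content-Type": "application/json"},
    )
    with urlreq.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running Ollama server with the model already pulled:
# format_locally("so um hello everyone this is uh the meeting")
```

Because everything stays on localhost, the transcript never leaves your machine, which is the privacy advantage of the Ollama option.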