LLM Providers

Configure AI providers for post-processing

Post-processing uses Large Language Models (LLMs) to format and enhance your transcriptions.

Supported Providers

  • OpenAI: GPT-4o, GPT-4o-mini
  • Anthropic: Claude 3.5 Sonnet, Claude 3 Haiku
  • Google: Gemini Pro
  • Mistral: Mistral Large
  • OpenRouter: Access to 100+ models
  • Cerebras: Fast inference
  • Ollama: Local models (Llama, Mistral, etc.)

Using Ollama (Local)

To keep post-processing entirely on your device, use Ollama to run LLMs locally:

  1. Install Ollama from ollama.ai
  2. Pull a model (e.g., ollama pull llama3)
  3. In Speakly, select Ollama as your LLM provider
  4. Choose your downloaded model
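Speakly talks to Ollama for you once you complete the steps above, but you can verify your local setup independently. The sketch below calls Ollama's default local HTTP API (`http://localhost:11434/api/generate`) using only the Python standard library; the model name `llama3` and the prompt are example values, not requirements.

```python
import json
import urllib.request

# Ollama's default local endpoint for single-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str) -> dict:
    # Minimal payload for /api/generate; stream=False asks for
    # one complete JSON response instead of streamed chunks.
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Requires a running Ollama server and a pulled model
    # (e.g. `ollama pull llama3`).
    print(generate("llama3", "Clean up this transcript: um, hello there"))
```

If this script returns text, Speakly's Ollama provider should work with the same model, since both use the same local server.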