Provider Configuration


Provider Selection & Setup

Choose and configure your preferred LLM provider through the Plugin Settings. Each provider has its own API key and model options.

Claude (Anthropic)

  • API Key: Enter your Claude API key
  • Model:
    • Claude3_5_Sonnet (Recommended)
    • Claude3_5_Haiku

OpenAI

  • API Key: Enter your OpenAI API key
  • Model:
    • GPT_o1
    • GPT_o1_Preview
    • GPT_o3_Mini (Recommended)
    • GPT_o1_Mini
    • GPT4o_2024_08_06
    • GPT4o_Mini_2024_07_18

Google Gemini

  • API Key: Enter your Google Gemini API key
  • Model:
    • Gemini 2.0 Flash Thinking Exp 01-21
    • Gemini 2.0 Pro Exp 02-05
    • Gemini 2.0 Flash
    • Gemini 2.0 Flash-Lite-Preview-02-05
    • Gemini 1.5 Pro
    • Gemini 1.5 Flash

DeepSeek

  • API Key: Enter your DeepSeek API key
  • Model:
    • DeepSeek_R1 (Recommended)
    • DeepSeek_V3

Ollama (Local)

  • Endpoint: Default http://localhost:11434/
  • No API Key Required
  • Advanced Settings (see the request sketch after this list):
    • Model name (e.g., codellama:code)
    • Temperature (0.0 - 2.0)
    • Context Window size (see Translation Depth for guidance)
    • Max Output Tokens
    • Keep Alive Duration
    • Top P
    • Top K
    • Min P
    • Repeat Penalty
    • Mirostat Mode
    • Mirostat Eta
    • Mirostat Tau
    • Random Seed
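
For orientation, here is a minimal sketch of how these Advanced Settings map onto a raw request against Ollama's /api/generate endpoint. The plugin sends this request for you; the model name, prompt, and option values below are placeholders only, and availability of individual options (e.g., min_p) depends on your Ollama version.

```typescript
// Sketch only: calling a local Ollama server directly with the settings above.
// Assumes Ollama is running on the default endpoint; all values are examples.
async function generateWithOllama(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "codellama:code", // Model name
      prompt,
      stream: false,           // return one JSON object instead of a stream
      keep_alive: "5m",        // Keep Alive Duration
      options: {
        temperature: 0.7,      // Temperature (0.0 - 2.0)
        num_ctx: 4096,         // Context Window size
        num_predict: 512,      // Max Output Tokens
        top_p: 0.9,            // Top P
        top_k: 40,             // Top K
        min_p: 0.05,           // Min P (newer Ollama versions)
        repeat_penalty: 1.1,   // Repeat Penalty
        mirostat: 0,           // Mirostat Mode (0 = off, 1 or 2 to enable)
        mirostat_eta: 0.1,     // Mirostat Eta
        mirostat_tau: 5.0,     // Mirostat Tau
        seed: 42,              // Random Seed (fixed for reproducible output)
      },
    }),
  });
  const data = await res.json();
  return data.response;
}
```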

Best Practices

  1. API Key Security

    • Never commit API keys to version control
    • Regularly rotate keys for security
    • Configure through Plugin Settings
  2. Model Selection

    • Start with recommended models for your use case
    • Consider cost vs capability tradeoffs
    • Monitor token usage and adjust as needed (see the usage sketch after this list)
  3. Local Processing

    • Ensure sufficient system resources for Ollama
    • Monitor GPU/CPU usage
    • Adjust context window based on translation needs
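
To make the token-usage point concrete, the sketch below queries the OpenAI Chat Completions API directly and logs the usage block returned with every response. This is an illustration, not the plugin's own reporting; the model name and the OPENAI_API_KEY environment variable are assumptions, and keeping the key in the environment also keeps it out of version control.

```typescript
// Illustration only: reading token usage from an OpenAI-style response.
// Assumes OPENAI_API_KEY is set in the environment (never hardcode keys).
async function checkTokenUsage(): Promise<void> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // example model; use the one you configured
      messages: [{ role: "user", content: "Say hello." }],
    }),
  });
  const data = await res.json();
  // usage reports prompt, completion, and total token counts per call:
  // e.g. { prompt_tokens: 10, completion_tokens: 9, total_tokens: 19 }
  console.log(data.usage);
}
```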

More Provider Resources

Not sure which model will work best for your needs?

Want more detail about a specific LLM service provider?
