Quick Start

Connect AI Providers

QCoder supports 11 AI providers out of the box. Add your API key and start chatting in seconds.

Supported Providers

QCoder works with any OpenAI-compatible API, plus native integrations for major providers. Here is the full list:

| Provider | API Key Format | Context Window | Tool Calling |
|---|---|---|---|
| OpenAI | sk-... | 128K tokens | Native |
| Anthropic | sk-ant-... | 200K tokens | Native |
| DeepSeek | sk-... | 64K tokens | Native |
| Google Gemini | AIza... | 1M+ tokens | Native |
| Groq | gsk_... | 64K tokens | Native |
| Ollama | None (local) | 32K tokens | Via prompting |
| OpenRouter | sk-or-... | Varies by model | Varies |
| xAI / Grok | xai-... | Varies | Native |
| Mistral | API key | 128K tokens | Native |
| LM Studio | None (local) | Varies | Via prompting |
| Custom HTTP | Configurable | Configurable | Configurable |

Adding a Provider

  1. Open the chat panel and click the gear icon to open Settings.
  2. Select the API Config tab.
  3. Choose your provider from the Provider dropdown.
  4. Paste your API key into the API Key field.
  5. (Optional) Override the Base URL if you are using a proxy or self-hosted endpoint.
  6. Select a Model from the dropdown. Models are fetched dynamically from the provider when possible.
  7. Click Save or simply close the Settings panel -- changes are saved automatically.
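The steps above amount to filling in a small per-provider configuration record. A minimal sketch in Python of what that record holds (the class and field names here are illustrative assumptions, not QCoder's actual schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProviderConfig:
    """Illustrative provider settings record; field names are assumptions."""
    provider: str                    # e.g. "openai", "anthropic", "ollama"
    api_key: Optional[str] = None    # local providers need no key
    base_url: Optional[str] = None   # override only for proxies/self-hosted endpoints
    model: str = ""                  # chosen from the dynamically fetched list

# A cloud provider config and a keyless local Ollama config side by side:
openai_cfg = ProviderConfig(provider="openai", api_key="sk-...", model="gpt-4o")
ollama_cfg = ProviderConfig(provider="ollama",
                            base_url="http://localhost:11434",
                            model="llama3.1")
```

Because each record is self-contained, configuring multiple providers and switching between them is just a matter of selecting a different record.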

You can configure multiple providers and switch between them at any time using the model selector in the chat header.

Local Providers (Free)

Ollama and LM Studio let you run models entirely on your machine with no API key and no cost.

Ollama setup:

  1. Install Ollama from [ollama.com](https://ollama.com).
  2. Pull a model: `ollama pull llama3.1` or `ollama pull codellama`.
  3. In QCoder Settings > API Config, select Ollama as the provider.
  4. The base URL defaults to `http://localhost:11434`. Change it only if you run Ollama on a different port.
  5. Select your pulled model from the dropdown and start chatting.
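If the model dropdown is empty, you can check what Ollama has pulled by querying its model-listing endpoint (`GET /api/tags`) yourself. A small sketch, assuming the default base URL:

```python
import json
import urllib.request

def tags_url(base_url: str) -> str:
    """Ollama's model-listing endpoint (GET /api/tags)."""
    return base_url.rstrip("/") + "/api/tags"

def list_ollama_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Return the names of locally pulled models, e.g. 'llama3.1:latest'."""
    with urllib.request.urlopen(tags_url(base_url), timeout=5) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]

# list_ollama_models() requires a running Ollama server on the default port.
```

Any model that appears in this list should also appear in QCoder's dropdown.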

LM Studio setup:

  1. Install LM Studio from [lmstudio.ai](https://lmstudio.ai).
  2. Download a model through the LM Studio interface.
  3. Start the local server in LM Studio (it will show you the port).
  4. In QCoder, select LM Studio as the provider and enter the server URL.
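LM Studio's local server speaks the OpenAI chat-completions format, which is why a plain server URL is all QCoder needs. A sketch of the request it sends, assuming the server's default port of 1234 (check the Server tab in LM Studio for the actual URL):

```python
import json

def chat_request(base_url: str, model: str, prompt: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for a POST to /v1/chat/completions."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, body

# Example against a locally served model:
url, body = chat_request("http://localhost:1234", "local-model", "Hello")
```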

OpenRouter (Multi-Provider Proxy)

OpenRouter acts as a single API gateway to hundreds of models from multiple providers. This is useful if you want access to many models with one API key.

  1. Sign up at [openrouter.ai](https://openrouter.ai) and generate an API key.
  2. In QCoder Settings, select OpenRouter as the provider.
  3. Paste your sk-or-... key.
  4. Browse and select from the full model catalog.

OpenRouter supports pay-per-use pricing and routes requests to the cheapest available endpoint for each model.
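Under the hood, OpenRouter is also an OpenAI-compatible endpoint, with the key sent as a bearer token and models addressed by `provider/model-name` IDs. A sketch of the request shape (the model ID here is an example, not a recommendation):

```python
import json

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def openrouter_request(api_key: str, model: str, prompt: str) -> tuple[dict, str]:
    """Headers and JSON body for OpenRouter's OpenAI-compatible endpoint."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,  # OpenRouter model IDs look like "provider/model-name"
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

headers, body = openrouter_request("sk-or-...", "deepseek/deepseek-chat", "Hi")
```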

Rate Limiting and Retries

QCoder includes built-in retry logic for rate-limited or temporarily failed API calls:

| Setting | Value |
|---|---|
| Max retries | 3 |
| Base delay | 2,000 ms |
| Max delay | 30,000 ms |
| Min interval between requests | 1,000 ms |

Retries use exponential backoff. If a provider returns a rate-limit error (HTTP 429), QCoder waits and retries automatically. You do not need to do anything -- just wait for the response to arrive.
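With the values from the table above, exponential backoff doubles the wait on each attempt until it hits the cap. A sketch of that schedule (the exact formula, e.g. whether QCoder adds jitter, is an assumption):

```python
def retry_delay_ms(attempt: int, base_ms: int = 2_000, max_ms: int = 30_000) -> int:
    """Delay before retry `attempt` (0-indexed): base * 2^attempt, capped at max."""
    return min(base_ms * 2 ** attempt, max_ms)

# One delay per allowed retry (max retries = 3): 2s, 4s, 8s
delays = [retry_delay_ms(n) for n in range(3)]
```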

QCoder also includes a temperature auto-retry feature. If a model does not support the temperature parameter (such as OpenAI's o1 and o3 reasoning models), QCoder detects the error and automatically retries the request without temperature.
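The fallback described above can be sketched as a wrapper that strips `temperature` and retries once when the provider rejects it. The error-string check and the `send` callable here are stand-ins for QCoder's actual internals:

```python
def send_with_temperature_fallback(send, params: dict) -> dict:
    """Call send(params); if the provider rejects the temperature parameter,
    retry once without it. Matching on the error message is an assumption
    about how providers report the unsupported parameter."""
    try:
        return send(params)
    except Exception as err:
        if "temperature" in params and "temperature" in str(err).lower():
            retry_params = {k: v for k, v in params.items() if k != "temperature"}
            return send(retry_params)
        raise

# Demo with a stand-in provider that rejects temperature (like OpenAI's o1/o3):
def fake_send(p: dict) -> dict:
    if "temperature" in p:
        raise ValueError("Unsupported value: 'temperature'")
    return {"status": "ok"}

result = send_with_temperature_fallback(fake_send, {"model": "o1", "temperature": 0.2})
```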