Why it matters
- A fully open-source AI coding assistant with a native IDE extension experience comparable to GitHub Copilot.
- Model-agnostic design means you're never locked into one provider — switch between Claude, GPT-4o, and local Llama freely.
- Local model support via Ollama enables completely private, offline AI coding with zero data sent to any external service.
- Apache 2.0 license means teams can fork, modify, and self-host the entire stack without restrictions.
Key capabilities
- Chat sidebar: Ask questions about your code, get explanations, or request changes in a persistent chat interface.
- Inline completions: Tab-complete code suggestions powered by whichever model you've configured.
- Codebase context: @codebase indexing uses embeddings to pull relevant files into the model's context automatically.
- File and docs context: Reference @file, @folder, @docs, @terminal, and @git in prompts for grounded responses.
- Custom slash commands: Define reusable prompts (e.g., /test, /docstring, /refactor) for repeated workflows.
- Multi-model configuration: Configure different models for chat vs. completions (e.g., Claude for chat, StarCoder for completion).
- Local models (Ollama): Low-latency, offline completions with CodeLlama, DeepSeek Coder, Qwen Coder, etc.
- Extensions: Community-contributed model providers and prompt libraries.
Technical notes
- License: Apache 2.0 — fully open source at github.com/continuedev/continue
- IDEs: VS Code (extension in Marketplace), JetBrains (plugin in Marketplace)
- Local model support: Ollama, LM Studio, llama.cpp server, any OpenAI-compatible local server
- Cloud model support: Anthropic, OpenAI, Google, Mistral, AWS Bedrock, Azure OpenAI, Together AI, Groq, Fireworks AI
- Config file: ~/.continue/config.json — JSON-based; hot-reloads without restart
- Continue Enterprise: Commercial tier with team management, shared config, and analytics (contact for pricing)
- Founded: 2023 by Nate Sesti and Ty Dunn; backed by Heavybit and Y Combinator (W23)
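The multi-model setup described above boils down to a few entries in config.json. The sketch below follows Continue's pre-1.0 config.json schema as I understand it; field names may differ across versions, the model identifiers are examples, and the API key is a placeholder:

```json
{
  "models": [
    {
      "title": "Claude for chat",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-latest",
      "apiKey": "YOUR_ANTHROPIC_KEY"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Local completions",
    "provider": "ollama",
    "model": "codellama"
  },
  "customCommands": [
    {
      "name": "test",
      "prompt": "Write unit tests for the selected code: {{{ input }}}",
      "description": "Generate tests for a selection"
    }
  ]
}
```

Because the file hot-reloads, swapping the chat model for a local Ollama one (or a self-hosted OpenAI-compatible endpoint) takes effect without restarting the IDE.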
Ideal for
- Privacy-conscious developers who want AI coding help with zero code sent to third-party cloud services.
- Developers who want to experiment with multiple LLMs (Claude vs. GPT-4o vs. Llama) in their IDE without switching tools.
- Teams self-hosting LLMs (vLLM, Together AI, Bedrock) who need a front-end that integrates with their existing infra.
Not ideal for
- Beginners who want a plug-and-play experience — Continue requires some configuration and model knowledge.
- Users who need AI-native IDE features like Cursor's tab prediction, multi-file agents, or Composer mode.
- Non-technical users: connecting local models via Ollama requires basic command-line comfort.