How to Use OpenCode: The Beginner's Guide to the Open Source AI Coding Agent

March 22, 2026

OpenCode just hit #1 on Hacker News with over 1,000 points. If you haven't heard of it yet, you will soon — it's an open source AI coding agent with 120,000 GitHub stars, 5 million monthly users, and support for every major AI model including Claude, GPT-4o, Gemini, and even local models. This guide will get you up and running in under 30 minutes.

The short version: OpenCode runs in your terminal, your IDE, or as a desktop app. It's privacy-first (your code never leaves your machine), free to start, and dramatically faster for certain workflows than browser-based AI tools. Here's how to use it.

What Is OpenCode?

OpenCode is an AI coding agent — not just an autocomplete or a chatbot, but something that can understand your entire project, write code across multiple files, run commands, and debug errors end-to-end. Think of it as giving Claude or GPT-4o access to your whole codebase, not just a single file or snippet.

What makes it different from Cursor or GitHub Copilot:

  • Open source — you can see exactly what it's doing and self-host if needed
  • Privacy-first — no code or context is stored on OpenCode's servers
  • Any model — Claude, GPT-4o, Gemini, Mistral, local models via Ollama, 75+ providers
  • Multi-session — run multiple agents in parallel on the same project
  • Any editor — terminal, VS Code extension, JetBrains, or desktop app
  • Free GitHub Copilot integration — use your existing Copilot subscription

Step 1: Installation

The easiest way to install OpenCode is via npm. You'll need Node.js 18 or higher — if you don't have it, download it from nodejs.org first.

Open your terminal and run:

npm install -g opencode-ai

Verify it worked:

opencode --version

If you prefer the desktop app, download it from opencode.ai. The VS Code and JetBrains extensions are available in their respective marketplaces — search "OpenCode."

Step 2: Connect Your AI Model

OpenCode doesn't include a built-in model — you connect your own. This is actually an advantage: you use models you're already paying for, and you're not locked in.

The three most popular options:

Option A — Claude (Anthropic)
Get your API key from console.anthropic.com. Then run opencode configure and select Anthropic as your provider. Paste your key. Claude Sonnet is the recommended model for most coding tasks — it handles long context and complex reasoning better than most alternatives.

Option B — GitHub Copilot (free if you already have it)
Run opencode configure and choose "GitHub Copilot." It will open a browser window for OAuth login. If you have an active Copilot subscription, you're done — zero additional cost.

Option C — Local model via Ollama (completely private, no API cost)
Install Ollama from ollama.ai, pull a model (ollama pull codellama), and configure OpenCode to use your local Ollama endpoint. Slower than cloud models, but 100% private and free to run.
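If you'd rather configure the provider by hand than through the interactive prompt, OpenCode also reads a JSON config file. Here's a sketch of what pointing it at a local Ollama endpoint might look like; treat the filename (opencode.json) and the field names as assumptions and check the current docs for the exact schema:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "options": { "baseURL": "http://localhost:11434/v1" },
      "models": { "codellama": {} }
    }
  },
  "model": "ollama/codellama"
}
```

Ollama serves an OpenAI-compatible API on port 11434 by default, which is why the base URL ends in /v1.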

Step 3: Your First Session

Navigate to a project folder in your terminal — or create a new one — and run:

cd my-project
opencode

OpenCode opens an interactive session. It scans your project files and loads relevant context automatically. You don't need to paste your code — it already sees it.

Start with a natural language description of what you want to build or fix. For example:

> I want to add user authentication to this Next.js app. 
  Use NextAuth.js with Google OAuth. 
  Show me what files need to change first.

OpenCode will analyze your project structure, identify the relevant files, and walk you through the changes step by step — or make them directly if you ask it to.

Step 4: The Multi-Session Workflow

One of OpenCode's most powerful features is running multiple agent sessions in parallel. Imagine having three Claude instances working on your project simultaneously — one handling the backend API, one writing tests, one updating documentation.

To start a named session:

opencode --session backend-api
opencode --session frontend-ui  
opencode --session tests
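If you want all three sessions up at once, the commands above are easy to script. Here's a minimal sketch that builds one tmux launch command per session, printed as a dry run rather than executed (it assumes tmux is installed and uses the --session flag exactly as shown above):

```shell
#!/bin/sh
# Build the launch command for one named agent session (dry run: echoed,
# not executed, since this assumes tmux and opencode are installed).
session_cmd() {
  echo "tmux new-session -d -s $1 'opencode --session $1'"
}

for name in backend-api frontend-ui tests; do
  session_cmd "$name"
done
```

Drop the echo (run the command instead of printing it) once you've confirmed the output looks right, and each agent gets its own detached tmux session you can attach to independently.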

Each session maintains its own context and conversation history. You can share a session link with a teammate or reference it later for debugging — every session is logged and shareable.

Step 5: LSP Integration (The Hidden Power Feature)

OpenCode automatically loads Language Server Protocol (LSP) support for your project's language. This means the AI gets real-time type information, error signals, and code intelligence — the same data your editor uses for autocomplete and error highlighting.

In practice, this means OpenCode catches type errors, undefined variables, and import issues before suggesting code — not after. For TypeScript, Rust, or Python projects, this is a significant upgrade over tools that only see raw text.

You don't need to configure anything. It detects your language automatically and loads the right LSP.

Choosing the Right Model for the Task

Not all tasks need the most powerful model. Here's a practical breakdown:

  • Architecture and planning: Claude Sonnet or GPT-4o — complex reasoning, long context
  • Writing boilerplate and CRUD: Claude Haiku or GPT-4o-mini — fast and cheap
  • Debugging: Claude Sonnet — better at tracing errors across files
  • Test generation: Any mid-tier model — pattern matching, not reasoning
  • Privacy-sensitive work: Local model via Ollama — nothing leaves your machine

OpenCode lets you switch models mid-session. Use a cheap model for routine tasks, switch to Sonnet when you hit something complex. This keeps API costs low without sacrificing quality where it matters.
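That routing logic is simple enough to live in a tiny wrapper script. Here's a sketch that mirrors the breakdown above; the model names are placeholders, so substitute whatever identifiers your provider actually uses:

```shell
#!/bin/sh
# Map a task type to a model tier, following the breakdown above.
# Model names are placeholders -- swap in your provider's identifiers.
pick_model() {
  case "$1" in
    architecture|planning|debugging) echo "claude-sonnet" ;;
    boilerplate|crud|tests)          echo "claude-haiku" ;;
    private)                         echo "ollama/codellama" ;;
    *)                               echo "claude-sonnet" ;;  # safe default
  esac
}

pick_model tests        # -> claude-haiku
pick_model debugging    # -> claude-sonnet
```

The function is just the decision table; wire its output into however your setup selects a model (the config file or an in-session switch).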

OpenCode vs Cursor: Which Should You Use?

This is the question everyone asks. The honest answer: they solve slightly different problems.

Use OpenCode when: you want a terminal-first workflow, you care about privacy, you're working in environments where you can't install a full IDE, you want to run parallel agents, or you prefer open source tooling.

Use Cursor when: you want a full IDE experience with inline suggestions, you're doing a lot of UI work where visual context matters, or you prefer a polished GUI over a terminal interface.

Many developers use both. OpenCode for architecture, refactoring, and complex tasks in the terminal — Cursor for the editing experience day-to-day. They're not mutually exclusive.

Common Mistakes to Avoid

  • Starting without context: Tell OpenCode what you're building before asking it to do anything. A 3-sentence project brief saves 30 minutes of back-and-forth.
  • Accepting code without reading it: OpenCode is fast. That doesn't mean it's always right. Read every change before applying it.
  • One giant request: Break complex features into steps. "Add auth, then add a dashboard, then add billing" in one prompt produces a mess. One thing at a time.
  • Ignoring the session share feature: If something breaks, share the session link — either with a teammate or to reference later. It's a log of exactly what changed.
  • Using the wrong model for the budget: Don't run GPT-4o on test generation when Haiku does it at a tenth of the cost.

What to Build First

The best way to learn OpenCode is to use it on something real. Three project types where it shines immediately:

1. Refactoring an existing codebase — Open a project you already have, describe what's messy, and ask OpenCode to clean it up. You'll immediately see how it handles multi-file context.

2. Writing tests for existing code — Point it at a file, ask for comprehensive tests. It reads your implementation, understands the edge cases, and writes tests that actually match your code.

3. Building a new feature in an existing app — Tell it what you want, which files are relevant, and what the acceptance criteria are. Let it plan the implementation before writing a line.

If you want a deeper system for using AI coding agents effectively — not just OpenCode but the full workflow for building real projects with AI — the Vibe Coding Blueprint covers the complete playbook: project setup, context management, debugging loops, and how to ship without getting stuck.


OpenCode is moving fast — 120,000 GitHub stars and 5 million monthly users don't happen by accident. The developers who figure out how to use tools like this effectively in 2026 will have a significant advantage over those who don't. Start with a real project. Learn by building.