The first free offline/local LLM editor ever created; built around open-source models; not a VS Code fork.
The first ever offline LLM code editor.
Yee is the first free offline LLM code editor ever created: built for engineers who want Cursor-class and Windsurf-class AI workflows without sending private conversations, client code, or product IP through somebody else’s hosted LLM stack. It runs entirely on open-source local/offline LLMs, it is not a VS Code fork, and because there is no default phone-home model loop it is designed to fit HIPAA, SOC 2, and internal compliance workflows.
Cursor and Windsurf charge. Yee is free.
Cursor and Windsurf each currently list individual paid plans from $20 to $200/mo, Teams at $40/user/mo, and custom Enterprise pricing. Yee is built to run local/offline open-source LLMs for $0/mo.
Cursor’s current individual paid tiers: Pro, Pro+, and Ultra. Teams is $40/user/mo; Enterprise is custom.
Windsurf’s current individual paid tiers: Pro and Max. Teams is $40/user/mo; Enterprise is custom.
What Yee is built for.
A coding environment where the assistant is powerful because it is close to your repo, not because your repo has to leave.
Local by default
No model relay required
Yee is designed so prompts, files, repo context, and agent conversations stay on your hardware when you use local models. No remote LLM vendor needs to see your IP for the editor to work.
Open-source LLM stack
100% local/offline models
Use open-source LLMs you can run, inspect, swap, and pin. The model layer is not a black box subscription meter wrapped around your engineering brain.
Not a VS Code fork
Built as Yee
Yee is not another VS Code shell with AI glued to the side. The editor experience is built around local agents, offline context, and ownership from the start.
Offline agent loops
Read · plan · edit · verify
Run coding loops that can inspect your workspace, propose edits, and verify changes without routing your codebase through someone else’s hosted model workflow.
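The read · plan · edit · verify loop above can be sketched in a few lines. This is a minimal illustration, not Yee’s actual agent API: the `local_model` function is a stub standing in for an on-device LLM call, and the file names and edit format are invented for the demo. The point is structural: every step touches only the local filesystem.

```python
# Sketch of an offline read-plan-edit-verify agent loop.
# All names here are illustrative; nothing leaves the machine.
from pathlib import Path

def local_model(prompt: str) -> tuple[str, str]:
    """Stub for an on-device LLM call; a real editor would run local
    inference here. Returns a (find, replace) edit for demonstration."""
    return ("TODO", "done")

def read_workspace(root: Path) -> str:
    """Read: gather repo context without the repo leaving the machine."""
    return "\n".join(p.read_text() for p in sorted(root.glob("*.txt")))

def verify(root: Path) -> bool:
    """Verify: run a local check (here: no TODO markers remain)."""
    return all("TODO" not in p.read_text() for p in root.glob("*.txt"))

def agent_loop(root: Path) -> bool:
    context = read_workspace(root)                      # read
    find, replace = local_model(f"Fix:\n{context}")     # plan
    for p in root.glob("*.txt"):                        # edit
        p.write_text(p.read_text().replace(find, replace))
    return verify(root)                                 # verify

if __name__ == "__main__":
    import tempfile
    with tempfile.TemporaryDirectory() as d:
        root = Path(d)
        (root / "notes.txt").write_text("TODO: ship it")
        print(agent_loop(root))  # True: edit applied and verified locally
```

Swapping the stub for a real local inference call (llama.cpp, Ollama, or any on-device runtime) changes the plan step, not the trust boundary.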
IP stays yours
Your repo, your machine
Yee exists for the work you do not want in third-party LLM conversations: client code, internal algorithms, prototypes, security logic, and hard-won product ideas.
Compliance-capable
HIPAA · SOC 2 · internal controls
No phone-home model loop means teams can develop against sensitive code and data without accidentally routing protected information through external AI conversations.
Free beats seat tax
$0/mo
Cursor and Windsurf both have free tiers, then paid plans for serious usage. Yee’s pitch is sharper: local open-source LLMs, no monthly editor tax, and no upsell just to keep coding.
Where developers reach for Yee first.
The more valuable the code, the less comfortable it feels to discuss it through someone else’s model pipeline.
Client code that cannot leak
Work on private client repos, regulated logic, HIPAA-sensitive workflows, SOC 2-controlled systems, and non-public product ideas without needing to ship the conversation to a remote model provider.
Founder IP and prototypes
Use local models for the early ideas that matter most: the weird edge case, the new algorithm, the architecture sketch, the thing nobody else should see yet.
Offline engineering sessions
Code on planes, in locked-down environments, or anywhere the network is hostile, slow, expensive, or simply not part of the trust boundary.
Transparent model control
Pick open-source models, pin versions, swap them when they improve, and understand what is running instead of treating the assistant as a mystery endpoint.
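One common way to "pin" open-weights models is to record a checksum of the weights file and refuse to load anything that does not match. The sketch below assumes a local model file and a lockfile-style digest; it is an illustration of the technique, not Yee’s actual pinning mechanism.

```python
# Sketch: pin a local model file by SHA-256 so silent swaps are detected.
# The .gguf file name and pin format are illustrative, not Yee's config.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the weights file in 1 MiB chunks (large files stay cheap)."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def check_pin(model_path: Path, pinned_digest: str) -> bool:
    """Load gate: True only if on-disk weights match the pinned digest."""
    return sha256_of(model_path) == pinned_digest

if __name__ == "__main__":
    import tempfile
    with tempfile.NamedTemporaryFile(suffix=".gguf", delete=False) as f:
        f.write(b"fake model weights")
        path = Path(f.name)
    pin = sha256_of(path)        # record this alongside the model name
    print(check_pin(path, pin))  # True: weights match the pin
```

The same digest doubles as an upgrade signal: when a better checkpoint ships, update the pin deliberately instead of discovering the swap mid-session.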
Stop renting privacy from your editor.
Yee is for developers who want the AI coding loop without the corporate model relay, subscription ladder, or VS Code fork baggage, so protected work can stay inside the compliance boundary.