OpenJarvis is a research framework from Stanford’s Scaling Intelligence Lab for building personal AI agents that run locally by default. Unlike cloud-dependent agent frameworks, OpenJarvis only calls external APIs when truly necessary — keeping your data private and your workflows fast. The project is available at github.com/open-jarvis/OpenJarvis.
Seven Built-in Agent Types
OpenJarvis ships with seven agent types, ranging from simple chat to orchestrated multi-step workflows. The Orchestrator breaks complex tasks into subtasks, while the Operative serves as a lightweight executor for recurring personal workflows. This composable approach lets you build agents that match your exact needs instead of relying on a single general-purpose model.
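To make the Orchestrator/Operative split concrete, here is a minimal sketch of the pattern in plain Python. All of the names and signatures below (`Orchestrator`, `Operative`, `plan`, `execute`) are illustrative assumptions, not the actual OpenJarvis API:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of the pattern: the names below do not come
# from the OpenJarvis codebase.

@dataclass
class Operative:
    """Lightweight executor for one recurring workflow step."""
    name: str
    run: Callable[[str], str]

class Orchestrator:
    """Breaks a complex task into subtasks and routes each to an Operative."""
    def __init__(self, operatives: dict[str, Operative]):
        self.operatives = operatives

    def plan(self, task: str) -> list[tuple[str, str]]:
        # A real planner would call a model; this stub hard-codes a split.
        return [("search", task), ("summarize", task)]

    def execute(self, task: str) -> list[str]:
        return [self.operatives[step].run(payload)
                for step, payload in self.plan(task)]

ops = {
    "search": Operative("search", lambda q: f"results for {q!r}"),
    "summarize": Operative("summarize", lambda q: f"summary of {q!r}"),
}
print(Orchestrator(ops).execute("local agent papers"))
```

The design choice the pattern illustrates: planning and execution live in separate objects, so you can swap in specialized Operatives per workflow without touching the Orchestrator.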
Tools, Memory & MCP Integration
Built-in tools include web search, calculator, file I/O, code interpreter, and retrieval. Crucially, OpenJarvis supports MCP (Model Context Protocol) for standardized tool use — meaning it can connect to any MCP server in the ecosystem. It also supports Google A2A for agent-to-agent communication and semantic indexing for local retrieval over documents, notes, and papers.
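MCP tool invocations are JSON-RPC 2.0 messages, so any MCP-capable client can drive any MCP server. The sketch below builds a `tools/call` request per the MCP specification; how OpenJarvis wires this internally is an assumption, and the `web_search` tool name is hypothetical:

```python
import json

# Builds the JSON-RPC 2.0 request an MCP client sends to invoke a tool.
# Method and params shape follow the Model Context Protocol spec;
# the tool name used below is a made-up example.
def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = mcp_tool_call(1, "web_search", {"query": "local LLM agents"})
print(msg)
```

Because the wire format is standardized, the same message works against any server in the MCP ecosystem, which is what makes the "connect to any MCP server" claim possible.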
On-Device Learning
What sets OpenJarvis apart is its Learning primitive. The framework uses local interaction traces to synthesize training data, refine agent behavior, and improve model selection over time. Stanford researchers found that local language models already handle 88.7% of single-turn chat and reasoning queries, with intelligence efficiency improving 5.3x from 2023 to 2025. This closed-loop improvement path makes the agent genuinely better the more you use it.
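One way to picture the closed loop is: log interaction traces locally, then keep the accepted ones as supervised training pairs. This is a simplified sketch of that idea, not the OpenJarvis implementation; the `Trace` type and the accept/reject signal are assumptions:

```python
import json
from dataclasses import dataclass

# Illustrative closed loop, not the framework's actual code.

@dataclass
class Trace:
    prompt: str
    response: str
    accepted: bool  # e.g. the user kept the answer instead of retrying

def synthesize_training_data(traces: list[Trace]) -> list[dict]:
    # Keep only accepted traces and format them as input/target pairs
    # suitable for supervised fine-tuning of a local model.
    return [{"input": t.prompt, "target": t.response}
            for t in traces if t.accepted]

traces = [
    Trace("sum 2+2", "4", True),
    Trace("weather tomorrow?", "unknown", False),
]
pairs = synthesize_training_data(traces)
print(json.dumps(pairs))
```

Everything in the loop stays on-device: the traces never leave the machine, which is what lets local learning coexist with the privacy story above.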
Getting Started
The full framework, documentation, and research papers are available at github.com/open-jarvis/OpenJarvis. For a deeper dive into the architecture, see the project page or the Stanford blog post.
