Code Intelligence Engine
31-language structural intelligence for AI-native IDEs. Tree-sitter parsing, knowledge graph, hybrid search.
Core Engine
A structural search engine — not an LLM wrapper. Fast, deterministic, queryable.
Tree-sitter extracts every symbol, reference, and signature. Classes, functions, interfaces, enums — with full qualified names and scope paths.
SQLite database with symbols, edges, and concepts. Call hierarchy, blast radius, cross-file references — all pre-computed and queryable.
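The pre-computed edge model can be pictured with a small SQLite sketch. The schema below is purely illustrative (table and column names are assumptions, not BearWisdom's actual layout); it shows how a recursive query over a call-edge table yields a blast radius without re-parsing any source.

```shell
# Hypothetical schema — BearWisdom's real tables may differ.
db=$(mktemp)
sqlite3 "$db" <<'SQL'
CREATE TABLE symbols (id INTEGER PRIMARY KEY, name TEXT, file TEXT);
CREATE TABLE edges (caller INTEGER, callee INTEGER);
INSERT INTO symbols VALUES (1,'main','main.rs'),(2,'parse','parse.rs'),(3,'index_file','indexer.rs');
INSERT INTO edges VALUES (1,2),(2,3);
SQL
# Blast radius of symbol 3: every transitive caller, via a recursive CTE.
impacted=$(sqlite3 "$db" "
WITH RECURSIVE impacted(id) AS (
  SELECT 3
  UNION
  SELECT e.caller FROM edges e JOIN impacted i ON e.callee = i.id
)
SELECT name FROM symbols WHERE id IN (SELECT id FROM impacted);")
echo "$impacted"
rm -f "$db"
```

Because edges are stored rather than recomputed, the same query pattern answers call-hierarchy and cross-file-reference questions in a single round trip.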
Six search modes: FTS5 symbol search, fuzzy matching, trigram content search, regex grep, hybrid vector + text, and CodeRankEmbed semantic search.
CLI with 25 JSON commands. MCP server for LLM agents. Web explorer with D3 force graph. Claude Code agent for conversational analysis.
Integrations
BearWisdom isn't an LLM optimization layer. It's a structural search engine that serves both the editor and the AI.
Go-to-definition, find-references, symbol search, file explorer. The same APIs power IDE features and agent queries.
MCP server feeds structural context to any LLM. Blast radius before refactoring. Call chains before debugging. Concepts for onboarding.
Web explorer for visual architecture review. Force-directed knowledge graph. Concept sidebar. Symbol detail with code preview.
Knowledge Graph
Symbol Search
Concept Filtering
Real screenshots from the BearWisdom web explorer running on Microsoft's eShop reference architecture.
Command Line
Every query returns structured JSON. Pipe into jq, scripts, or your editor.
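For instance, a find-references result can be narrowed with jq in one pipe. The payload below is illustrative only (the field names are assumptions, not BearWisdom's actual output schema):

```shell
# Illustrative JSON shape — not the tool's real schema.
refs='{"symbol":"OrderService.Create","references":[
  {"file":"api/Orders.cs","line":42},
  {"file":"tests/OrdersTests.cs","line":7}]}'
# Keep only the files that reference the symbol.
files=$(echo "$refs" | jq -r '.references[].file')
echo "$files"
```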
Performance
BearWisdom is optimized for structural intelligence, not token minimization. This benchmark measures how well an LLM answers code analysis questions with and without BearWisdom's pre-indexed tools, run against Microsoft's eShop reference architecture (675 files, 3,451 symbols).
| Metric | With BearWisdom | Native Only |
|---|---|---|
| Avg Recall | 71.8% | 79.1% |
| Avg Tool Calls | 7.1 | 7.3 |
| Total Tokens | 51,495 | 61,250 |
| Token Savings | 16% lower | baseline |

| Category | BW Recall | Native Recall | BW Calls | Native Calls | Insight |
|---|---|---|---|---|---|
| Impact Analysis | 56.7% | 56.4% | 10.3 | 17.3 | BW uses 40% fewer tool calls for same recall |
| Call Hierarchy | 66.7% | 100% | 1.7 | 2.0 | Native grep matches more names without semantic context |
| Cross-File Refs | 82.0% | 82.0% | 15.7 | 9.7 | Parity on recall |
| Concept Discovery | 57.1% | 50.0% | 5.0 | 7.0 | BW finds more concepts with fewer calls |
| Symbol Search | 100% | 100% | 3.3 | 2.0 | Both perfect |
| Architecture | 32.7% | 42.9% | 2.0 | 2.0 | Overview queries favor broad grep |
Where BearWisdom shines: Impact Analysis uses 40% fewer tool calls for equal recall. Concept Discovery finds more with fewer calls. Total token cost is 16% lower. Where native tools win: broad text search across many files (Architecture, Call Hierarchy).
Setup
Rust is the only prerequisite. No runtime, no daemon, no config file.
Compile the CLI from source with a single cargo command.
Point BearWisdom at any directory. Index builds in seconds.
Use the CLI, launch the web explorer, or register the MCP server with your IDE.
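Sketched as a shell transcript, the three steps look something like this. Every command name below is hypothetical — check the project's own README for the real binary and subcommand names:

```shell
# Hypothetical command names — verify against the project's docs.
cargo install --path crates/cli   # 1. compile the CLI from source
bw index ./my-project             # 2. point it at a directory; the index builds in seconds
bw serve --web                    # 3. launch the web explorer
bw mcp --stdio                    # 4. or expose the MCP server to your IDE
```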
Internals
A thin set of consumer crates over a single intelligence library.