Code Intelligence Engine

Your codebase,
understood.

31-language structural intelligence for AI-native IDEs. Tree-sitter parsing, knowledge graph, hybrid search.

$ cargo build --release

What BearWisdom Does

A structural search engine — not an LLM wrapper. Fast, deterministic, queryable.

Parse 31 Languages

Tree-sitter extracts every symbol, reference, and signature. Classes, functions, interfaces, enums — with full qualified names and scope paths.

Knowledge Graph

SQLite database with symbols, edges, and concepts. Call hierarchy, blast radius, cross-file references — all pre-computed and queryable.
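Queries like blast radius reduce to a transitive walk over the edges table. The schema and data below are invented for illustration — a minimal sketch with a recursive CTE, not BearWisdom's actual (richer) tables:

```python
import sqlite3

# Hypothetical, simplified schema -- the real BearWisdom tables are richer.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE symbols (id INTEGER PRIMARY KEY, name TEXT, kind TEXT, file TEXT);
CREATE TABLE edges   (src INTEGER, dst INTEGER, kind TEXT);  -- e.g. 'calls', 'references'
""")
db.executemany("INSERT INTO symbols VALUES (?,?,?,?)", [
    (1, "Entity", "class", "entity.rs"),
    (2, "OrderService", "class", "order.rs"),
    (3, "OrderController", "class", "api.rs"),
])
db.executemany("INSERT INTO edges VALUES (?,?,?)", [
    (2, 1, "references"),   # OrderService -> Entity
    (3, 2, "calls"),        # OrderController -> OrderService
])

# Blast radius: everything that transitively depends on a symbol,
# walked with a recursive CTE up to a fixed depth.
rows = db.execute("""
WITH RECURSIVE impact(id, depth) AS (
    SELECT id, 0 FROM symbols WHERE name = ?
    UNION
    SELECT e.src, i.depth + 1
    FROM edges e JOIN impact i ON e.dst = i.id
    WHERE i.depth < 3
)
SELECT DISTINCT s.name FROM impact JOIN symbols s USING (id) WHERE depth > 0
""", ("Entity",)).fetchall()
print(sorted(r[0] for r in rows))  # -> ['OrderController', 'OrderService']
```

Because the edges are pre-computed at index time, a query like this stays fast even at depth 3 across thousands of symbols.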

6 Search Modes

FTS5 symbols, fuzzy matching, content trigram, regex grep, hybrid vector + text, and CodeRankEmbed semantic search.
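As a sketch of how a hybrid mode can merge a text ranking and a vector ranking, here is Reciprocal Rank Fusion (RRF) in a few lines. The symbol names are made up, and k=60 is the constant from the original RRF paper, not necessarily BearWisdom's setting:

```python
# Minimal Reciprocal Rank Fusion (RRF) sketch: each result earns
# 1 / (k + rank) from every list it appears in, then lists are merged.
def rrf(rankings, k=60):
    scores = {}
    for ranked in rankings:
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists for the query "payment retry logic".
fts    = ["PaymentService.retry", "RetryPolicy", "OrderService.submit"]
vector = ["PaymentService.retry", "BackoffConfig", "RetryPolicy"]
print(rrf([fts, vector]))  # a hit ranked high by both lists wins
```

A result that both the FTS5 and the vector index rank highly floats to the top, while results unique to one list still survive further down.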

4 Interfaces

CLI with 25 JSON commands. MCP server for LLM agents. Web explorer with D3 force graph. Claude Code agent for conversational analysis.

Built for AI-Native IDEs

BearWisdom isn't an LLM optimization layer. It's a structural search engine that serves both the editor and the AI.

The Editor

Go-to-definition, find-references, symbol search, file explorer. The same APIs power IDE features and agent queries.

The AI

MCP server feeds structural context to any LLM. Blast radius before refactoring. Call chains before debugging. Concepts for onboarding.

The Developer

Web explorer for visual architecture review. Force-directed knowledge graph. Concept sidebar. Symbol detail with code preview.

Explore Your Architecture

BearWisdom Explorer — force-directed knowledge graph with concept sidebar

Symbol Search · symbol search results
Concept Filtering · concept-filtered graph view

Real screenshots from the BearWisdom web explorer running on Microsoft's eShop reference architecture.

25 Commands, One Tool

Every query returns structured JSON. Pipe into jq, scripts, or your editor.
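For example, a script can consume that JSON directly. The output shape below is hypothetical — check the real command's output for the exact field names:

```python
import json

# Hypothetical JSON for `bw symbol OrderService --refs` (illustrative only;
# real field names may differ).
out = '{"name": "OrderService", "kind": "class", "refs": 18, "methods": 4}'
sym = json.loads(out)
print(f'{sym["name"]}: {sym["refs"]} refs, {sym["methods"]} methods')
```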

Index & Query
$ bw open ./eShop
675 files, 3451 symbols, 2909 edges (2.8s)
$ bw blast-radius Entity --depth 3
62 symbols affected across 24 files
$ bw calls-in PlaceOrder
8 callers: OrderController, OrderService, ...
$ bw architecture
9 languages, 12 hotspots, 4 entry points
Search & Navigate
$ bw hybrid "payment retry logic"
3 results (FTS5 + vector RRF)
$ bw fuzzy-symbols "CatServ"
CatalogService (0.92), CatalogServiceTest (0.87)
$ bw grep "throw.*Exception" --lang rust
14 matches across 7 files
$ bw symbol OrderService --refs
Kind: class · 18 refs · 4 methods
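Fuzzy scores like `CatalogService (0.92)` typically come from n-gram overlap. A minimal trigram-Jaccard sketch, assuming a made-up formula rather than BearWisdom's actual scoring:

```python
# Trigram Jaccard similarity -- one common basis for fuzzy symbol scores.
def trigrams(s):
    s = s.lower()
    return {s[i:i + 3] for i in range(len(s) - 2)}

def similarity(query, candidate):
    q, c = trigrams(query), trigrams(candidate)
    return len(q & c) / len(q | c) if q | c else 0.0

# For the query "CatServ", CatalogService outranks OrderService.
print(similarity("CatServ", "CatalogService"))
print(similarity("CatServ", "OrderService"))
```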

Benchmarks

BearWisdom is optimized for structural intelligence, not token minimization. This benchmark measures how well an LLM answers code analysis questions with and without BearWisdom's pre-indexed tools, run against Microsoft's eShop reference architecture (675 files, 3,451 symbols).

Metric           With BearWisdom   Native Only
Avg Recall       71.8%             79.1%
Avg Tool Calls   7.1               7.3
Total Tokens     51,495            61,250
Token Savings    16% lower         baseline

Category            BW Recall   Native Recall   BW Calls   Native Calls   Insight
Impact Analysis     56.7%       56.4%           10.3       17.3           BW uses 40% fewer tool calls for the same recall
Call Hierarchy      66.7%       100%            1.7        2.0            Native grep matches more names without semantic context
Cross-File Refs     82.0%       82.0%           15.7       9.7            Parity on recall
Concept Discovery   57.1%       50.0%           5.0        7.0            BW finds more concepts with fewer calls
Symbol Search       100%        100%            3.3        2.0            Both perfect
Architecture        32.7%       42.9%           2.0        2.0            Overview queries favor broad grep
Key Insight

Where BearWisdom shines: Impact Analysis uses 40% fewer tool calls for equal recall, Concept Discovery finds more with fewer calls, and total token cost is 16% lower.
Where native tools win: broad text search across many files (Architecture, Call Hierarchy).

Run your own: bw-bench full --project /path/to/project --output results/

Roadmap

v0.1 — the current release — ships with 31-language parsing, knowledge graph, 6 search modes, CLI, MCP, web explorer, and benchmark suite.
v0.2 · Live Intelligence
Watch mode: OS file watcher feeding the existing incremental indexer — auto-reindex on save.
Incremental embedding: Only embed changed chunks instead of a full re-embed.
Smart context selection: Given a task description, return the most relevant graph subset for LLM context.
SCIP import: Compiler-accurate index from rust-analyzer and tsserver — fills type-inference gaps.

v0.3 · Depth
RAG layer: Retrieve chunks, then synthesize natural-language answers via an LLM.
Workspace / monorepo: Index multiple roots with cross-project references.
Git-aware indexing: Use git diff to scope re-indexing — faster than a full walk.

v0.4 · Production
Plugin system: Custom connectors without forking.
Publish to crates.io: cargo install bearwisdom-cli
Streaming results: Stream search results as they are found in large codebases.

Get Started in 60 Seconds

Rust is the only prerequisite. No runtime, no daemon, no config file.

1. Build

Compile the CLI from source with a single cargo command.

$ cargo build --release -p bearwisdom-cli

2. Index

Point BearWisdom at any directory. Index builds in seconds.

$ bw open /path/to/your/project

3. Explore

Use the CLI, launch the web explorer, or register the MCP server with your IDE.

$ bw web   # or: bw mcp-serve

Inside BearWisdom

A thin set of consumer crates over a single intelligence library.

Parser · 31 languages · Tree-sitter
Graph DB · SQLite · rusqlite
Search Engine · FTS5 · vector · hybrid
bearwisdom · core library · Rust
bw · CLI · 25 commands
bw-mcp · MCP server
bw-web · web explorer
bw-bench · benchmarks