code-review-graph: A local knowledge graph that cuts Claude Code token usage by up to 49x

https://x.com/hasantoxr/status/2041487311536603629?s=12
Technical announcement / open-source tool promotion with benchmarks · Researched April 8, 2026

Summary

Hasan Toor announced code-review-graph, an open-source tool built by AI engineer Tirth Kanani that addresses a critical inefficiency in AI-assisted code review: when Claude Code (and other AI assistants) are asked to review code or add features, they typically re-read an entire codebase unnecessarily, wasting tokens on files unrelated to the change. This problem becomes severe on large projects—FastAPI has 2,915 files, Next.js has 27,732 files—making context bloat expensive and reducing review quality.

The solution builds a persistent structural knowledge graph of your codebase using Tree-sitter, parsing all code into an Abstract Syntax Tree (AST) and storing it in a lightweight local SQLite database. Rather than reading raw files, the AI assistant queries this graph to understand the "blast radius" of changes—which functions call the modified code, which tests cover it, which dependencies are affected—and reads only the relevant files. The results are dramatic: across real production repositories, token reduction averages 6.8x and reaches 49x on specific daily coding tasks, while review quality counter-intuitively improves (8.8/10 with the graph vs. 7.2/10 without).
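The parse-then-store pipeline can be sketched in a few lines. This is a minimal illustration only: it uses Python's stdlib ast module and an in-memory SQLite table as stand-ins (the real tool parses 19+ languages with Tree-sitter, and its actual schema is not documented here; the table and column names below are assumptions).

```python
# Sketch: parse code into an AST, extract function definitions and call
# edges, and persist them in SQLite so an assistant can query structure
# instead of re-reading files. (Illustrative stand-in, not the tool's schema.)
import ast
import sqlite3

SOURCE = """
def save(record):
    validate(record)

def validate(record):
    return bool(record)
"""

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE functions (name TEXT PRIMARY KEY, lineno INT)")
db.execute("CREATE TABLE calls (caller TEXT, callee TEXT)")

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        db.execute("INSERT INTO functions VALUES (?, ?)", (node.name, node.lineno))
        # Record an edge for every direct call made inside this function.
        for inner in ast.walk(node):
            if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                db.execute("INSERT INTO calls VALUES (?, ?)",
                           (node.name, inner.func.id))

# Instead of re-reading the codebase, ask the graph: "who calls validate?"
callers = [row[0] for row in db.execute(
    "SELECT caller FROM calls WHERE callee = ?", ("validate",))]
print(callers)  # ['save']
```

The key property is that the expensive parse happens once at index time; afterwards, structural questions are cheap local queries rather than full-file reads.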

The tool requires zero configuration, installs in 30 seconds, works offline with no telemetry or cloud dependencies, and integrates seamlessly with Claude Code, Cursor, Windsurf, Zed, and other AI coding platforms via the Model Context Protocol (MCP). It supports 19+ programming languages and includes 22 MCP tools for blast-radius analysis, call-graph tracing, risk scoring, and semantic search. The implementation is production-tested on real commits, fully open-source under MIT license, and represents a broader trend of developers optimizing AI-assisted workflows by providing better structured context instead of raw code dumps.
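The blast-radius analysis mentioned above amounts to a transitive-closure query over the stored call graph. Assuming call edges already live in a SQLite table, a recursive CTE can walk them to find everything that depends on a changed function; the table, column, and function names here are hypothetical, and the real tool exposes this capability through its MCP tools rather than raw SQL.

```python
# Sketch: "blast radius" of a change as a recursive walk over call edges
# stored in SQLite. (Hypothetical schema for illustration.)
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE calls (caller TEXT, callee TEXT)")
db.executemany("INSERT INTO calls VALUES (?, ?)", [
    ("handler", "save"),     # an API handler calls save()
    ("save", "validate"),    # save() calls validate()
    ("test_save", "save"),   # a test exercises save()
])

changed = "validate"
blast_radius = [row[0] for row in db.execute("""
    WITH RECURSIVE affected(name) AS (
        -- direct callers of the changed function
        SELECT caller FROM calls WHERE callee = :f
        UNION
        -- then callers of those callers, transitively
        SELECT c.caller FROM calls c JOIN affected a ON c.callee = a.name
    )
    SELECT name FROM affected
""", {"f": changed})]
print(sorted(blast_radius))  # ['handler', 'save', 'test_save']
```

A query like this is how the assistant can decide to read only `save`, its handler, and its test after a change to `validate`, rather than the whole repository.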

Key Takeaways

About

Author: Hasan Toor (promoting); Tirth Kanani (creator)

Publication: X (Twitter)

Published: 2026-02-16

Sentiment / Tone

Enthusiastic yet technically credible. The announcement uses "BREAKING" language typical of tech news promotion, matching Hasan Toor's style as a tech educator and curator, but the underlying content is grounded in concrete benchmarks from real repositories. Tirth Kanani's Hacker News comment shifts to a problem-solution narrative with specific technical details, demonstrating deep expertise while remaining accessible. The tone positions token efficiency as a solved problem rather than a speculative one: the creator takes ownership of the solution and invites technical questions, suggesting confidence backed by implementation. Beyond the promotional framing, the metrics are left to speak for themselves.

Related Links

Research Notes

Hasan Toor is an established tech educator and AI tool curator with significant Twitter reach (hundreds of thousands of followers), known for spotting and promoting emerging AI tools rather than creating them himself. He operates as an early amplifier in the tech ecosystem. The actual creator, Tirth Kanani, is a credible AI engineer with demonstrated expertise in machine learning products, responsible AI research, and mechanistic interpretability. He actively participated in Hacker News discussions, answering technical questions directly—a signal of authentic ownership and confidence.

The tool addresses a genuine and measurable problem: AI coding assistants like Claude Code waste tokens on irrelevant code. This isn't hypothetical—large codebases (Next.js at 27,732 files) make the issue acute. The benchmarks appear legitimate: they're measured on real open-source projects (httpx, FastAPI, Next.js) with real commits, not synthetic test cases. The fact that review quality improves while token usage decreases is noteworthy and counter-intuitive, suggesting the graph-based approach provides cleaner signal.

The project achieved traction quickly—featured on Hacker News, integrated into multiple AI coding platforms (Claude Code, Cursor, Windsurf, Zed), and cited by other developers building similar tools (e.g., Claudette). This suggests the problem it solves is widely recognized. The MIT license and 100% open-source approach lower friction for adoption and inspection.

Potential considerations: The tool's effectiveness depends on accurate static analysis via Tree-sitter. Languages with dynamic features, metaprogramming, or reflection might see lower accuracy. The SQLite approach works well for single-developer workflows, but the multi-repo registry feature suggests awareness of team use cases. The 49x token reduction is the highest claim and appears to apply to specific live coding tasks on large repos—average reductions across all tasks and repos are lower (6.8x). No negative criticisms surfaced during research, though the tool is relatively new (announcement in Feb 2026, Hacker News discussion recent), so long-term reliability is unproven.

Topics

token optimization · AI coding assistants · knowledge graphs · Tree-sitter · code analysis · Claude Code