Summary
Hasan Toor announced code-review-graph, an open-source tool built by AI engineer Tirth Kanani that addresses a critical inefficiency in AI-assisted code review: when Claude Code (and other AI assistants) are asked to review code or add features, they typically re-read an entire codebase unnecessarily, wasting tokens on files unrelated to the change. This problem becomes severe on large projects—FastAPI has 2,915 files, Next.js has 27,732 files—making context bloat expensive and reducing review quality.
The solution builds a persistent structural knowledge graph of your codebase using Tree-sitter, parsing all code into an Abstract Syntax Tree (AST) and storing it in a lightweight local SQLite database. Rather than reading raw files, the AI assistant queries this graph to understand the "blast radius" of changes—which functions call the modified code, which tests cover it, which dependencies are affected—and reads only the relevant files. The results are dramatic: across real production repositories, token reduction ranges from 6.8x on average to 49x on specific daily coding tasks, while paradoxically improving review quality (8.8/10 with the graph vs. 7.2/10 without).
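To make the mechanism concrete, here is a minimal sketch of the indexing idea. It uses Python's stdlib ast module in place of the Tree-sitter parsers the tool actually relies on for multi-language support, and the SQLite schema shown is an illustrative assumption, not code-review-graph's real layout:

```python
# Illustrative sketch only: stdlib `ast` stands in for Tree-sitter, and the
# schema below is an assumption, not code-review-graph's actual schema.
import ast
import pathlib
import sqlite3

pathlib.Path(".code-review-graph").mkdir(exist_ok=True)
db = sqlite3.connect(".code-review-graph/graph.db")
db.executescript("""
    CREATE TABLE IF NOT EXISTS nodes (id TEXT PRIMARY KEY, file TEXT, kind TEXT);
    CREATE TABLE IF NOT EXISTS edges (caller TEXT, callee TEXT);
""")

for path in pathlib.Path("src").rglob("*.py"):
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            qualified = f"{path}::{node.name}"        # e.g. src/auth.py::login
            db.execute("INSERT OR REPLACE INTO nodes VALUES (?, ?, ?)",
                       (qualified, str(path), "function"))
            for call in ast.walk(node):               # record outgoing call edges
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name):
                    db.execute("INSERT INTO edges VALUES (?, ?)",
                               (qualified, call.func.id))
db.commit()
```

With an index like this in place, an assistant can answer "what calls this function?" with a single SQL query instead of re-reading source files.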
The tool requires zero configuration, installs in 30 seconds, works offline with no telemetry or cloud dependencies, and integrates seamlessly with Claude Code, Cursor, Windsurf, Zed, and other AI coding platforms via the Model Context Protocol (MCP). It supports 19+ programming languages and includes 22 MCP tools for blast-radius analysis, call-graph tracing, risk scoring, and semantic search. The implementation is production-tested on real commits, fully open-source under MIT license, and represents a broader trend of developers optimizing AI-assisted workflows by providing better structured context instead of raw code dumps.
Key Takeaways
code-review-graph uses Tree-sitter AST parsing to build a persistent, locally-stored SQLite knowledge graph of your entire codebase, enabling AI assistants to query only relevant code instead of re-reading entire repositories on every task.
Token reduction is context-dependent: httpx (125 files) achieves 26.2x fewer tokens, FastAPI (2,915 files) gets 8.1x, Next.js (27,732 files) achieves 6.0x on reviews and 49x on live coding tasks, with review quality improving from 7.2 to 8.8 out of 10.
Incremental updates re-parse only changed files and their dependents in under 2 seconds, using file-hash deduplication and Git-based change detection to keep the knowledge graph current without rebuilding the entire index (a rough sketch of this idea follows this list).
The tool operates entirely offline with no external databases, cloud services, or telemetry—just a single SQLite file in a .code-review-graph/ directory with automatic updates via pre-commit hooks.
Blast-radius analysis traces which functions, tests, and dependencies are affected by code changes, allowing the AI to understand the full impact scope before reading files, a capability that both reduces token usage and improves code review quality (see the traversal sketch after this list).
Compatible with 9 platforms including Claude Code, Cursor, Windsurf, Zed, and GitHub Copilot via the Model Context Protocol (MCP), supporting 19+ programming languages plus Jupyter notebooks.
Created by Tirth Kanani, an AI engineer with 3+ years of experience building ML products, who wrote the tool after growing frustrated watching Claude re-read unnecessary code, a real pain point in the AI-assisted development workflow.
Installation is frictionless—either pip install + one command, or a single VS Code/Cursor click—with zero configuration needed, making it accessible to developers of all skill levels.
The 3,700-line Python codebase includes 770 lines of tests and implements advanced features like qualified node naming (src/auth.py::AuthService.login), NetworkX graph traversal with caching, and optional hybrid search combining BM25 full-text search with vector embeddings (a BM25 sketch follows this list).
This represents part of a broader ecosystem of token-optimization tools (Claudette, Understand-Anything, graphify) and reflects growing demand for structured context management as AI coding assistants become standard in developer workflows.
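A rough sketch of the incremental-update idea from the takeaways above, assuming hypothetical helper names (content_hash, stale_files) rather than the tool's real API: hash file contents, ask Git which files changed, and re-parse only the stale ones.

```python
# Sketch of incremental re-indexing: hash files, diff against stored hashes,
# and re-parse only what Git reports as changed. Function names are illustrative.
import hashlib
import subprocess

def content_hash(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def changed_files(since: str = "HEAD") -> list[str]:
    # Git-based change detection: list tracked files that differ from `since`.
    out = subprocess.run(["git", "diff", "--name-only", since],
                         capture_output=True, text=True, check=True)
    return [line for line in out.stdout.splitlines() if line.endswith(".py")]

def stale_files(stored_hashes: dict[str, str]) -> list[str]:
    # File-hash deduplication: skip files whose content is unchanged, then
    # re-run the AST extraction (see the earlier indexing sketch) on the rest
    # and refresh graph edges for their dependents.
    return [p for p in changed_files()
            if content_hash(p) != stored_hashes.get(p)]
```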
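The blast-radius takeaway can likewise be sketched as a reverse traversal over the call graph. NetworkX and qualified node names come from the announcement; the specific edges below are made-up examples.

```python
# Sketch of blast-radius analysis: walk the call graph in reverse to find every
# function and test that transitively depends on a changed node.
import networkx as nx

graph = nx.DiGraph()  # edge A -> B means "A calls B"
graph.add_edge("src/api.py::create_user", "src/auth.py::AuthService.login")
graph.add_edge("tests/test_auth.py::test_login", "src/auth.py::AuthService.login")
graph.add_edge("src/cli.py::main", "src/api.py::create_user")

def blast_radius(changed: str) -> set[str]:
    # Ancestors = all direct and indirect callers, i.e. the code and tests an
    # AI reviewer actually needs to read for this change.
    return nx.ancestors(graph, changed)

print(blast_radius("src/auth.py::AuthService.login"))
# {'src/api.py::create_user', 'src/cli.py::main', 'tests/test_auth.py::test_login'}
```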
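The hybrid-search takeaway above also lends itself to a small sketch: SQLite's FTS5 extension provides BM25 ranking out of the box. Whether code-review-graph uses FTS5 specifically is an assumption here, and the vector-embedding half of the hybrid search is omitted.

```python
# Sketch of the BM25 half of hybrid search using SQLite's FTS5 extension.
# Whether the tool itself uses FTS5 is an assumption; embeddings are omitted.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE code_fts USING fts5(qualified_name, doc)")
db.executemany("INSERT INTO code_fts VALUES (?, ?)", [
    ("src/auth.py::AuthService.login", "validate credentials and issue a session token"),
    ("src/api.py::create_user", "register a new user account"),
])

# In FTS5, lower bm25() scores mean better matches, so sort ascending.
hits = db.execute(
    "SELECT qualified_name, bm25(code_fts) FROM code_fts "
    "WHERE code_fts MATCH ? ORDER BY bm25(code_fts) LIMIT 5",
    ("credentials token",),
).fetchall()
print(hits)  # best-matching graph nodes, ready to hand to the AI assistant
```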
About
Author: Hasan Toor (promoter); Tirth Kanani (creator)
Publication: X (Twitter)
Published: 2026-02-16
Sentiment / Tone
Enthusiastic yet technically credible. The announcement uses "BREAKING" language typical of tech news promotion, matching Hasan Toor's style as a tech educator and curator. However, the underlying content is grounded in concrete benchmarks from real repositories and technical depth. Tirth Kanani's Hacker News comment shifts to a problem-solution narrative with specific technical details, demonstrating deep expertise while remaining accessible. The tone positions token efficiency as a solved problem rather than speculative: the creator takes ownership of the solution and invites technical questions, suggesting confidence backed by implementation. Beyond the headline framing there is little hype; the metrics are left to speak for themselves.
Related Links
code-review-graph GitHub repository: Official repository containing the complete open-source implementation, documentation, benchmarks, and 570+ tests mentioned in the announcement.
code-review-graph official website: Interactive demo site with visual explanations, benchmark comparisons, installation guides, and an interactive graph visualization showing how the tool works.
Hacker News discussion of code-review-graph: Technical community discussion where Tirth Kanani (creator) answered detailed questions about the implementation, benchmarks, Tree-sitter integration, and incremental engine design.
Claudette (alternative Go implementation): A competing implementation of the same concept (knowledge graphs for code review) written in Go, showing the broader movement toward token-efficient AI-assisted code review.
Tirth Kanani's personal website: Creator's portfolio demonstrating background in AI/ML engineering with 3+ years of experience building ML products, including expertise in PyTorch, LLMs, and MLOps.
Research Notes
Hasan Toor is an established tech educator and AI tool curator with significant Twitter reach (hundreds of thousands of followers), known for spotting and promoting emerging AI tools rather than creating them himself. He operates as an early amplifier in the tech ecosystem. The actual creator, Tirth Kanani, is a credible AI engineer with demonstrated expertise in machine learning products, responsible AI research, and mechanistic interpretability. He actively participated in Hacker News discussions, answering technical questions directly—a signal of authentic ownership and confidence.
The tool addresses a genuine and measurable problem: AI coding assistants like Claude Code waste tokens on irrelevant code. This isn't hypothetical—large codebases (Next.js at 27,732 files) make the issue acute. The benchmarks appear legitimate: they're measured on real open-source projects (httpx, FastAPI, Next.js) with real commits, not synthetic test cases. The fact that review quality improves while token usage decreases is noteworthy and counter-intuitive, suggesting the graph-based approach provides cleaner signal.
The project achieved traction quickly—featured on Hacker News, integrated into multiple AI coding platforms (Claude Code, Cursor, Windsurf, Zed), and cited by other developers building similar tools (e.g., Claudette). This suggests the problem it solves is widely recognized. The MIT license and 100% open-source approach lower friction for adoption and inspection.
Potential considerations: The tool's effectiveness depends on accurate static analysis via Tree-sitter. Languages with dynamic features, metaprogramming, or reflection might see lower accuracy. The SQLite approach works well for single-developer workflows, but the multi-repo registry feature suggests awareness of team use cases. The 49x token reduction is the highest claim and appears to apply to specific live coding tasks on large repos; average reductions across all tasks and repos are lower (6.8x). No negative criticisms surfaced during research, though the tool is relatively new (announced in February 2026, with the Hacker News discussion still recent), so long-term reliability is unproven.