Summary
Tech with Mak highlights a major open-source documentation effort—the "claude-code-best-practice" repository on GitHub, which has accumulated over 22,000 stars. The repository, curated from the public tweets, interviews, and internal practices of Boris Cherny (creator of Claude Code at Anthropic) and his team, is presented as the most comprehensive guide to using Claude Code effectively in real-world production environments. Rather than being promotional material, this is a crowd-sourced collection of 86+ specific tips and patterns extracted from how the Anthropic team actually uses Claude Code internally.
The post encapsulates the repository's core thesis: developers are vastly underutilizing Claude Code's capabilities and patterns, operating with incomplete or incorrect mental models about how to structure work, delegate tasks to agents, and organize code context. The repository addresses this by documenting patterns across 10+ categories: planning, memory management (CLAUDE.md files), agents, commands, skills, hooks, workflows, debugging, git practices, and daily habits.
Specific practices highlighted include: always using plan mode to let Claude think through architecture before implementation; leveraging the AskUserQuestion tool to let Claude interview the user and eliminate ambiguity upfront; using Git Worktrees to enable parallel development across isolated branches; structuring CLAUDE.md files to stay under 200 lines per file (to prevent Claude from ignoring instructions in longer files); building feature-specific subagents with progressive disclosure skills rather than generic "QA" or "backend" agents; using /loop to schedule recurring tasks locally for up to 3 days; and employing cross-model code review (running one model for planning/implementation and a different model for QA to catch bugs the original missed).
The repository also catalogs community workflows that have gained traction (Superpowers with 122K stars, Everything Claude Code with 116K stars, Spec Kit with 83K stars), identifies key architectural questions the community is still debating ("Why does Claude ignore CLAUDE.md instructions?" "When to use commands vs. agents vs. skills?"), and documents daily habits like updating Claude Code daily and following r/ClaudeAI and r/ClaudeCode communities for new patterns.
Key Takeaways
Always use plan mode for non-trivial tasks and give Claude a way to verify the plan works before full implementation—Boris runs 5-10 parallel Claude Code instances simultaneously (one per git worktree) to enable true parallelism and reduce context bloat
Use the AskUserQuestion tool to have Claude interview you at the start of projects; this eliminates ambiguity and reduces rework by forcing specificity upfront before moving to execution in a fresh session
CLAUDE.md file size matters critically—target under 200 lines per file, use wrapping tags like <important if="..."> to prevent instructions from being ignored as context grows, and split large instructions across multiple files in .claude/rules/
Build feature-specific subagents with progressive-disclosure skills (not generic 'QA engineer' or 'backend engineer' agents)—this keeps main context focused, enables test-time compute (separate windows catch bugs the original agent missed), and lets you 'throw more compute' at problems when needed
Git Worktrees enable parallel development with isolated branches per agent; keep PRs small (a median of roughly 118 lines), squash merge for a clean linear history, and commit often (aim for at least one commit per hour, committing as soon as a task completes)
Use /loop for local recurring tasks (up to 3 days) and /schedule for cloud-based tasks that run even when your machine is off; pipe terminal output directly into SKILL.md files using !`command` syntax so Claude only sees computed results, not raw logs
Cross-model QA (e.g., Claude Opus for planning, Claude Sonnet for implementation, Codex for review) outperforms single-model approaches; sessions degrade due to context bloat, so use /clear and start fresh rather than trying to fix in the same session when going off-track
The repository identifies 'billion-dollar questions' still unanswered: Can you convert entire codebases to specs and regenerate them from specs alone? Why does Claude sometimes ignore CLAUDE.md instructions? When exactly should you use commands vs. agents vs. skills?
Common debugging patterns: always take screenshots and share with Claude when stuck; use MCP to let Claude see Chrome console logs directly; ask Claude to run long-running terminal commands as background tasks; use /doctor to diagnose configuration issues
Daily habit: update Claude Code every morning, read the changelog, and follow r/ClaudeAI, r/ClaudeCode, and core contributors like Boris, Thariq, Cat, Lydia—the tool evolves monthly with new features (voice input, scheduled tasks, agent teams, browser automation) making stale knowledge costly
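The takeaway above about piping terminal output into SKILL.md via !`command` syntax might look something like the following sketch. The frontmatter fields, skill name, and commands here are illustrative assumptions rather than content from the repo; the point is that Claude sees only the computed results of each backtick command, not raw logs.

```markdown
---
name: check-test-health   # hypothetical skill name
description: Summarize the current test suite status
---

When invoked, report only the computed summaries below, not raw logs.

Last few lines of the test run: !`npm test 2>&1 | tail -n 5`
Current branch: !`git branch --show-current`
```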
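The worktree takeaway above can be sketched as a short shell session. This is a minimal sketch under stated assumptions: the project and branch names are hypothetical, and the demo builds a throwaway repository so the commands run standalone. The pattern is one worktree (and one Claude Code instance) per feature branch, so parallel agents never touch each other's working tree.

```shell
#!/bin/sh
set -e
demo=$(mktemp -d)                      # throwaway location for the demo repo
cd "$demo"
git init -q myapp && cd myapp
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -qm "initial commit"

# One isolated checkout per feature branch (names are illustrative):
git worktree add ../myapp-auth   -b feature/auth
git worktree add ../myapp-search -b feature/search

git worktree list                      # three checkouts sharing one object store

# After the branch is squash-merged, remove the worktree and its branch:
git worktree remove ../myapp-auth
git branch -D feature/auth
```

Because every worktree is a full checkout backed by the same .git object store, builds and edits in one branch cannot dirty another agent's context.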
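The CLAUDE.md sizing advice above can also be made concrete. The layout below is an illustrative assumption, not prescribed by the repo: a short root CLAUDE.md that delegates to topic-specific files under .claude/rules/, with a wrapping tag of the `<important if="...">` form mentioned in the takeaway.

```shell
#!/bin/sh
set -e
proj=$(mktemp -d)                      # hypothetical project root
cd "$proj"
mkdir -p .claude/rules

# Root file stays short; detailed rules move to one-topic-per-file documents.
cat > CLAUDE.md <<'EOF'
# Project guide (keep under ~200 lines)
- Build: `make build`; test: `make test`
- Detailed rules live in .claude/rules/ (one topic per file)
EOF

cat > .claude/rules/git.md <<'EOF'
<important if="committing">
Squash merge feature branches; keep PRs small.
</important>
EOF

wc -l CLAUDE.md .claude/rules/*.md    # confirm each file stays well under 200 lines
```

Splitting by topic keeps each file short enough that, per the repo's claim, instructions are less likely to be ignored as context grows.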
About
Author: Tech with Mak (@techNmak)
Publication: X (Twitter)
Published: March 2026
Sentiment / Tone
Enthusiastically informative with an "I just discovered a treasure map" tone. The post reads as a curated highlight reel of practical wisdom distilled from production usage. Tech with Mak positions the repository as a long-overdue documentation effort, implying that developers have been using Claude Code suboptimally without clear guidance. The tone is neither hype-driven nor overly technical—it's the voice of someone who found a resource so valuable they felt compelled to amplify it. The emoji usage (🚫👶 for "don't micromanage," → for progressive lists) keeps it accessible and even playful, but the underlying message is serious: this represents hard-won patterns that can dramatically improve productivity.
Related Links
claude-code-best-practice GitHub Repository The primary source material—the 22K+ star repository containing 86 tips, workflow patterns, and community best practices for Claude Code. Includes links to Boris Cherny's original tweets and interviews.
Official Claude Code Best Practices Documentation Anthropic's official guidance on using Claude Code, providing the authoritative reference that the GitHub repo builds upon and extends with community patterns.
How Boris Uses Claude Code Interactive guide documenting Boris Cherny's specific 13-step workflow, including his 5-terminal-tab setup, use of git worktrees, and plan-mode-first approach—the foundation for much of the best practices repo.
Building Claude Code with Boris Cherny - Pragmatic Engineer Newsletter In-depth interview with Boris Cherny covering how he built Claude Code, his design philosophy, and how the tool has evolved—provides credibility and deeper context for the best practices outlined in the Twitter post.
Head of Claude Code: What Happens After Coding is Solved - Lenny's Podcast Podcast interview with Boris Cherny discussing the future of Claude Code, the economics of AI development tools, and the vision beyond 'solving' code generation—contextualizes why best practices matter as the tool matures.
Research Notes
**Author Context:** Tech with Mak (@techNmak) is a tech educator with 24K followers who regularly posts breakdowns of technical concepts (MongoDB, serverless, ML systems). This post is in character—distilling complex systems into digestible summaries. The repository itself is maintained by shanraisshan on GitHub and appears to be a community effort that synthesized ideas from Boris Cherny's public tweets and interviews.
**Boris Cherny's Credibility:** Cherny is the creator and lead engineer of Claude Code at Anthropic (joined Sept 2024). He's disclosed that Claude Code represented ~12% of Anthropic's ARR by end of 2025, growing past $1B in ARR. He's given multiple interviews (Pragmatic Engineer newsletter, Lenny's Podcast, Y Combinator) and has posted extensively on his workflow. His advice carries weight because he observes how Anthropic's internal teams (even sales) use Claude Code at scale.
**Repository's Scale & Impact:** The claude-code-best-practice repo hitting 22K stars (and trending on GitHub monthly since at least March 2026) indicates significant developer hunger for structured guidance. The linked community workflows (Superpowers 122K stars, Everything Claude Code 116K stars) suggest a thriving ecosystem of builders extending Claude Code patterns.
**Broader Context:** This resource arrives at a moment when Claude Code is rapidly evolving—features like /schedule (cloud-based cron), agent teams, voice dictation (/voice), browser automation, and MCP server connections are being added at high velocity. The best practices doc serves as a stabilizing reference point, helping developers not get lost in the feature explosion. Reddit communities (r/ClaudeAI, r/ClaudeCode) report daily discussions of patterns and "gotchas."
**Potential Limitations:** The repository emphasizes Anthropic's internal workflow (Boris's setup with 5-10 parallel instances, heavy reliance on plan mode, use of all advanced features). Smaller teams or solo developers may find some patterns overkill. The "billion-dollar questions" section reveals that even the core team doesn't have answers to some fundamental questions (e.g., can you fully specify code and regenerate it later?), suggesting the field is still exploratory. Some tips conflict with stated Anthropic best practices (e.g., putting "NEVER add Co-Authored-By" in CLAUDE.md versus configuring it in settings.json), indicating that guidance is still evolving.
Topics
Claude Code, LLM Workflow Optimization, AI-Assisted Development, Prompt Engineering, Agent Systems, Best Practices Documentation