Let Claude use your computer from the CLI - Claude Code Docs

https://code.claude.com/docs/en/computer-use
Technical documentation and feature guide with integrated safety guidance · Researched March 31, 2026

Summary

Anthropic's Claude Code documentation describes a new computer use feature that enables Claude to autonomously control macOS computers through GUI interaction. The feature allows Claude to take screenshots, move the mouse, type text, and interact with any application on a user's machine—enabling the AI to complete tasks that previously required manual desktop interaction. This capability works across native apps, simulators, and GUI-only tools without APIs, and is activated through a CLI-based system with granular per-application permission controls. The system is designed specifically for tasks requiring visual UI automation, such as building native applications, testing UIs end-to-end, debugging visual bugs, and controlling specialty software like design tools and hardware panels.

The feature operates through a hierarchical tool selection strategy: Claude prioritizes MCP servers first, then Bash commands, then browser automation (if Claude in Chrome is set up), and finally falls back to computer use only when other approaches can't work. This ordering favors the most precise and efficient tool available for each task. The documentation emphasizes that this is a research preview feature still in beta, available only on Pro and Max subscription plans, currently limited to macOS, and requiring Anthropic's own authentication (it is not available through third-party providers like Google Cloud Vertex AI or Amazon Bedrock).
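The fallback order described above can be sketched as a simple priority cascade. This is a hypothetical illustration of the semantics, not Anthropic's implementation; every function and key name below (`select_tool`, `mcp_server_handles`, `bash_can_do`, `chrome_extension_installed`) is invented for the sketch.

```python
# Hypothetical sketch of the documented tool-selection order:
# MCP servers -> Bash -> browser automation -> computer use as last resort.

def select_tool(task, env):
    """Pick the most precise tool available for a task (illustrative only)."""
    if env.get("mcp_server_handles", lambda t: False)(task):
        return "mcp"            # 1. MCP servers are preferred
    if env.get("bash_can_do", lambda t: False)(task):
        return "bash"           # 2. then plain Bash commands
    if env.get("chrome_extension_installed") and task.get("is_web"):
        return "browser"        # 3. browser automation, if set up
    return "computer_use"       # 4. GUI control only when nothing else fits
```

The point of the cascade is that computer use is deliberately the tool of last resort: cheaper, more deterministic tools are tried first.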

Anthropic built multiple safety mechanisms into the computer use system. Before Claude can control any application, users must explicitly approve each app within the session, and those approvals expire at session end. The system includes "sentinel warnings" for apps with broad system reach—like Terminal, Finder, and System Settings—to make users aware of the implications before approval. Additionally, Claude's terminal window is excluded from screenshots and app-hiding logic, preventing potential feedback loops where Claude could see its own output. Users can abort any computer use action instantly via the Esc key or Ctrl+C. Only one session can hold the computer-use lock at a time, preventing conflicts between simultaneous sessions. The documentation acknowledges that while Anthropic has implemented prompt injection defenses and classifiers that flag suspicious activity, vulnerabilities may persist, and users should avoid giving Claude access to sensitive data or accounts without human oversight.
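The approval model described above (session-scoped per-app approvals, extra warnings for broad-reach apps, and a single computer-use lock) can be modeled roughly as follows. This is a speculative sketch of the semantics, assuming a session object and callback for user confirmation; none of these names come from Claude Code itself.

```python
import threading

# Apps the docs single out as having broad system reach ("sentinel warnings").
SENTINEL_APPS = {"Terminal", "Finder", "System Settings"}

# Only one session may hold the computer-use lock at a time.
_computer_use_lock = threading.Lock()

class Session:
    """Hypothetical model of session-scoped app approvals."""

    def __init__(self):
        self.approved_apps = set()   # approvals expire with the session

    def approve(self, app, user_confirms):
        # Sentinel apps trigger an extra warning before approval.
        warn = app in SENTINEL_APPS
        if user_confirms(app, warn):
            self.approved_apps.add(app)

    def control(self, app):
        if app not in self.approved_apps:
            raise PermissionError(f"{app} not approved this session")
        if not _computer_use_lock.acquire(blocking=False):
            raise RuntimeError("another session holds the computer-use lock")
        try:
            return f"controlling {app}"
        finally:
            _computer_use_lock.release()

    def end(self):
        self.approved_apps.clear()   # approvals do not persist across sessions
```

The key design choice this captures is that nothing is approved by default, approvals never outlive the session, and concurrent sessions cannot both drive the GUI.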

Competitive context matters here: the feature arrived as Anthropic's response to the viral success of OpenClaw, an open-source AI agent framework that OpenAI's CEO has publicly praised and whose creator OpenAI subsequently hired. Anthropic explicitly acknowledges that computer use is "still early compared to Claude's ability to code or interact with text," indicating both confidence in the long-term direction and realism about current limitations. The feature integrates Claude's existing strengths in code generation and debugging with new autonomous GUI control, creating an end-to-end workflow where Claude can write code, compile it, launch it, test it through the UI, identify bugs, fix them, and verify the fixes, all in one conversation.

About

Author: Anthropic (documentation team)

Publication: Claude Code Documentation / Anthropic

Published: 2026-03-30

Sentiment / Tone

Technical, balanced, and cautiously optimistic. The documentation is matter-of-fact in describing capabilities but notably transparent about limitations and risks. Anthropic positions computer use as a powerful but early-stage feature, explicitly stating it's "still early compared to Claude's ability to code or interact with text" and repeatedly emphasizing security considerations rather than overselling capabilities. The tone avoids hype while clearly conveying the significance of the feature (end-to-end UI automation in one conversation). Safety warnings and security precautions receive substantial coverage, signaling that Anthropic is aware of the risks inherent in giving an AI system autonomous computer access and is treating the responsibility seriously, though not presenting computer use as a solved problem.

Research Notes

**Author and credibility**: This documentation comes directly from Anthropic, the creators of Claude and the Claude Code product. Anthropic was founded in 2021 by former members of OpenAI and is one of the largest AI safety-focused research labs. The documentation reflects internal product decisions and safety engineering, so it carries significant weight as an authoritative source.

**Broader competitive context**: Computer use was announced March 24-31, 2026 (contemporaneous with the documentation date), in direct response to the viral success of OpenClaw, an open-source AI agent framework. OpenClaw has been praised by Nvidia CEO Jensen Huang as "definitely the next ChatGPT," and OpenAI hired OpenClaw's creator, Peter Steinberger, to lead its "next generation of personal agents" efforts. Anthropic's computer use announcement represents the company's answer to competitor moves in the AI agent space and demonstrates that autonomous computer control has become a battleground feature among frontier AI labs. However, computer use is not yet as polished or viral as OpenClaw; it is presented as a research preview.

**Security and safety concerns**: Multiple independent journalists and tech outlets (Lifehacker, Ars Technica, and Anthropic's own docs) have flagged legitimate security concerns. The feature grants Claude the ability to read any file, change system settings, and execute arbitrary actions, potentially including access to sensitive data, credentials, or financial accounts if not carefully isolated. Anthropic's response is layered: per-app approval, prompt injection classifiers, terminal exclusion, and escape-key abort. However, the documentation honestly acknowledges that "vulnerabilities like jailbreaking or prompt injection may persist" and recommends using virtual machines or containers with minimal privileges for production use. This is a reasonable security posture, but it means the feature is realistically safe only for development and testing environments, not production workloads managing critical data.

**Platform limitations and vendor lock-in**: Computer use is macOS-only, requires Anthropic authentication (it is not available through Amazon Bedrock, Google Cloud Vertex AI, or Microsoft Foundry), and is exclusive to Claude models. This creates significant friction for cross-platform teams or organizations committed to multi-vendor cloud strategies. Windows and Linux support are not yet available, limiting the feature's addressable market in enterprise and developer communities that rely heavily on Linux servers and containers.

**Reactions and adoption signals**: Tech press coverage (CNBC, India Today, Ars Technica, SiliconANGLE) has been positive, with emphasis on the AI agent arms race. However, there has been limited hands-on coverage compared to OpenClaw's launch, suggesting either more cautious interest or lower early adoption. Notably, on the same day the computer use documentation was published (March 31, 2026), Anthropic's entire Claude Code source code was leaked via an npm registry mistake (a 60MB source-map file), which Ars Technica and other outlets noted. This security incident occurred just hours after a major security-focused product launch, creating some irony and perhaps dampening positive sentiment.

**Reality vs. hype**: The documentation's explicit acknowledgment of limitations is valuable. "Computer vision accuracy and reliability: Claude may make mistakes or hallucinate when outputting specific coordinates" is honest, and the admission that latency "may be too slow compared to regular human-directed computer actions" is refreshingly clear. This contrasts with some media coverage that framed computer use as magical or near-fully autonomous; the docs make clear it is a tool with real constraints.

**Significance**: Computer use represents a significant inflection point in AI product development: the move from text- and code-only interfaces to GUI automation. This broadens Claude's applicability to any task that currently requires graphical interaction, which still covers much of knowledge work (design, video editing, trading platforms, etc.). However, the feature is only useful if it is reliable and safe, and the documentation suggests both dimensions remain under development. It is meaningful as a research direction but should not be expected to replace human developers or autonomously handle critical tasks without substantial human oversight.

Topics

AI agents and autonomous task completion
GUI automation and computer vision
Claude Code CLI and tooling
AI safety, sandboxing, and permission models
Prompt injection defense mechanisms
Developer productivity tools