Anthropic Tightens Claude Session Limits During Peak Hours

TechRadar
Anthropic announced that it will increase the rate at which usage counts against Claude's five‑hour session limits during weekday peak periods (5 a.m.‑11 a.m. PT / 1 p.m.‑7 p.m. GMT). Weekly limits stay the same, but the new rule means users will reach their session caps faster in those windows, affecting roughly seven percent of subscribers, especially those on Pro tiers. The change was disclosed by engineer Thariq Shihipar on X rather than through official channels, prompting frustration among users who must now plan usage more strategically. Read more

Anthropic Report Highlights AI Skills Gap and Uneven Job Impact

TechCrunch
Anthropic’s latest economic impact report finds little evidence of widespread job displacement from AI so far, but warns of a growing skills gap between early users of its Claude model and newcomers. Early adopters are extracting significantly more value, especially in high‑income regions and knowledge‑worker hubs. The company cautions that as AI adoption spreads, displacement could accelerate, urging a monitoring framework to guide policy responses. Read more

Anthropic Nears Final Approval of Landmark AI Copyright Settlement

CNET
Anthropic is close to securing final court approval for a historic settlement that resolves claims that its Claude AI model was trained on pirated books. Nearly 100,000 authors have filed claims, and the company has agreed to pay a total of $1.5 billion, with $3,000 allocated to each qualifying work. The settlement includes a certification that no pirated content will be used in future Claude releases and a commitment to destroy existing pirated copies. The court is set to consider the final approval motion in late April, marking a significant milestone in AI‑related copyright litigation. Read more

Anthropic Unveils Auto Mode for Claude Code, Giving AI Autonomous Action with Safety Guardrails

TechCrunch
Anthropic has introduced an "auto mode" for its Claude Code AI, allowing the system to automatically execute actions it deems safe while blocking those that appear risky. The feature, now in research preview, adds a safety layer that checks for dangerous behavior and prompt‑injection attacks before any action runs. Auto mode works with Claude Sonnet 4.6 and Opus 4.6 and is recommended for isolated, sandboxed environments. The rollout targets Enterprise and API users and follows Anthropic’s recent releases of Claude Code Review and Dispatch for Cowork, reflecting a broader industry move toward more autonomous coding tools. Read more

Anthropic Announces Claude’s New Computer-Use Capabilities with Built‑In Safeguards

Ars Technica
Anthropic introduced a computer‑use feature for its Claude AI model, allowing the system to interact directly with a user's desktop. The company emphasized a set of safeguards designed to block risky actions such as moving money, modifying files, or accessing sensitive data, though it warned that these protections are not absolute. Users are advised to start with trusted applications and avoid handling sensitive information during the preview phase. Anthropic’s rollout follows similar moves by Perplexity, Manus, and Nvidia, and comes after the viral spread of OpenClaw, which prompted OpenAI to hire its creator to advance personal agents. Read more

Anthropic Expands Claude with Autonomous Computer Control in Code and Cowork

The Verge
Anthropic has introduced a new research preview that lets Claude’s Code and Cowork agents control a Mac computer on behalf of users. The feature lets the AI open files, browse the web, run development tools and interact with apps without any setup, and it is available to Claude Pro and Max subscribers. Users must run the Claude desktop app on a supported Mac and pair it with the mobile app. The system asks for explicit permission before taking actions and can fall back to direct control of the mouse, keyboard and display when integrations are unavailable. Read more

Anthropic Launches Claude Cowork: An AI Assistant for PC Tasks

Digital Trends
Anthropic has introduced Claude Cowork, a research‑preview AI assistant for Claude Pro and Max subscribers that can perform computer tasks on macOS and Windows without complex setup. The tool can open files, browse the web, interact with apps, and run developer utilities, using built‑in connectors for services like Gmail, Google Drive, and Slack when available, and otherwise controlling the mouse and keyboard. It always requests permission before accessing new apps or files and can be stopped at any time. Additional features include Claude Dispatch for mobile command input, Claude Channels for event integration, and scheduled task automation. Read more

Anthropic Introduces Claude Computer-Control Feature for Pro and Max Subscribers

CNET
Anthropic announced that its Claude AI can now control a macOS computer, allowing it to perform tasks such as opening files, scrolling, clicking, and using apps like Google Calendar or Slack. The capability is limited to Claude Pro and Claude Max subscribers, requires permission before each action, and includes safeguards designed to block prompt injections and other vulnerabilities. Users are advised not to use the feature with apps that handle sensitive data. The new function works with Anthropic's Dispatch service, enabling task delegation from a phone and supporting morning briefings or test runs. Read more

Anthropic Denies Claims It Could Disrupt Military AI Systems

Wired AI
The U.S. Department of Defense has expressed concern that Anthropic’s AI model, Claude, could be manipulated to interfere with military operations. Anthropic responded by stating it has no ability to shut down, alter, or otherwise control the model once deployed by the government. The company highlighted that it lacks any back‑door or remote kill switch and cannot access user prompts or data. In parallel, Anthropic has filed lawsuits challenging a supply‑chain risk designation that limits the Pentagon’s use of its software. The dispute underscores tension between national‑security priorities and emerging AI technologies. Read more