Anthropic is expanding to Australia & New Zealand. We’ll be opening an office in Sydney later this year—our fourth in Asia-Pacific after Tokyo, Bengaluru, and Seoul. We’ve begun hiring a local team and are exploring partnerships and investments in line with trends in local Claude use and Australia’s national AI priorities. We're excited to deepen our engagement with customers, researchers, and policymakers across the country. Read more: https://lnkd.in/ggCUQWN5
Anthropic
Anthropic is an AI safety and research company working to build reliable, interpretable, and steerable AI systems.
About us
We're an AI research company that builds reliable, interpretable, and steerable AI systems. Our first product is Claude, an AI assistant for tasks at any scale. Our research interests span multiple areas including natural language, human feedback, scaling laws, reinforcement learning, code generation, and interpretability.
- Website: https://www.anthropic.com/
- Industry: Research Services
- Company size: 501-1,000 employees
- Type: Privately Held
Updates
- A statement from Anthropic CEO Dario Amodei: https://lnkd.in/e_6vm3Gm
- A statement on the comments from Secretary of War Pete Hegseth: https://lnkd.in/e-guCny5
- A statement from Anthropic CEO Dario Amodei on our discussions with the Department of War: https://lnkd.in/e7S682ph
- Anthropic has acquired Vercept to advance Claude's computer use capabilities. The Vercept team brings deep expertise in how AI systems see and interact with software, which remains one of the most challenging problems in this space. We're excited to welcome them to Anthropic. https://lnkd.in/gEU8GJEm
- We're updating our Responsible Scaling Policy (RSP) to its third version. Since the policy came into effect in 2023, we've learned a lot about its benefits and its shortcomings. This update reinforces what worked and commits us to even greater transparency. We're now separating the safety commitments we make unilaterally from our recommendations for the industry. We're also committing to publish new Frontier Safety Roadmaps with detailed safety goals, and Risk Reports that quantify risk across all our deployed models. Read more: https://lnkd.in/eqd8Vcr2
- We're introducing updates to plugins in Cowork, designed to help enterprises customize Claude for better collaboration across their teams. Admins can create private plugin marketplaces to distribute plugins across the org, and a unified "Customize" menu gives you control over plugins, skills, and connectors in one place. We've added connectors from Google Workspace, Docusign, Apollo.io, Clay, Outreach, Similarweb, MSCI Inc., FactSet, WordPress, and Harvey, and partners like Slack, LSEG, S&P Global, Common Room, and Tribe AI have built plugins for joint customers. We've also created plugins across HR, design, engineering, ops, financial analysis, investment banking, equity research, private equity, and wealth management to help users see what's possible and start building their own. Now in research preview: Claude can work across Excel and PowerPoint end-to-end, running analysis in one and building the presentation in the other. Our team is live now covering these updates. Tune in: https://lnkd.in/gG_G8nZj Read the full post for more: https://lnkd.in/eFuFb5gB
- We've identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models. Distillation can be legitimate: AI labs use it to create smaller, cheaper models for their customers. But foreign labs that illicitly distill American models can remove safeguards, feeding model capabilities into their own military, intelligence, and surveillance systems. These attacks are growing in intensity and sophistication. Addressing them will require rapid, coordinated action among industry players, policymakers, and the broader AI community. Read more: https://lnkd.in/eRV_9Ea2
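For readers unfamiliar with the term, here is a minimal sketch of classic soft-label distillation, in which a small "student" model is trained to match a "teacher" model's output distribution. The attacks described above instead distill from sampled text exchanges rather than raw logits, so this illustrates the general technique only, not any lab's actual pipeline.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label distillation: train the student to match the teacher's
    temperature-softened output distribution via KL divergence."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)

# Toy usage: a batch of 4 examples over a 10-way output space.
teacher_logits = torch.randn(4, 10)
student_logits = torch.randn(4, 10, requires_grad=True)
distillation_loss(student_logits, teacher_logits).backward()
```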
- New research: The AI Fluency Index, our first empirical measurement of how people collaborate with AI. We tracked 11 behaviors across thousands of Claude.ai conversations—for example, how often people refine and iterate on their work with Claude, question Claude's reasoning, or fact-check its outputs—to analyze how people are using AI, and where there's room to grow. Read the full report: https://lnkd.in/g4yXP_Y5
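The full report presumably details the methodology; as a loose sketch only (the behavior names below are hypothetical, not the report's actual taxonomy), per-behavior prevalence could be computed from per-conversation labels like this:

```python
from collections import Counter

# Hypothetical behavior labels; the report's 11-behavior taxonomy may differ.
BEHAVIORS = ["refines_output", "questions_reasoning", "fact_checks_outputs"]

def behavior_prevalence(conversations):
    """Fraction of conversations exhibiting each behavior, given
    per-conversation boolean labels (e.g. produced by a classifier)."""
    counts = Counter()
    for labels in conversations:
        for behavior in BEHAVIORS:
            counts[behavior] += bool(labels.get(behavior, False))
    total = len(conversations)
    return {behavior: counts[behavior] / total for behavior in BEHAVIORS}

# Toy usage with two labeled conversations.
print(behavior_prevalence([
    {"refines_output": True, "fact_checks_outputs": False},
    {"refines_output": True, "questions_reasoning": True},
]))
```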
- Today, we're launching Claude Code Security as a limited research preview for Enterprise and Team customers. It scans codebases for vulnerabilities and suggests targeted software patches for human review, allowing teams to find and fix issues that traditional tools often miss. We expect that a significant share of the world's code will be scanned by AI in the near future, given how effective models have become at finding long-hidden bugs and vulnerabilities. Claude Code Security is one step toward our goal of more secure codebases and a higher security baseline across the industry. Open-source maintainers can apply for expedited access: https://lnkd.in/ge24VfiR
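The post doesn't include sample findings, so purely as an illustration (not Claude Code Security's actual output format), here is the kind of vulnerability-and-patch pair an AI code scanner might surface: a SQL injection fixed with a parameterized query.

```python
import sqlite3

# Vulnerable: interpolating user input into the SQL string lets an
# attacker inject SQL (e.g. username = "x' OR '1'='1").
def get_user_vulnerable(conn: sqlite3.Connection, username: str):
    row = conn.execute(
        f"SELECT id, email FROM users WHERE name = '{username}'"
    ).fetchone()
    return row

# Suggested patch: a parameterized query keeps the input as data,
# never as executable SQL.
def get_user_patched(conn: sqlite3.Connection, username: str):
    row = conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchone()
    return row
```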