Written By Deepti Ratnam
Edited By: Deepti Ratnam | Published By: Deepti Ratnam | Published: Mar 10, 2026, 12:16 PM (IST)
Anthropic has launched a new AI tool designed to review software code before it is deployed. The feature, named Code Review, is available inside Claude Code. The company says the tool is aimed at easing developers' workload and the coding problems they face: it analyzes pull requests and flags problems in the code before developers merge it into their projects.
According to Anthropic, the tool uses an agent-based system to examine code carefully. This should not only reduce the pressure on human reviewers but also improve the quality of software development.
Anthropic built Code Review with modern development teams in mind, which now produce large amounts of code. AI coding assistants and automated coding tools have increased the speed of development, but they also create trouble when many pull requests are opened every day.
To address this, Anthropic introduced Code Review. The company notes that human reviewers often do not have enough time to examine every line of code and, in some cases, only skim through the changes. Anthropic's new tool performs a deeper analysis of the code to catch bugs and security problems before it goes live.
Step 1: To use this feature, ask an administrator to enable the tool in the Claude Code settings.
Step 2: Your repository must also be connected to GitHub by installing the GitHub app.
Step 3: Once this setup is complete, the AI will automatically start reviewing new pull requests in the selected repositories.
Step 4: As soon as a pull request is submitted, a group of AI agents will begin analyzing the code.
Step 5: These agents will work simultaneously to identify bugs, logic errors, and possible issues.
Step 6: In addition, they will also check each finding to remove false warnings.
Step 7: The system then ranks the issues by severity. More complex pull requests trigger deeper analysis, with more AI agents working on them, while smaller code changes receive quicker, lighter reviews.
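The steps above can be sketched in code. This is an illustrative toy model, not Anthropic's implementation: the agent functions, the verification pass, and the "more agents for bigger diffs" heuristic are all hypothetical names and logic invented here to show the described flow of parallel agents, false-positive filtering, and severity ranking.

```python
import concurrent.futures
from dataclasses import dataclass

@dataclass
class Finding:
    message: str
    severity: int  # higher = more severe
    confirmed: bool = False

# Hypothetical "agents": each scans the diff for one class of problem.
def bug_agent(diff: str) -> list[Finding]:
    findings = []
    if "== None" in diff:
        findings.append(Finding("Use 'is None' instead of '== None'", severity=2))
    return findings

def security_agent(diff: str) -> list[Finding]:
    findings = []
    if "eval(" in diff:
        findings.append(Finding("eval() on untrusted input is dangerous", severity=3))
    return findings

def verify(finding: Finding) -> Finding:
    # Placeholder for the verification pass the article describes,
    # which re-checks each finding to filter out false warnings.
    finding.confirmed = True
    return finding

def review(diff: str) -> list[Finding]:
    agents = [bug_agent, security_agent]
    # Larger changes get a deeper review (sketched here by running
    # extra agent passes over diffs longer than 500 lines).
    if len(diff.splitlines()) > 500:
        agents = agents * 2
    findings: list[Finding] = []
    # Agents run simultaneously, as in the article's Step 5.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        for result in pool.map(lambda agent: agent(diff), agents):
            findings.extend(result)
    confirmed = [verify(f) for f in findings]
    # Rank the most severe issues first (Step 7).
    return sorted(confirmed, key=lambda f: f.severity, reverse=True)

sample_diff = "if user == None:\n    eval(payload)"
for finding in review(sample_diff):
    print(finding.severity, finding.message)
```

Running this on the two-line sample diff surfaces the `eval()` issue first, since it carries the higher severity score.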
Anthropic’s Code Review system typically takes up to 20 minutes to finish. Once the process is complete, the tool posts a summary comment on the pull request, along with detailed notes on specific issues.
According to the company, Code Review is a more advanced version of its earlier open-source review tool for Claude Code, and it requires more computing resources to perform its deeper analysis.
Code Review is billed based on token usage, and according to the company, an average review may cost between 15 and 25 US dollars, depending on the complexity of the code.
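To see how token-based billing translates into a dollar figure in that range, here is a back-of-the-envelope calculation. The per-token rates below are assumptions chosen for illustration; the article only states the $15-25 average, not the underlying prices.

```python
# Assumed per-token rates, for illustration only -- not published
# Code Review pricing. The article states only that billing is based
# on token usage and averages roughly $15-25 per review.
INPUT_RATE = 3.00 / 1_000_000    # assumed dollars per input token
OUTPUT_RATE = 15.00 / 1_000_000  # assumed dollars per output token

def estimate_review_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate a review's bill from token counts (rates are assumptions)."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A deep multi-agent review can consume millions of tokens:
print(round(estimate_review_cost(4_000_000, 500_000), 2))  # -> 19.5
```

Under these assumed rates, a review reading 4 million tokens and producing 500,000 would cost about $19.50, which lands inside the quoted range.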