
My Claude Team


As I’ve been leveraging AI tooling more and more for dev work, I’ve been iterating on what does and doesn’t work for me and the kind of results I’m happy with.

I spent a good chunk of time working with a custom agentic loop that would iterate through a PRD and tackle one item at a time before passing it through a review cycle with a smaller, cheaper model. This worked quite well, but it was difficult to get the model to work on only one item at a time while retaining the larger context of everything else, all within the limited context budget allowed to that agent.

Dev Teams

Agent team support in Claude Code fixed this. Being able to spin up a team of agents to tackle bigger problems means I don’t need that loop anymore — I can just tell the main agent to orchestrate a team and pass tasks off to sub-agents, each working within their own context budget, before handing off to another agent for review.

They can take on a big task (or list of tasks) and work it out amongst themselves.

Skills

With that, here’s the current version of the skill I’ve been using to invoke this — saved as a custom slash command (/review-team) in Claude Code:

Define a team to implement this plan. Include at least 1 reviewer using Sonnet
and working as devil's advocate who uses /simplify to review code.
 
You are the orchestrator for this work.
 
Process tasks using this flow:
 
1. Task is assigned to Agent
2. Agent works on task
3. When Agent has completed the work, pass to a Reviewer for review with /simplify
4. If the Reviewer accepts the changes, commit and move on to the next task
5. If the Reviewer requests changes, pass back to the original agent
6. If the Agent agrees with the suggestions, the Agent makes the
    changes then GOTO 4 and repeat
7. If the Agent disagrees with the suggestions, note the reason for
    disregarding, then commit and move on to the next task
 
Commit changes in logical groups. Prefer more granular commits where possible.
 
If you are working in a worktree, make sure to run tests/formatting/linting etc.
within that worktree.
 
As orchestrator, you need to proactively monitor the status of the team. If 
an Agent disconnects or goes idle, you are responsible for either
recovering them or killing them and replacing them with a new team
member. Do not just sit and wait endlessly for an unresponsive team member
to awaken.
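If you want to use the same setup, the prompt above can be saved as a project-scoped slash command; Claude Code picks up markdown files under .claude/commands/ and exposes them as commands named after the file. A minimal sketch:

```shell
# Project-scoped slash commands live in .claude/commands/;
# review-team.md becomes /review-team in Claude Code.
mkdir -p .claude/commands
cat > .claude/commands/review-team.md <<'EOF'
Define a team to implement this plan. Include at least 1 reviewer using Sonnet
and working as devil's advocate who uses /simplify to review code.

You are the orchestrator for this work.

(paste the rest of the prompt here)
EOF
```

A file in ~/.claude/commands/ instead makes the command available across all your projects rather than just the current one.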

My typical workflow is to go through a planning phase with the agent to produce a detailed plan, then invoke /review-team to kick off execution.

This allows multiple agents to work in parallel where tasks support it. By letting Claude figure out the dependency graph itself, it will usually identify which tasks can run in parallel and which must be sequential, and spin up an N-agent team accordingly.
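That dependency-graph reasoning amounts to standard topological batching: group tasks into waves where everything in a wave has all its prerequisites satisfied. A sketch using the standard library (the task names and edges here are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical plan: each task maps to the set of tasks it depends on.
deps = {
    "schema": set(),
    "api": {"schema"},
    "ui": {"schema"},
    "tests": {"api", "ui"},
}

def parallel_batches(deps):
    """Group tasks into waves; every task within a wave can run in parallel."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    batches = []
    while ts.is_active():
        ready = sorted(ts.get_ready())  # tasks with no unfinished prerequisites
        batches.append(ready)
        ts.done(*ready)
    return batches

print(parallel_batches(deps))
# → [['schema'], ['api', 'ui'], ['tests']]
```

Here "api" and "ui" land in the same wave, so two agents could take them on simultaneously, while "tests" has to wait for both.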

Using the /simplify plugin for reviews — an official Anthropic plugin, not something you need to build — means each change is checked against multiple criteria. The reviewer uses Sonnet specifically for the cost and speed tradeoff; you want reviews to be fast and cheap, not another heavyweight agent.

Allowing the agent worker to accept or reject reviewer suggestions helps weed out nitpicky feedback or anything that becomes invalid given a wider context.

The one recurring problem I’ve hit with Claude teams is agents falling offline or failing to respond — hence the strongly worded instruction for the orchestrator to monitor the team and deal with it proactively. It still occasionally needs a nudge, but it does help.
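What I’m asking the orchestrator to do is essentially a watchdog loop: track when each agent last made progress, and flag anyone who has gone quiet. A rough sketch of the idea in Python (the `Agent` class and timeout are invented for illustration; the real agents are managed inside Claude Code):

```python
import time

IDLE_TIMEOUT = 120  # seconds of silence before an agent counts as unresponsive

class Agent:
    """Stand-in for a team member tracked by the orchestrator."""
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Record that this agent just reported progress."""
        self.last_heartbeat = time.monotonic()

def stale_agents(agents, now=None):
    """Return agents the orchestrator should recover or replace."""
    now = time.monotonic() if now is None else now
    return [a for a in agents if now - a.last_heartbeat > IDLE_TIMEOUT]
```

The key point is the same as the prompt’s instruction: check proactively on a timer rather than blocking forever on a reply that may never come.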

This skill is ever-evolving as I hit new edge cases. Give it a go and let me know how it works for you.