A few weeks ago, I wrote about how spec-driven development with Claude and GitHub CLI transformed how I turn fuzzy ideas into actionable GitHub issues. That was about the input side of my workflow—getting the work well-defined.
This post is about the execution side: how I distribute that work across cloud agents, local agents, and parallelized git worktrees to maximize throughput. The short version? I'm running what amounts to a distributed development team, and the "team members" are a mix of cloud agents (VMs), my development laptop, and some carefully orchestrated git magic.
My current workflow breaks down into three complementary strategies:

1. Cloud agents for asynchronous, well-spec'd work
2. Local agents for fast, interactive iteration
3. Parallelized git worktrees for divide-and-conquer on large refactors
Each has its sweet spot. The real power comes from knowing when to use which—and how to combine them.
One of my favorite recent additions is the Claude Code GitHub Action. When I tag @claude in a GitHub issue comment, it spins up a cloud VM, checks out the repo, and starts working on the issue autonomously.
The workflow configuration is minimal:
```yaml
name: Claude Code

on:
  issue_comment:
    types: [created]
  issues:
    types: [opened, assigned]

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' &&
       contains(github.event.comment.body, '@claude') &&
       !github.event.issue.pull_request)
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: anthropics/claude-code-action@v1
        with:
          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
```
That's it. I write a well-spec'd issue, comment @claude, and wake up to a PR. The agent has access to my full AGENTS.md guidelines and standards, so the output is usually 80-90% production-ready.
This is perfect for well-scoped, well-spec'd issues and quick wins I don't need to supervise interactively.
CTO.new remains a cornerstone for larger implementation tasks. As I covered in my previous post about AI agent orchestration, I assign GitHub issues directly to CTO.new by tagging @cto-new in a comment, and it handles the full implementation cycle. Once complete, it notifies me with a comment on the issue linking to the pull request.
The combination of Claude Code Web Agent for quick wins and CTO.new for meatier features gives me flexibility in how I distribute cloud work.
GitHub Copilot runs automatically on every PR through my CI pipeline. It's the "automated code reviewer" layer that catches issues before I even look at a PR.
While cloud agents handle async work, local agents give me interactive control for rapid iteration.
I keep a code review command in my .claude/commands/ folder that works with both Claude and Gemini. When I want a quick local review without waiting for cloud CI, I can run it against any PR by ID:
```
/tutorpro:code-review 456
```
The command fetches the PR diff, reviews it against my standards documents, and posts structured feedback. Same review quality, but I control when and where it runs.
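Conceptually it's just a shell pipeline. Here's a rough sketch of the equivalent, assuming the gh and claude CLIs are installed and a standards doc at docs/standards.md (a hypothetical path, not necessarily where mine lives):

```bash
# Roughly what the review command does, as a one-off pipeline.
# Assumptions: gh is authenticated, the claude CLI is on PATH,
# and the standards live in docs/standards.md (hypothetical path).
PR=456
{
  cat docs/standards.md                 # the review criteria
  echo "--- Diff for PR #$PR ---"
  gh pr diff "$PR"                      # the changes under review
} | claude -p "Review the diff above against the standards above. Give structured feedback grouped by file." \
  | gh pr comment "$PR" --body-file -   # publish the feedback as a PR comment
```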
Claude Code in my terminal remains my primary interactive development tool, whether I'm in Cursor, VS Code, or a raw terminal.
Local agents let me iterate fast without network latency, and my laptop still outperforms any cloud agent VM (for now).
This is where things get exciting. For large refactors or when I have multiple related issues, I use a custom divide-and-conquer command that orchestrates parallel work across git worktrees.
Most developers (including past me) approach a 10-issue refactor serially: pick the first issue, branch, implement, open a PR, merge, then move on to the next.
That's fine, but it's slow. Each issue waits for the previous one, even when they're independent.
Git worktrees let you check out multiple branches simultaneously in different directories. My divide-and-conquer command automates this:
```bash
# Create a worktree for each subtask, each on its own branch cut from the shared
# feature/css-refactor base (git won't check out the same branch in two worktrees)
git worktree add -b feature/css-refactor-typography ../tutor-pro-worktrees/subtask-1-typography feature/css-refactor
git worktree add -b feature/css-refactor-badges ../tutor-pro-worktrees/subtask-2-badges feature/css-refactor
git worktree add -b feature/css-refactor-cards ../tutor-pro-worktrees/subtask-3-cards feature/css-refactor
```
Each worktree gets its own Claude Code session working on a specific, non-overlapping slice of the work. The key principles:

- No file is owned by more than one subtask
- Every session gets an explicit "do not touch" list for everything outside its scope
- Every slice has its own test commands, so it can be verified and merged in isolation
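Launching the sessions is the boring part; I just want one terminal per worktree. A minimal sketch using tmux (any terminal multiplexer works), assuming the worktrees created above:

```bash
# One tmux window per worktree, each running an interactive Claude Code session.
# Assumes the worktrees created above and the claude CLI on PATH.
tmux new-session -d -s worktrees          # detached session to hold the windows

for dir in ../tutor-pro-worktrees/*/; do
  tmux new-window -t worktrees -n "$(basename "$dir")" -c "$(realpath "$dir")" claude
done

tmux attach -t worktrees
```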
Last week, I tackled a comprehensive CSS refactoring across my entire component library. The spec-writer broke it into 10 phased issues:
| Issue | Description | Duration |
|---|---|---|
| #544 | Typography & Utility @apply Classes | ~11 min |
| #545 | CSS Standards Documentation | ~11 min |
| #546 | Create Text Component | ~45 min |
| #547 | Extend Badge Component with Variants | ~44 min |
| #548 | Migrate Badge Usage Across Codebase | ~45 min |
| #549 | Migrate Button Usage Across Codebase | ~45 min |
| #550 | Migrate Card Usage in Profile Components | ~61 min |
| #551 | Migrate Card Usage in Calendar Components | ~61 min |
| #552 | Enhance Avatar & Migrate Usage | ~60 min |
| #553 | Large File Comprehensive Refactors | ~61 min |
The issues were designed with clear boundaries—no file touched by more than one issue. I spun up 3-5 worktrees at a time, each with its own Claude subagent session. PRs started flowing in, getting reviewed and merged while other subtasks were still in progress.
Total wall-clock time? A fraction of what serial execution would have taken. The individual PRs merged in under 15 minutes each because they were focused and testable in isolation.
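Once a subtask's PR merges, its worktree has done its job. A quick housekeeping sketch, assuming the branch names from the earlier snippet:

```bash
# Retire a merged subtask's worktree and branch, then clean up stale metadata
git worktree remove ../tutor-pro-worktrees/subtask-1-typography
git branch -d feature/css-refactor-typography
git worktree prune
```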
The divide-and-conquer command includes a coordination phase that prevents chaos:
```markdown
## Session 1 (Subtask: Typography Classes)
- Worktree: ../tutor-pro-worktrees/typography
- Files to modify: src/index.css (typography section only)
- Do NOT touch: Component files, other CSS sections
- Test with: npm run lint && npm run build

## Session 2 (Subtask: Badge Migration)
- Worktree: ../tutor-pro-worktrees/badges
- Files to modify: src/components/ui/Badge.tsx, files using Badge
- Do NOT touch: Button, Card, or typography files
- Test with: npm test -- Badge && npm run lint
```
Each session knows exactly what it owns and what's off-limits. If an agent tries to modify a file outside its scope, it stops and alerts me rather than creating merge conflicts.
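That enforcement lives in the session prompts, but the same check is easy to script. A sketch, assuming each worktree carries a hypothetical ALLOWED_PATHS.txt listing the globs that session owns:

```bash
#!/usr/bin/env bash
# scope-check.sh: fail if this worktree has modified files outside its scope.
# ALLOWED_PATHS.txt (hypothetical) holds one glob per line, e.g. src/components/ui/Badge.tsx
set -euo pipefail

violations=0
while IFS= read -r file; do
  ok=false
  while IFS= read -r glob; do
    [[ -z "$glob" ]] && continue
    # Unquoted $glob so bash pattern-matches the file path against the glob
    [[ "$file" == $glob ]] && ok=true
  done < ALLOWED_PATHS.txt
  if ! $ok; then
    echo "Out of scope: $file" >&2
    violations=$((violations + 1))
  fi
done < <(git diff --name-only HEAD)

[[ "$violations" -eq 0 ]]
```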
One underappreciated benefit of this setup: compute distribution.
When I'm running 3-5 local worktree sessions plus a cloud agent or two, I'm spreading the cognitive and computational load across:

- My development laptop (the local worktree sessions)
- GitHub-hosted VMs (the Claude Code Action)
- CTO.new's hosted agents

No single machine, and no single agent, becomes the bottleneck. It's the same principle that makes distributed systems resilient, except the "system" is my development workflow.
After months of iteration, here's my decision tree:
Use Cloud Agents (Claude Code Web, CTO.new) when:

- The issue is already well spec'd and can run unattended
- I don't need to watch the work happen; reviewing the PR later is enough
- It's a larger implementation I'm happy to hand off end-to-end (CTO.new's sweet spot)

Use Local Agents (Claude CLI, Gemini, Cursor Composer) when:

- I want rapid, interactive iteration without network latency
- I'm actively in the code and steering each step
- I need a quick review or fix without waiting on cloud CI

Use Divide-and-Conquer when:

- A large refactor or a batch of related issues splits into independent, non-overlapping subtasks
- Each slice can be tested and merged in isolation
The biggest shift in my thinking isn't about any individual tool—it's about seeing myself as an orchestrator rather than an implementer.
I spend more time:

- Writing specs and slicing work into well-bounded issues
- Reviewing PRs and holding the line on standards
- Making architecture and design decisions

And less time:

- Typing out the implementation itself
The tools do the implementation. I do the architecture, quality control, and creative problem-solving. It's a better division of labor—and honestly, it's more fun. Well, most days. Some days I still feel like a parent trying to coordinate carpools for teenagers who all need to be in different places at the same time.
If you want to experiment with this approach:
- Tag @claude on a well-defined issue
- Use `git worktree add` to create a second working directory and run two agent sessions on non-overlapping tasks
- Keep an AGENTS.md or similar doc that all your agents can reference for consistency

The individual pieces aren't complicated. The power comes from combining them intentionally.
I'm still experimenting. Cursor's Background Agents look promising but need more configuration to match my workflow. I'm also exploring MCP servers for cloud-based agents to get better tool integration, and thinking about how to automate more of the coordination layer.
The landscape keeps evolving, and so does my process. But the core insight remains: distribute the work, maintain the standards, orchestrate the agents. Everything else is implementation details.
Jeremy
Thanks for reading! I'd love to hear your thoughts.
Have questions, feedback, or just want to say hello? I always enjoy connecting with readers.