About six months ago, I shared how I was using basic AI tools to build Leaderboard Fantasy. Back then, I was genuinely excited about using Claude Code, Copilot, and Warp to accelerate my development workflow. And honestly? That was just the beginning.
Since then, my approach has evolved significantly. I'm no longer just using AI as a coding assistant—I'm orchestrating a full ecosystem of cloud and local AI agents, each with specialized roles in my development pipeline. The result is something I genuinely didn't think possible for a solo developer: I'm producing software at roughly 10-20x the pace I would in a traditional professional environment.
Recently, cto.new featured a story about how I'm using their platform alongside GitHub to build at this scale. Reading that piece crystallized something for me—what I've built isn't just faster development, it's a fundamentally different approach to how solo builders can compete with teams.
Let me walk you through how this actually works, because it's not just about having AI tools—it's about orchestrating them with intentional process.
Everything starts with specs. I use Claude with custom commands to write detailed feature specifications. These specs are granular, focused, and break down complex features into manageable chunks—typically 4 to 8-hour implementation tasks. If a feature is too big, all the agents choke, time out, or deliver garbage. So I've helped my AI spec writer learn to be ruthless about chunking work.
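To give a sense of what ruthless chunking means in practice, here's the rule expressed as a runnable sketch. The `Chunk` shape and `oversized` helper are purely illustrative, not my actual spec tooling; the 4-8 hour window is the real constraint:

```python
# Illustrative sketch of the chunking rule, not my actual spec tooling.
from dataclasses import dataclass

MIN_HOURS = 4
MAX_HOURS = 8

@dataclass
class Chunk:
    title: str
    estimate_hours: float

def oversized(chunks: list[Chunk]) -> list[str]:
    """Return titles of chunks an agent is likely to choke on."""
    return [
        c.title for c in chunks
        if not (MIN_HOURS <= c.estimate_hours <= MAX_HOURS)
    ]

feature = [
    Chunk("Add player search endpoint", 6),
    Chunk("Rewrite scoring engine", 30),  # too big: split before assigning
]
print(oversized(feature))  # ['Rewrite scoring engine']
```

Anything the check flags goes back to the spec writer to be split before it ever becomes an issue.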
Those specs get converted into GitHub Issues. GitHub is my central nervous system—it's where all the work lives.
This is where cto.new has become indispensable. I assign GitHub Issues directly to cto.new, and it does its thing. Often overnight. I wake up to a pull request waiting for me—a complete implementation that's usually 80-90% production-ready.
Here's the beautiful part: as the cto.new article mentions, I don't even use the web interface. I work almost entirely out of GitHub when working with cloud-based agents. For example, I can create an issue, assign it to cto.new, and let it run. The workflow is so clean that it's become my preferred way to delegate work.
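For illustration, this is roughly the shape of the request body GitHub's REST API expects when creating and assigning an issue (`POST /repos/{owner}/{repo}/issues`). The `cto-new-bot` login and `agent-task` label here are placeholders of my own, not cto.new's actual integration details:

```python
# Sketch of a GitHub issue-creation payload. The assignee login and
# label name are placeholders, not real integration details.
import json

def issue_payload(title: str, spec_body: str, agent: str) -> dict:
    """Build the request body for POST /repos/{owner}/{repo}/issues."""
    return {
        "title": title,
        "body": spec_body,
        "assignees": [agent],      # the agent's bot account login
        "labels": ["agent-task"],  # my own labeling convention
    }

payload = issue_payload(
    "Add player search endpoint",
    "## Spec\nEstimated: 6h",
    "cto-new-bot",  # placeholder: substitute the real bot login
)
print(json.dumps(payload, indent=2))
```

In practice the issue body is the full spec chunk, so the agent has everything it needs without any side-channel context.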
I've also had excellent results with GitHub Copilot for certain types of tasks, particularly those that benefit from context-aware code generation within a specific codebase. Different problems call for different agents. Copilot also polices every pull request, running my GitHub Actions to assess code quality and test coverage, and it provides code review on top of that.
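To make the quality gate concrete, here's a hedged sketch of the kind of threshold check such a workflow applies. The floor value and function names are mine for illustration; my real gates live in GitHub Actions configuration:

```python
# Illustrative coverage gate; the floor and names are not my real CI config.
COVERAGE_FLOOR = 80.0  # percent

def gate(coverage_pct: float, floor: float = COVERAGE_FLOOR) -> bool:
    """True if a PR clears the coverage bar."""
    return coverage_pct >= floor

def verdict(coverage_pct: float) -> str:
    """Human-readable pass/fail line for the PR check."""
    if gate(coverage_pct):
        return f"coverage OK: {coverage_pct:.1f}%"
    return f"coverage below floor: {coverage_pct:.1f}% < {COVERAGE_FLOOR:.1f}%"

print(verdict(86.4))  # coverage OK: 86.4%
print(verdict(71.0))  # coverage below floor: 71.0% < 80.0%
```

The point isn't the check itself; it's that a failing check is a signal the agents can read and respond to, just like a human teammate would.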
While I'm heavily invested in cloud agents, I also maintain local agents using Gemini and Claude via CLI interfaces.
The local agents give me flexibility and keep me from being dependent on any single cloud service. Plus, there's something satisfying about maintaining your own local intelligence layer.
Here's where the process gets interesting. Cloud agent pull requests get reviewed by both cloud agents (I tag Copilot or other agents in PR comments) and sometimes by local agents via CLI custom commands. This creates multiple passes of quality assurance.
The feedback loop is key: pull request feedback gets addressed by me or delegated back to the appropriate agent. It's not fully automated because I want human judgment in the loop—but it's dramatically faster than traditional code review.
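The multi-pass idea reduces to a simple rule: a PR is merge-ready only when every required reviewer has signed off with no blocking notes. Here's a sketch with hypothetical reviewer names; my actual loop runs through PR comments and CLI commands rather than code like this:

```python
# Sketch of the merge-readiness rule behind multi-pass review.
# Reviewer names and the Review shape are hypothetical.
from dataclasses import dataclass

@dataclass
class Review:
    reviewer: str  # e.g. "copilot", "local-claude", "me"
    approved: bool
    blocking_notes: list[str]

def merge_ready(reviews: list[Review], required: set[str]) -> bool:
    """Every required reviewer approved with no blocking notes."""
    clean = {r.reviewer for r in reviews if r.approved and not r.blocking_notes}
    return required <= clean

passes = [
    Review("copilot", True, []),
    Review("local-claude", True, []),
    Review("me", False, ["rename the scoring helper"]),
]
print(merge_ready(passes, required={"copilot", "me"}))  # False: my pass blocks
```

Note that the human pass is just another required reviewer in the set, which is exactly how I keep judgment in the loop without slowing everything down.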
Finally, everything flows through a comprehensive CI/CD pipeline. The pipeline itself is orchestrated, but the key is that the agents understand it: when they write code, they're aware of the constraints and standards they need to meet.
So what does this actually produce? For Leaderboard Fantasy, I'm handling the entire product myself. For TutorPro (coming soon), I'm building everything from the ground up. Both projects. Solo. Using agents.
Could I do this without AI agents? Technically, yes, but it would take ages to finish. The difference between building one meaningful feature per week and one (or more) per day on a part-time basis is profound.
Here's the thing about this industry right now: everything is evolving rapidly. New models drop. New services launch. Pricing changes. I actively experiment with offerings from cto.new, Anthropic, GitHub Copilot, Cursor, amongst others. I pay attention to API limits, assess different models for different task types, and adjust my process accordingly.
Some experiments stick (cto.new became a cornerstone of my workflow). Some don't. But the willingness to experiment—to constantly optimize your agent selection and orchestration—is what separates struggling solo devs from those who are truly shipping at scale.
Not everything is rosy. Here are the real friction points:
Token Limits and Context Windows: Larger features still hit limits. You can't send an entire codebase as context to every agent, which is why the 4-8 hour chunking is essential. I've also increased my spend on Claude Code for higher, more forgiving limits.
Agent Consistency: Different agents approach problems differently. What works well with one agent might need significant refactoring if handled by another. Consistency across the codebase requires vigilance.
Integration Friction: Getting all these tools to play nicely together (custom commands, CLI interfaces, GitHub integrations, CI/CD hooks) requires setup work. I'm trying to cross-reference rules and context documents to keep a single source of truth, but it's been difficult. However, once it's done, it's really done.
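One approach I'm experimenting with is rendering the per-agent context files that the tools read, such as `CLAUDE.md` for Claude Code and `.github/copilot-instructions.md` for Copilot, from one canonical rules document. The rule text and templates below are illustrative stand-ins:

```python
# Hypothetical sketch: render one canonical rules doc into the
# per-agent context files each tool reads. Rule text is illustrative.
RULES = """\
- All new endpoints need integration tests.
- Keep functions under 50 lines.
"""

TARGETS = {
    "CLAUDE.md": "# Project rules\n\n{rules}",
    ".github/copilot-instructions.md": "# Instructions for Copilot\n\n{rules}",
}

def render_all(rules: str) -> dict[str, str]:
    """Map each target file path to its rendered content."""
    return {path: tmpl.format(rules=rules) for path, tmpl in TARGETS.items()}

for path, content in render_all(RULES).items():
    print(f"--- {path} ---")
    print(content)
```

Generating the files instead of hand-editing each one means a rule change lands everywhere at once, which is the whole point of a single source of truth.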
The Learning Curve: Using AI agents effectively is a skill. It's not just prompting—it's understanding how to structure work so agents can succeed, how to write specs that don't over-constrain solutions, and how to review agent work intelligently.
Here's what genuinely excites me: we're at an inflection point. The tools to build solo have crossed a threshold where a single competent engineer with the right AI orchestration can do what used to require small teams.
This has implications: the developers who win in the next few years aren't necessarily the fastest typists. They're the ones who can orchestrate AI effectively.
I don't expect my process to stay static. In the coming months, there will be new agents, new models, and new integrations to experiment with.
The market will continue to evolve and adapt to developer demands. I expect my process to evolve alongside it. The constant is this: the willingness to treat development itself as a continuous optimization problem, where your tools and workflows are the primary variables.
Six months ago, I was excited about AI-assisted vibe coding. Today, I'm running what effectively functions as a distributed development team. I'm shipping features that would require at least 5-7 people in a traditional setup.
Is it perfect? No. Is it the future of software development? Absolutely.
If you're curious about how you might evolve your own workflow, start with small experiments. Pick one AI agent, integrate it into one part of your process, and measure the impact. Build from there. If you want to learn how to get started, don't hesitate to contact me. I love sharing experience, wisdom and dad jokes.
The tools exist. The process is learnable. The productivity gains are ABSOLUTELY real.
The only barrier now is imagining what's truly possible and the fear of the unknown.
If you want more details on how I'm using cto.new specifically, check out the story they published about my workflow. It's a solid look at one concrete part of this larger ecosystem.
Jeremy
Thanks for reading! I'd love to hear your thoughts.
Have questions, feedback, or just want to say hello? I always enjoy connecting with readers.
Published on November 24, 2025 in tech