I've been writing code for nearly 30 years. I can remember tinkering on Commodore PETs in grade school, and Apple IIs in high school. During my time at Lake Forest College, a college advisor noticed I had a knack for this stuff, guided me toward computer science, and the rest—as they say—is history. Twenty-six years of professional experience later, I'm a software architect who still loves the art of creating software.
But 2025 stood apart. Our entire industry has begun a profound transformation. AI has arrived!
This year, I shipped more functional software than in any other year of my career. Not because I suddenly became a faster typist or discovered some hidden reservoir of energy, but because I fundamentally changed how I build software.
I went from vibe coding to structured workflow orchestration. From working with AI tools to orchestrating teams of AI agents. And I have the receipts.
As the year winds down, I pulled together the metrics from my two major projects. The numbers surprised even me.
**Leaderboard Fantasy**

| Metric | Value |
|---|---|
| Total Commits | 1,140 |
| Pull Requests | 139 |
| Application Code | 92,421 lines |
| Test Code | 5,111 lines |
| Total Lines of Code | 97,532 |
**TutorPro**

| Metric | Value |
|---|---|
| Total Commits | 1,773 |
| Pull Requests | 484 |
| Application Code | 147,879 lines |
| Test Code | 50,502 lines |
| Storybook Stories | 71,052 lines |
| Total Lines of Code | 269,433 |
**Combined**

| Metric | Value |
|---|---|
| Total Commits | 2,913 |
| Total Pull Requests | 623 |
| Total Lines of Code | 366,965 |
That's nearly 367,000 lines of production code and tests (excluding documentation) across two fully functional platforms—built primarily solo in about eight months on a part-time basis.
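Totals like these are easy to reproduce from any repository. Here's a minimal sketch using standard tooling; the commented `gh` and `cloc` lines assume you have the GitHub CLI (authenticated) and `cloc` installed, and the directory names are placeholders:

```shell
# Hedged sketch: gathering commit/PR/LOC totals like the tables above.
set -e
repo=$(mktemp -d)                      # throwaway repo so the example is self-contained
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "first"
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "second"
git rev-list --count HEAD                          # total commits on the current branch
# gh pr list --state merged --limit 1000 | wc -l   # merged-PR count (needs gh auth)
# cloc app/ tests/ --quiet                         # lines of code by language/directory
```

On a real project you'd run `git rev-list --count HEAD` in the project checkout rather than a temp repo.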
For context: in my professional career, projects of this scale typically required teams of 5-10 engineers working for 12-18+ months.
The year didn't start with a master plan. It started with curiosity and a side project. Now I'm obsessed and excited about the future of producing software with the help of coding agents.
When I started Leaderboard Fantasy in late April, I was doing what I'd always done: write code, test it, iterate. The only difference was I had some new AI tools in my toolkit—Claude Code, GitHub Copilot, Warp.
They were helpful. Autocomplete on steroids. Good for boilerplate. Occasionally magical.
But I was still thinking like a solo developer. One task at a time. One feature at a time. The AI was my assistant, not my team.
The commit history tells the story:
April 2025: 1 commit (just getting started)
May 2025: 70 commits (building momentum)
June 2025: 167 commits (shipping features)
Respectable progress, but nothing revolutionary.
July and August are supposed to be slower months. Family vacations. Summer activities. Less time at the keyboard.
Yet something clicked.
July 2025: 343 commits
August 2025: 369 commits
Those weren't just random commits. Those were the months I started treating AI agents differently. Instead of asking for code snippets, I started delegating entire features. I experimented with cto.new for overnight PR generation. I set up Claude Code GitHub Actions to handle issues asynchronously.
More importantly, I started writing things down. Standards. Guidelines. Architecture decisions. Not for the AI's benefit, but for mine—and the AI happened to benefit enormously. I even have commands to keep my documentation in sync with the project's current state.
The irony wasn't lost on me: the "vacation months" produced more commits than the "focused work" months because I'd learned to work through agents instead of alongside them.
Then came October.
My wife—a well-loved teacher who tutors kids on the side—needed a better way to manage her students, schedules, and communications. What started as "I'll build you a simple tool" turned into TutorPro, a comprehensive platform for educators.
But this time, I came prepared. I had a process. I had standards documented first. I had agent configurations dialed in. I had learned.
October 2025: 397 commits
November 2025: 804 commits
December 2025: 514 commits (and counting)
That's over 1,700 commits on TutorPro alone in three months. The commit velocity more than doubled from my earlier projects—not because I was working harder, but because the process was finally working for me.
Here's what took me embarrassingly long to figure out: the speed came from the structure, not despite it.
When I look back at what actually changed between June and October, it wasn't the tools. The tools were largely the same. What changed was my approach:
Documented standards, dialed-in agent configurations, and an AGENTS.md file that every AI agent references.

The agents didn't get smarter. I got smarter about how to work with them.
There's a narrative out there that AI is going to make experienced developers obsolete. That a junior dev with good prompting skills can match a senior engineer.
I think that's backwards.
My 26 years of experience didn't become less valuable this year—they became more valuable. Here's why:
Architecture decisions still matter. Agents can write code, but they can't decide whether you should use Firebase or PostgreSQL, monolith or microservices, server components or client-side rendering. Those decisions require understanding tradeoffs that only come from having seen things fail.
Pattern recognition accelerates everything. When an agent generates code that "smells wrong," I can spot it in seconds. Not because I memorized every anti-pattern, but because I've debugged enough production incidents to develop intuition.
Standards documentation is a senior skill. Writing clear, comprehensive guidelines that both humans and AI can follow? That's technical writing informed by architectural thinking informed by battle scars. Juniors don't have those scars yet.
Orchestration is management. Distributing work across cloud agents, local agents, and git worktrees—while maintaining consistency and quality—is project management. It just happens that my "team" runs on GPUs instead of coffee.
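The worktree side of that orchestration needs nothing beyond plain git: each agent gets its own working directory on its own branch, so parallel work never collides. A minimal sketch (the directory and branch names are hypothetical):

```shell
# Hedged sketch: one checkout per agent via git worktrees.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "init"
# Give each "agent" its own working directory on its own branch:
git worktree add -b feature-a "$repo-agent-a"
git worktree add -b feature-b "$repo-agent-b"
git worktree list | wc -l   # main checkout + two agent worktrees
```

Because each worktree is a full checkout, one agent's half-finished refactor never dirties another's working tree.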
My skills might be dated in some ways. I haven't touched React class components in years. My CSS-in-JS opinions are probably out of fashion. But my experience—the pattern recognition, the architectural judgment, the ability to see systems holistically—that's exactly what makes agent orchestration work.
I'm not a faster coder now. I'm a better technical lead.
It hasn't all been smooth. Some honest friction points from the year:
The learning curve is real. Figuring out how to prompt effectively, how to structure specs for agent consumption, how to configure rules and standards—this took months of experimentation. There were weeks where I felt slower, not faster.
Context window limits hurt. Large features still choke agents. The 3-8 hour task sizing isn't arbitrary—it's the maximum scope that reliably fits in context. Anything bigger and you get hallucinations or incomplete implementations.
Debugging agent work is its own skill. When an agent produces buggy code, you have to debug both the code and the prompt/spec that led to it. That's two failure modes to diagnose.
The tooling landscape is exhausting. New models every month. New services, new pricing, new capabilities. I've tried Cursor, Copilot, cto.new, Claude Code, Gemini CLI, and probably a dozen others I've already forgotten. Keeping up is a job in itself.
Some days the agents are just... wrong. And those are frustrating days. Like herding cats, except the cats occasionally decide to refactor your entire component library unprompted.
If I could go back to the beginning of this year, here's what I'd say:
Write your standards first. Before you start coding with agents, write down how you want code to look, how errors should be handled, how tests should be structured. This document will save you hundreds of hours of fixing inconsistent AI output.
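To make that concrete, here is a hypothetical excerpt of what such a standards file might look like — not my actual AGENTS.md, and the TypeScript specifics are illustrative assumptions:

```markdown
# AGENTS.md — hypothetical excerpt

## Code style
- TypeScript strict mode; no `any` in new code.

## Error handling
- Wrap external calls and surface typed errors, never raw strings.

## Tests
- Every feature PR includes unit tests, colocated as `*.test.ts`.
```

The point isn't the specific rules — it's that every agent reads the same document, so the output stays consistent without you re-explaining conventions in every prompt.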
GitHub is your central nervous system. Issues, PRs, Actions—use them religiously. Cloud agents work best when work is tracked in a system they can read and write to.
Smaller tasks, more parallelization. One big feature request to an agent will fail. Ten small, well-scoped issues will succeed and can run in parallel. Break everything down.
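The breakdown itself can be scripted. This sketch only echoes the GitHub CLI commands it would run (a dry run, since actually creating issues needs an authenticated `gh`); the feature and task names are hypothetical:

```shell
# Hedged sketch: splitting one feature into small, agent-sized issues.
# `gh issue create` is the real GitHub CLI command; echoed here as a dry run.
feature="student-scheduling"
tasks="data-model api-endpoints calendar-ui notifications e2e-tests"
for t in $tasks; do
  echo gh issue create --title "feat($feature): $t" --label "agent-task"
done
```

Drop the `echo` and each task becomes a tracked issue that a cloud agent can pick up independently.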
Your experience is the differentiator. Don't try to compete with AI on speed of syntax. Compete on judgment, architecture, and orchestration. That's where decades of experience shine.
The first month will feel slower. Setting up standards, learning prompting patterns, configuring tools—it's an investment. The compound returns come later.
Leaderboard Fantasy is running strong, with active contests and a growing user base every tournament weekend. TutorPro launches very soon, built with everything I learned throughout the year.
But more than the products, I'm excited about what this year proved: a solo developer with the right process can compete with small teams. Not by working 80-hour weeks, but by working differently. Part time...from the couch!
The future of software development isn't AI replacing developers. It's developers who can orchestrate AI teams outpacing those who can't.
And for those of us with decades of experience? We're not obsolete. We're finally getting the leverage our experience deserves.
The Bottom Line
367,000+ lines of code. Two production platforms. One engineer (plus a lot of silicon teammates).
Not because AI is magic, but because I learned how to conduct the orchestra.
If you're an experienced developer wondering whether AI tools are "for you"—whether your skills are too dated, whether you're too set in your ways—I hope my year gives you a different perspective.
Your experience isn't a liability. It's the foundation that makes agent orchestration actually work.
The tools exist. The process is learnable. And the best is yet to come.
Here's to 2026.
–Jeremy
Thanks for reading! I'd love to hear your thoughts.
Have questions, feedback, or just want to say hello? I always enjoy connecting with readers.
Published on December 30, 2025 in tech