Every golf course has a caddie who knows the lay of the land—distances, hazards, which club to pull. Leaderboard Fantasy has one too. Ours just happens to run on Gemini and Spring Boot.
Caddie is the AI agent inside Leaderboard Fantasy that helps users research players, compare stats, analyze their fantasy rosters, check leaderboards, and even get weather forecasts for tournament week. It's the kind of feature that makes users say, "Wait, I can just ask it?" And then they never stop asking.
The problem was that behind Caddie's clean chat interface, the code powering it was anything but clean. I had built a custom tool-calling system on top of Spring AI 1.0 that worked—but it was a maintenance headache hiding in plain sight. When Spring AI 1.1 landed with native tool calling support, I knew it was time to rip out the plumbing and let the framework do what frameworks are supposed to do.
This is the story of that migration: what the old system looked like, what Spring AI 1.1 brought to the table, and why the result is one of those rare refactors where every metric improved.
Before we dig into the migration, it's worth understanding what Caddie actually does. It's not a simple Q&A bot—it's a tool-equipped AI agent with access to real data.
When a user opens Caddie during a tournament week and asks something like "How has Scottie Scheffler been playing lately?", the agent doesn't guess. It calls a tool, hits the database, and comes back with actual career averages, recent form, FedEx Cup ranking, and OWGR position. Ask it to "Compare Rory and Xander for this week", and it pulls tournament-specific stats for both players and presents a structured comparison.
Here's what Caddie has in its bag—seven tools in total:
| Tool | What It Does |
|---|---|
| Player Stats | Career and tournament stats, rankings, recent form |
| Player Comparison | Head-to-head statistical breakdown of two players |
| Roster Analysis | Rates each player on a user's roster, spots weak links |
| Player Alternatives | Suggests similar players based on performance metrics |
| Contest Info | Scoring rules, roster size, tournament details |
| Leaderboard | Live standings, season averages, custom date ranges |
| Weather | Multi-day tournament forecast via Open-Meteo |
All of this runs on a Spring Boot 3.5 backend with MongoDB, powered by Google Vertex AI Gemini (gemini-2.5-flash). Conversations persist across sessions. There's frustration detection, incident logging, and a daily health summary email. It's a real production feature, not a toy.
And all of that context is what made the old tool-calling approach so painful.
When I first built Caddie on Spring AI 1.0, the framework didn't have native tool calling in the way it does now. So I did what any self-respecting engineer would do: I built my own.
Each tool was a Function<Request, Response> bean registered by name:
```java
@Component("getPlayerStats")
@Description("Get detailed statistics for a specific player")
public class PlayerStatsTool implements Function<PlayerStatsTool.Request, PlayerStatsTool.Response> {

    public record Request(
            @Description("Player name") String playerName,
            @Description("Player ID") String playerId
    ) {}

    @Override
    public Response apply(Request request) {
        // ... lookup logic
    }
}
```
That part was fine. The problem was everything else.
The system prompt had to teach the LLM a specific text format for invoking tools—essentially a custom protocol embedded in a prompt:
TOOL FORMAT:

```tool_code
getPlayerStats(playerName="Scottie Scheffler")
```
When the model responded, my service had to:
- Extract the `tool_code` fenced blocks using regex
- Route each call to the right tool by matching on `comparePlayers(`, `getPlayerStats(`, etc.
- Format each tool's response with a hand-built `StringBuilder` per tool

The AIAgentService had ballooned to roughly 1,300 lines. More than half of that was orchestration plumbing: parsing, routing, formatting, combining. The actual business logic—the interesting part—was buried under layers of glue code.
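To give a flavor of that plumbing, here's a simplified, hypothetical reconstruction of the kind of regex extraction the old service relied on. The class and method names are illustrative, not the actual production code, which handled far more edge cases:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ToolCallParser {

    // Matches blocks like: ```tool_code\ngetPlayerStats(playerName="...")\n```
    private static final Pattern TOOL_BLOCK =
            Pattern.compile("```tool_code\\s*(\\w+)\\((.*?)\\)\\s*```", Pattern.DOTALL);

    /** Extracts (toolName, rawArgs) pairs from the model's text response. */
    public static List<String[]> parse(String modelResponse) {
        List<String[]> calls = new ArrayList<>();
        Matcher m = TOOL_BLOCK.matcher(modelResponse);
        while (m.find()) {
            calls.add(new String[] { m.group(1), m.group(2).trim() });
        }
        return calls;
    }

    public static void main(String[] args) {
        String response = "Let me look that up.\n```tool_code\n"
                + "getPlayerStats(playerName=\"Scottie Scheffler\")\n```";
        for (String[] call : parse(response)) {
            System.out.println(call[0] + " -> " + call[1]);
        }
        // prints: getPlayerStats -> playerName="Scottie Scheffler"
    }
}
```

And this is only the extraction step—the arguments still had to be parsed, validated, and dispatched by hand afterward.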
It worked. Users loved Caddie. But every time I wanted to tweak a tool's output format or add a new capability, I had to wade through a swamp of regex patterns and string manipulation. The kind of code where you add a comment that says // don't touch this and mean it.
Spring AI 1.1 introduced first-class support for tool calling that fundamentally changed the programming model. Three features made the migration possible:
@Tool and @ToolParam annotations. Instead of implementing Function<Request, Response> and registering beans by name, you annotate methods directly. The framework reads the annotations and generates the tool schema that gets sent to the model.
ChatClient fluent API with .tools(). Instead of calling chatModel.call(prompt) and manually handling everything, you build a ChatClient that knows about your tools and handles the entire invocation loop automatically.
ToolContext for session data. This was the quiet game-changer. You can pass application-scoped context (like a tournament ID or contest ID) into tools without it being part of the AI's parameter schema. The model never sees those values—it just knows the tools exist. Your code injects the context at call time.
Together, these features meant I could delete my entire custom orchestration layer and replace it with framework-managed tool calling.
Here's what the core of AIAgentService looks like after the migration:
```java
return ChatClient.create(chatModel)
        .prompt(new Prompt(messages))
        .tools(playerStatsTool, playerComparisonTool, rosterAnalysisTool,
                playerAlternativesTool, contestInfoTool, leaderboardTool, weatherTool)
        .toolContext(toolContextMap)
        .call()
        .content();
```
That's it. Seven lines replaced roughly a thousand.
The ChatClient sends the prompt and the tool definitions (derived from annotations) to Gemini. Gemini decides whether to invoke tools based on the user's question—returning structured function-call requests, not text to parse. Spring AI automatically invokes the corresponding Java method, serializes the result, sends it back to Gemini, and Gemini synthesizes a natural-language response. The .content() call returns the final string.
No regex. No routing. No manual formatting. The model handles the presentation, and the framework handles the plumbing.
And the tools themselves? Much cleaner:
```java
@Component
public class PlayerStatsTool {

    @Tool(description = "Get detailed statistics for a specific player in a tournament")
    public Response getPlayerStats(
            @ToolParam(description = "Player name to search for") String playerName,
            @ToolParam(description = "Player ID from the API", required = false) String playerId,
            ToolContext toolContext) {

        String tournamentId = (String) toolContext.getContext().get("tournamentId");
        // ... lookup logic, same business logic as before
    }
}
```
No more Function<Request, Response>. No more @Component("getPlayerStats") name registration. No more inner Request records. Just a method with annotated parameters and a ToolContext for session data. The @Tool description tells the model what the tool does; @ToolParam describes each parameter. Spring AI handles the rest.
The ToolContext is built from the user's chat request—tournament, contest, and roster context flow in naturally:
```java
Map<String, Object> toolContextMap = new HashMap<>();
if (tournamentId != null) toolContextMap.put("tournamentId", tournamentId);
if (contestId != null) toolContextMap.put("contestId", contestId);
if (playerIds != null) toolContextMap.put("playerIds", playerIds);
```
This is elegant because the model never sees these IDs as parameters it might hallucinate values for. It just knows "I have a tool that gets contest info"—and when it calls that tool, Spring injects the real contest ID from the user's session.
The migration touched 16 files. Here's the scorecard:
| Metric | Before | After |
|---|---|---|
| AIAgentService lines | ~1,300 | ~670 |
| Tool orchestration code | ~600 lines of parsing, routing, formatting | 0 |
| Tool definition pattern | `Function<Request, Response>` + name registration | `@Tool` + `@ToolParam` annotations |
| Adding a new tool | New class + dispatcher entry + formatter + system prompt update | New class with `@Tool` method, pass to `.tools()` |
| Response formatting | Java `StringBuilder` per tool (~100 lines each) | Model handles it via system prompt guidance |
The service shrank by nearly half, and the complexity that remained is the interesting stuff: conversation management, system prompt construction, context injection, incident detection. The plumbing is gone.
One thing I was nervous about: the old system had meticulous Java-side formatting. Every tool response was hand-crafted markdown—comparison tables with aligned columns, emoji indicators for above/below average, collapsible detail sections. Giving that up felt risky.
It turned out to be a win.
Instead of hundreds of lines of formatting code per tool, I added a RESPONSE FORMATTING section to the system prompt with guidelines for how Caddie should present data—use markdown tables for comparisons, keep responses concise and data-focused, use golf-appropriate language. The model follows those guidelines and adapts its formatting to the context of the conversation, which rigid Java formatting never could.
When a user asks a quick question, the response is brief. When they ask for a deep comparison, the model builds a detailed table. That kind of adaptive formatting would have required even more conditional logic in the old system.
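To make that concrete, here's a hypothetical sketch of what such a system prompt section could look like. The wording below is illustrative, not the actual production prompt:

```
RESPONSE FORMATTING:
- Use markdown tables when comparing two or more players.
- Keep answers concise and data-focused; lead with the numbers.
- Use golf-appropriate language (rounds, cuts, strokes gained, FedEx Cup points).
- For quick questions, answer in one or two sentences.
- For deep-dive requests, structure the response with short headings.
```

A handful of prompt lines like these replaced what used to be hundreds of lines of Java formatting code.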
During this same migration, I added a completely new tool—weather forecasts for tournament locations, powered by the Open-Meteo API. In the old system, this would have meant:
- a new `Function<Request, Response>` class
- a dispatcher entry to route the call
- a `formatWeatherResponse()` method with markdown formatting
- a system prompt update describing the tool's invocation syntax

With Spring AI 1.1? I wrote the WeatherTool class with an @Tool method, added it to the .tools() list, and it just worked. The model discovered it, called it when users asked about weather, and formatted the response beautifully. The entire addition was self-contained—no touching the orchestration layer because there is no orchestration layer to touch.
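As an illustration of the data-fetching side, here's a minimal, self-contained sketch of how a weather tool might build its Open-Meteo request. The real WeatherTool wraps this kind of call in a Spring AI `@Tool` method; the class is hypothetical, but the query parameters follow Open-Meteo's public `/v1/forecast` API:

```java
import java.net.URI;
import java.util.Locale;

public class WeatherRequestBuilder {

    // Builds an Open-Meteo forecast URL for a tournament venue.
    // Daily variables and forecast_days follow Open-Meteo's /v1/forecast API.
    public static URI forecastUrl(double latitude, double longitude, int days) {
        String url = String.format(Locale.US,
                "https://api.open-meteo.com/v1/forecast"
                        + "?latitude=%.4f&longitude=%.4f"
                        + "&daily=temperature_2m_max,precipitation_sum,windspeed_10m_max"
                        + "&forecast_days=%d",
                latitude, longitude, days);
        return URI.create(url);
    }

    public static void main(String[] args) {
        // Coordinates roughly at TPC Sawgrass, four days of forecast
        System.out.println(forecastUrl(30.1975, -81.3949, 4));
    }
}
```

The tool method itself just calls this, deserializes the JSON, and returns a record; Spring AI serializes that record back to Gemini, which writes the forecast summary.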
That's the real payoff of this migration. It's not just about the code you delete—it's about how easy it becomes to add what's next.
For those who want the specifics:
| Component | Version / Tech |
|---|---|
| Spring Boot | 3.5.3 |
| Spring AI | 1.1.2 (upgraded from 1.0.1) |
| AI Model | Gemini 2.5 Flash via Vertex AI |
| Java | 21 |
| Database | MongoDB |
| Hosting | VPS with Docker Compose |
| Tunnel | Cloudflare Tunnel |
If you're running Spring AI 1.0 with custom tool-calling logic—or worse, prompt-engineering your way through function invocation—the upgrade to 1.1 is worth every minute.
Start with one tool. Pick your simplest Function<> bean, convert it to @Tool/@ToolParam, wire it through ChatClient.tools(), and verify it works end-to-end. Once you see the framework handling the invocation loop, converting the rest goes fast.
Embrace ToolContext. If you're currently embedding session-specific IDs into tool parameters or the system prompt, ToolContext is a cleaner pattern. The model doesn't need to know your internal IDs—it just needs to know what tools are available.
Trust the model for formatting. You'll be tempted to keep your Java-side response formatters. Try removing them and giving the model formatting guidelines instead. The adaptive responses are worth it.
Test the conversation flow, not just the tools. The migration changes how tools get called but shouldn't change what they return. Your integration tests should focus on end-to-end conversations—ask Caddie a question, verify the response includes the right data.
This migration fits into a pattern I keep seeing in my work with AI: the frameworks are catching up to what we've been building by hand. Six months ago, custom tool-calling orchestration was the only option. Today, it's unnecessary complexity.
Spring AI is maturing fast. The jump from 1.0 to 1.1 wasn't just an incremental release—it was the framework saying, "We see what you're building, and we can handle the hard parts." Native tool calling, ToolContext, the ChatClient fluent API—these aren't nice-to-haves. They're the kind of abstractions that let you focus on your domain instead of your plumbing.
For Caddie, that means I can spend more time making the agent smarter—better prompts, richer context, new tools—and less time maintaining the machinery that connects it all together.
And for Leaderboard Fantasy users? They just know that when they ask Caddie a question, they get a great answer. They don't care whether Spring AI or a pile of regex is parsing the tool calls. They care that it works.
It works better now. And the code finally looks like it should.
-Jeremy
Thanks for reading! I'd love to hear your thoughts.
Have questions, feedback, or just want to say hello? I always enjoy connecting with readers.
Published on February 17, 2026 in tech