I had three weeks before my fantasy football draft and a simple idea: build an AI assistant that could provide real-time draft recommendations. What I didn't have was the coding skills to implement it. What I did have was Claude Code and enough product management experience to know what good software should look like.
The result was FantasyAgent, an AI-powered draft assistant. This is the story of building it, and of what AI-assisted development actually feels like from the perspective of someone learning to code with AI as a pair programmer.
🎯 The Challenge
The requirements seemed straightforward enough:
- Monitor fantasy drafts in real time
- Analyze available players and team needs
- Provide recommendations within 15 seconds for snake drafts
- Handle auction drafts with budget-aware bidding suggestions
- Work reliably during actual drafts
The technical challenges were less obvious. I needed to integrate with multiple APIs, handle real-time data, build a web interface, and deploy everything reliably. As a product manager, I understood what needed to be built. As someone with limited coding experience, I had no idea how to build it.
🤝 Week 1: The Sleeper Success
I started with Sleeper, a fantasy platform with a developer-friendly API. No OAuth complications, clear documentation, and immediate access. Claude Code and I built the foundation in the first week.
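For anyone wondering what "developer-friendly" looks like in practice: Sleeper's draft endpoints are public, read-only, and need no authentication at all. The project has its own client, but a minimal sketch of the kind of call involved looks like this (the endpoint paths match Sleeper's public API docs; the helper names are mine):

```python
import requests

SLEEPER_API = "https://api.sleeper.app/v1"

def get_draft_picks(draft_id: str) -> list[dict]:
    """Fetch every pick made so far in a Sleeper draft (no auth required)."""
    resp = requests.get(f"{SLEEPER_API}/draft/{draft_id}/picks", timeout=10)
    resp.raise_for_status()
    return resp.json()

def get_drafted_player_ids(draft_id: str) -> set[str]:
    """Sleeper player IDs that are already off the board."""
    return {pick["player_id"] for pick in get_draft_picks(draft_id)}
```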
The breakthrough came when Claude suggested using CrewAI to build a multi-agent system. Instead of one AI trying to handle everything, we created four specialized agents:
- Roster Analyst: Evaluated team composition and positional needs
- Value Expert: Compared available players to their average draft position
- League Strategist: Analyzed opponent tendencies and market dynamics
- Head Coach: Synthesized everything into clear recommendations
The agents worked in parallel, each contributing specialized analysis. This architecture made the system both more capable and easier to debug when something went wrong.
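For a sense of what that wiring looks like, here is a sketch in CrewAI showing three of the four agents. The roles are the real ones; the goals, backstories, and task prompts are illustrative, and CrewAI's exact parameters vary a bit between versions:

```python
from crewai import Agent, Task, Crew, Process

# Roles match the project; goals, backstories, and prompts are illustrative.
roster_analyst = Agent(
    role="Roster Analyst",
    goal="Evaluate team composition and positional needs",
    backstory="A fantasy analyst obsessed with roster construction.",
)
value_expert = Agent(
    role="Value Expert",
    goal="Compare available players to their average draft position",
    backstory="Hunts for players falling past their ADP.",
)
head_coach = Agent(
    role="Head Coach",
    goal="Synthesize all analysis into one clear recommendation",
    backstory="Makes the final call while the draft clock runs.",
)

roster_task = Task(
    description="Assess the current roster: {roster}. Which positions need help?",
    expected_output="A ranked list of positional needs",
    agent=roster_analyst,
    async_execution=True,   # the analyst tasks run in parallel
)
value_task = Task(
    description="Rank the best values among the available players: {available}.",
    expected_output="Top five value picks with ADP deltas",
    agent=value_expert,
    async_execution=True,
)
decision_task = Task(
    description="Combine the roster and value analysis into one pick recommendation.",
    expected_output="A recommended pick plus two alternatives, with reasoning",
    agent=head_coach,
    context=[roster_task, value_task],  # waits for the parallel tasks to finish
)

crew = Crew(
    agents=[roster_analyst, value_expert, head_coach],
    tasks=[roster_task, value_task, decision_task],
    process=Process.sequential,
)

# recommendation = crew.kickoff(inputs={"roster": "...", "available": "..."})
```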
The Player ID Problem
The first major challenge was player identification. Sleeper uses different player IDs than FantasyPros, our data source for rankings and projections. Josh Allen the quarterback has ID "4984" in Sleeper but "17298" in FantasyPros. Worse, there's also Josh Allen the linebacker with completely different IDs in both systems.
Claude Code built a mapping system that matched players across platforms using name, team, and position verification. We ended up mapping 11,389 players between the two systems. What could have killed the project became a competitive advantage—we could seamlessly integrate data from multiple sources.
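The full mapping ships with the repository, but the core matching idea is small: normalize each player to a (name, team, position) key and only link IDs when all three agree, which is exactly what keeps the two Josh Allens apart. A simplified sketch follows; Sleeper's field names are the documented ones, while the FantasyPros field names here are assumptions:

```python
import re

def player_key(name: str, team: str, position: str) -> tuple[str, str, str]:
    """Normalize a player into a (name, team, position) key for cross-platform matching."""
    clean = re.sub(r"[^a-z ]", "", name.lower())          # "D.J. Moore" -> "dj moore"
    clean = re.sub(r"\s+(jr|sr|ii|iii|iv)$", "", clean)   # strip name suffixes
    return (clean.strip(), (team or "").upper(), (position or "").upper())

def build_id_map(sleeper_players: dict, fantasypros_players: list[dict]) -> dict[str, str]:
    """Map Sleeper player IDs to FantasyPros IDs when name, team, and position all agree."""
    fp_index = {
        player_key(p["name"], p["team"], p["position"]): p["id"]   # assumed FantasyPros fields
        for p in fantasypros_players
    }
    mapping = {}
    for sleeper_id, p in sleeper_players.items():                  # shape of Sleeper's /players/nfl
        key = player_key(p["full_name"], p.get("team"), p["position"])
        if key in fp_index:
            mapping[sleeper_id] = fp_index[key]
    return mapping
```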
🔥 Week 2: Yahoo's OAuth Hell
Confident from the Sleeper success, I decided to add Yahoo support. This is where I learned the difference between cooperative and hostile developer ecosystems.
Yahoo requires OAuth 2.0 authentication. As a product manager, I understood OAuth conceptually. As a developer, I'd never implemented it. The gap between theoretical knowledge and practical implementation was enormous.
Yahoo's developer experience is intentionally difficult. API access requests disappear into a void. Documentation is outdated. Token refresh flows fail silently. Even when Claude Code generated working OAuth code, navigating Yahoo's approval process and constantly changing requirements proved nearly impossible.
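To be fair, the protocol itself isn't exotic. Here is a sketch of the token-refresh half using requests_oauthlib, assuming you already have a token from the authorization step; the endpoint URL is Yahoo's published OAuth 2.0 token endpoint, though you should verify it against current docs. The hard part was everything around this code: app approval and requirements that kept shifting.

```python
import json
from requests_oauthlib import OAuth2Session

# Yahoo's published OAuth 2.0 token endpoint (verify against current docs).
TOKEN_URL = "https://api.login.yahoo.com/oauth2/get_token"

def save_token(token: dict) -> None:
    """Persist refreshed tokens; skipping this step is one way refreshes 'fail silently'."""
    with open("yahoo_token.json", "w") as f:
        json.dump(token, f)

def build_session(client_id: str, client_secret: str, token: dict) -> OAuth2Session:
    """A session that refreshes expired access tokens automatically and saves the result."""
    return OAuth2Session(
        client_id,
        token=token,
        auto_refresh_url=TOKEN_URL,
        auto_refresh_kwargs={"client_id": client_id, "client_secret": client_secret},
        token_updater=save_token,
    )
```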
Context Loss Becomes a Problem
By week two, Claude Code started losing track of our previous work. Each session felt like onboarding a new developer who'd never seen the codebase. Our OAuth debugging attempts blurred together across different conversations.
This taught me the importance of documentation for AI-assisted development. Not documentation for future human developers, but for future AI sessions. I started maintaining:
- ACTION_LOG.md: What we built and why
- ISSUE_LOG.md: What broke and how we tried to fix it
- CONTEXT.md: Current state and next steps
These files became essential for maintaining continuity across Claude Code sessions.
⚡ Week 3: The Production Push
With the draft approaching and the Yahoo integration still broken, I made a strategic decision: abandon Yahoo and focus on making the Sleeper experience excellent. This proved to be the right call.
We built auction draft functionality using Value-Based Drafting (VBD) calculations. The system could recommend maximum bid amounts based on player value, position scarcity, remaining budget, and roster needs. More importantly, it updated recommendations in real time as the auction progressed.
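The repository has the real implementation; the sketch below is a deliberately simplified version of the idea, with names and the dollar-reserve rule of my own choosing. A player's worth is their projected points above a replacement-level player at the same position, converted into dollars as a share of the value left on the board, while always holding back $1 for every other roster spot still to fill:

```python
def max_bid(
    projected_points: float,
    replacement_points: float,    # projection for a replacement-level player at the same position
    remaining_budget: int,
    remaining_value_pool: float,  # total points-over-replacement left among undrafted targets
    roster_spots_left: int,       # unfilled spots, including the one this bid would fill
) -> int:
    """Simplified VBD auction cap: bid in proportion to the player's share of the
    remaining value on the board, keeping $1 for every other unfilled spot."""
    points_over_replacement = max(projected_points - replacement_points, 0.0)
    # Hard ceiling: leave $1 for each roster spot you still need to fill after this one.
    affordable = remaining_budget - (roster_spots_left - 1)
    if remaining_value_pool <= 0 or affordable < 1:
        return max(affordable, 0)
    value_share = points_over_replacement / remaining_value_pool
    value_bid = 1 + round(value_share * (remaining_budget - roster_spots_left))
    return max(1, min(value_bid, affordable))
```

In the real system, position scarcity and roster needs adjust the replacement baseline and the value pool as the board changes.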
Performance Optimization
Initial response times were 15-20 seconds—too slow for 90-second draft timers. Claude Code helped optimize the analysis pipeline by running the four agents in parallel instead of sequentially. Response times dropped to 3-5 seconds, fast enough for real draft scenarios.
We also implemented intelligent caching. Player rankings were cached for 30 minutes. Expensive calculations were precomputed when possible. API failures had graceful fallbacks with cached data.
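Here is a sketch of that caching pattern, assuming a simple in-process cache rather than whatever the repository actually uses: serve data that is still inside the 30-minute window, refresh when it isn't, and fall back to the stale copy instead of failing when the upstream API is down.

```python
import time

_cache: dict[str, tuple[float, object]] = {}
RANKINGS_TTL = 30 * 60  # 30-minute window for player rankings

def get_rankings(fetch_fn, key: str = "rankings"):
    """Return cached rankings while fresh; on upstream failure, fall back to stale data."""
    now = time.time()
    entry = _cache.get(key)
    if entry and now - entry[0] < RANKINGS_TTL:
        return entry[1]                      # fresh cache hit
    try:
        data = fetch_fn()                    # e.g. a FantasyPros rankings request
        _cache[key] = (now, data)
        return data
    except Exception:
        if entry:
            return entry[1]                  # graceful fallback to the stale copy
        raise

# Usage: rankings = get_rankings(fetch_fantasypros_rankings)  # any zero-argument fetcher
```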
🏆 Draft Day: August 12, 2025
The system performed well during my actual draft. Every recommendation arrived within the target timeframes. The VBD calculations correctly identified value picks. Budget management prevented early overspending. The multi-agent analysis provided insights I wouldn't have considered manually.
More importantly, the system didn't crash, hang, or produce obviously wrong recommendations during live usage. For a system built in three weeks by someone learning to code, this felt like a significant achievement.
🧠 What I Learned About AI-Assisted Development
Context Management Is Critical
The biggest challenge wasn't technical complexity—it was maintaining context across development sessions. Claude Code can hold tremendous complexity in its working memory during a session, but that knowledge disappears between conversations.
The solution was treating documentation as infrastructure. Every decision needed to be logged. Every error needed to be documented. Every learning needed to be captured. This documentation wasn't for humans—it was for future AI sessions.
Platform Politics Matter
No amount of AI assistance can overcome platform hostility. Sleeper's cooperative API enabled rapid development. Yahoo's intentionally difficult developer experience created insurmountable obstacles. The lesson: choose your platforms carefully, especially when learning new technologies.
AI Amplifies Domain Expertise
Claude Code quickly absorbed fantasy football concepts like VBD theory, positional scarcity, and roster construction strategy. My product management background became an advantage—I instinctively focused on error handling, user experience, and production reliability rather than just making code work.
The AI didn't just write code—it became a genuine pair programmer that understood both the technical requirements and the business domain.
Production Pressure Accelerates Learning
Having a real deadline with real stakes (my fantasy league) created forcing functions that side projects lack. Every decision had to consider production reliability. Every feature had to work under pressure. This constraint made both the development process and the final product better.
🔧 The Technical Stack
The final system included:
- Backend: Python with FastAPI for the web server
- AI Framework: CrewAI for multi-agent orchestration
- Data Sources: Sleeper API for draft data, FantasyPros for rankings
- Frontend: Vue.js for the web interface
- Deployment: Local server with production monitoring
The architecture prioritized reliability over sophistication. Simple components with clear interfaces. Extensive error handling and fallback systems. Human-readable logs for debugging during live drafts.
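To make "simple components with clear interfaces" concrete, here is the shape of a recommendation endpoint in that stack. The route, function names, and payload are illustrative rather than the repository's actual API; the point is the explicit, readable failure handling.

```python
import requests
from fastapi import FastAPI, HTTPException

app = FastAPI(title="FantasyAgent")

def run_draft_analysis(draft_id: str) -> dict:
    """Placeholder pipeline: pull Sleeper picks, then hand off to the agents
    (the earlier sketches cover those pieces)."""
    resp = requests.get(f"https://api.sleeper.app/v1/draft/{draft_id}/picks", timeout=10)
    resp.raise_for_status()
    picks = resp.json()
    return {"picks_made": len(picks), "recommendation": "agent analysis goes here"}

@app.get("/recommendation/{draft_id}")
def recommendation(draft_id: str) -> dict:
    """One clear interface per component, with explicit failure handling."""
    try:
        return run_draft_analysis(draft_id)
    except requests.RequestException as exc:
        # Fail with a readable message instead of hanging, which is what you want mid-draft.
        raise HTTPException(status_code=502, detail=f"Upstream data source failed: {exc}")
```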
📊 What Actually Worked
Claude Code excelled at:
- Generating boilerplate code and standard implementations
- Debugging syntax errors and logic issues
- Learning domain-specific concepts quickly
- Suggesting architectural improvements
- Handling complex API integrations when documentation existed
⚠️ The Limitations
Where AI assistance struggled:
- Navigating hostile developer ecosystems (Yahoo's OAuth process)
- Maintaining context across long development projects
- Understanding implicit business requirements
- Debugging environmental issues vs. code issues
- Making architectural trade-offs with incomplete information
🎯 The Strategic Implications
This experience changed how I think about the relationship between product management and technical implementation. AI assistants don't eliminate the need for technical understanding, but they dramatically lower the barriers to building functional systems.
For product managers, this creates new possibilities. Ideas can be validated through working prototypes instead of mockups and specifications. The time from concept to testable system has collapsed from months to weeks.
For organizations, this suggests that the constraint on innovation shifts from technical feasibility to product-market fit and execution quality. When implementation becomes less of a bottleneck, identifying what to build becomes more important.
🔗 The Open Source Release
FantasyAgent is available on GitHub: github.com/adamrubinsky/FantasyAgent
The repository includes the complete Sleeper integration, CrewAI agent system, player mapping data, and documentation of the development process. It's ready for the 2025-2026 fantasy season—or as a reference for building similar AI-powered applications.
🎬 Looking Forward
Three weeks with Claude Code taught me that AI-assisted development isn't about replacing technical expertise—it's about making technical implementation accessible to domain experts who previously couldn't translate their knowledge into working systems.
The future probably isn't "AI vs. developers" but rather "domain experts with AI assistance" competing against traditional development approaches. The teams that figure out this collaboration model first will have significant advantages in building products that solve real problems.
The constraint on building great software is shifting from technical capability to understanding what needs to be built and why. For product managers willing to learn alongside AI assistants, that's an exciting development.
Building FantasyAgent showed me that the gap between having an idea and having a working system is smaller than ever before. The question isn't whether AI will change how software gets built—it's whether you'll be early or late to figure out the new process.
What will you build when implementation is no longer the primary bottleneck?
Built with Claude Code over 21 days. Currently helping analyze my 2025 fantasy season. The code is open source and available for anyone who wants to build their own AI draft assistant.