ABOUT ME

How can we build AI products that solve real problems, not just showcase capabilities?

I'm an AI Product Manager obsessed with architecting systems that automate the repetitive to unlock human potential. My journey from sales to Product Solutions Architect at Google was driven by a single question: "How can this be better?" Today, I bridge the gap between complex model orchestration and human-centric design.

At Google, I conceived and built an AI compliance pilot that achieved a 78% automatic reapproval rate, work that earned the 2024 'Risk Taker' Award, not just for the technical solution but for navigating the delicate balance between commercial enablement and platform safety. (See the full case study)

The Strategy: I believe the AI revolution requires actively refining systems to prevent hallucinations and ensure accuracy. I use RAG Ops to benchmark diverse LLMs, identifying the "Goldilocks" zone of performance, latency, and cost (sketched below). I use AI agents to move from concept to image-ready mockups, communicating ideas where traditional resources fall short. And I build custom tools to clean and structure messy web data, ensuring RAG systems are fed only high-fidelity information.
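To make that benchmarking loop concrete, here is a minimal sketch in Python. Everything named in it is a placeholder: the model names, per-token prices, call_model(), and score_answer() stand in for real provider SDKs and a real eval harness; the accuracy/latency/cost bookkeeping is the point.

```python
import time

# Hypothetical models and per-1K-token prices; swap in real ones.
MODELS = {
    "model-small":  {"cost_per_1k_tokens": 0.0005},
    "model-medium": {"cost_per_1k_tokens": 0.0030},
    "model-large":  {"cost_per_1k_tokens": 0.0150},
}

def call_model(name: str, prompt: str) -> str:
    """Stand-in for a real provider SDK call."""
    return f"[{name}] answer to: {prompt}"

def score_answer(answer: str, expected: str) -> float:
    """Toy exact-match check; a real eval would use a rubric or harness."""
    return 1.0 if expected.lower() in answer.lower() else 0.0

def benchmark(eval_set: list[tuple[str, str]]) -> dict[str, dict[str, float]]:
    """Run every model over the eval set, tracking the three axes."""
    results = {}
    for name, meta in MODELS.items():
        accuracy = latency = cost = 0.0
        for prompt, expected in eval_set:
            start = time.perf_counter()
            answer = call_model(name, prompt)
            latency += time.perf_counter() - start
            accuracy += score_answer(answer, expected)
            # Rough token estimate: ~1 token per word of prompt.
            cost += len(prompt.split()) / 1000 * meta["cost_per_1k_tokens"]
        n = len(eval_set)
        results[name] = {
            "accuracy": accuracy / n,
            "avg_latency_s": latency / n,
            "est_cost_usd": cost,
        }
    return results

if __name__ == "__main__":
    evals = [("What is the capital of France?", "Paris")]
    for model, stats in benchmark(evals).items():
        print(model, stats)
```

The "Goldilocks" pick is then whichever model clears the accuracy bar at the lowest combined latency and cost.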

How I Build: I run a high-velocity cycle: custom agents for PRDs/TRDs, Cursor for building, and Vercel for deployment. I architect RAG Operations to manage the trade-offs between model precision and response speed. And I design AI solutions that solve specific high-friction challenges to create a more usable experience for everyone, an application of the "Curb-Cut Effect."
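As a sketch of what that precision-versus-speed trade-off looks like in code, here is a hypothetical pair of retrieval profiles. The knob names, values, and routing rule are assumptions for illustration, not any specific system's configuration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RetrievalConfig:
    top_k: int          # more candidates: better recall, higher latency
    use_reranker: bool  # cross-encoder rerank: better precision, slower
    chunk_size: int     # smaller chunks: finer-grained answers, more lookups

# Two ends of the trade-off, tuned per use case rather than globally.
FAST = RetrievalConfig(top_k=3, use_reranker=False, chunk_size=1024)
PRECISE = RetrievalConfig(top_k=20, use_reranker=True, chunk_size=256)

def pick_profile(high_stakes: bool) -> RetrievalConfig:
    """Route high-stakes queries (e.g., compliance questions) to the
    precise profile and everything else to the fast one."""
    return PRECISE if high_stakes else FAST

print(pick_profile(high_stakes=True))
```

The design choice this encodes: precision versus speed is a per-query routing decision, not a single global setting.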

Beyond enterprise work, I'm growing The Tenant's Voice, my own 0-to-1 AI social-good project that helps users navigate complex legal systems. With 50+ daily active users and success rates of up to 90% in deposit disputes, it's proof that production-grade AI can serve both business and social impact. (Explore my AI Product Lab)

I'm not a developer; I'm a product manager who understands the 'how' and applies the latest AI tools to solve real problems. What happens when AI systems fail silently? How do we design for explainability from day one? These are the questions that drive my work.