6 LIVE SESSIONS ON AI PRODUCT MANAGEMENT
Become An AI PM
If you’re a PM and you haven’t built an AI product yet, you’re already behind. The good news: you can build one right now. In these sessions, I’ll show you what it takes to be an AI PM and how to build an AI product from scratch.
🗓️ 6 live sessions ⏱️ 45 minutes each ☑️ 100% free
The gap between "I use AI" and "I build AI" is widening. These sessions will help you close it.
SESSION 1 OF 6
🗓️ 28 March 2026 | 11AM GMT / 4:30PM IST
Traditional PM builds logic. AI PM builds behaviors. You're no longer defining "if X then Y." You're designing how to get the best guess at Y, and what happens when that guess is wrong. In this session, you'll learn why AI products are fundamentally probabilistic (the same input can produce different outputs every time), why your competitive advantage shifts from code to data, and how the 4 Pillars and 7 Questions Framework give you a complete operating model for any AI product decision.
» The deterministic vs. probabilistic mindset shift
» The 4 Pillars of AI PM
» Why "data is the product"
» The 7 Questions Framework
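The deterministic-vs.-probabilistic shift above can be seen in a toy sketch. The "model" here is a stand-in sampler, not a real LLM, and every name and output in it is illustrative only:

```python
# Toy sketch: deterministic logic vs. probabilistic model behavior.
# The "model" is a seeded sampler standing in for an LLM (illustrative only).
import random

def deterministic(x):
    # "if X then Y": same input, same output, every time
    return x * 2

def probabilistic(prompt, seed=None):
    # same prompt can yield different outputs across runs
    rng = random.Random(seed)
    completions = ["Paris", "The capital is Paris.", "Paris, France."]
    return rng.choice(completions)

assert deterministic(21) == deterministic(21)   # always equal
runs = {probabilistic("Capital of France?", seed=s) for s in range(10)}
print(runs)  # several distinct outputs for one and the same input
```

The PM's job shifts from specifying the one right answer to deciding which of those outputs count as "good" and what happens when the guess is wrong.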
SESSION 2 OF 6
🗓️ 29 March 2026 | 3PM GMT / 8:30PM IST
The most expensive AI mistake is building AI where plain software would have worked better. Teams do it because of FOMO, stakeholder pressure, and demos that create false confidence. In this session, you'll learn a 5-Gate Decision Framework you can use in your next product review to decide "AI or not AI" before anyone writes a line of code. You'll also see the AI Cost Iceberg: why 80% of AI costs are invisible until you're already committed.
» The 5-Gate AI Decision Framework
» 5 reasons teams build AI when they shouldn't
» Probabilistic vs. deterministic: your first filter
» A live exercise on a real AI initiative
SESSION 3 OF 6
🗓️ 04 April 2026 | 11AM GMT / 4:30PM IST
Open Perplexity. You see your query broken down into searches, 15 sources with clickable citations, comparison tables, follow-up suggestions. All of that is product logic. The model does roughly 30% of the work. In this session, you'll learn to see the invisible product decisions behind every AI product you use. You'll also see why every major AI product failure (Air Canada, Chevrolet, McDonald's, Google) maps to a specific system layer that a PM could have fixed.
» The 70/30 Split: product logic vs. model
» The 5-Question AI Product Teardown Framework
» Live teardown of Perplexity
» Failure-to-layer map (5 failures & root causes)
» Why Grammarly suggests, not corrects
» How to trace any AI bug to its system layer
SESSION 4 OF 6
🗓️ 05 April 2026 | 3PM GMT / 8:30PM IST
Agents are the most hyped concept in AI right now. Most explanations are either too technical or too vague. In this session, you'll learn exactly how agents work: a brain (LLM) that reasons, tools (APIs) that act, and a loop that repeats until the goal is met. You'll see a real agent architecture end to end, understand when to use agents vs. RAG vs. simple prompts, and learn the specific product decisions that separate agents that work from agents that fail.
» The Brain + Tools + Loop architecture
» A real agent walkthrough (email cl.. → response)
» When to use agents vs. RAG vs. simple prompts
» PM ownership areas for agent products
» Why more steps = more failure points
» How to define stopping conditions and guardrails
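The Brain + Tools + Loop architecture can be sketched in a few lines. The "brain" below is stubbed with a canned policy so the sketch runs offline; in a real agent that call would go to an LLM. Every function and tool name here is a hypothetical placeholder:

```python
# Minimal sketch of the Brain + Tools + Loop agent pattern (all names hypothetical).
MAX_STEPS = 5  # guardrail: hard step budget so the loop cannot run forever

def brain(goal, history):
    """Decide the next action. Stubbed here; a real agent would call an LLM."""
    if not history:
        return {"tool": "search", "args": {"query": goal}}
    return {"tool": "finish", "args": {"answer": history[-1]}}

TOOLS = {
    "search": lambda query: f"top result for '{query}'",  # stand-in for an API
}

def run_agent(goal):
    history = []
    for step in range(MAX_STEPS):          # the loop
        action = brain(goal, history)      # the brain reasons
        if action["tool"] == "finish":     # stopping condition: goal met
            return action["args"]["answer"]
        result = TOOLS[action["tool"]](**action["args"])  # tools act
        history.append(result)
    return "stopped: step budget exhausted"  # guardrail fallback

print(run_agent("best pizza near me"))  # → top result for 'best pizza near me'
```

Note that both exits are product decisions, not model decisions: the stopping condition and the step budget are exactly the guardrails the session covers, and each extra loop iteration is another chance to fail.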
SESSION 5 OF 6
🗓️ 11 April 2026 | 11AM GMT / 4:30PM IST
PMs who ship AI products spend 70% of their time planning and 30% watching the AI write code. The biggest mistake is opening a code editor with a two-line prompt. In this session, you'll learn the 5-file prep system (PRD, Planning, Tasks, Knowledge, Decisions) that prevents the most common vibe coding disasters. You'll also learn why thin vertical slices work and why ChatGPT's "text box and nothing else" launch beat every AI product that tried to do 5 things at once.
» The 5-file system for vibe coding prep
» Why the Knowledge file prevents context collapse
» The thin vertical slice scoping method
» The 70/30 planning-to-coding ratio
» The 6 biggest vibe coding mistakes (and how to avoid them)
» What a first Claude Code session looks like with good prep
SESSION 6 OF 6
🗓️ 12 April 2026 | 3PM GMT / 8:30PM IST
Evals are the single most important skill in AI product management. They're also the skill most AI courses skip entirely. "I look at the output and see if it's right" is not testing. It's vibes. In this session, you'll learn how to define "good," measure it with a structured rubric, and build test cases that tell you whether your AI product actually works before you ship it to real users.
» Define evaluation dimensions for any AI product
» Why a 1-4 scoring scale beats 1-5
» 3 test cases built live (must-pass, edge, fail-safe)
» The LLM-as-judge concept for automated evaluation
» What a Golden Set is and why it defines your quality bar
» The testing loop: test → judge → diagnose → fix → retest
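The test → judge loop above can be sketched as a tiny harness. The product and the judge are stubbed with keyword checks so it runs standalone; a real setup would call your AI product and an LLM-as-judge, and every case, name, and answer here is invented for illustration:

```python
# Hypothetical sketch: scoring a Golden Set with a 1-4 rubric judge.
GOLDEN_SET = [
    {"id": "must-pass", "input": "refund policy?",         "must_contain": "30 days"},
    {"id": "edge",      "input": "refund after 31 days?",  "must_contain": "not eligible"},
    {"id": "fail-safe", "input": "medical advice?",        "must_contain": "cannot help"},
]

def product_under_test(query):
    """Stand-in for the AI product being evaluated (canned answers)."""
    answers = {
        "refund policy?": "Refunds are accepted within 30 days.",
        "refund after 31 days?": "After 30 days you are not eligible.",
        "medical advice?": "I cannot help with medical questions.",
    }
    return answers.get(query, "")

def judge(output, must_contain):
    """Score on a 1-4 rubric. Stubbed as a keyword check; a real setup
    would use an LLM-as-judge with the full rubric in its prompt."""
    return 4 if must_contain in output else 1

def run_evals():
    # test → judge; any score below the bar is what you diagnose and fix
    return {case["id"]: judge(product_under_test(case["input"]), case["must_contain"])
            for case in GOLDEN_SET}

print(run_evals())  # each Golden Set case scored against the quality bar
```

The point of the structure is repeatability: rerun the same Golden Set after every fix and you know whether quality moved, which "I look at the output" never tells you.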

Sid built AI products at Yelp, most recently the AI API that helps users find the best local businesses through conversation. He runs the AI PM Accelerator, an 8-week live cohort where PMs design, build, and ship real AI products. 70K+ PMs follow his work on LinkedIn. This series is built from the same course material that Cohort 1 students used to ship working AI products in 8 weeks.
Every session is free. Every session is live.
Register once, get access to all 6 sessions. Recordings available for 48 hours after each event.
No credit card. No spam. Just 6 sessions of real AI PM knowledge.