Meghana Makhija
Senior Product Manager – Tech
Amazon
Meghana Makhija leads enterprise-scale AI, multimodal, and agentic systems across Product Quality and Trust & Safety at Amazon. With over a decade of experience across Amazon, Deloitte, and J.P. Morgan Chase, she specializes in building production-grade AI platforms that operate in regulated, high-risk environments, with initiatives impacting over 300 million customers through automation, defect prevention, and responsible AI deployment. AI systems she has led have been featured in major outlets including The Washington Post, Fast Company, The Verge, and Amazon Science.

Meghana is a Forbes Technology Council member and an IEEE Senior Member. Her thought leadership on enterprise AI and product systems has been published across platforms including Forbes, AI Journal, and the PDMA Knowledge Hub, and she is a peer reviewer for IEEE publications.

She holds leadership roles across global technology communities including IEEE Women in Engineering, Women in AI, and the Product Development and Management Association (PDMA). She also serves as an international AI and technology awards judge and invited speaker, and has received industry recognition for leadership and impact in technology. Meghana actively mentors founders and emerging leaders through global programs and professional networks. Her work sits at the intersection of enterprise AI, governance, and product leadership, focused on turning AI from experimentation into reliable, accountable systems at scale.
17 June 2025 10:15 - 10:45
Shipping AI is easy. Running it is not: Why AI products fail after launch
AI pilots are easy. Production systems are not. Many AI features perform well in controlled demos but fail once deployed into real-world products. Outputs become inconsistent, user trust erodes, and teams quietly roll back what once looked promising. The issue is rarely the model itself — it’s the lack of governance, clear decision boundaries, and guardrails needed to operate AI reliably at scale. In this session, I’ll share what actually breaks when AI moves from experimentation to production. From silent failure modes and unpredictable behavior to missing accountability and weak feedback loops, I’ll unpack the patterns that cause AI products to degrade after launch. Drawing on real-world enterprise experience, this talk introduces a practical framework for running AI as a product capability—not just a feature. Attendees will learn how to design effective guardrails, define human-in-the-loop systems where they matter, and build evaluation and monitoring mechanisms that keep AI systems reliable, trustworthy, and scalable over time.