Yoodli and Google Cloud at SEC Denver: Key Takeaways on Scaling Sales Enablement with AI
April 21, 2026
On April 1, 2026, Yoodli CEO and co-founder Varun Puri took the stage at the Sales Enablement Collective Summit in Denver alongside Patrick Austin, Sr. Program Manager of Business Enablement Systems and Operations at Google Cloud. Their session, "Learn. Practice. Do.: The Future of Sales Enablement Is Experiential," covered how AI sales roleplay training is reshaping enterprise enablement.
Patrick has spent his career in education and L&D, starting as a high school English teacher before moving into IT, then Google Fiber, and eventually Google Cloud, where he has spent 12+ years across instructional design, learning architecture, field enablement, and ed tech. He now leads Google Cloud's internal learning tech stack, which supports training for over 15,000 sellers.
Here’s what they covered, and what every enablement leader should take away from it.
The core premise: static content doesn’t create behavior change. Practice does.
Varun opened with the philosophy that has shaped Yoodli since its founding. Most organizations deliver training through passive formats — PDFs, pre-recorded videos, one-off instructor-led sessions. These transfer information. They don’t change how people perform in real conversations.
The model that actually works is Learn. Practice. Do., a continuous loop where learners build context, rehearse in realistic conditions, and then apply skills in the field. The middle step is the one most organizations skip entirely. Sellers watch a training video, maybe sit through a live session, and then they’re expected to deliver in front of a customer with no reps in between.
Varun framed it this way: most people think about AI roleplays in terms of objection handling or pitch certification. That’s like using an iPhone just to make calls. The opportunity is much larger and includes experiential learning that spans onboarding, tool adoption, manager coaching, and partner certification.
What Google Cloud looked for when evaluating AI roleplay vendors
Patrick was direct about what Google Cloud required and what disqualified most vendors. When you're deploying to 15,000 sellers and 150,000 partners, the bar is different from that of a 50-person pilot. He narrowed it to three things.
1. Ease of use — for non-technical people
The UI needs to be intuitive enough that enablement teams can build and launch programs without involving engineering. If creating a scenario requires a ticket, it won’t get done fast enough, and it won’t get updated when the pitch evolves. Patrick emphasized the ability to interrupt the AI mid-roleplay as a specific feature that mattered: learners need control over the pace of practice, not just the content.
2. Depth of rubric customization and an always-available AI tutor
This is where most vendors fall short. Generic AI scoring can tell you if someone avoided filler words or spoke at a reasonable pace. That’s not what Google Cloud needed. They needed the AI to evaluate sellers against the Google Way of Selling — their methodology, their IP, and their narrative framework.
Patrick’s framing was precise: mimicking a methodology at a surface level is easy. Truly replicating it the way a human expert would is extremely difficult. Rubric customization is how you protect your organization’s intellectual property and make sure the AI is grading what actually matters.
Paired with this is the AI tutor: an always-available coach that can explain concepts, answer questions, and get learners up to speed without requiring a human facilitator or a completed training module.
3. Enterprise readiness, including where the puck is going
For Google Cloud, enterprise readiness means robust user security, provisioning, reporting, dashboards, and strong support infrastructure. As a Gemini builder, Google holds the bar high for any vendor entering their ecosystem.
Patrick added something worth noting: enterprise readiness isn’t just about current capabilities. It’s about whether the vendor is ahead of where the space is going. The AI enablement landscape is moving fast, and Google needed a partner thinking beyond today’s feature set.
What Google Cloud built and the results
Patrick shared the numbers from Google Cloud’s rollout.
- 14 certification programs launched, with top certifications reaching 83% completion
- ~12,000 unique learners trained, averaging 4 programs each
- Measurable skill gains: 5% lift in Problem-Solution-Impact Technique, 3% in Narrative Storytelling; strongest adoption in EMEA and Japan
- ~500 hours of manager time saved per month — equivalent to 125 manager-weeks of 1:1 coaching time per year (based on 30,000 minutes of practice in March alone)
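The time-savings figures above can be sanity-checked with some back-of-the-envelope arithmetic. This is a sketch of one way the numbers reconcile, not the calculation presented on stage: it assumes every practice minute substitutes one-for-one for 1:1 manager coaching, and that a "manager-week" here means 48 hours (the figure at which 500 hours/month works out to 125 weeks/year).

```python
# Rough reconstruction of the reported time savings.
# Assumptions (not stated in the session):
#   - each practice minute replaces a minute of 1:1 manager coaching
#   - one manager-week = 48 hours
practice_minutes_march = 30_000

hours_per_month = practice_minutes_march / 60        # 500.0 hours/month
hours_per_year = hours_per_month * 12                # 6000.0 hours/year
manager_weeks_per_year = hours_per_year / 48         # 125.0 manager-weeks

print(hours_per_month, manager_weeks_per_year)       # 500.0 125.0
```

Under those assumptions, the ~500 hours/month and 125 manager-weeks/year figures are internally consistent.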
Yoodli is now embedded across Google Cloud’s entire GTM flow: onboarding, partner certification, tool enablement, manager coaching, and more. The Gemini certification program — where sellers practice using Gemini through a custom AI tutor rather than sitting through ILT sessions — was called out as a flagship example of what tool adoption looks like when it’s tied directly to measurable skill outcomes.
The honest pitfalls of AI roleplays
One of the most valuable parts of the session was the candor about where AI roleplay tools, including Yoodli, have real limitations.
AI fatigue
Google trains sellers on 400+ products. At that scale, over-deploying AI assessments creates fatigue. Learners start treating AI-graded interactions as a box to check rather than a practice opportunity. The tool loses adoption. The fix isn’t fewer AI tools — it’s more intentional design about when and where AI grading adds signal versus overhead.
Not everyone learns through roleplay
This was the most important thing said on stage, and it's largely absent from conversations about AI sales training. AI roleplays are one modality. Some learners find audio- and video-based real-time simulation overstimulating. Others simply absorb content differently. PDFs, videos, asynchronous formats, and conversational tutoring all have a place.
Varun closed the loop on this as they wrapped up: there's no single right way to learn. People consume content differently. The best enablement stacks combine AI practice with the formats your reps actually learn from. Learn how Yoodli supports all modes of learning.
Parting thoughts from the stage
Patrick closed with two pieces of advice for enablement leaders evaluating AI tools:
- Test it, break it for yourself. Don’t evaluate AI roleplays through a demo. Get in and practice. Try to break it. Find the edges.
- You’ll know the “aha” when you feel it. The moment a practice session feels genuinely realistic, when you have to think, adapt, and respond rather than go through the motions, is the signal that the tool is worth deploying.
Three questions to bring to your next vendor evaluation
- Who owns the rubric? If the AI is grading your sellers but you didn’t define the criteria, you don’t have a training program. You have a scoring tool.
- Can the platform grow beyond cold calling certification? If the tool can’t support onboarding, tool adoption, manager training, and partner certification, you’ll be re-evaluating in 18 months.
- Does the vendor’s point of view align with yours? The best partnerships aren’t transactional. If how they think about learning doesn’t match how you think about it, the product won’t fit either.
Want to see it in action?
If you want to see how Yoodli is configured for a team like yours — custom rubrics, AI tutor, org-wide reporting — reach out to our team.
Bring Yoodli to your team