Overview
Chewy’s customer service reputation comes from empathy and care. As our classes grew larger, roleplays with trainers became harder to scale. Agents wanted more practice before talking to real customers, but trainers couldn’t keep up. To solve this, I designed the Chewy Customer Simulator, a custom GPT built inside ChatGPT.
A custom GPT is a tailored version of ChatGPT that can be programmed with specific instructions, guardrails, and knowledge (learn more). Instead of being a general AI, it becomes a specialized training tool. In this case, I designed the simulator to act only as a Chewy customer during mock calls, follow detailed scenario scripts, and provide structured coaching feedback based on Chewy’s Quality Effectiveness rubric. This allowed agents to practice realistic conversations while receiving instant, personalized feedback — all without needing a live trainer.
The Challenge
Traditional roleplays worked but didn’t scale. Trainers were stretched thin, feedback varied, and practice often felt staged. We needed a way to give new hires realistic, consistent practice without losing the human touch that makes Chewy special.
The Solution
I built the Chewy Customer Simulator, a custom GPT that lets agents practice live calls in a safe space. The AI only plays the customer role, keeping the experience realistic, and provides structured feedback at the end of each call. Feedback is based on Chewy’s quality standards, so agents get the same type of coaching they’d hear from a trainer. Trainers also get access to a separate GPT that compiles feedback across a whole class into clear insights, saving time and helping them focus on higher-value coaching.
Click Play to watch a demonstration.
Key Features
Realistic Practice: Scenarios included natural dialogue, emotional shifts, and real Chewy policies.
Safe Environment: Agents could make mistakes without impacting real customers.
Consistent Coaching: Feedback followed Chewy’s rubric, so every agent got high-quality guidance.
Trainer Support: Results flowed into Smartsheet, then into a coaching insights GPT to highlight wow moments and growth areas.
Action Mapping
Business Goal: Reduce ramp time and raise agent quality scores.
Behaviors: Show empathy, build rapport, verify accounts, resolve issues, and close confidently.
Activities: Practice full calls with AI customers, followed by targeted feedback.
Support: Trainer insights and coaching discussions built on agent data.

Designing the AI Mock Call GPT
Defined the Training Purpose
I started by clarifying the goal: create a safe, scalable way for new agents to practice realistic customer calls, while receiving consistent feedback aligned with Chewy’s quality standards.
Wrote Custom GPT Instructions
I built master instructions that kept the AI locked into the role of the customer, outlined the call phases (roleplay first, then evaluation), and included guardrails to ensure professionalism and brand safety.

Built the Evaluation Framework
Next, I mapped the AI’s coaching criteria to Chewy’s Core Service Behaviors: Rapport & Personalization, Empathy, Ownership, Issue Resolution & Compliance, and Confidence. This ensured feedback wasn’t generic, but tied directly to the company’s performance rubric.

Designed the Feedback Format
I structured the evaluation so agents received star ratings, clear strengths, and actionable opportunities for improvement, followed by a short summary that wrapped feedback in supportive language.


Created a Scenario Template
I developed a consistent format for all scenarios that included the customer personality profile, the customer situation, stage directions for emotional shifts, customer dialogue, and reference agent responses.

Adapted Real Calls into Scenarios
To make the roleplays feel authentic, I pulled real Chewy customer calls into Adobe Premiere, used auto-transcribe to capture them, and then reformatted the transcripts with ChatGPT into structured scripts. These were enriched with stage directions so learners practiced responding not just to words, but also to customer tone and urgency.


Uploaded Scenarios into GPT Knowledge
I stored all finalized scenarios in the GPT’s Knowledge section, so trainers and learners could instantly launch them while keeping the roleplay itself clean and uninterrupted.

Tested Roleplay & Feedback Flow
I tested the simulator to ensure the AI stayed in character, handled emotional escalation realistically, and only delivered feedback after the call ended, preserving immersion.
Designing the eLearning Course
Oriented Learners
I began with an introduction that explained what the simulator is, why it matters, and how it helps agents build confidence before speaking with real customers.

Created a Tutorial Video
I produced a short video that walked learners through launching the simulator, using voice mode, and ending a call to receive their feedback.
Click Play to watch a demo video.
Designed System-Like Screens
I built pages that replicated the tools agents use on the job, complete with mock customer data, screenshots of internal systems, and excerpts from knowledge base articles.
Embedded Scenario Guidance
I included call flow scaffolding such as sample greetings, empathy prompts, and verification steps so learners had clear reference points during practice.

Added Scaffolding and Hints
I layered in optional hints that learners could reveal if they felt stuck, giving them nudges without solving the problem for them.
Integrated Simulator Access
I linked directly to the custom GPT so learners could move fluidly between the AI roleplay and the system practice environment.

Built Reflection and Submission Flow
I instructed learners to copy their AI-generated feedback and submit it through a Smartsheet form, ensuring trainers could access and review performance data for coaching.


Designing the Trainer Experience
Introduced the Simulator to Trainers
I created a clear overview that explained the purpose of the simulator, how it worked, and why it would support their coaching instead of replacing it.
Provided a Trainer Guide
I developed a step-by-step guide that outlined how to launch scenarios, how learners should submit results, and how to use the insights GPT for debrief discussions.

Built a Data Pipeline
I designed a flow where agents submitted their AI feedback into Smartsheet, giving trainers a single place to access class data quickly and privately.

Created the Agent Insight GPT
I built a companion GPT that compiled all Smartsheet submissions, surfaced trends, highlighted wow moments, and identified growth opportunities for coaching.

Ran Train-the-Trainer Sessions
I conducted live prep sessions to walk trainers through using the tools, interpreting AI feedback, and facilitating coaching conversations confidently.
Simplified Class Facilitation
I ensured trainers only needed to manage scenario selection, feedback collection, and discussion, freeing them from repetitive evaluation tasks and letting them focus on higher-value coaching.
Results
At Chewy, our operating principles of Think Big and Deliver Results guided this project from the start. The Customer Simulator reflects big thinking by reimagining how we scale high-quality coaching through AI-powered simulations, and it delivers results by giving agents targeted, personalized feedback that directly improves their performance.
During the pilot, feedback from both trainers and agents was overwhelmingly positive. Agents valued the psychologically safe environment to practice and reported feeling more confident before supporting live customers. Trainers appreciated the structured, consistent feedback and the time saved from repetitive evaluations, which freed them to focus on higher-level coaching.
When agents transitioned from the pilot to production, we saw measurable improvements across key performance metrics. Quality Effectiveness and CSAT scores increased, while AHT and Wrap Time decreased, showing that agents were not only more competent but also more efficient in handling real customer interactions. Most importantly, the simulator helped accelerate ramp time, allowing agents to build confidence and demonstrate competence earlier, especially when handling top call drivers.
Conclusion
The success of this pilot positioned the Customer Simulator as a scalable solution to the Two Sigma Problem, proving that personalized coaching can be delivered broadly without sacrificing quality or empathy.
The Chewy Customer Simulator also showed me how powerful AI can be when paired with thoughtful learning design. It gave agents a safe way to practice, trainers a tool to scale coaching, and Chewy a way to keep empathy at the heart of customer service while training faster and more effectively.


