Salesloft is a sales engagement platform used by enterprise revenue teams to run prospecting, outreach, and pipeline execution at scale. The product sits at the center of a seller's daily workflow: cadences, call tasks, email sequences, and account activity across thousands of reps.
The problem was consistent: sellers were leaving pipeline on the table. Not because of effort, but because manual prospect research put a hard ceiling on how much territory one rep could work. The same research was happening before every call, every email, and every outreach sequence: manually, inconsistently, and invisibly to the product.
The PM had done early discovery and identified the directional opportunity. Together, we turned those signals into a live but limited build, a scrappy experiment to find out if AI-generated research was something sellers would actually use. I ran the structured research round with our own reps. The question wasn't whether they liked the concept. It was: what's missing, what's breaking trust, and what would have to be true for this to replace their current process?
The AI SKU wasn't the starting point. It was the result. What began as a research agent became a multi-capability platform when EAP signal was strong enough to qualify for a standalone add-on. The revenue line didn't precede the work. The work created it.
The mandate was seller productivity. The discovery was that AI-powered research could deliver it, and that the value was large enough to monetize as its own product line.
I was Product Design Manager and AI Design Lead from the first research interview through launch and post-launch iteration. IC and leadership scope ran in parallel the entire time. I was designing and directing simultaneously.
Led all research, interaction design, trust frameworks, and governance patterns for Account Research, Person Research, and Lead Identification agents. End-to-end ownership on all three.
Set AI design direction across the SKU, managed two associate designers, and partnered directly with Product, Engineering, and GTM on roadmap and strategy.
Collaborated on trust patterns for Ask Salesloft and Coaching Agent, capabilities owned by a senior designer on another team, to establish a shared AI interaction language across all five SKU capabilities.
Five capabilities shipped in the SKU. I owned three fully. I helped align the interaction standards for all of them.
With a live build in reps' hands, I designed the research plan and interviewed 10 of Salesloft's own sellers. The PM's early work had established directional opportunity. My goal was sharper: find the gaps between what we'd built and what would actually change how sellers worked.
I ran and synthesized all 10 interviews, then drafted a prioritized recommendation set before any additional design work began. Four themes came back clearly enough to act on:
Participants said the build was useful for top-of-funnel research but didn't carry them through qualification, renewal, or re-engagement. They wanted context that shifted with the sales stage: job openings for prospecting, company challenges for qualification, and churn signals for renewals. A static research panel wasn't enough.
Four of five participants asked the same question in different words: I have all this. What do I do with it? The gap wasn't research quality. It was the last mile. Sellers wanted the agent to close the loop: suggest a follow-up email, generate a talking point, or insert a detail into a draft. Information without a path to action stalls at novelty.
Three participants flagged concrete discrepancies: revenue figures that didn't match LinkedIn and acquisition data from 2023 labeled as recent. This wasn't abstract skepticism; these were specific, verifiable errors. Reps who found one wrong fact stopped acting on the output. Until accuracy improved, adoption would plateau.
Three participants pointed to the same structural gap: the agent had no awareness of account history. Past emails, closed opportunities, call history, and Salesforce relationship data. None of it surfaced. Without that context, the tool was doing external research in isolation from the history sellers needed to act on.
Not all findings shipped. That was the point. Full sales cycle support and CRM integration were real asks, but both required engineering investment that would have delayed the EAP by months. I used the synthesis to make the case for a tighter scope: nail trust and data accuracy first. Without those, nothing else would earn usage anyway. Actionability got scoped to research-to-email, rather than the full automation loop sellers described.
Three decisions defined how this system worked and why it earned adoption. Each one had an easier alternative. Each time, the easier answer was wrong.
The default move in AI product design is to hide the seams: clean output, no visible reasoning, confidence by omission. We rejected it. Research had shown reps walking away from tools after encountering wrong AI outputs. Skepticism wasn't irrational. It was earned. A polished UI over unreliable data accelerates abandonment, not adoption.
I made the reasoning visible: citations, sourcing indicators, and explicit affordances for flagging errors. Sellers who could verify an output acted on it. Sellers who couldn't, didn't. Trust was the adoption mechanism, not a feature, not a phase.
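To make "visible seams" concrete at the data level, here is a minimal illustrative shape for a single research insight, with sourcing and a flag affordance carried alongside the generated text. The field names are assumptions for the sketch, not the product's actual schema.

```typescript
// Illustrative only: this shape is an assumption, not the product's schema.

/** A source the seller can click through to verify the claim. */
interface Citation {
  url: string;
  publisher: string;
  retrievedAt: string; // ISO date, so staleness is visible rather than hidden
}

/** One AI-generated research insight, with its seams exposed. */
interface ResearchInsight {
  text: string;
  citations: Citation[];        // every claim points back to where it came from
  generatedBy: "ai";            // explicit AI attribution, never implied
  flaggedInaccurate?: boolean;  // the affordance for reps to report a wrong fact
}
```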
On customization, the obvious answer is individual toggles for each user. The problem: ungoverned flexibility at the individual level means chaos at the enterprise level. If every seller configures differently, there's no consistent output to support, no standard to sell against, and no way to improve the model.
I designed a governed customization layer instead. Admins define focus areas, preview AI prompts before deployment, and control which configurations apply to which teams. Sellers get a tailored experience, admins maintain control. Flexibility without fragmentation is only possible when the customization has structure.
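As a rough illustration of how that governance could be structured, the sketch below models admin-defined focus areas, prompt preview, and team scoping. All type and function names (ResearchConfig, previewPrompt, configForTeam) are assumptions for the sketch, not Salesloft's implementation.

```typescript
// Illustrative sketch only: these types and names are assumptions,
// not Salesloft's actual data model.

/** A single admin-defined research focus area (e.g. "hiring signals"). */
interface FocusArea {
  id: string;
  label: string;
  promptFragment: string; // the instruction this focus area adds to the agent prompt
}

/** A governed configuration an admin scopes to specific teams. */
interface ResearchConfig {
  id: string;
  name: string;
  focusAreas: FocusArea[];
  appliesToTeams: string[]; // team IDs this config governs
}

/** Assemble the full prompt an admin can preview before deployment. */
function previewPrompt(config: ResearchConfig, basePrompt: string): string {
  const fragments = config.focusAreas.map((f) => `- ${f.promptFragment}`);
  return [basePrompt, "Focus on:", ...fragments].join("\n");
}

/** Resolve which governed config applies to a given seller's team. */
function configForTeam(
  configs: ResearchConfig[],
  teamId: string
): ResearchConfig | undefined {
  return configs.find((c) => c.appliesToTeams.includes(teamId));
}
```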
The tempting move was a dedicated research hub, a purpose-built surface sellers could visit before a call. The problem: a separate tool requires behavior change, and sellers don't have that motivation at 8am before their first call.
Research made the integration point clear: sellers opened their email or call task, and that's when research happened. So when a prospect entered a cadence, the agent triggered automatically. By the time a seller opened their task, intelligence was already waiting. Zero friction, zero behavior change, zero reason not to look.
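A minimal sketch of that trigger flow, assuming an enrollment event and an asynchronous research job; none of these names reflect Salesloft's actual code.

```typescript
// Illustrative sketch of the trigger flow described above; the event,
// function names, and queue are assumptions, not Salesloft's implementation.

interface CadenceEnrollment {
  prospectId: string;
  cadenceId: string;
  sellerId: string;
}

/** Placeholder for kicking off the research agent asynchronously. */
async function enqueueResearchJob(enrollment: CadenceEnrollment): Promise<void> {
  // In a real system this would publish to a job queue; here we just log.
  console.log(`research queued for prospect ${enrollment.prospectId}`);
}

/**
 * Research is triggered by the enrollment event, not by the seller.
 * By the time the seller opens the call or email task, results are waiting.
 */
async function onProspectEnrolled(enrollment: CadenceEnrollment): Promise<void> {
  await enqueueResearchJob(enrollment);
}
```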
The Early Access Program ran March to May 2025 with 10 customers. Not a usability test, but a decision filter: what was blocking adoption, what was missing, and whether the concept had enough signal to warrant a standalone SKU.
I led synthesis across weekly feedback loops. Three themes surfaced independently across six of ten accounts. That wasn't an edge-case pattern; it was table stakes.
Person Research wasn't on the roadmap. A rep at a database infrastructure company described needing contact-specific intelligence the moment a cadence triggered. That session turned a gap into a shipped capability.
The EAP also generated the SKU rationale. When multiple accounts independently said the capability was worth paying for separately, Product used the synthesis to build the business case. The add-on SKU came from what we learned, not from a premise we started with.
I owned three of the five SKU capabilities end-to-end. The more structurally important work was making all five feel like the same system, without owning all of them.
As capabilities shipped across teams, AI interactions started diverging. Trust signals, disclosure patterns, loading states, feedback mechanisms, and error handling each followed different logic depending on who built them. From a user's perspective, it was five different AI experiences with a shared price tag.
I defined the shared interaction language for the full SKU through IC work and direct collaboration with the senior designer on the conversational AI capabilities. We aligned on how AI attribution was surfaced, how errors were communicated, what the feedback loop looked like, and how human-in-the-loop control was represented. The patterns from the research agents became the reference point, adapted for Ask Salesloft and Coaching Agent.
Trust in an AI system is cumulative. One bad experience, whether an unexplained error or a confidence signal that doesn't match the output, reduces a seller's willingness to rely on anything else in the same product. Consistency isn't aesthetic polish. It's the mechanism by which trust built in one part of the product transfers to the rest.
The pattern work wasn't on anyone's roadmap. It existed because fragmentation was already happening, and the cost compounds with every new capability that ships without a shared reference point.
The AI SKU launched on May 13, 2025. The numbers closed the case.
Trust-first design drove real adoption. Sellers used the system because it made them more confident, not just more efficient. That distinction is what separated adoption from compliance.
Most enterprise AI tools see passive enablement, not active usage. Sellers chose to open the research tab because it made them better at their jobs, not because it was assigned to them.
Trust is not negotiable, and it is not a phase. Trust has to be designed into the first interaction, not retrofitted after adoption stalls. Sellers who couldn't verify an output didn't come back. The cost of a broken trust signal isn't a support ticket. It's permanent disengagement.
Staying close to real users is how you know what's blocking adoption. The internal research round shaped the roadmap before EAP even started. Direct access to reps told me exactly where the ceiling was: full sales cycle support, CRM integration, and data accuracy. Without that, we would have built to the concept, not to the workflow.
Speed and time to value are design decisions, not engineering ones. Embedding the agent into the cadence trigger, so intelligence was already waiting when a seller opened their task, is what drove 85% adoption. A separate tool would have shipped faster and been used by almost no one. Designing for time to value means removing everything between the seller and their first useful moment.
What I'd do differently: instrument the behavioral change more deliberately during the EAP. The qualitative signal was strong, but a structured before/after on research time per account would have made the impact case sharper at launch, and given us a cleaner benchmark for every capability that followed.