Salesloft is a sales engagement platform used by enterprise revenue teams to run prospecting, outreach, and pipeline execution at scale.
The problem was consistent: sellers were leaving pipeline on the table. Manual prospect research was a hard ceiling on how much territory one rep could work. The same research happened before every call and email, inconsistently and invisibly to the product.
The PM had done early discovery and identified the directional opportunity. Together, we turned those signals into a live build and ran structured research with our own reps. The question wasn't whether they liked the concept. It was: what's missing, what's breaking trust, and what would have to be true for this to replace their current process?
The AI SKU wasn't the starting point. It was the result. The revenue line didn't precede the work. The work created it.
I was Product Design Manager and AI Design Lead from the first research interview through launch and post-launch iteration, designing and directing simultaneously.
Led all research, interaction design, trust frameworks, and governance patterns for Account Research, Person Research, and Lead Identification agents. End-to-end ownership on all three.
Set AI design direction, managed two associate designers, partnered with Product, Engineering, and GTM on strategy.
Collaborated with the senior designer on Ask Salesloft and Coaching Agent to establish a shared AI interaction language across all five SKU capabilities.
Five capabilities shipped in the SKU. I owned three fully. I helped align the interaction standards for all of them.
With a live build in reps' hands, I interviewed 10 of Salesloft's own sellers and synthesized findings into a prioritized recommendation set. My goal: find the gaps between what we'd built and what would actually change how sellers worked.
Four themes came back clearly enough to act on:
The build was useful for outreach but didn't carry sellers through qualification or renewal.
Sellers had research but no bridge to the next step. Information without action stalls at novelty.
Concrete errors (wrong revenue figures, stale acquisition data) caused reps to stop acting on output entirely.
The agent had no awareness of account history, limiting its value to external research only.
Not all findings shipped. I made the case for a tighter scope: nail trust and data accuracy first. Without those, nothing else would earn usage anyway.
Three decisions defined how this system worked and why it earned adoption. Each had an easier alternative. Each time, the easier answer was wrong.
The default move is to hide the seams: clean output, no visible reasoning, confidence by omission. We rejected it. Research showed reps walking away after encountering wrong AI outputs. Skepticism was earned, not irrational.
I made the reasoning visible: citations, sourcing indicators, and explicit affordances for flagging errors. Trust was the adoption mechanism, not a feature, not a phase.
Individual toggles create chaos at the enterprise level. There's no consistent output to support, no standard to sell against. I designed a governed layer instead: admins define focus areas, preview AI prompts before deployment, and control which configurations apply to which teams. Flexibility without fragmentation is only possible when the customization has structure.
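The governed layer can be pictured as a small configuration schema. This is a hypothetical sketch, not Salesloft's actual data model; every name here (`ResearchConfig`, `focus_areas`, `team_ids`) is an assumption made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ResearchConfig:
    # Hypothetical sketch of an admin-governed agent configuration.
    # Admin-defined focus areas steer what the agent researches.
    focus_areas: list[str]
    # The prompt is previewable before deployment, not a black box.
    prompt_template: str
    # Scoping: which teams this configuration applies to.
    team_ids: list[str] = field(default_factory=list)
```

The key property is that customization lives in a small number of admin-owned objects scoped to teams, rather than in per-rep toggles: flexibility with structure.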
A dedicated research hub requires behavior change sellers don't have motivation for. Research made the integration point clear: sellers opened their task, so the agent triggered when a prospect entered a cadence. Intelligence was already waiting. Zero friction, zero behavior change, zero reason not to look.
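The trigger pattern above can be sketched as a simple event flow: research kicks off when the prospect enters a cadence, so results are already cached when the seller opens the task. All names here are hypothetical; the real system's events and agent calls are not specified in this case study.

```python
from dataclasses import dataclass, field

def run_account_research(prospect_id: str) -> str:
    # Placeholder for the actual research agent call (assumption).
    return f"research summary for {prospect_id}"

@dataclass
class ResearchCache:
    """Pre-computed research, keyed by prospect."""
    results: dict = field(default_factory=dict)

    def on_prospect_added_to_cadence(self, prospect_id: str) -> None:
        # Research triggers when the prospect enters a cadence,
        # not when the seller opens the task.
        self.results[prospect_id] = run_account_research(prospect_id)

    def on_task_opened(self, prospect_id: str) -> str:
        # By the time the seller opens the task, intelligence is waiting.
        return self.results.get(prospect_id, "research pending")
```

The design choice is in where the work happens: the expensive step runs on the cadence event, so the seller-facing step is a cache read with zero added friction.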
The Early Access Program ran March to May 2025 with 10 customers. It was a decision filter, not a usability test. Three themes surfaced independently across six of ten accounts.
Person Research wasn't on the roadmap. One session turned a gap into a shipped capability. The EAP also generated the SKU rationale. The add-on SKU came from what we learned, not from a premise we started with.
The AI SKU launched May 13, 2025.
Trust-first design drove real adoption. Sellers used the system because it made them more confident, not just more efficient. That's what separated adoption from compliance.
Team adoption tracks account-level enablement. AE usage rate tracks individual rep engagement within those accounts, a deliberate distinction given that not every role in an enabled account was a target user.
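The two metrics differ in their denominators. The formulas below are an illustrative reading of that distinction, not the case study's published definitions; the function names and inputs are assumptions.

```python
def team_adoption(enabled_accounts: int, total_accounts: int) -> float:
    # Account-level: share of accounts where the capability is enabled.
    return enabled_accounts / total_accounts

def ae_usage_rate(active_aes: int, aes_in_enabled_accounts: int) -> float:
    # Rep-level: share of target-role sellers (AEs) within enabled
    # accounts who actually use the agent. Non-target roles in those
    # accounts are excluded from the denominator.
    return active_aes / aes_in_enabled_accounts
```

Keeping non-target roles out of the usage denominator is what makes the rep-level number an honest engagement signal rather than a diluted average.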
Trust is not negotiable, and it is not a phase. It has to be designed into the first interaction. The cost of a broken trust signal isn't a support ticket. It's permanent disengagement.
Staying close to real users is how you know what's blocking adoption. The internal research round shaped the roadmap before EAP started. Without it, we would have built to the concept, not to the workflow.
Speed and time to value are design decisions, not engineering ones. Embedding the agent into the cadence trigger, so intelligence was waiting when a seller opened their task, is what drove 85% adoption.
What I'd do differently: instrument the behavioral change more deliberately. A structured before/after on research time per account would have made the impact case sharper at launch.