01 AI ARR · Trust Systems

Designing the Trust System Behind a $2M AI SKU


Role · AI Design Lead
Scope · Multi-Agent SKU
Outcome · $2M Revenue

Salesloft is a sales engagement platform used by enterprise revenue teams to run prospecting, outreach, and pipeline execution at scale.

The problem was consistent: sellers were leaving pipeline on the table. Manual prospect research put a hard ceiling on how much territory one rep could work. The same research was repeated before every call and email, done inconsistently and invisibly to the product.

The PM had done early discovery and identified the directional opportunity. Together, we turned those signals into a live build and ran structured research with our own reps. The question wasn't whether they liked the concept. It was: what's missing, what's breaking trust, and what would have to be true for this to replace their current process?

The AI SKU wasn't the starting point. It was the result. The revenue line didn't precede the work. The work created it.


I was Product Design Manager and AI Design Lead from the first research interview through launch and post-launch iteration, designing and directing simultaneously.

IC Scope

Led all research, interaction design, trust frameworks, and governance patterns for Account Research, Person Research, and Lead Identification agents. End-to-end ownership on all three.

Leadership Scope

Set AI design direction, managed two associate designers, partnered with Product, Engineering, and GTM on strategy.

Cross-Team Scope

Collaborated with the senior designer on Ask Salesloft and Coaching Agent to establish a shared AI interaction language across all five SKU capabilities.

Five capabilities shipped in the SKU. I owned three fully. I helped align the interaction standards for all of them.

Ownership · Capability

Full · Account Research Agent
AI-generated account intelligence surfaced directly in seller workflows. The flagship capability that validated the SKU concept.

Full · Person Research Agent
Contact-level intelligence born directly from EAP signal. Not on the original roadmap. Added because customers asked for it.

Full · Lead Identification Agent
AI-powered prospecting that identifies high-fit contacts within target accounts based on configurable ICP criteria.

Collab · Ask Salesloft
Conversational AI interface. Owned by a senior designer. I collaborated on trust and disclosure patterns to align with the shared interaction language.

Collab · Coaching Agent
AI-powered call analysis and coaching. Separate ownership, shared trust framework and pattern standards.

With a live build in reps' hands, I interviewed 10 of Salesloft's own sellers and synthesized findings into a prioritized recommendation set. My goal: find the gaps between what we'd built and what would actually change how sellers worked.

Four themes came back clearly enough to act on:

Finding 01

Full sales cycle support

The build was useful for outreach but didn't carry sellers through qualification or renewal.

Finding 02

Path to action

Sellers had research but no bridge to the next step. Information without action stalls at novelty.

Finding 03

Data accuracy

Concrete errors (wrong revenue figures, stale acquisition data) caused reps to stop acting on output entirely.

Finding 04

CRM integration

The agent had no awareness of account history, limiting its value to external research only.

Not all findings shipped. I made the case for a tighter scope: nail trust and data accuracy first. Without those, nothing else would earn usage anyway.

Internal research synthesis artifact — prioritized findings presented to Product

Three decisions defined how this system worked and why it earned adoption. Each had an easier alternative. Each time, the easier answer was wrong.

Decision 01

Lead with trust, not automation

The default move is to hide the seams: clean output, no visible reasoning, confidence by omission. We rejected it. Research showed reps walking away after encountering wrong AI outputs. Skepticism was earned, not irrational.

I made the reasoning visible: citations, sourcing indicators, and explicit affordances for flagging errors. Trust was the adoption mechanism, not a feature, not a phase.
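To make the pattern concrete, here is a minimal sketch of a trust-first output model in TypeScript. The type and function names are hypothetical, not Salesloft's actual schema; the point is that provenance is structural, so a claim without a citation never renders as confident output.

    // Hypothetical sketch of a trust-first output model, not the shipped schema.
    // A claim carries its provenance, and flagging an error is a first-class
    // affordance rather than an afterthought.

    type SourceIndicator = "crm" | "public_web" | "news" | "model_inference";

    interface Citation {
      url: string;         // where the claim came from
      retrievedAt: string; // ISO date, so staleness is visible to the seller
    }

    interface ResearchClaim {
      text: string;                 // e.g. "Acquired a competitor in 2024"
      source: SourceIndicator;      // sourcing indicator shown in the UI
      citations: Citation[];        // required for the claim to render
      flagged?: { reason: string }; // explicit error-reporting affordance
    }

    // Rendering gate: uncited claims are withheld, not displayed by omission.
    function renderableClaims(claims: ResearchClaim[]): ResearchClaim[] {
      return claims.filter((claim) => claim.citations.length > 0);
    }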

Decision 02

Design customization as a governed system, not a settings page

Individual toggles create chaos at the enterprise level. There's no consistent output to support, no standard to sell against. I designed a governed layer instead: admins define focus areas, preview AI prompts before deployment, and control which configurations apply to which teams. Flexibility without fragmentation is only possible when the customization has structure.
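As an illustration only, with names and shape that are assumptions rather than the shipped schema, a governed layer like this can be modeled as a versioned configuration that teams inherit instead of a per-rep settings bag:

    // Hypothetical sketch of a governed customization layer.
    // Admins define and preview a configuration once; teams inherit it,
    // so output stays consistent enough to support and to sell against.

    interface ResearchConfig {
      id: string;
      focusAreas: string[];     // e.g. ["funding", "tech stack", "hiring signals"]
      promptPreview: string;    // rendered AI prompt shown to admins pre-deployment
      appliesToTeams: string[]; // which teams receive this configuration
      version: number;          // changes are deployed and auditable, not silent edits
    }

    // Resolution is top-down: a rep gets the config assigned to their team.
    function configForTeam(
      configs: ResearchConfig[],
      team: string
    ): ResearchConfig | undefined {
      return configs.find((config) => config.appliesToTeams.includes(team));
    }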

Decision 03

Embed into the existing workflow. Don't build a new one.

A dedicated research hub requires behavior change sellers don't have motivation for. Research made the integration point clear: the agent triggered when a prospect entered a cadence, so intelligence was already waiting when a seller opened their task. Zero friction, zero behavior change, zero reason not to look.
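A minimal sketch of that trigger wiring, assuming hypothetical event and function names (runResearchAgent and attachToTask stand in for product internals):

    // Illustrative event wiring for the embedded workflow, not production code.
    // Research runs at the cadence trigger, ahead of the seller's first task,
    // and attaches to the surface they already open. No new hub to visit.

    interface CadenceEvent {
      prospectId: string;
      cadenceId: string;
    }

    async function onProspectAddedToCadence(event: CadenceEvent): Promise<void> {
      // Kick off research the moment the prospect enters the cadence
      const research = await runResearchAgent(event.prospectId);
      // Attach results to the existing task instead of a separate destination
      await attachToTask(event.prospectId, event.cadenceId, research);
    }

    // Stubs standing in for product internals
    async function runResearchAgent(prospectId: string): Promise<string> {
      return `research summary for ${prospectId}`;
    }

    async function attachToTask(
      prospectId: string,
      cadenceId: string,
      research: string
    ): Promise<void> {
      // persist the research alongside the cadence task
    }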


The Early Access Program ran March to May 2025 with 10 customers, positioned as a decision filter rather than a usability test. Three first-priority themes surfaced independently across accounts, the strongest two in six of ten:

10 · EAP Customers · March to May 2025
3 · First priorities · Surfaced independently across accounts
1 · New agent added · Person Research, born directly from EAP signal
Priority · Finding · Frequency

First · Customizable research parameters · 6/10
Define what the agent looks for based on ICP, industry, and sales motion. A generic output wasn't useful enough to displace manual research.

First · Data accuracy and feedback loops · 6/10
Sellers needed a way to flag errors, surface sources, and improve the model over time. Trust had to be earned incrementally.

First · Contact-level research · 3/10
Account-level intelligence was valuable but incomplete. Sellers needed person-specific context before calls, not just company context.

Second · Research inputs for AI email generation · 3/10
Pipe specific findings directly into email drafts, closing the loop between research and outreach.

Second · Deeper CRM data integration · 2/10
Pull from related objects: health scores, renewal dates, conversation history.
"That level of flexibility would add a lot."
A
Enterprise SaaS
EAP
Customization
"How do we get feedback into the actual research agent when something doesn't get it right?"
B
Environmental Compliance
EAP
Trust Signals

Person Research wasn't on the roadmap. One session turned a gap into a shipped capability. The EAP also generated the SKU rationale. The add-on SKU came from what we learned, not from a premise we started with.


The AI SKU launched May 13, 2025.

$5M · Pipeline Generated · First quarter post-launch
$2M · ARR Generated · 50+ closed deals across 3 quarters
85% · Team Adoption · Across enabled accounts
26% · AE Usage Rate · Active engagement within enabled accounts

Trust-first design drove real adoption. Sellers used the system because it made them more confident, not just more efficient. That's what separated adoption from compliance.

Team adoption tracks account-level enablement. AE usage rate tracks individual rep engagement within those accounts, a deliberate distinction given that not all roles in an enabled account were the target user.


Trust is not negotiable, and it is not a phase. It has to be designed into the first interaction. The cost of a broken trust signal isn't a support ticket. It's permanent disengagement.

Staying close to real users is how you know what's blocking adoption. The internal research round shaped the roadmap before EAP started. Without it, we would have built to the concept, not to the workflow.

Speed and time to value are design decisions, not engineering ones. Embedding the agent into the cadence trigger, so intelligence was waiting when a seller opened their task, is what drove 85% adoption.

What I'd do differently: instrument the behavioral change more deliberately. A structured before/after on research time per account would have made the impact case sharper at launch.
