Salesloft is a sales engagement platform used by enterprise revenue teams to run prospecting, outreach, and pipeline execution at scale. One pattern repeated across every initiative: without a shared interaction language, AI at scale fragments: in the product, in the org, and in user trust.
As the product rapidly expanded AI capabilities, features began shipping across 6 teams and 20+ surfaces simultaneously. Momentum was high. Coherence was not.
This was the landscape: 21 surfaces across 5 product areas, each making AI interaction decisions without a shared reference point.
The risk wasn't visual inconsistency. It was trust erosion at scale and the compounding cost of letting fragmentation harden before anyone named it. There was no shared AI interaction language, and no one owned the problem. I took on both: defining the language and owning it.
I initiated and led this effort as Product Design Manager and AI Design Lead, without a formal mandate. The trigger kept recurring: designers brought WIP solutions to reviews, each attempting to solve the same AI interaction problems in isolation.
Defined and documented 13 core AI interaction patterns across 8 categories, covering identifiers, disclosure, loading, summaries, insights, suggestions, nudges, synthesis, inputs, references, and governance signals.
Facilitated 3 cross-team workshops to align on pattern definitions, then met individually with 8 designers across 6 teams to audit existing AI surfaces against the agreed patterns.
Gave PMs a shared pattern vocabulary to articulate the right interaction approach for their problems and solutions. Aligned interaction patterns to our human-in-the-loop product philosophy across Design, Product, and Engineering.
This initiative is expanding. Following the Salesloft-Clari merger, I am leading Phase 2: cross-org standardization across the combined entity.
The default approach to standardization is top-down: write the rules, enforce them, defend them in reviews. That produces compliance, not adoption.
I built an AI UX Pattern Library covering 13 patterns across 8 categories, intentionally scoped to emerging AI interactions, not traditional design system components. Having named definitions changed the nature of every conversation that followed. Teams could disagree about implementations without disagreeing about what a pattern was supposed to do.
Three cross-team workshops aligned the team on definitions before any individual audits began. Then I met with each of the 8 designers one-on-one, auditing their surfaces against the library. Screenshots. Documentation. Side-by-side comparisons on a shared FigJam canvas. Designers who had pushed back hardest on standardization became the most active contributors by the third workshop, because the patterns reflected decisions they had helped make.
Patterns defined. Team aligned. The next step was applying both to surfaces that had already shipped.
I produced a remediation spec for every AI surface across the product: what needed to change, against which pattern, and why. Disclosure language, iconography, loading states, citation components, and feedback mechanisms. Each surface documented, each gap prioritized. Pattern alignment stopped being a design conversation and became a product backlog.
Design reviews became faster and more objective. Reusable patterns reduced time-to-design on new AI features. AI releases felt cohesive across surfaces for the first time. PMs began looping me in earlier on AI requirements, before features were scoped.
The pattern work directly anchored the cross-SKU alignment that contributed to the $2M ARR initiative, connecting interaction standards to a revenue outcome. That's the distinction between a UI consistency project and a product strategy lever.
Design system integration was scoped as the next phase and temporarily paused following the Salesloft-Clari merger. The patterns continued to guide active product work throughout. The merger expanded the mandate: Phase 2 is cross-org standardization across the combined entity, and I am leading it.
Fragmentation is cheap to prevent and expensive to fix. Getting ahead of it without a mandate is the highest-leverage thing a design lead can do in a fast-moving AI org. The patterns that stuck were the ones teams helped write. Standards with authorship don't need to be enforced from above; the teams who wrote them enforce them.
Driving alignment without authority is the same problem at every scale. The org size changes. The skill doesn't.
The Clari merger didn't create a new problem. It revealed that the problem was always bigger than one org, and that the foundation was already built.