Werkspace · AI & the Double Diamond
Point of View · AI & Design Leadership · 2025

AI Doesn't Replace the Double Diamond. It Accelerates It.

How I think about integrating AI into enterprise UX methodology — and what it means for design orgs that want to move faster without sacrificing the rigor that makes research credible.

A research-first perspective on a tool-first moment

I came to UX through a neuroscience problem. At UT Southwestern, I was studying how medical students failed to retain information under the cognitive overload of "Blocktober" — back-to-back intensive coursework that overwhelmed working memory before students could form long-term understanding. The solution wasn't more content. It was a better system for delivering it. That early lesson has stayed with me: the methodology matters as much as the output.

Which is why I approach the current wave of AI in UX design with both genuine enthusiasm and a practiced skepticism. In a DFW market where UX maturity is still being built — where I've spent my career convincing organizations that research-first design produces better outcomes than opinion-first design — the last thing I want to do is hand executives a reason to believe AI can shortcut the process that makes design trustworthy.

The good news is that AI, implemented correctly at the leadership level, doesn't shortcut the Double Diamond. It removes the friction that slows teams down at each phase — freeing researchers to spend more time on the high-judgment work that machines can't do.

AI won't tell you which problem is worth solving. It won't earn the C-suite's trust in a research readout. It won't know that a user's frustration in an interview is concealing a workflow they've simply stopped trying to use. That's still human work. But it can synthesize 150 interview transcripts overnight, and that changes what's possible.

AI mapped to each phase of the Double Diamond

The Double Diamond — Discover, Define, Develop, Deliver — is still the right framework for enterprise UX. Its value isn't the methodology itself; it's the discipline it enforces: diverge before you converge, understand the problem before you build the solution. AI doesn't change that logic. It changes the speed and scale at which teams can execute within it.

Here's how I think about AI integration at each phase — including the tools that are proving their value in enterprise environments in 2025.

Double Diamond · AI integration map
Discover · Diamond 1 · Diverge

Broad exploration of the problem space. User interviews, market analysis, behavioral data, stakeholder discovery. The goal is to find the real problem — not the stated one.

AI accelerators
AI interview moderation · Automated transcription · Sentiment analysis · Desk research synthesis

AI runs parallel interview streams at scale — 40 interviews in a week instead of a month. Human researchers focus on probing the unexpected moments AI can't follow.
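To make "sentiment analysis" concrete rather than magical: what these tools do at the Discover stage is triage, surfacing the transcript moments a human researcher should go probe. A toy sketch of that triage step, assuming plain-text transcript lines and a hand-rolled negative-signal lexicon — no real platform works this way at production quality, but the division of labor is the same:

```python
# Toy sketch: flag interview moments for human follow-up.
# Real platforms use ML sentiment models; this lexicon approach
# only illustrates the triage step, not production-grade analysis.

NEGATIVE = {"frustrating", "confusing", "workaround", "stopped", "broken", "useless"}

def flag_for_followup(transcript_lines, threshold=2):
    """Return 1-indexed line numbers whose negative-signal count meets the threshold."""
    flagged = []
    for i, line in enumerate(transcript_lines, start=1):
        words = set(line.lower().replace(".", "").split())
        if len(words & NEGATIVE) >= threshold:
            flagged.append(i)
    return flagged

lines = [
    "The dashboard is fine for weekly reports.",
    "Honestly it was so confusing I stopped using it and built a workaround.",
]
print(flag_for_followup(lines))  # [2]
```

The machine narrows 40 hours of audio to the minutes worth a researcher's attention; the judgment about what that frustration actually means stays human.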

Define · Diamond 1 · Converge

Synthesizing discovery data into a clear, defensible problem statement. Journey maps, affinity mapping, persona development, backlog prioritization.

AI accelerators
Thematic clustering · Auto-tagging & coding · Pattern identification · Insight summarization

What used to take a week of affinity mapping — clustering hundreds of research notes into themes — now takes hours. Researchers validate and refine instead of building from scratch.
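Mechanically, that clustering step is simple to picture. Production tools use embedding models and far smarter similarity measures; this stdlib-only sketch — token overlap, an illustrative threshold, made-up notes — shows only the shape of the work the researcher then validates:

```python
# Minimal sketch of machine-assisted affinity mapping: group research
# notes by shared vocabulary, then let a researcher name, merge, and
# challenge the clusters. Real tools use embeddings, not token overlap.

def jaccard(a, b):
    """Similarity of two notes as word-set overlap (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def cluster_notes(notes, threshold=0.25):
    """Greedy single pass: attach each note to the first cluster whose
    seed note is similar enough, otherwise start a new cluster."""
    clusters = []
    for note in notes:
        for cluster in clusters:
            if jaccard(note, cluster[0]) >= threshold:
                cluster.append(note)
                break
        else:
            clusters.append([note])
    return clusters

notes = [
    "export to csv fails on large reports",
    "csv export fails for large reports every month",
    "login page times out on VPN",
]
for theme in cluster_notes(notes):
    print(theme)
```

The point of the sketch is the division of labor: the machine proposes groupings in hours; the researcher does the interpretive work of naming themes and spotting what the clustering missed.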

Develop · Diamond 2 · Diverge

Ideation, concept generation, and iterative prototyping. The solution space is explored broadly before being narrowed. Design sprints, RITE studies, wireframes to high-fidelity.

AI accelerators
Rapid prototype generation · AI design briefs · Concept variation · Personalized test stimuli

Design teams generate 3–5x more concept variations in the same sprint cycle. Testing stimuli can be personalized per participant — moving beyond the "same static mockup for everyone" model that's been standard for 50 years.

Deliver · Diamond 2 · Converge

Validation, handoff, release, and continuous post-launch monitoring. Usability testing, accessibility review, developer handoff, and the feedback loops that keep the product improving.

AI accelerators
Automated usability scoring · Heatmap analysis · Behavioral pattern alerts · AI-generated stakeholder reports

Post-launch feedback loops that used to require a dedicated researcher to monitor become continuous and automated — surfacing friction points in real time rather than in quarterly reviews.
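Stripped to its core, a "behavioral pattern alert" is a rolling comparison against a baseline. A minimal sketch, assuming a simplified stream of session events and purely illustrative thresholds (real products instrument far richer signals than rage clicks):

```python
# Sketch of a continuous friction alert: flag windows where the
# rage-click rate drifts well above the running baseline rate.
# Event stream, window size, and ratio are illustrative assumptions.

from collections import deque

def friction_alerts(event_stream, window=5, ratio=2.0):
    """Yield event indices where the recent rage-click rate
    reaches `ratio` times the long-run baseline rate."""
    recent = deque(maxlen=window)
    seen, rage_total = 0, 0
    for i, event in enumerate(event_stream):
        is_rage = event == "rage_click"
        seen += 1
        rage_total += is_rage
        recent.append(is_rage)
        baseline = rage_total / seen          # long-run rate
        current = sum(recent) / len(recent)   # recent-window rate
        if len(recent) == window and baseline > 0 and current >= ratio * baseline:
            yield i

stream = ["click"] * 20 + ["rage_click", "rage_click", "rage_click", "click", "rage_click"]
print(list(friction_alerts(stream)))
```

The automated part is the watching; deciding whether a flagged spike is a real usability failure or a release-day artifact is still a researcher's call.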

80% · of UX researchers now use AI in some part of their workflow, up 24 points in one year
50%+ · reduction in synthesis time reported by teams using AI-assisted analysis tools
20+ · AI-native UX research platforms launched since 2023; the tooling ecosystem is maturing fast

This is an organizational change problem, not a tools problem

Most teams that struggle with AI integration aren't struggling because they picked the wrong tool. They're struggling because no one defined how AI fits into the existing research practice — what it replaces, what it augments, and where human judgment is non-negotiable. That's a leadership decision, not a practitioner decision.

From my experience building and scaling research orgs in enterprise environments, here's how I structure AI adoption at the VP or Senior Director level:

01 · Audit before you adopt
Map where time is actually spent across your research cycle. In most enterprise teams, 60–70% of researcher hours go to synthesis, transcription, and reporting — exactly where AI has the strongest ROI. Start there, not with the most exciting tool.
02 · Define the human-in-the-loop standard
Every AI output that reaches a stakeholder or informs a product decision needs a researcher's review. This is non-negotiable — and it needs to be org policy, not a verbal guideline. AI hallucination in research synthesis can corrupt a product roadmap quietly and permanently.
03 · Protect the divergent phases
The Discover and Develop phases are where the best design insights live — and they're also where AI is weakest. AI tools trained on historical data reflect median users, not edge cases. The interviews where someone says something unexpected are the ones that change the product direction. Don't automate those away.
04 · Use AI to compress cycles, not skip them
The goal of AI in the Double Diamond isn't to eliminate phases — it's to compress the time between them. A team that can move from 40 discovery interviews to a synthesized insight map in 48 hours instead of two weeks can run more cycles, test more hypotheses, and deliver better-validated solutions in the same timeline.
05 · Build AI fluency across the team
In low-UX-maturity environments, AI tools can actually create a new maturity gap — between teams that know how to use them critically and teams that trust the output uncritically. Part of my role as a design leader is building the judgment to use these tools well, not just access to the tools themselves.
06 · Bring the C-suite into the AI story early
Executives who see AI as a cost-reduction lever will try to cut research headcount the moment AI starts speeding up synthesis. Getting ahead of that narrative — framing AI as a velocity multiplier that lets the same team run more research cycles, not a replacement for researchers — is a VP-level stakeholder management job.

What I won't compromise on, regardless of the tooling

01 · Research is how design earns its seat at the table
I've spent my career in markets where design credibility is built one stakeholder at a time. The reason I've been able to get C-suite buy-in on design-led decisions is that the research behind those decisions is unimpeachable. AI that speeds up synthesis is useful. AI that makes the underlying research look thinner than it is will cost you that credibility — permanently.
02 · Cognitive load is still the central design problem
I came to this field through a neuroscience lens — studying how information architecture affects what people can actually learn and retain. AI-generated interfaces can optimize for engagement metrics while increasing cognitive load in ways that don't show up in usability scores. A design leader needs to hold the line on that distinction.
03 · AI fluency is now a design org hiring criterion
The gap between a researcher who uses AI tools critically and one who doesn't is widening fast. Teams that can run 150 AI-assisted interviews in a sprint cycle and synthesize them in 48 hours have a structural advantage. Building that capability into hiring, onboarding, and performance standards is part of my job as a design leader right now.
04 · The methodology is the accountability structure
The Double Diamond isn't just a process map — it's a framework for making design decisions defensible. When a product decision goes sideways, the question isn't "what did the AI say?" The question is "what did the research say, and who validated it?" The answer to that question needs to be a human with a name and a methodology.

Where AI in UX goes wrong at the enterprise level

Having built research practices inside organizations that were simultaneously adopting Agile, navigating platform migrations, and learning to trust design for the first time, I've seen what happens when tools get adopted without governance. The same pattern holds for AI.

Synthetic users are not users. AI-generated personas built on averaged behavioral data will reflect the median — which means they'll miss the edge cases that define product-market fit in complex enterprise environments. They're useful for stress-testing ideas. They are not a substitute for talking to the people who will actually use the product.

Speed without synthesis is just noise faster. The value of AI in research isn't that it produces insights — it's that it removes the friction between data and insight, leaving researchers more time to do the interpretive work that actually matters. A team that uses AI to produce more deliverables faster without improving the quality of its interpretation hasn't gotten faster. It's just gotten louder.

The narrative risk is real. In every organization I've led design in, there has been a moment where someone — usually a well-meaning executive — suggests that AI means you can do the same work with fewer researchers. Managing that narrative proactively, with data on what research velocity actually produces in business outcomes, is one of the most important things a VP of Design does right now.

The teams that will use AI best in the next five years aren't the ones who adopt it fastest. They're the ones led by people who understand deeply what AI can't do — and design their orgs around protecting those things.