AI Doesn't Replace the Double Diamond. It Accelerates It.
How I think about integrating AI into enterprise UX methodology — and what it means for design orgs that want to move faster without sacrificing the rigor that makes research credible.
A research-first perspective on a tool-first moment
I came to UX through a neuroscience problem. At UT Southwestern, I was studying how medical students failed to retain information under the cognitive overload of "Blocktober" — back-to-back intensive coursework that overwhelmed working memory before students could form long-term understanding. The solution wasn't more content. It was a better system for delivering it. That early lesson has stayed with me: the methodology matters as much as the output.
Which is why I approach the current wave of AI in UX design with both genuine enthusiasm and a practiced skepticism. In a DFW market where UX maturity is still being built — where I've spent my career convincing organizations that research-first design produces better outcomes than opinion-first design — the last thing I want to do is hand executives a reason to believe AI can shortcut the process that makes design trustworthy.
The good news is that AI, implemented correctly at the leadership level, doesn't shortcut the Double Diamond. It removes the friction that slows teams down at each phase — freeing researchers to spend more time on the high-judgment work that machines can't do.
AI won't tell you which problem is worth solving. It won't earn the C-suite's trust in a research readout. It won't know that a user's frustration in an interview is concealing a workflow they've simply stopped trying to use. That's still human work. But it can synthesize 150 interview transcripts overnight, and that changes what's possible.
AI mapped to each phase of the Double Diamond
The Double Diamond — Discover, Define, Develop, Deliver — is still the right framework for enterprise UX. Its value isn't the methodology itself; it's the discipline it enforces: diverge before you converge, and understand the problem before you build the solution. AI doesn't change that logic. It changes the speed and scale at which teams can execute within it.
Here's how I think about AI integration at each phase — including the tools that are proving their value in enterprise environments in 2025.
Discover
Broad exploration of the problem space. User interviews, market analysis, behavioral data, stakeholder discovery. The goal is to find the real problem — not the stated one.
With AI: parallel interview streams run at scale — 40 interviews in a week instead of a month — while human researchers focus on probing the unexpected moments AI can't follow.
Define
Synthesizing discovery data into a clear, defensible problem statement. Journey maps, affinity mapping, persona development, backlog prioritization.
With AI: what used to take a week of affinity mapping — clustering hundreds of research notes into themes — now takes hours. Researchers validate and refine instead of building from scratch.
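To make "clustering notes into themes" concrete, here is a toy, stdlib-only sketch of the mechanical half of machine-assisted affinity mapping: each note becomes a TF-IDF vector, and notes are greedily grouped by cosine similarity. Real research tools use far richer language-model embeddings; the note text, threshold, and function names below are invented for illustration only.

```python
# Toy affinity-mapping sketch: TF-IDF vectors + greedy cosine grouping.
# Production tools use LLM embeddings; this only shows the pipeline's shape.
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def tfidf_vectors(notes):
    """One sparse TF-IDF vector (dict of term -> weight) per note."""
    docs = [Counter(tokenize(n)) for n in notes]
    df = Counter()
    for d in docs:
        df.update(d.keys())
    n = len(docs)
    return [{t: c * math.log((1 + n) / (1 + df[t])) for t, c in d.items()}
            for d in docs]

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def affinity_groups(notes, threshold=0.1):
    """Greedy single pass: each note joins the first group whose seed
    note is similar enough, otherwise it starts a new group."""
    vecs = tfidf_vectors(notes)
    groups = []  # list of (seed_vector, [note indices])
    for i, v in enumerate(vecs):
        for seed, members in groups:
            if cosine(seed, v) >= threshold:
                members.append(i)
                break
        else:
            groups.append((v, [i]))
    return [members for _, members in groups]

notes = [
    "export to csv fails on large files",
    "the csv export option is hard to find",
    "search results take forever to load",
    "search results load slow on big queries",
]
print(affinity_groups(notes))  # → [[0, 1], [2, 3]] with these toy notes
```

Even this crude version separates the export complaints from the search complaints; the researcher's job then becomes naming, challenging, and refining those candidate themes rather than assembling them by hand.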
Develop
Ideation, concept generation, and iterative prototyping. The solution space is explored broadly before being narrowed. Design sprints, RITE studies, and everything from wireframes to high-fidelity prototypes.
With AI: design teams generate 3–5x more concept variations in the same sprint cycle, and testing stimuli can be personalized per participant — moving beyond the "same static mockup for everyone" model that has been standard for decades.
Deliver
Validation, handoff, release, and continuous post-launch monitoring. Usability testing, accessibility review, developer handoff, and the feedback loops that keep the product improving.
With AI: post-launch feedback loops that used to require a dedicated researcher to monitor become continuous and automated — surfacing friction points in real time rather than in quarterly reviews.
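As a sketch of what "continuous and automated" can mean mechanically, here is a minimal, hypothetical friction monitor: it watches a stream of per-screen task outcomes and flags any screen whose recent abandonment rate drifts well above an assumed baseline. The class name, thresholds, and event shape are all invented for illustration; a real product would wire this to an analytics pipeline and use far better statistics.

```python
# Hypothetical friction monitor: flag screens whose recent task-abandonment
# rate drifts well above an assumed baseline. Names/thresholds are invented.
from collections import defaultdict, deque

class FrictionMonitor:
    def __init__(self, window=50, baseline_rate=0.1, factor=2.0):
        self.window = window                # events per rolling window
        self.baseline_rate = baseline_rate  # expected abandonment rate
        self.factor = factor                # drift multiple that triggers an alert
        self.recent = defaultdict(lambda: deque(maxlen=window))

    def record(self, screen, abandoned):
        """Record one task outcome; return the screen name if it now
        looks like a friction point, else None."""
        events = self.recent[screen]
        events.append(1 if abandoned else 0)
        if len(events) < self.window:
            return None  # not enough data for this screen yet
        rate = sum(events) / len(events)
        return screen if rate > self.baseline_rate * self.factor else None

monitor = FrictionMonitor(window=50, baseline_rate=0.1, factor=2.0)
# Simulate 50 checkout attempts where roughly a third are abandoned:
alert = None
for i in range(50):
    alert = monitor.record("checkout", abandoned=(i % 3 == 0)) or alert
print(alert)  # "checkout" — 0.34 abandonment exceeds the 0.2 alert line
```

The point of the sketch is the shift in posture it represents: instead of a researcher pulling a quarterly report, the system pushes "checkout looks broken" the moment the data supports it — and the researcher's time goes into diagnosing why.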
This is an organizational change problem, not a tools problem
Most teams that struggle with AI integration aren't struggling because they picked the wrong tool. They're struggling because no one defined how AI fits into the existing research practice — what it replaces, what it augments, and where human judgment is non-negotiable. That's a leadership decision, not a practitioner decision.
From my experience building and scaling research orgs in enterprise environments, here's how I structure AI adoption at the VP or Senior Director level:
What I won't compromise on, regardless of the tooling
Where AI in UX goes wrong at the enterprise level
Having built research practices inside organizations that were simultaneously adopting Agile, navigating platform migrations, and learning to trust design for the first time, I've seen what happens when tools get adopted without governance. The same pattern holds for AI.
Synthetic users are not users. AI-generated personas built on averaged behavioral data will reflect the median — which means they'll miss the edge cases that define product-market fit in complex enterprise environments. They're useful for stress-testing ideas. They are not a substitute for talking to the people who will actually use the product.
Speed without synthesis is just noise faster. The value of AI in research isn't that it produces insights — it's that it removes the friction between data and insight, leaving researchers more time to do the interpretive work that actually matters. A team that uses AI to produce more deliverables faster without improving the quality of its interpretation hasn't gotten faster. It's just gotten louder.
The narrative risk is real. In every organization I've led design in, there has been a moment where someone — usually a well-meaning executive — suggests that AI means you can do the same work with fewer researchers. Managing that narrative proactively, with data on what research velocity actually produces in business outcomes, is one of the most important things a VP of Design does right now.
The teams that will use AI best in the next five years aren't the ones who adopt it fastest. They're the ones led by people who understand deeply what AI can't do — and design their orgs around protecting those things.