Convergence Design
Monday, March 9, 2026
AI broke how organizations think together. Here’s how to fix it.
A product manager builds a feature prototype with an AI tool. Takes about an hour. It looks production-ready. Clean UI, solid interactions. Six months ago, this would’ve taken a design sprint and an engineering spike to even get close.
The interesting part isn’t the prototype.
In one version, the PM brings it to the engineer. They start going back and forth. The engineer pokes at the logic, flags edge cases the PM hadn’t considered. Because the tool exposes enough of the code, the PM can actually follow the technical conversation. Not nodding along. Pushing back. Iterating in real time, layering in business context and customer framing that the engineer doesn’t carry.
The prototype becomes a conversation. A shared thinking space. By the time they’re done, both of them understand what they’re building, why certain tradeoffs make sense, what they’ve intentionally left out. The artifact is useful, sure. But the comprehension they built together? That’s the real output.
In the other version, a leader builds the same prototype alone over lunch. It’s gorgeous. They present it at the next team sync: “Here’s what we’re building.” The room nods. It looks like alignment. It looks like a decision got made.
Except nobody went through the thinking. Nobody debated the tradeoffs. Nobody asked why this approach instead of three other viable ones. It’s the same pattern I’ve seen play out a hundred times: one person’s context, wrapped in a polished artifact, thrown over the wall and mistaken for organizational understanding.
Same tool. Same artifact. Entirely different outcome.
Three things are coming apart at the same time. And they’re compounding faster than most organizations have noticed.
Three Decouplings
For most of the history of knowledge work, production and comprehension were coupled. Not perfectly. Ghostwriting, delegation, and management by memo always meant the coupling had gaps. But mostly, writing code meant understanding it. Building a prototype meant working through the tradeoffs. Drafting a strategy doc meant wrestling with the logic until it actually held together. The output was a byproduct of the thinking.
AI didn’t just widen those gaps. It broke the coupling entirely. And it broke it three ways.
The first decoupling: production from comprehension.
A junior engineer can now generate code faster than a senior engineer can audit it. A product manager can produce a spec in an afternoon that would’ve taken a week of conversations to build. The artifacts look the same. The understanding behind them doesn’t. And nobody’s doing this through ill intent. People are genuinely trying to move faster and contribute more. That’s what makes it so hard to spot.
Ganesh Pagade calls it cognitive debt. The term captures exactly what’s happening. Unlike technical debt, cognitive debt doesn’t show up in any dashboard. Velocity metrics look great; everything says you’re fine. But the debt accumulates silently until an incident hits and the on-call engineer is staring at “a black box written by a black box.”
Can your team explain what they shipped last sprint? Not what it does. Why it’s built the way it’s built. If not, you’re carrying cognitive debt, and the interest rate is compounding.
The second decoupling: individual context from shared context.
Every person in an organization now has their own AI. They feed it their own context, their own framing, their own read on the problem. The AI generates outputs that reflect that individual perspective. Joan Westenberg calls what breaks here “the coherence premium”: “the degree to which every part of an operation derives from the same understanding.” As she puts it, “knowledge is sticky and context is lossy.” Every handoff loses information. Every translation introduces drift. If you’ve ever mapped out how information actually flows through an organization, you know how much gets lost even without AI in the picture. Now multiply that.
Five people walk into a room to make a decision. Each of them has spent the past week working with their own AI on their own slice of the problem. They’ve each generated analyses, explored options, pressure-tested assumptions. Individually, they’re more prepared than they’ve ever been. Collectively? They’re looking at the same problem from five completely different mental models. Each of them has been living in a private context universe that nobody else can see.
The meeting feels productive. Everyone sounds sharp. The alignment is a mirage.
The third decoupling: capability from sustainability.
A UC Berkeley study followed a tech company of about 200 people for eight months, conducting over 40 in-depth interviews. The core finding: they felt more productive, but they did not feel less busy. AI didn’t contract the work. It intensified it.
Three mechanisms drove this. People voluntarily absorbed responsibilities that used to belong to someone else, because AI made it feel accessible. PMs started writing code. Researchers tackled engineering problems. The boundaries between roles blurred, and that sounds like progress until you realize nobody renegotiated the workload. Prompting felt like chatting, not working, so it bled into lunch breaks and late evenings. And everyone managed more parallel threads simultaneously, creating a continuous cognitive load even though it felt like they were flying.
The result wasn’t more hours. Same hours, more density. Workers reported cognitive fatigue and burnout even as their output metrics improved. The to-do list expanded to fill every hour AI freed up, and then kept going.
This is the part that doesn’t get talked about enough. The burnout isn’t coming from people who resist AI. It’s coming from the people who embrace it most.
The common root.
Going back to the opening: same tool, same artifact, entirely different outcome. These aren’t three separate problems. They’re the same problem at different scales. Individual. Team. Organizational. In every case, we can move faster than we understand and produce more than anyone can absorb.
Each of these decouplings has been observed independently. Pagade on cognitive debt. Westenberg on the coherence premium. The Berkeley researchers on intensification. The pattern across all three is unnamed. So is the discipline that responds to it.
Agents Will Solve Coherence (But Not the Way You Think)
The prevailing argument about organizational coherence goes like this: coherence is the moat. Solo operators win because they can hold everything in one head. Organizations lose because shared understanding degrades with every handoff, every meeting, every layer of hierarchy. Scale breaks coherence, so the advantage goes to those who stay small.
Westenberg argues this compellingly. I think she’s describing a snapshot of right now, not where this is going.
I’m already seeing it. The collaboration tools, the project management platforms, the communication layers are all building connections to organizational knowledge. When someone prompts an AI within one of these tools, it pulls from shared docs, past decisions, ongoing conversations. The agent already has context beyond what any individual carried into the room. Right now, these connections are fragmented. Enterprise search tools connect sources across Slack, Drive, Jira, and GitHub, but they’re reactive: you ask, they answer. Workflow agents automate routines, but their scope is narrow. Personal agents run continuously but serve one person’s world, not the team’s.
But extend the trajectory further. The direction is clear, and parts of this are already shipping.
Agents operating continuously across an entire tooling surface. Every Slack thread. Every ticket update. Every design artifact. Every meeting transcript. An agent like that would have more coherent organizational context than any human in the company. It wouldn’t forget what was discussed three weeks ago, carry a biased read of a conversation it only half-followed, or miss context because it was in a different time zone. Perfect memory. Zero judgment.
Absorbing information, though, is not the same as maintaining coherent context. What agents can plausibly hold is the factual layer: what was decided, what the data shows, what constraints exist, what happened in which meeting. Call this information coherence. An agent can tell you that the pricing committee decided on a 15% increase last Tuesday. It can’t tell you that the committee was split 3-2 and the CFO only agreed because Q3 numbers forced her hand.
What they can’t hold, at least not yet, is the interpretive layer: which facts matter more than others, how to weigh contradictory signals, what the organization’s actual priorities are versus its stated ones. Call this interpretive coherence. That’s judgment, not aggregation.
The infrastructure for the information layer is being built right now. The integrations are shipping. The timeline is genuinely uncertain, and I won’t pretend otherwise. But the direction is clear. At enterprise scale, the barriers aren’t just temporal. They’re architectural: data governance, access controls, compliance frameworks, and the reality that organizations of 10,000 people run on hundreds of tools that were never designed to share context. I’ve seen orgs where the product analytics live in one system, the customer feedback in another, and the engineering backlog in a third, all maintained by teams that barely talk to each other.
If agents can maintain information coherence better than humans ever could, then that layer stops being a competitive advantage. It becomes table stakes. Every organization with the same tooling gets the same coherence baseline.
It stops being about who knows the most and becomes something harder: what are the people in the room actually for?
The Human Premium
One answer: nothing. Automate it all. If agents have better context, let agents decide. Some organizations will try this. The argument is internally consistent. And for the majority of organizational decisions, the routine ones that nobody feels strongly about, it’ll work.
They’ll produce a lot of output. It’ll be coherent. It’ll be fast. And most of it will feel like it was made by nobody in particular.
The harder question is what happens at the decisions that matter. The ones where the stakes are high, the tradeoffs are real, and someone has to be willing to own the call.
When agents hold organizational context better than any person can, the human role isn’t to know more. It’s to care differently.
A product manager who’s watched real users struggle through an onboarding flow carries more than session data or funnel metrics. They carry the weight of having been in the room. The frustration of watching someone give up on something you built. That conviction that this specific problem matters more than the numbers suggest. An agent can process every session recording and every support ticket. It can’t carry that weight. That’s not data. It’s judgment you formed by being there.
An engineer who’s been burned by a fragile architecture before will push back on the “fast” solution with a stubbornness that no risk analysis can reproduce. They’ve lived through the incident. They remember the 2 AM debugging session. That scar tissue is worth more than any risk matrix. Designers carry the same kind of hard-won instinct. So do operations leads, support managers, anyone who’s been close enough to the work to develop judgment the data doesn’t fully capture.
These are human qualities. Flawed, inconsistent, sometimes wrong. But they’re also the reason the best products feel like someone gave a damn. Because a person made a choice with conviction and owned it.
The premium moves from knowledge to judgment. From “who has the most context” to “who makes the best call with what’s known.”
And there’s a hard economic case here too, not just a philosophical one.
The failure mode of velocity without human judgment is already showing up. High output, low confidence. Teams shipping things nobody fully understands. Pagade’s cognitive debt in action: every metric looks healthy while comprehension silently erodes underneath. The cost shows up later, in incidents, in rework, in products that technically function but nobody feels strongly about.
Human judgment at key moments isn’t a bottleneck. It’s the cheapest insurance you can buy against the failures that pure velocity creates.
There’s a harder version of this problem, though. If the messy, struggle-through-it work is what built judgment in the first place, and AI automates exactly that work, then we might be eroding the capacity for judgment at the exact moment we need more of it.
This is the part that deserves more attention than it usually gets. Think about the junior engineer who never debugged a production incident because AI fixed it first. They can produce clean code all day. But when something breaks in ways the AI didn’t anticipate, they don’t have the scar tissue to know where to look. The same pattern plays out across every role. PMs who never sat through the painful interview. Designers who generated alternatives before developing the instinct to know which direction to explore. The output looks the same. The judgment behind it doesn’t. And judgment came from the experience, not the output.
Protecting human judgment might require deliberately designing the experiences that build it. Not a blanket “use less AI” policy. Something more surgical: the incident rotation stays manual, the design critique happens before anyone generates alternatives, the PM sits through the full interview instead of reading the synthesis. Not because efficiency doesn’t matter. Because the judgment these experiences build is what makes everything else worth trusting.
Senge’s work on learning organizations makes the same case: shared understanding develops through collective experience. AI makes that argument more urgent, not less. The experiences that build judgment are exactly the ones AI is most eager to automate.
There’s a second tension. Models are getting better at taste. At judgment. At the kind of nuanced, contextual reasoning that used to feel uniquely human. That gap may not last forever.
But organizations don’t just need to ship products. They need to ship products that reflect choices. Priorities. Values. Tradeoffs that someone was willing to own. That requires conviction. And conviction comes from people who went through the thinking, not agents who processed the data.
At least for now. And “for now” might be a longer window than people think.
If human judgment is the premium, how do you make sure it gets exercised at the moments that matter, not burned on alignment theater?
Convergence Design
The answer isn’t better tools or better meetings. It’s a different question entirely: when should humans come together, and what should they be doing when they do?
I’ve run and facilitated design sprints, strategy workshops, and planning sessions across enough organizations to know: half the time is context-building. Day one is almost entirely about getting everyone on the same page. Lightning talks. Research readouts. Constraint mapping. Hours spent constructing a shared foundation so the team can make good decisions on days two through five.
When agents hold organizational context continuously, that foundation already exists. Research is synthesized. Constraints are mapped. Stakeholder positions are documented and current. The system just maintains it as a baseline.
This doesn’t make the sprint obsolete. It makes the context-building phase obsolete. And that changes everything about what the time together is for.
The convergence shifts. Instead of “let’s share what we know,” it becomes “let’s decide what to do with what we all now have access to.” The time humans spend together gets concentrated on the part that was always the hardest and most valuable: the judgment calls. What do we build? What do we deliberately leave out? What tradeoffs are we willing to own? What’s our conviction, not just our analysis?
That’s convergence design: the intentional design of when and how humans come together to exercise judgment on agent-maintained context.
And “come together” doesn’t mean in a room. Convergence can be asynchronous: a shared doc that three people build on over 48 hours, a structured decision thread where the context is pre-loaded and the team works through tradeoffs on their own time. For distributed teams, this is actually more natural. Async gives people space to think and reflect before converging on a call.
Most organizations still struggle with this. Not because they lack smart people, but because a cross-functional meeting with 12 stakeholders from four teams in three time zones burns its first 40 minutes just establishing shared context. The deciding happens in the last 10 minutes, if it happens at all.
And this is where it gets political, which is worth being honest about. Redesigning when and how humans converge means changing meeting structures, decision rights, and implicitly, power dynamics. The SVP whose recurring meeting gets reclassified from “convergence point” to “information transfer” will have opinions about that. The middle managers whose role as context translators gets partially automated by agents will feel the shift. Convergence design isn’t just a design challenge. It’s an organizational change challenge, and anyone who pretends otherwise hasn’t tried it.
This isn’t the first attempt to fix how organizations decide together. Decision rights frameworks, six-page memos, meeting-free days. Most failed because they tried to make humans better at context transfer. Convergence design starts from a different premise: the context transfer is being automated. The question shifts from “how do we share information more efficiently” to “what are the humans in the room actually for.”
Now notice the scale problem. Chesky at Airbnb pulled decision-making back up so his top 30 or 40 people share “one continuous conversation.” That’s convergence, but it’s explicitly centralizing. It works for one product. Gleit at Meta built “canonical docs” and “canonical nomenclature” so that even when people disagree, they’re operating from the same facts. Gridley at WHOOP teaches her team how the executives think, so they can anticipate decisions without waiting for approvals, distributing the interpretive framework, not just information. These are real examples of convergence working. But they’re all sub-1,000-person contexts. I’ve worked with teams inside organizations running 200 teams across 12 time zones to build a product portfolio. You can’t pull every decision up. You can’t run one continuous conversation across 10,000 people. The math doesn’t work.
The design challenge at scale is that convergence has to work fractally. Same pattern, different altitudes. A pair of engineers converging on a technical tradeoff. A product trio converging on a feature direction. Between those altitudes is where most organizations actually struggle: the product area lead converging direction across five squads, the division head keeping 30 teams aligned without bottlenecking every call. A leadership team converging on portfolio priorities.
But the fractal only works if the infrastructure supports it at every altitude. Each convergence point needs pre-loaded context from the organizational layer: the relevant decisions, the data, the constraints, the open questions. And the outputs need to flow back. When the engineering pair makes an architecture call, that decision and its rationale re-enter the organizational context. When the product trio converges on feature direction, the tradeoffs they weighed and the alternatives they rejected become available to the leadership team at the next altitude. The agent maintains the chain. Each altitude’s convergence becomes the next altitude’s pre-loaded context. The re-explaining gets eliminated without eliminating the thinking itself.
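To make that chain concrete, here’s a minimal sketch of the shape such a record might take. Every name here (`ConvergenceRecord`, `preload_for`) is hypothetical, not drawn from any shipping tool; the point is the structure: a decision travels with its rationale and its rejected alternatives, so the next altitude inherits the thinking, not just the conclusion.

```python
from dataclasses import dataclass, field

@dataclass
class ConvergenceRecord:
    """One judgment call, captured with enough context to travel."""
    altitude: str                    # e.g. "engineering pair", "product trio"
    decision: str                    # what was decided
    rationale: str                   # why, in the deciders' own words
    rejected: list[str]              # alternatives weighed and set aside
    open_questions: list[str] = field(default_factory=list)

def preload_for(next_altitude: str, records: list[ConvergenceRecord]) -> str:
    """Roll lower-altitude decisions up into the pre-loaded context
    brief for the next convergence point."""
    lines = [f"Context brief for {next_altitude}:"]
    for r in records:
        lines.append(f"- [{r.altitude}] decided: {r.decision}")
        lines.append(f"    why: {r.rationale}")
        lines.extend(f"    rejected: {alt}" for alt in r.rejected)
        lines.extend(f"    still open: {q}" for q in r.open_questions)
    return "\n".join(lines)

# Usage: the product trio's call becomes part of the leadership brief.
trio_call = ConvergenceRecord(
    altitude="product trio",
    decision="Ship onboarding v2 behind a flag",
    rationale="Activation drop traced to step 3; flag limits blast radius",
    rejected=["Full rollout (too risky)", "Defer a quarter (loses momentum)"],
    open_questions=["Does the flag strategy hold for enterprise tenants?"],
)
print(preload_for("leadership team", [trio_call]))
```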
Four questions define convergence design at any altitude:
What decisions require convergence here? Who needs to be in the room? What context should be pre-loaded? And how do you know convergence actually happened, versus just another meeting that produces the illusion of alignment?
One practical test for that last question: convergence is working when the people who leave can explain not just what was decided, but why other approaches were rejected. If they can’t, you’ve produced the illusion, not the thing itself.
In practice, convergence design tends to be driven by the people who see the connections, the ones who notice which tools feed into which decisions, which teams are operating from contradictory assumptions, where shared context has quietly fragmented. They’re not always facilitators by title. But they’re the ones who make the decision about how the team converges, not just what it converges on.
I’ve watched this pattern play out across organizations of wildly different sizes. At one company, a feature that should have been a quarter’s work took over a year. The pattern was textbook: product writes a spec in isolation, gets a rough engineering estimate, gives design two days, then throws it over the wall. Engineers had committed to timelines they’d never had a chance to pressure-test. Nobody was the villain. Everyone was stuck in broken handoffs.
The fix wasn’t more process. It was a two-day workshop where product and design mapped the full user journey together before anyone touched a spec. We traced the messy reality of how their customers actually moved through the core workflow, from initial intake through the final outcome that determined whether the engagement was worth it. That single convergence point produced 18 months of aligned roadmap. Not because the workshop was magical. Because everyone went through the thinking. The initiative we mapped out in that room is still shipping features today.
I’ve run versions of that same workshop at organizations ranging from 40 people to several thousand. The surface details change. The conference room gets bigger, the stakeholder map gets more political, and you need three sessions instead of one because not everyone can be in the same room. But the mechanism is identical: convergence that works means everyone goes through the thinking, not just the people who happened to prepare the best pre-read.
I learned the failure mode at that same company. Earlier in the role, I’d mapped every pain point in the organization, built the case for cross-functional squads, and presented it at an offsite. Big room, senior people, good energy. It looked like alignment. Within weeks, the org reverted to its old structure. Not because anyone disagreed with the vision. The habits hadn’t changed. The hiring strategy hadn’t caught up. The muscle memory of how the org had always worked was stronger than one good presentation. A presentation isn’t convergence. That transformation took over a year of persistent, small convergence points before it stuck.
Agents change the cost structure of all of this. They maintain the context layer between convergence points. So when humans converge, they’re not starting from scratch and not relying on whoever prepared the best pre-read.
That’s the trajectory, and it’s moving faster than most org charts can absorb. The pieces are shipping: enterprise search connecting Slack and Jira, workflow agents handling routine coordination, personal AI pulling from shared docs. Each quarter, the connective tissue gets denser. What doesn’t exist yet is the full organizational layer, the agent that holds context continuously across every team, every tool, every time zone. That’s coming. The architecture points there clearly.
Which means right now, convergence design matters more, not less. In the gap between where the tooling is and where it’s headed, the convergence points where humans come together are still the primary mechanism that integrates context across a fragmented landscape. The people in the room aren’t just exercising judgment. They’re also doing the work that agents will eventually handle: synthesizing signals from different systems, reconciling contradictory assumptions across teams.
The practices you build now, the muscle of converging well, become what agents eventually build on. You don’t get good at convergence after the tools arrive. You get good at it so the tools have something to accelerate.
The machine holds the context. The humans bring the conviction. Convergence design is what connects them at the right moments.
The Real Question
Models will keep getting better at taste and judgment. That line will keep blurring. I’m not going to pretend otherwise.
But here’s what’s true right now.
Organizations aren’t struggling because they lack speed. They’re struggling because speed without comprehension produces output nobody trusts and decisions nobody owns. The three decouplings are real, and they’re compounding. And most organizations are responding by buying more tools, not by rethinking how their people converge. The same intensification driving burnout is also driving the alignment mirages and the cognitive debt. It’s all connected.
The organizations that figure this out first won’t be the ones with the best AI tooling. They’ll be the ones that looked honestly at where their teams were producing artifacts nobody fully understood, and where velocity had quietly become a substitute for judgment.
The question isn’t how to slow down. Nobody’s slowing down. The question is how to make space for human judgment to operate at the speed we can now move.
If any of this resonates, here are four things you can try this week. They won’t transform your organization. They’ll show you where the transformation needs to happen.
The convergence audit. Start with your own calendar: look at your last five cross-functional meetings. How many arrived at an actual judgment call versus spent the time getting everyone on the same page? Then go wider. Look at your top 20 recurring meetings across teams. Classify each as convergence (exercising judgment together) or information transfer (building shared context). The ratio tells you how much of your organization’s collaborative time goes to work that agents will eventually handle.
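If you want the ratio on paper, the arithmetic is trivial. A sketch, where the classifications carry all the real work and the meeting names are invented placeholders:

```python
# The audit's arithmetic, nothing more. Classifying each meeting honestly
# is the hard part; the meeting names below are invented placeholders.
meetings = {
    "Weekly product sync": "information transfer",
    "Pricing review": "convergence",
    "Design critique": "convergence",
    "Eng status update": "information transfer",
    "Portfolio planning offsite": "information transfer",
}

total = len(meetings)
for label in ("convergence", "information transfer"):
    n = sum(1 for v in meetings.values() if v == label)
    print(f"{label}: {n}/{total} ({n / total:.0%})")
```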
The pre-loaded meeting. Take one recurring meeting this week. Pre-load an AI with the relevant decisions, data, and open questions. Share the summary with everyone beforehand. Start the meeting at the judgment call instead of the context download.
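As a sketch of what that pre-load might look like in code, with `ask_model` standing in for whatever AI tool your team already uses (it is not a real API, and `build_preread` is a name I’ve made up):

```python
def ask_model(prompt: str) -> str:
    # Placeholder: wire this to whatever AI tool your team already uses.
    return "(model-generated summary goes here)"

def _bullets(items: list[str]) -> str:
    return "\n".join(f"- {item}" for item in items)

def build_preread(decisions: list[str], data: list[str],
                  open_questions: list[str]) -> str:
    """Assemble the context download as a pre-read, so the meeting
    itself can start at the judgment call."""
    prompt = (
        "Summarize the following for a decision meeting. Keep it short, "
        "and end with the open questions that require a human call.\n\n"
        f"Decisions already made:\n{_bullets(decisions)}\n\n"
        f"Relevant data:\n{_bullets(data)}\n\n"
        f"Open questions:\n{_bullets(open_questions)}"
    )
    return ask_model(prompt)

# Share the output with attendees before the meeting, not during it.
print(build_preread(
    decisions=["Pricing increase approved at 15%"],
    data=["Churn flat quarter over quarter"],
    open_questions=["Do we grandfather existing annual contracts?"],
))
```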
The comprehension check. Pick something your org shipped last sprint. Ask the team lead: can the engineers explain why it’s built the way it’s built? Not what it does. Why those tradeoffs, why that architecture, why that scope. If they can’t, that’s cognitive debt, and you’re paying interest on it whether you see it or not.
The shared thinking partner. Next time your team is stuck on a strategic tradeoff, feed the AI the shared context and let it generate options nobody in the room proposed. Use it to surface the assumptions you’re all making. The shift from “my AI helps me think” to “our AI helps us think together” is small, but it changes the dynamic.
These are starting points, not the full picture. But they’ll show you where the gaps are, this week.
What’s new is the urgency. And the fact that, for the first time, the context-building half of convergence can be automated entirely. Which means we can spend all of our time together on the part that actually matters: the convictions, the tradeoffs, the choices someone has to own.
That’s convergence design.
We’ve never had the conditions for it before. We’ve always burned half our time together just getting on the same page.
Now the page is maintained for us. The question is whether we use that to converge better, or just to ship faster.
That question is already in the room, whether you’ve designed for it or not.
References:
- Ganesh Pagade, “Cognitive Debt: When Velocity Exceeds Comprehension” (2026). See also Margaret-Anne Storey’s independent exploration and follow-up of the concept.
- Joan Westenberg, “The Coherence Premium” (2026)
- Aruna Ranganathan and Xingqi Maggie Ye, “AI Doesn’t Reduce Work – It Intensifies It”, Harvard Business Review (2026), based on UC Berkeley research
- Peter Senge, The Fifth Discipline (1990), on learning organizations and shared mental models
- Brian Chesky on Lenny’s Podcast, “Brian Chesky’s New Playbook” (2023), on Airbnb’s shift to a unified functional model and “shared consciousness” among leadership
- Naomi Gleit, “Canonical Everything” (Medium, 2022)
- Hilary Gridley on Lenny’s Podcast, “How to Build a Team That Can Take a Punch” (2025)
Further reading:
- Rohan Narayana Murty and Ravi Kumar S, “When Every Company Can Use the Same AI Models, Context Becomes a Competitive Advantage”, Harvard Business Review (2026)
- Sangeet Paul Choudary, “AI’s Big Payoff Is Coordination, Not Automation”, Harvard Business Review (2026)
- Fabrizio Dell’Acqua, Ethan Mollick, and colleagues, “The Cybernetic Teammate” (2025), on how AI reshapes teamwork and expertise
- Jay R. Galbraith, Designing Organizations (1995), on organizational coordination theory
- Henry Mintzberg, The Structuring of Organizations (1979)
- Jake Knapp and John Zeratsky, Click: How to Make What People Want (2025), on structured team convergence methods
- Patrick Hebron, Rethinking Design Tools in the Age of Machine Learning (2022), on the decoupling of creation and comprehension