There’s a question I keep turning over lately, and I don’t think the software industry has seriously grappled with it yet.
When you sit down to design a feature - a new workflow, a data view, an integration point - who are you picturing on the other end? A person, presumably. Someone with a mouse or a thumb, a screen, a tolerance for loading spinners, a preference for dark mode. You’re optimizing for their cognition, their attention, their patience.
But what if that’s increasingly the wrong picture?
I’ve been building a lot with AI agents lately. Not just using AI tools, but actually wiring up systems where agents do the work: scraping, summarizing, making decisions, calling APIs, chaining outputs into inputs. And the thing that keeps stopping me isn’t the AI part. It’s the software I’m trying to connect it to.
Most software is deeply, stubbornly human-shaped.
The information I need is locked inside a UI. The action I want to automate requires clicking through a three-step modal. The data I need to extract is rendered in a table designed for a person to read, not a machine to parse. Authentication assumes there’s a human in the loop to do an OAuth dance. Rate limits are tuned for human usage patterns, not agent ones. Error messages are written to reassure people, not to give a calling agent enough context to recover and retry.
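To make that last point concrete, here’s a minimal sketch of the difference between an error written to reassure a person and one an agent can act on. The payload fields here (`code`, `retryable`, `retry_after_seconds`) are invented for illustration, not any particular API’s schema:

```python
# A human-facing error: friendly, and useless to a calling agent.
human_error = "Oops! Something went wrong. Please try again later."

# An agent-facing error: stable identifier, explicit recovery hints.
agent_error = {
    "code": "RATE_LIMITED",        # machine-matchable, never reworded
    "message": "Request rate exceeded for this API key.",
    "retryable": True,
    "retry_after_seconds": 30,     # tells the caller exactly when to retry
}

def should_retry(error: dict) -> bool:
    """An agent branches on structured fields instead of parsing prose."""
    return error.get("retryable", False)

assert should_retry(agent_error)
```

The point isn’t the specific fields; it’s that recovery logic becomes a dictionary lookup rather than a guess about what a sentence meant.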
This isn’t a criticism; it’s an observation. Software has been built for humans because humans were the users. That assumption is now, quietly but quickly, becoming optional.
Here’s what I think is happening under the surface.
We’re entering a period where a meaningful, and growing, percentage of software interactions won’t be initiated by a human at all. They’ll be initiated by an agent acting on behalf of a human. The human sets intent (“book me a flight, summarize this report, update this record, find me the best option”), and the agent figures out how to fulfill it by orchestrating a set of tools and APIs and data sources.
In that world, your UI is just overhead. Your onboarding flow is irrelevant. Your beautiful empty states and delightful micro-animations mean nothing. What matters is: can an agent understand what your software does, access its capabilities reliably, and get structured, predictable output it can work with?
That’s a completely different design target.
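To make “structured, predictable output” concrete, here’s the same hypothetical inventory data in its human-shaped and agent-shaped forms (all names invented for illustration):

```python
# Human-shaped: rendered for eyes, brittle to scrape.
rendered_for_humans = """
Product      Qty   Status
Widget 9      42   In stock
"""

# Agent-shaped: the same facts as a stable, typed payload.
returned_for_agents = [
    {"sku": "WIDGET-9", "quantity": 42, "in_stock": True},
]

# For the agent, "which items can I order?" is one expression,
# not a text-parsing exercise.
orderable = [row["sku"] for row in returned_for_agents if row["in_stock"]]
```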
Now, to be fair: adoption of MCP - the Model Context Protocol, the emerging standard for how AI agents connect to tools - is moving fast. A lot of SaaS companies are already shipping MCP interfaces, and if you follow the AI space at all, you’ve probably noticed that “we have an MCP server” is quickly becoming table stakes.
So maybe this isn’t a problem anymore? I’d argue it’s actually where the real problem starts.
Because here’s what I’ve observed: most companies are bolting MCP onto a product that was conceived, designed, and roadmapped entirely around human experience. They’re adding an agentic interface without changing how they think about what they’re building. It’s a technical checkbox, not a shift in perspective.
And you can tell. The MCP surface area is thin, exposing only what was easy to expose. The tool descriptions are written for developers to read, not for a model to reason about. The capabilities mirror the human UI rather than reflecting what an agent would actually need to accomplish a goal. It’s human-shaped software wearing an agentic costume.
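Here’s roughly what that difference looks like at the tool-description level. These are plain dictionaries shaped like MCP tool declarations (`name` / `description` / `inputSchema`); the tool names, fields, and wording are invented for illustration:

```python
# Bolted-on: fine for a developer who already knows the product,
# opaque to a model deciding whether and how to call it.
bolted_on = {
    "name": "get_data",
    "description": "Gets data.",
    "inputSchema": {
        "type": "object",
        "properties": {"id": {"type": "string"}},
    },
}

# Designed for agents: says what comes back, what the enums mean,
# and when to reach for a different tool instead.
designed_for_agents = {
    "name": "get_invoice",
    "description": (
        "Fetch a single invoice by ID. Returns amount, currency, "
        "status (draft|open|paid|void), and due date. If you only know "
        "the customer, use a listing tool first to find the invoice ID."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "invoice_id": {
                "type": "string",
                "description": "Invoice identifier, e.g. 'inv_123'.",
            },
        },
        "required": ["invoice_id"],
    },
}
```

The second version costs a few extra sentences and repays them every time a model has to decide, unsupervised, whether this tool fits its goal.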
The deeper issue is that product development, as a discipline, has no vocabulary for this.
Think about how software gets built. Discovery starts with user research. Personas are human. Every Jira ticket, every story, every acceptance criterion is written through the same lens: “As a [human persona], I want to [do a human thing], so that [I get a human outcome].” The entire craft of product management - the workshops, the frameworks, the certifications, the books - is built around understanding human cognition, motivation, and behavior.
There is no equivalent discipline for agent experience design. No AX research methodologies. No established patterns for what a well-designed agentic interface actually looks like. Nobody is writing stories that read: “As an agent orchestrating a procurement workflow on behalf of a user, I need to query available inventory without initiating a session, so that I can make a purchasing decision without human intervention at each step.”
That framing doesn’t exist in most product teams. Which means even the companies genuinely trying to build for agents are improvising, because the profession hasn’t caught up yet.
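For what it’s worth, the capability behind that hypothetical procurement story isn’t hard to sketch. A minimal Python stand-in, assuming token-based auth and a single goal-scoped, sessionless call - every name and value here is invented for illustration:

```python
def query_inventory(api_token: str, sku: str) -> dict:
    """One stateless call: authenticate, answer, done. No login flow,
    no session to maintain, no 'confirm' button waiting for a human."""
    if not api_token.startswith("agent_"):
        return {"code": "UNAUTHORIZED", "retryable": False}
    stock = {"WIDGET-9": 42}.get(sku, 0)  # stand-in for a real lookup
    return {
        "sku": sku,
        "available": stock,
        "can_fulfill": stock > 0,  # a decision-ready field, not a rendered table
    }
```

The interesting design choice is `can_fulfill`: the capability answers the agent’s actual question directly, instead of handing back raw data shaped for a screen.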
This matters more than it might seem, because the stakes are shifting.
When an agent can decide which tool to use based on what’s most accessible, composable, and reliable, brand loyalty means a lot less. The tool that’s easiest for an agent to reason about and work with reliably will get chosen, repeatedly, at scale, without a human ever consciously making that call. Discoverability and usability by agents become a new form of competitive advantage - one that most product teams have no framework to even measure.
The companies that will feel this first are probably not the big enterprise vendors with locked-in contracts. It’ll be the SaaS tools that live or die on integration and workflow automation, the ones where switching costs are low and the agent just… picks something else.
I don’t think this means UX is dead or that we should stop caring about human experience. Most software will need to serve both audiences for a long time. But right now, the balance is almost entirely skewed toward one side, and the other side is arriving faster than most roadmaps are accounting for.
So what would it actually look like to take this seriously?
It starts, I think, with making agents a first-class persona in your product process. Not an afterthought, not a technical integration, but an actual seat at the table in discovery. What does this agent need? What context does it have? What does a successful non-human interaction with this feature look like? What breaks when there’s no human to click “confirm”?
It means auditing your existing MCP surface area - not just whether you have one, but whether it was designed or just generated. Are your tool descriptions actually useful to a model trying to reason about when and how to use them? Are your capabilities scoped around agent goals or around UI screens?
It means rethinking some foundational assumptions about authentication, state, error handling, and rate limiting - all the invisible infrastructure that was spec’d for a human-paced, session-based interaction model.
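One concrete example of that rethinking is idempotency keys: if an agent retries a write after a timeout, the side effect shouldn’t execute twice. A minimal sketch, with an in-memory dictionary standing in for real infrastructure and all names invented for illustration:

```python
# idempotency key -> cached result of the first successful call
_results: dict = {}

def create_order(idempotency_key: str, sku: str) -> dict:
    """Replaying the same key returns the original result
    instead of creating a duplicate order."""
    if idempotency_key in _results:
        return _results[idempotency_key]
    order = {"order_id": f"ord_{len(_results) + 1}", "sku": sku}
    _results[idempotency_key] = order
    return order

first = create_order("key-1", "WIDGET-9")
retry = create_order("key-1", "WIDGET-9")  # agent retried after a timeout
assert retry == first                      # no duplicate order
```

Humans rarely double-submit because a spinner holds them back; an agent with a retry policy will, which is why this kind of guarantee moves from nice-to-have to foundational.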
None of this is urgent in the way a production outage is urgent. But it has the shape of something that compounds quietly, and where the gap between the companies that thought about it early and those that didn’t will be hard to close later.
We’re at one of those moments where the ground is shifting and most people are still looking at the surface. The UI still works. The users are still clicking. The MCP server shipped last sprint. Revenue is fine.
But the deeper question is worth sitting with: has your product team ever written a story for an agent? Have you ever done discovery with a non-human user in mind? Have you ever asked what your software looks like from the outside, to something that has no patience for your onboarding flow and no interest in your brand?
The discipline to answer those questions well doesn’t really exist yet. Someone is going to build it. The companies that do it first will have designed their products for a future that’s already here - they just won’t realize it until everyone else is scrambling to catch up.