In Adyen’s recent paper, Agentic commerce has an infrastructure problem, the company identified five structural constraints that must be navigated for agentic commerce to scale. The industry is at the beginning of a genuine shift, and naming these constraints clearly is the first step toward solving them. Adyen’s Agentic Foundations series aims to deconstruct the barriers, bring clarity to where today’s systems fall short, and explain how to move from experimentation to real-world solutions.
The real bottleneck in agentic commerce isn't the interface
The industry is fixated on the front end of agentic commerce. Seamless chat interfaces and assistants that interpret intent are easy to prototype. But as outlined in the paper, the interface is not where agentic commerce falls short. The first place where enterprise-level demos consistently fail is at a more fundamental layer: the data.
Today's commerce infrastructure was built for humans, and human shoppers are good at navigating ambiguity. If a product description is missing a weight limit, or if an inventory count is slightly out of sync with the warehouse, they can infer, wait, or refresh. An autonomous agent can't.
For an agent, this ambiguity is a hard failure. Moving from discovery to transaction requires a level of data precision that most systems today were never designed to support. And to transition from a helpful assistant that suggests ideas to a transactional agent that executes purchases, the industry must close the inventory gap.
Rethinking product data
Agentic commerce, at scale, ultimately comes down to execution. Execution depends on product data that is coherently structured, updated in real time, and accessible to machines. In other words, success depends on the machine-readability of the product feed.
Agents query product state repeatedly throughout a single session, and inconsistencies that a web UI could easily hide, like a 10-minute lag in stock levels, become systemic blockers.
Essentially, a traditional feed is a brochure, but an agent needs a technical manual to function.
Legacy feeds are where the foundation crumbles: typical marketing feeds lack transactional data such as dimensional weight or regional availability. Without these specific fields, a Large Language Model (LLM) will often hallucinate, filling in the blanks with plausible but inaccurate values.
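The difference between a brochure and a technical manual can be made concrete with a validation step. The sketch below, in Python, checks a catalog record for the transactional fields an agent needs before acting; the field names (`dimensional_weight_kg`, `regional_availability`, and so on) are illustrative assumptions, not a standard or Adyen's specification.

```python
# Fields an agent needs to execute, not just to describe. These names are
# hypothetical examples; a real feed will define its own schema.
TRANSACTIONAL_FIELDS = {
    "price",
    "currency",
    "stock_level",
    "dimensional_weight_kg",
    "regional_availability",
}

def missing_transactional_fields(record: dict) -> set:
    """Return the transactional fields an agent would need but cannot find."""
    return {f for f in TRANSACTIONAL_FIELDS if record.get(f) in (None, "", [])}

# A typical marketing-feed entry: good copy, no transactional detail.
legacy_record = {
    "title": "Trail Backpack 40L",
    "description": "Lightweight pack for weekend hikes.",
    "price": 129.00,
    "currency": "EUR",
}

gaps = missing_transactional_fields(legacy_record)
# gaps now holds stock_level, dimensional_weight_kg, regional_availability.
```

Surfacing the gaps explicitly lets a system fail fast and route the agent to a fallback, instead of letting a model invent values for the missing fields.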
What leaders will get right
Agentic commerce can't scale on legacy product data, and the companies that lead aren't treating it as a data-cleanup exercise. To win, companies must anchor their strategy in these four data foundations:
1. Data completeness
Before optimization, companies need to take a step back and answer: Does the data exist? It's more common than many expect for critical data to be only partially available. Large consumer brands have seen cases where product weight lives in one system and the rest of the catalog in another, with no reliable link between them. For a human, that's manageable; for an agent calculating shipping feasibility, it's a dead end.
This layer also ensures consistency, preventing regional overrides and pricing data from conflicting across systems.
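The weight-in-one-system problem described above can be caught mechanically. This minimal sketch reconciles a logistics system with the main catalog by SKU and reports the products an agent could not quote shipping for; the system names and SKUs are hypothetical.

```python
# Two hypothetical systems of record with no guaranteed link between them.
catalog = {
    "SKU-1": {"title": "Trail Backpack 40L"},
    "SKU-2": {"title": "Camping Stove"},
}
logistics = {
    "SKU-1": {"weight_kg": 1.4},
    # SKU-2 has no weight record: a dead end for shipping-feasibility checks.
}

def completeness_report(catalog: dict, logistics: dict) -> list:
    """List the SKUs whose shipping feasibility an agent cannot calculate."""
    return sorted(sku for sku in catalog if sku not in logistics)

missing = completeness_report(catalog, logistics)  # ["SKU-2"]
```

Running a report like this before any schema work answers the prior question: does the data exist at all?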
2. An agent-ready schema
First, the source of truth, or system of record, must be identified. Once that's determined, it needs to be fit for purpose. Currently, most standard feeds are not optimized for transaction execution.
Moving from a simple 10-field feed to an agent-ready schema, the 25-30 fields required for a machine to act, is a meaningful architectural lift. But it's not just about adding more fields. It's about maintaining that level of detail consistently across every product, and keeping it continuously up to date.
3. A functional distribution pipeline
Even if both of the layers above are in place, they only create value if the data is usable. This constraint relates to how a catalog is translated and consumed. Is the data formatted in the protocols developers expect? Is it pushed to the right endpoints in real time? Is it reliable, low-latency, and easy to integrate with?
If an agent can't ingest data as easily as a competitor's, that's an immediate competitive disadvantage.
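On the consumption side, one piece of this pipeline can be sketched as a serializer that refuses to hand stale stock data to an agent. The 10-minute threshold echoes the lag example earlier in this article and is an assumption for illustration, not a standard.

```python
import json
from datetime import datetime, timedelta

# Assumed freshness budget; in practice this is a business decision per field.
MAX_STALENESS = timedelta(minutes=10)

def to_agent_payload(record: dict, now: datetime) -> str:
    """Serialize a product record to JSON, rejecting stale stock data."""
    updated = datetime.fromisoformat(record["stock_updated_at"])
    if now - updated > MAX_STALENESS:
        # Better to fail loudly than let an agent transact on old inventory.
        raise ValueError(f"stale stock data for {record['sku']}")
    return json.dumps(record, sort_keys=True)
```

Pushing freshness checks into the pipeline itself means every consuming agent gets the same guarantee, rather than each integration re-deriving it.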
4. Generative Engine Optimization
Finally, once the data is available and accessible, is it optimized for ranking? Just as SEO evolved for Google, Generative Engine Optimization (GEO) is becoming its own discipline. These models evolve in real time. Trying to reverse-engineer exactly what attributes or descriptions cause an agent to recommend a specific product over another is a moving target.
Success on this front will require constant iteration and the discipline to pivot as new signals emerge.
The bottom line
If product data remains fragmented across systems, agents are forced to infer rather than execute. Most merchants are somewhere along the four-stage journey of ensuring data exists, making it consistent, structuring it for machines, and optimizing it for generative engines. The first two stages are prerequisites. The latter two are where the competitive edge will come from.
The companies that lead will not be those building the most compelling interfaces. They will be those ensuring the underlying systems are built to support them.
For a deep dive into the technical requirements of the machine-readable catalog, readers can explore Adyen’s article Agentic commerce and product feeds: a guide for retailers.