Artificial intelligence has already pushed well beyond the experimental stage in procurement. Companies are now moving from generative AI experiments to early-stage deployments of agentic AI: systems that not only interpret prompts but also trigger downstream actions, learn from outcomes, and adjust their own playbooks without human intervention.
Early pilots suggest the gap between “writing a recommendation” and “closing the purchase order” is set to narrow dramatically over the next 18 months, redrawing where value is created, who creates it, and how quickly it can be captured.
Execution, Not Just Recommendations
Unlike generative AI, which focuses on content creation and analysis, agentic AI handles execution. An agent can identify sourcing options, compare supplier risk, initiate RFQs, and trigger purchase orders. Interfaces are shifting from software dashboards to conversational inputs embedded in tools like Teams or Slack. Behind the scenes, agents combine language models, business rules, and process automation to complete workflows across procurement and finance systems.
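To make that composition concrete, here is a minimal, hypothetical sketch of how such an agent might be wired together: a language-model step turns a chat message into a structured requisition, business rules decide whether the agent may act on its own, and a downstream call books the purchase order. The function names, the €25,000 cutoff, and the sample data are illustrative placeholders, not any vendor's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Requisition:
    description: str
    amount_eur: float
    category: str


def interpret_request(prompt: str) -> Requisition:
    # Placeholder for the language-model step: in a real agent this would
    # call an LLM to turn a chat message from Teams or Slack into a
    # structured requisition (amount, category, need-by date).
    return Requisition(description=prompt, amount_eur=4800.0, category="indirect")


def passes_business_rules(req: Requisition) -> bool:
    # Illustrative policy only: indirect spot buys under EUR 25,000 may be
    # executed by the agent without human review.
    return req.category == "indirect" and req.amount_eur < 25_000


def create_purchase_order(req: Requisition) -> str:
    # Placeholder for the process-automation step: a call into the ERP or
    # procure-to-pay API that actually creates the purchase order.
    return f"PO created for '{req.description}' ({req.amount_eur:.2f} EUR)"


def handle_chat_message(prompt: str) -> str:
    req = interpret_request(prompt)
    if passes_business_rules(req):
        return create_purchase_order(req)
    return "Requisition routed to a human buyer for review."


if __name__ == "__main__":
    print(handle_chat_message("Order 20 replacement laptop docks for the Lisbon office"))
```

In production, each placeholder would be a real integration, with the language-model call, the policy engine, and the ERP connector stitched together by an orchestration layer rather than a single script.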
According to Gartner’s April 2025 Market Guide, 42% of global enterprises have at least one agentic pilot in indirect procurement, up from 11% last year. Most are limited in scope but show sharp efficiency gains. Companies report cycle time cuts of over 50% and stronger policy adherence.
Real-world traction is beginning to surface outside the conference circuit. Unilever told analysts in May that an autonomous buying bot built with a commercial agent framework now handles more than 30% of its spot buys under €25,000, generating a 9% price-variance improvement against historical benchmarks. Fairmarkit, meanwhile, has rolled out a three-layer architecture (end user, agent network, and GenAI services) that allowed one U.S. retailer to cut RFQ processing time from two weeks to three hours. The vendor claims 80% of agent-generated awards mirror or beat human decisions on total cost.
Why Is Agentic AI Succeeding in Procurement?
A decade ago, cognitive procurement was painted as a five-year horizon and never fully delivered. Two things are different in 2025. First, foundation models can now interpret unstructured spend data and supplier risk signals at scale, with no heavyweight rules authoring required.
Second, APIs are standardizing how those insights feed transactional engines, eliminating the integration drag that sank earlier expert-system attempts. As a result, analysts at IDC forecast that agentic tooling will account for 25% of software-as-a-service spend in procurement platforms by 2027.
Governance Becomes Central
The main challenge is governance. When agents can commit spend, define specifications, or select suppliers, clear controls are needed. Companies like BT Group are aligning agent tasks with delegation-of-authority levels, triggering approvals when thresholds are exceeded and logging decisions otherwise.
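A rough sketch of what that gating might look like, assuming hypothetical approval bands and role names; a real deployment would mirror the company's existing delegation-of-authority matrix:

```python
from datetime import datetime, timezone

# Hypothetical delegation-of-authority bands (EUR) and approver roles;
# the thresholds below are invented for illustration.
DOA_BANDS = [
    (5_000, "agent"),              # agent may commit spend autonomously
    (50_000, "category_manager"),  # human approval required above 5,000 EUR
    (float("inf"), "cpo"),         # executive approval above 50,000 EUR
]

decision_log: list[dict] = []


def route_award(supplier: str, amount_eur: float) -> str:
    approver = next(role for limit, role in DOA_BANDS if amount_eur <= limit)
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "supplier": supplier,
        "amount_eur": amount_eur,
        "approver": approver,
        "auto_executed": approver == "agent",
    }
    decision_log.append(entry)  # every decision is logged, executed or escalated
    if entry["auto_executed"]:
        return f"Award to {supplier} executed by agent ({amount_eur:,.0f} EUR)"
    return f"Award to {supplier} escalated to {approver} for approval"


print(route_award("Supplier A", 3_200))
print(route_award("Supplier B", 120_000))
```

The point is less the thresholds themselves than the audit trail: every agent decision, executed or escalated, leaves a record that can be reviewed like any other exercise of delegated authority.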
To mitigate bias or flawed assumptions, some firms are feeding scenario data (currency risk, logistics delays, ESG scores) into agents before awards are made. Others are tracking agent outcomes via performance dashboards similar to those used for BPO vendors.
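As a rough illustration of that kind of pre-award stress test, the sketch below re-scores a hypothetical two-supplier award under currency, lead-time, and ESG perturbations and flags the recommendation if the winner changes. The supplier data, weights, and scenario ranges are invented for the example.

```python
from itertools import product

# Hypothetical supplier inputs: unit price (EUR), lead time (days), ESG score
# (0-100). Both the data and the weights below are illustrative only.
SUPPLIERS = {
    "Supplier A": {"price": 100.0, "lead_time": 10, "esg": 82},
    "Supplier B": {"price": 85.0, "lead_time": 21, "esg": 65},
}


def total_cost_score(s: dict, fx_shift: float, delay_days: int, esg_downgrade: int) -> float:
    # Lower is better: price adjusted for a currency swing, plus a holding
    # cost per day of lead time, plus a penalty for weak ESG performance.
    price = s["price"] * (1 + fx_shift)
    lead_cost = (s["lead_time"] + delay_days) * 0.5
    esg_cost = max(0, 75 - (s["esg"] - esg_downgrade)) * 0.8
    return price + lead_cost + esg_cost


def stress_test_award() -> None:
    # Cartesian product of perturbations: a 5% currency swing, a one-week
    # logistics delay, and a ten-point ESG downgrade, alone and combined.
    scenarios = product([0.0, 0.05], [0, 7], [0, 10])
    winners = set()
    for fx, delay, esg in scenarios:
        scores = {name: total_cost_score(s, fx, delay, esg) for name, s in SUPPLIERS.items()}
        winners.add(min(scores, key=scores.get))
    if len(winners) > 1:
        print(f"Recommendation is scenario-sensitive ({sorted(winners)}); escalate to a human buyer.")
    else:
        print(f"Recommendation is stable across all scenarios: {winners.pop()}")


stress_test_award()
```

In this toy example the ranking flips under an ESG downgrade, which is exactly the kind of result that would justify routing the award back to a human rather than letting the agent execute it.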
Concerns over workforce displacement remain, but early adopters report the opposite. In a Deloitte survey, 64% of procurement employees using agentic tools said their roles improved, with time reallocated to supplier development and strategy. Coca-Cola Europacific Partners has stated it expects to upskill, not reduce, headcount as agentic capabilities expand.
Autonomy Without Oversight Reinforces Hidden Risks
Agentic AI promises speed and scalability, but without structured accountability, it can quietly entrench existing weaknesses. Companies often assume that more data yields better decisions, but bias, outdated supplier performance metrics, or incomplete risk signals can still drive faulty outcomes at scale.
As agentic systems mature, governance models must evolve in parallel, not only to trace how decisions are made but to challenge whether the inputs and assumptions still hold. Codifying dissent through scenario stress-testing, agent overrides, or counterfactual analysis may prove just as critical as automation itself.