
The AI agent hype cycle: why overpromising hurts real innovation

This article examines the AI agent hype cycle and how two extremes – expecting instant automation on one side, dismissing it all as hype on the other – stall adoption. Drawing on enterprise implementations in supply chain workflows, it makes the case for incremental wins, domain-specific AI agents, and real-world context as the foundation for scalable, AI-driven transformation.

Tomaz Suklje
July 22, 2025

Over the past few years, I’ve been in more strategy calls and boardroom debates about AI agents than I can count. And one pattern keeps repeating: people either expect magic or have already decided it’s all hype.

In this piece, I’m aiming to shed light on why those extremes hold us back - and how we’ve learned to deliver real results by starting small, leaning on domain-specialized agents, and focusing on workflows that actually matter. No silver bullets. Just AI that works – when you pair it with context, patience, and a healthy dose of pragmatism.

From the founder’s desk: navigating the two AI extremes

After years of working side by side with enterprise leaders, I’ve noticed a recurring pattern in every boardroom or strategy call about AI Agents. Most of the time, I interact with two camps:

  • The Magic Seekers: The ones who expect a Harry Potter moment - an AI agent that will wave a virtual wand and make entire departments, processes, or headaches vanish.
  • The Burned Skeptics: Those who, after tinkering with a generic GPT or seeing a failed AI pilot, have decided “it’s all hype,” lumping every AI agent into the same underwhelming bucket.

Ironically, both mindsets risk blocking real, AI-driven progress.

Why the hype hurts

Overpromise #1: “Replace your entire team instantly”

Clickbait headlines and breathless pitches often tell execs they can “fire the department” - if only they install the right automation or adopt the latest AI agent platform. This promise is enticing but distorts reality.

Enterprise processes are full of edge cases

No credible AI can handle them without thoughtful mapping, human oversight, and iteration. In our own implementations, we’ve seen how far this promise is from reality. Take a demand forecast rollout with a pharma manufacturer. We had to deal with wildly inconsistent formats and fragmented data flows.


On our end, it took about three months of hands-on work: aligning across departments, navigating internal dynamics and employee vacations, and tuning the agent to match how their business actually runs.

To us builders, it felt like a long haul. But for their leadership – it was a blink – three months to go from error-prone manual processing to a system that eliminated 95% of the grunt work and gave them real-time visibility. The speed did not come from shortcuts, but from respecting the workflow complexity upfront – and from deploying AI agents already specialized for this domain. That combo made the difference between a fragile prototype and something they could trust in production – a proven path to value.

True value is incremental

Real AI transformation doesn’t start with big shake-ups. It starts with better Mondays. It’s not about replacing teams or redesigning entire workflows overnight. It’s about removing friction from daily tasks, cutting down on manual effort, and giving people space to focus on higher-value work. That’s what builds trust.

We’ve seen adoption stick when teams experience practical wins early on: when an agent quietly turns messy spreadsheets into ERP-ready data, or when it clears out a backlog without expanding the team or loading more work onto people. It’s a more cost-efficient way to scale.
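To make “messy spreadsheets into ERP-ready data” a bit more tangible, here is a minimal sketch of the kind of normalization involved. It is purely illustrative: the column aliases, date formats, and the erp_ready_rows helper are assumptions for the example, not our production pipeline.

```python
# Minimal, illustrative sketch: turning messy spreadsheet rows into ERP-ready records.
# The column aliases, date formats, and target fields below are assumptions, not a real schema.
from datetime import datetime

COLUMN_ALIASES = {
    "po number": "po_number", "po no.": "po_number", "purchase order": "po_number",
    "qty": "quantity", "quantity ordered": "quantity",
    "delivery date": "delivery_date", "del. date": "delivery_date",
}
DATE_FORMATS = ("%d.%m.%Y", "%Y-%m-%d", "%m/%d/%Y")

def normalize_header(raw: str) -> str:
    """Map a free-form column header onto a canonical field name."""
    return COLUMN_ALIASES.get(raw.strip().lower(), raw.strip().lower())

def parse_date(raw: str) -> str:
    """Try known date formats; return an ISO date or flag the value for review."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return "NEEDS_REVIEW"

def erp_ready_rows(rows: list[dict]) -> list[dict]:
    """Rename columns, coerce types, and mark anything ambiguous for a human."""
    cleaned = []
    for row in rows:
        record = {normalize_header(k): str(v) for k, v in row.items()}
        record["quantity"] = int(record.get("quantity", "0").replace(",", ""))
        record["delivery_date"] = parse_date(record.get("delivery_date", ""))
        cleaned.append(record)
    return cleaned

print(erp_ready_rows([{"PO No.": "4500123", "Qty": "1,200", "Del. Date": "05.08.2025"}]))
```

In a real deployment the mapping would be learned from patterns rather than hard-coded, but the shape of the work is the same: aliasing, coercion, and escalating anything ambiguous to a person.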

Overpromise #2: “AI can do anything (just add data)”

Some founders oversell the adaptability or “magic” of their agents.

Context matters

Moving from a chatbot to the automation of business-critical processes means hitting limits – data quality, fragmented inputs, domain context, adoption hurdles.

Off-the-shelf models can get you maybe 80% of the way. But that’s rarely enough for the enterprise. Getting to production-grade performance requires agents that don’t just process data. They need to capture, interpret, and consolidate business context at scale.

That’s where we’ve focused our product. Our AI Agents are built for supply chain workflows and consistently deliver outputs with over 99% accuracy – because they’re trained to handle real-world inputs across formats, systems, and exceptions. But even then, the real challenge is often human, not technical.

In one global packaging company, for instance, orders flew in every flavor – PDFs, spreadsheets, email texts – with customer-specific logic baked in. Our agent could handle the inputs right away, but surfacing the knowledge locked in people’s heads took longer. Much of the “how things really work” lives with operators who’ve been stitching processes together for years.

That’s why we don’t treat deployments as handoffs – we work side by side with teams. Humans remain critical in transferring judgment and nuance to the agent. In return, the agent takes on the burden of navigating silos, jumping between formats, and unifying fractured data into something reliable.

“Plug and play” sounds great on paper

In the real world, no two enterprises are alike. Every enterprise has its own mix of legacy systems, evolving workflows, and deeply embedded ways of working. Even companies using the same ERP often have different field codes, naming conventions, approval flows, data quirks and process nuances shaped by years of workarounds and adaptations.  

The real blockers aren’t just technical; they’re also cultural. Operators often take pride in managing exceptions. It gives them a sense of control and makes them critical to the process. Meanwhile, ownership over end-to-end workflow change is often diffuse. Without clear accountability, there’s little incentive to simplify or standardize – so complexity persists.

We’ve seen AI agents hit friction when they’re dropped into this variability without alignment. What’s needed is a more capable model AND upfront work to clarify workflows, challenge legacy exceptions, design fallback logic, and build consensus on what should be automated and how it integrates. That’s what creates a foundation where AI can actually plug in and deliver.  

The cost of disillusionment

When teams jump in with ungrounded expectations:

  • Initial enthusiasm is quickly replaced with frustration and disengagement.
  • Failed pilots become cautionary stories that slow adoption for months.

On the other extreme, “GPT fatigue” leads to blanket cynicism.

Teams who've trialed a generic LLM and found it lacking for nuanced work now reject all AI agent proposals.


Unfortunately, they're missing out on solutions purpose-built for their industry, data, or workflow needs.

We had a finance team come to us after trying – and shelving – an OCR-based setup that was supposed to speed up reconciliation. In practice, it just dumped raw text and left them to clean up the mess. Understandably, they were skeptical.

We started small, focusing just on invoices from indirect procurement. No magic - just clean, reliable matching with purchase orders and delivery notes, powered by domain-trained AI and context-aware LLMs. It freed up time and even helped them catch early payment discounts. Trust came back once the results did.
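To give a feel for what “clean, reliable matching” with purchase orders and delivery notes involves, here is a minimal three-way-match sketch. The field names, the 2% price tolerance, and the match_invoice helper are illustrative assumptions, not the agent’s actual logic.

```python
# Minimal, illustrative sketch of a three-way match: invoice vs. purchase order vs. delivery note.
# Field names, the 2% price tolerance, and the result labels are assumptions for the example.
from dataclasses import dataclass

@dataclass
class Line:
    po_number: str
    item: str
    quantity: int
    unit_price: float

def match_invoice(invoice: Line, po: Line, delivery: Line, price_tol: float = 0.02) -> str:
    """Auto-approve only when the PO, the delivery note, and the invoice agree."""
    if not (invoice.po_number == po.po_number == delivery.po_number):
        return "exception: PO reference mismatch"
    if invoice.quantity > delivery.quantity:
        return "exception: billed more than delivered"
    if abs(invoice.unit_price - po.unit_price) > price_tol * po.unit_price:
        return "exception: price deviates from PO beyond tolerance"
    return "auto-approve"

# A clean match: the small price difference stays within tolerance.
po = Line("PO-1001", "carton-A", 500, 1.20)
dn = Line("PO-1001", "carton-A", 500, 1.20)
inv = Line("PO-1001", "carton-A", 500, 1.21)
print(match_invoice(inv, po, dn))  # auto-approve
```

The hard part in practice isn’t the comparison itself – it’s getting the invoice, PO, and delivery note into comparable fields in the first place, which is exactly where domain-trained extraction earns its keep.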

Bias often stalls solutions that could actually move the needle.

Once a team feels burned – by a failed pilot, a demo that didn’t translate to their environment, or a tool that overpromised – it’s hard to get them to re-engage. Understandably so. But that skepticism can become self-fulfilling: they stop asking better questions, stop evaluating new approaches on their own terms.

The irony is that some of the highest-impact deployments we’ve seen came from teams that almost walked away. What turned it around wasn’t more hype – it was narrowing in on one ugly workflow buried in unstructured data and fractured processes. We did not start from scratch – we brought in AI Agents already specialized in supply chain processes, built to handle nuances from day one. That hyperspecialization made the early wins possible. Once they saw what “fitted AI” actually looked like, the momentum shifted.

Grounding the conversation: what AI Agents really deliver

1. AI Agents excel at repetitive, high-volume, rules-based tasks – not at magic or universal reasoning. But that doesn’t mean they’re rigid.

Even with repeatable processes, there’s nuance – variations in how data shows up, in approval logic, in how exceptions are handled. Our agents, for instance, don’t flatten that complexity. They manage it by treating each nuance as a distinct workflow pattern. That’s how they stay versatile: by recognizing patterns, not just rules. Rules are guardrails that guide the workflow, ensure control, and let the system stay reliable even as the inputs shift (see the sketch after this list).

2. AI Agents augment, not eliminate, teams.

The best deployments keep expertise in the loop, automate the repetitive, and let people handle ambiguity or exceptions.

3. Measurable progress shows up in days, sometimes just hours – provided the customer brings time and focus: a well-defined business problem, clear KPIs, and joint work between tech and business leads.
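A rough sketch of point 1 – nuances handled as distinct workflow patterns, with rules acting as guardrails – might look like the following. The pattern registry, the scoring, and the guardrail rules are hypothetical, purely to illustrate the idea, not how our agents are implemented.

```python
# Illustrative sketch: route a document to the closest known workflow pattern,
# while hard rules act as guardrails regardless of which pattern is chosen.
# The patterns, scoring, and rules below are assumptions for the example.

PATTERNS = {
    "order_pdf_with_line_table": {"source": "pdf", "has_line_table": True},
    "order_email_free_text": {"source": "email", "has_line_table": False},
}

GUARDRAILS = [
    lambda doc: doc.get("currency") in {"EUR", "USD"} or "unsupported currency",
    lambda doc: doc.get("total", 0) < 1_000_000 or "amount above auto-approval limit",
]

def pick_pattern(doc: dict) -> str:
    """Choose the workflow pattern whose traits best match the document."""
    def score(traits: dict) -> int:
        return sum(doc.get(key) == value for key, value in traits.items())
    return max(PATTERNS, key=lambda name: score(PATTERNS[name]))

def run(doc: dict) -> str:
    """Guardrails first; only then route by pattern."""
    for rule in GUARDRAILS:
        verdict = rule(doc)
        if verdict is not True:
            return f"escalate to human: {verdict}"
    return f"process via pattern '{pick_pattern(doc)}'"

print(run({"source": "email", "has_line_table": False, "currency": "EUR", "total": 12_500}))
```

Patterns keep the agent flexible about how work arrives; guardrails keep the outcome controlled no matter which pattern fires.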

From boardroom lessons learned

Ask for use cases, not magic fixes

Press startup founders for details - what processes, under what data conditions, with what guardrails?


The fastest way to cut through the fog is to stop asking, “Can it do X?” and start asking, “Where has it already done X – and how?” A simple checklist of real-world criteria can help you separate working solutions from wishful thinking.

When a founder says their AI agent can handle invoice matching or warranty claims, don’t just nod. Ask what inputs it’s seen before. Ask how it handles exceptions. Ask what breaks it and what best practices they recommend based on past deployments.

The difference between a prototype and a production-ready agent usually shows up in the details: Does it work with PDFs and emails? What happens when the data is missing? Do we need to change our processes? How long does it take to adapt to a new supplier or format? Real answers here are way more valuable than slide decks or aspirational roadmaps.

Celebrate “boring” success

The AI agents that quietly process thousands of invoices, normalize data across languages and formats, or cut error rates by 30% – these are the ones that fuel measurable ROI.

One of my favorite examples is a PO maintenance use case. We weren’t doing anything headline-grabbing – just making sure supplier confirmations were captured accurately and reflected in the ERP in real time. But it cut manual tracking by over 90%, freed up the team, and actually helped improve delivery timelines. No heroics, just a smoother week.  

Empower champions, not skeptics or cheerleaders

Real change comes from those who dive into pilots, work through process hurdles, and shape the strategy as the solution matures.


We saw this play out in a warranty recovery project. The team lead just quietly started pulling together edge cases and feeding them into the system. Over time, we surfaced a ton of missed reimbursement opportunities. That person didn’t just test a tool – they helped shape it. And in the process, they unlocked real revenue and optimization opportunities that had been slipping through the cracks.

AI agents aren’t Harry Potter, nor are they “just more chatbot hype.” They’re powerful new digital teammates - if you cut through exaggeration and focus on genuine business fit, patient iteration, and close partnership between humans and AI.

The companies who win are those who temper excitement with realism, and skepticism with open-minded experimentation. That’s where real innovation happens.

In a nutshell

This article explores the pitfalls of the AI agent hype cycle from the perspective of a founder deeply embedded in enterprise AI deployments. It challenges two common extremes in the market: the "magic seekers" who expect instant transformation, and the "burned skeptics" who’ve given up after failed pilots.

The author breaks down two major overpromises – instant team replacement and unlimited adaptability – and uses real-world supply chain implementations to show why success depends on specialization, context, and human alignment.

Key takeaways include:

  • AI transformation is incremental and grounded in practical wins, not sweeping overhauls.
  • Domain-trained, AI-powered agents can handle complexity, but only when workflows are well-understood and context is captured from teams.
  • Generic, plug-and-play solutions often fail because they ignore cultural blockers and fragmented processes.
  • Companies that succeed treat AI agents as digital teammates - starting small, iterating fast, and celebrating "boring" success.

Ultimately, real innovation comes not from hype, but from strategy, trust, and well-designed AI-human collaboration.

ABOUT THE AUTHOR
Tomaz Suklje

Tomaz is the Co-founder and Co-CEO of Nordoon, a company building AI Agents to automate and optimize non-EDI transactions across supply chains. He holds a PhD in Mechanical Engineering and has lectured at academic institutions including MIT. Prior to Nordoon, he held leadership roles such as CRO at Qlector, CEO & Cofounder of Senzemo, and Co-founder of AgriSense.

Enjoyed this read?

Subscribe to our newsletter, and we will send AI automation insights like this straight to your inbox on a regular basis.
