Your AI doesn't need to get smarter, it needs to remember

Tomaz Suklje
November 6, 2025

If it feels like every company around you is “doing something with AI,” you’re not wrong. The tools are everywhere, the excitement is real, but most of that activity hasn’t translated into real transformation.

Everyone talks about “model performance.” Few talk about context. Yet, MIT’s “State of AI in Business 2025” report found that context, not algorithms, separates the 5% of companies seeing transformation from the 95% still stuck in pilots.

One way or another, most companies have been testing all sorts of AI tools. Some 80% have tried ChatGPT or Copilot, yet enterprise-grade systems, whether custom-built or vendor-sold, are quietly being rejected. We’ve blamed it on bad algorithms, slow IT, or missing use cases. In truth, it comes down to AI systems that don’t learn the way companies need them to. They don’t remember what matters, who the user is, how the workflow runs, or what success looks like. MIT calls this gap between output and understanding the GenAI Divide.

When good models still fail

A well-trained model can write, summarize, and analyze faster than any human. But in enterprise operations, speed without understanding quickly becomes noise.

An LLM (large language model) can generate the right paragraph, but it doesn’t know whether that paragraph fits your procurement approval flow, your customer policy, or your supplier terms.  

It's like hiring a genius intern who never remembers what you told them yesterday. Every morning, you start over and explain the same steps, sharing the same context, correcting the same mistakes. After a week, the intern still writes good sentences but misses the point of the job.

That’s the difference between model performance and contextual adaptation.


Why learning beats intelligence


The MIT research found that 95% of enterprise AI projects fail to scale, even when model performance is excellent. The reason is something many AI solutions consistently lack: learning and memory.

In real workflows, value doesn’t come from what the AI can do once. It comes from what it remembers and improves over time. So, it’s less about how smart the model is and more about how fast it learns from us.

MIT’s research shows this clearly:

- Enterprises with learning-capable systems reached deployment twice as fast as those relying on static tools.

- Two-thirds of executives said they want AI that learns from feedback and retains context.

- And companies that built such systems cut external spend by up to 30%, not by downsizing, but by replacing costly outsourcing with internal capabilities that improved week by week.


In simpler terms, systems that learn get cheaper, faster, and smarter the longer they run. Systems that don’t learn decay, as their value erodes with every exception and change in process. That’s why context is a compounding value mechanism.

Three building blocks of contextual AI


Contextual AI rests on three simple but powerful principles: memory, feedback, and fluency.

#1 Memory - the foundation of trust


When an AI remembers past interactions, it earns trust. Think of it like a colleague who recalls last month’s discussion about supplier terms. You don’t have to repeat yourself, you can pick up where you left off.


That’s what business users mean when they say “it just works.” They’re not praising intelligence; they’re praising continuity. Continuity builds confidence, and confidence is the currency of adoption.

MIT calls this the “learning gap”, the biggest barrier to enterprise scale. Tools that forget lose users. Tools that remember retain them.
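
For the technically inclined, the mechanism is simple enough to sketch. The snippet below is a toy illustration, not Nordoon’s implementation: the file name and helper functions are made up, and a production system would use a proper database or vector store. The point is only that a fact gets stored once and then travels with every future prompt:

```python
import json
from pathlib import Path

# Hypothetical on-disk store; a real system would use a database or vector store.
MEMORY_FILE = Path("agent_memory.json")

def load_memory(user_id: str) -> list[str]:
    """Return remembered facts for a user, or an empty list on the first run."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text()).get(user_id, [])
    return []

def remember(user_id: str, fact: str) -> None:
    """Persist a fact, e.g. a supplier term the user explained once."""
    memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    memory.setdefault(user_id, []).append(fact)
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def build_prompt(user_id: str, question: str) -> str:
    """Prepend remembered context so the model picks up where the user left off."""
    facts = load_memory(user_id)
    context = "\n".join(f"- {f}" for f in facts) or "(no prior context)"
    return f"Known context for this user:\n{context}\n\nQuestion: {question}"

remember("controller_01", "Supplier Acme invoices are paid on net-30 terms")
print(build_prompt("controller_01", "Can we accept Acme's proposed net-60 terms?"))
```

Run it twice and the second session already “knows” what the first one was told. That continuity is exactly what users read as trust.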


#2 Feedback loops - an engine for improvement


Feedback loops are how AI learns from the business, not just from data but from people.

Every correction, approval, or rejection tells the system what “good” looks like. Without feedback, AI stays generic. With it, the system starts reflecting your company’s DNA: your terminology, your exceptions, your workflows.

As MIT’s research notes, organizations that embed feedback loops into their deployments move from pilots to full-scale rollouts in 90 days instead of 9 months.
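
Here is an equally rough sketch of the idea, with all names hypothetical: every correction a user makes is recorded and folded back into the next prompt as an example of what “good” looks like.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackStore:
    """Collects user verdicts so future runs reflect what 'good' looks like."""
    examples: list[dict] = field(default_factory=list)

    def record(self, output: str, verdict: str, correction: str | None = None) -> None:
        # verdict is one of: "approved", "rejected", "corrected"
        self.examples.append({"output": output, "verdict": verdict, "correction": correction})

    def as_few_shot(self, limit: int = 5) -> str:
        """Turn the most recent corrections into examples the next prompt can reuse."""
        corrected = [e for e in self.examples if e["verdict"] == "corrected"][-limit:]
        return "\n\n".join(
            f"Draft: {e['output']}\nPreferred: {e['correction']}" for e in corrected
        )

store = FeedbackStore()
store.record(
    "Payment terms: net 60",
    verdict="corrected",
    correction="Payment terms: net 30, per the Acme framework agreement",
)
print(store.as_few_shot())
```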


#3 Workflow fluency - the language of the enterprise


AI doesn’t create value in isolation. It creates value when it moves inside the process,  where the data lives, where approvals happen, where results are measured.


That’s what the MIT report calls workflow integration. Tools that live outside the flow (like chatbots or separate dashboards) rarely scale, because they demand constant human translation.

Imagine a financial controller copying numbers between systems just to get an AI answer. They’ll stop using it by week two. But embed the same intelligence directly into their ERP or Outlook, and it becomes invisible, useful, and repeatable.

That’s workflow fluency: when AI speaks the same language as your process.
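
A simplified sketch of that difference follows; the connectors here are stubs with hypothetical names, where a real deployment would talk to an inbox API and an ERP. The agent reads supplier emails where they arrive and writes the result back where the business already looks for it, with no copy-pasting in between.

```python
def fetch_unread_supplier_emails() -> list[dict]:
    """Stub for an inbox connector (e.g. Microsoft Graph or IMAP in practice)."""
    return [{"po_number": "PO-1042", "body": "Delivery delayed, new ETA 2025-12-01"}]

def extract_delivery_update(body: str) -> dict:
    """Stub for an LLM extraction call; here a trivial parse for illustration."""
    return {"new_eta": body.split("new ETA ")[-1]}

def update_purchase_order(po_number: str, new_eta: str) -> None:
    """Stub for an ERP write-back; a real integration would call the ERP's API."""
    print(f"{po_number}: ETA updated to {new_eta}")

# The agent runs inside the workflow: inbox in, ERP out, no human translation.
for email in fetch_unread_supplier_emails():
    update = extract_delivery_update(email["body"])
    update_purchase_order(email["po_number"], update["new_eta"])
```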

When context becomes capability

IDC estimates that 90% of enterprise data is unstructured, a “gold mine routinely wasted”  because context is missing. Clean data without context still leads to wrong decisions.

That’s especially clear in supply chains, where the right number at the wrong time can still derail production or delay shipments. Quality matters, but context makes data actionable.

That’s why systems designed to learn must also be designed to remember. In Nordoon’s case, each AI Agent contributes to a shared Memory: a living knowledge base that grows with every workflow, review, email, and file the Agent touches. From that memory, Agents connect information within and across departments (procurement, logistics, finance, and operations), reason across threads, validate data, and act in context.

Instead of restarting from zero, these AI Agents build on what the organization already knows, turning scattered data into continuously improving operational intelligence.
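
As a toy illustration of that idea (again, not Nordoon’s actual architecture), picture a single shared store that several Agents write to and query across department boundaries:

```python
from collections import defaultdict

class SharedMemory:
    """One knowledge base that several agents read from and write to."""
    def __init__(self) -> None:
        self._facts: dict[str, list[str]] = defaultdict(list)  # department -> facts

    def add(self, department: str, fact: str) -> None:
        self._facts[department].append(fact)

    def query(self, keyword: str) -> list[str]:
        """Search across all departments, not just the asking agent's own."""
        return [
            fact
            for facts in self._facts.values()
            for fact in facts
            if keyword.lower() in fact.lower()
        ]

memory = SharedMemory()
memory.add("procurement", "Supplier Acme: PO-1042 confirmed, ETA 2025-12-01")
memory.add("finance", "Acme invoices are paid on net-30 terms")
print(memory.query("acme"))  # both departments' context is available to any agent
```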

From static tools to living systems

The next generation of enterprise AI won’t win by better text generation or higher benchmarks. It will win by constantly remembering, adapting, and improving. By being part of the workflow, not a visitor to it.

That’s what the MIT researchers meant when they said, “The core barrier to scaling is not infrastructure, regulation, or talent. It is learning.” The systems that cross the GenAI Divide are the ones that remember who you are, adapt to how you work, and learn from what works.

That memory is the beginning of judgment. The future of AI is about systems that develop judgment the way people do: by remembering, adapting, and learning from experience.

ABOUT THE AUTHOR
Tomaz Suklje

Tomaz is the Co-founder and Co-CEO of Nordoon, a company building supply-chain-specialized AI Agents that automate exception-heavy processes from inbox all the way to ERP, cleaning data on the go. He holds a PhD in Mechanical Engineering and has lectured at academic institutions including MIT. Prior to Nordoon, he held leadership roles such as CRO at Qlector, CEO & Co-founder of Senzemo, and Co-founder of AgriSense.

Enjoyed this read?

Subscribe to our newsletter, and we will send AI automation insights like this straight to your inbox on a regular basis.