Rebuilding Lost Trust in GenAI – Part 3.

This is an AI article that is not about AI.

We’ve already tried it. It didn’t work.

I’m hearing this more and more often, across different organisations and teams.

Some have subscribed to AI packages that sit unused. Others keep adding new tools, but none of them create meaningful change. And in some cases, months go into building custom solutions, only for a ready-made product to launch at a fraction of the cost. Sometimes the team realises that a well-crafted prompt could have solved the problem from the start.

At that point, the question is fair: how do we measure return on investment?

Sometimes everything was well planned and still didn’t deliver the expected results. Did we actually want performance to improve, or did we adopt it because it was trendy?

You’re not alone. Most organisations struggle with exactly these dilemmas. The world of AI is changing faster than most companies can decide and adapt. Today’s custom development is tomorrow’s basic feature. Size doesn’t matter. Investment doesn’t matter. Anyone building in this space has to continuously rethink direction. Resilience and agility are more relevant today than ever.

These stories have one thing in common: the gap between what we expected from AI and what the organisation was actually ready or willing to handle. Most of the time, AI isn’t the problem. AI is just a tool. It hallucinates, but we knew that when we started. The real issue is that there was no clear goal, no preparation, or no real understanding of what it’s good for and what it isn’t. Often, the obstacles aren’t technological at all, but organisational and mindset-related.

And that’s what this article is about.

This is not a new story

Anyone who’s been in the industry long enough will recognise the pattern.

The exact same thing happened with RPA. The promise was huge: automate repetitive work, replace manual steps, reduce costs. In reality, RPA was extremely rigid. It clicked exactly where you told it to click and typed exactly what you told it to type, but everything depended on the screen looking exactly the way the bot expected. One interface update, one button moved, and the whole thing broke. Over time, maintenance often cost more than the original build. But no one talked about that upfront; in the demo, everything worked perfectly.

Blockchain followed a similar path. It’s good somewhere, for something, but not everywhere and not for everything, just like agile methodologies. Cloud adoption was ultimately more successful, but even there it took years to learn what it’s good for and what it isn’t. Many large enterprises and government organisations are still cautious today, and so far no universal solution has been found for that caution. Maybe there isn’t one. Maybe there never will be.

The pattern is always the same: new technology, big expectations, poorly informed decisions, and an organisation that wasn’t prepared to make the technology its own.

With AI, the promise is bigger than with any previous technology, which also means the disappointment is deeper if we get it wrong.

What can go wrong?

Behind most failed AI projects, the problem is not technological. I see the same four patterns again and again.

There was no clear problem to solve.

“Let’s introduce AI” is not a problem statement. That’s like saying, “Let’s introduce Excel.” For what? For whom? For which task? If there’s no concrete, clearly articulated problem, there’s nothing to build a solution for. Until there is, you don’t need to develop anything. A subscription to an AI service (Claude, ChatGPT, Gemini, or any other) is enough to experiment and understand what it’s good for and what it isn’t. That experimentation alone is valuable, because it helps define where the real problems are.

The scope was too big.

“Let’s digitalise the company with AI.” “Let’s automate the department.” Digitalisation and automation are multi-month or multi-year projects even without AI. As discussed in earlier articles, an AI solution solves one task. Then comes the next task, which may need something else. If everything is in scope, nothing is in focus. This isn’t a limitation of the technology; it’s its nature.

AI was treated like traditional software.

Buy it, install it, it works. But AI doesn’t work like an ERP system that you implement once and run for ten years. Models constantly change in the background. Providers release new versions, often without notifying users, but users notice anyway: suddenly the answers are different, the behaviour changes, the quality shifts, even though they changed nothing. There are ways to manage this, but nothing lasts forever. Model lifecycles are finite; typically, models are phased out within 18–24 months. An AI solution is tightly coupled to the model, because the prompt is essentially the job description of our “task assistant.” Just as you’d give a different job description to a person with different skills, different models require different prompts. When the model changes, the solution must be re-tuned.
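One of those ways to manage it, as a minimal sketch: pin a dated model snapshot instead of a floating alias, so the model underneath your prompts only changes when you decide it should. The snippet assumes the OpenAI Python SDK and a specific snapshot name purely for illustration; the same idea applies to any provider that exposes versioned model names.

```python
# Minimal sketch: pin a dated model snapshot instead of a floating alias.
# Assumes the OpenAI Python SDK (pip install openai); the snapshot name is
# an illustrative example, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PINNED_MODEL = "gpt-4o-2024-08-06"  # dated snapshot: behaviour stays fixed
# model="gpt-4o" would be a floating alias that can change underneath you

response = client.chat.completions.create(
    model=PINNED_MODEL,
    messages=[
        {"role": "system", "content": "You summarise support tickets in two sentences."},
        {"role": "user", "content": "Customer reports login failures since the last update."},
    ],
)
print(response.choices[0].message.content)
```

Pinning doesn’t stop the clock; snapshots are eventually deprecated too. But it turns a surprise change into a planned migration: you re-test your prompts against the new snapshot on your schedule, not the provider’s.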

And if you’re thinking about on-prem solutions, where you control the model and no one pulls it out from under you? Six months later, a new model may arrive that halves runtime, is smarter, and requires less hardware. The question isn’t if things will change, but when, how you handle it, and whether you’re prepared.

The uncomfortable questions were skipped.

This is the biggest obstacle I see in AI projects, and it has two sides. On one hand, we always want something new: a new phone, new software, new technology. But what percentage of our phone’s capabilities do we actually use? Of our AI subscription? Or even Excel? When was the last time you used a different setting on your microwave than the “+30 seconds” button? Before bringing in another tool, it’s worth asking whether we even understand the ones we already have. AI is easy to talk to, so everyone naturally wants to “understand” it. But this is a profession. No one becomes an expert overnight, no matter how many “AI expert” PDFs they share on LinkedIn.

The other side of the coin is when the real question isn’t the tools, but the process itself. In organisations, it’s very hard to say, “This process is bad.” Because once you say it, it turns out that someone designed it, or someone approved it. Nobody wants to point fingers. But if we want to introduce AI, we first need to examine what we want to automate and why. Should this task even exist? Could it be done more simply? Do we need technology at all, or just a better process? Until we ask these questions, we’re just layering AI on top of a bad process, and the result will be bad as well.

The magic number of users

So far, we’ve talked about how projects start off wrong. But there’s a deeper layer: the organisation itself, the environment in which the project takes place.

In the previous part, I wrote that the level of the solution is determined by how many people will use it. The more users, the more serious the project. But the reverse is also true. What matters is not the size of the company, but the number of users who will directly or indirectly use it. Sometimes it’s enough to think in terms of one team; you don’t have to roll something out to 50+ people right away. If you understand this, you’re already ahead of half the organisations out there.

A freelancer, a microbusiness, or a small team can implement a working AI solution in days. Processes are simple, we know firsthand what we want to use it for, and there’s no need to go around collecting approvals. When it’s time to implement, there are no approval chains or affected departments, error tolerance is high, and fixes are immediate. A prompt library, a simple workflow, one or two AI subscriptions, and suddenly there’s a toolkit that works. Everyone knows which tool is good for what and where it fits. With AI, a small team can achieve performance levels that used to be typical only of much larger organisations. Regulation, GDPR, and the EU AI Act apply to them too, of course, but there are fewer processes to align, so they can move more flexibly.
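To make “prompt library” concrete, here is a minimal sketch of what a small team’s version might look like: named, parameterised templates kept in one place under version control. Every name and template below is an illustrative assumption, not a prescribed structure.

```python
# Minimal sketch of a team prompt library: named, parameterised templates
# kept in one file under version control. All names are illustrative.
from string import Template

PROMPTS = {
    "summarise_ticket": Template(
        "Summarise the following support ticket in two sentences, "
        "flagging anything that needs escalation:\n\n$ticket_text"
    ),
    "draft_reply": Template(
        "Draft a polite reply to this customer email in $language. "
        "Keep it under 120 words:\n\n$email_text"
    ),
}

def render(name: str, **fields: str) -> str:
    """Look up a template by name and fill in its fields."""
    return PROMPTS[name].substitute(**fields)

# Usage: paste the rendered prompt into whichever AI subscription the team uses.
print(render("summarise_ticket", ticket_text="Login fails after the latest update."))
```

The point isn’t the code; it’s that the team’s accumulated prompt knowledge lives in one reviewable place instead of in individual chat histories.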

In small and medium-sized enterprises, a new kind of obstacle appears. As they grow, habits form that no one ever has time to revisit. “We’ve always done it this way” processes, knowledge that lives only in people’s heads and was never documented, semi-automated workarounds someone built at some point and everyone has relied on ever since. AI adoption here isn’t hard because the technology is complex, but because the organisation isn’t used to reviewing how it works. This is a general problem, and the issues it creates can’t be solved with AI.

In large enterprises, this phenomenon is strongest. Departments develop their own processes, interdepartmental collaboration follows established paths, and any change that affects multiple teams is not just a technological issue but an organisational one. What was an acceptable project timeline a few years ago is often too slow today. By the time the result arrives, it may no longer be relevant. Technology brings changes quarterly; organisational cycles are much slower.

AI technology is advancing orders of magnitude faster than organisations can currently adapt. This isn’t a flaw; it’s a given. Organisations work this way, and that in itself isn’t bad. But you need to be aware of it, and few handle it well. Those who recognise it can adapt. Those who don’t will always fall behind and always blame AI.

There is, however, a practical way forward. In most companies, there are teams where people do roughly similar work. In these teams (typically around ten people or fewer), AI adoption can be fast, without disrupting the whole system. The team works better; the rest of the organisation barely notices. Ideally, cross-departmental processes would also be reviewed, but then the project becomes its own obstacle due to size. It’s better to start small, gain experience, and then scale, while complying with internal policies and compliance requirements. Even in stricter organisations, there are ways to introduce data-secure, even on-demand solutions relatively quickly.

Look at how the big players do it. Goldman Sachs spent six months working with embedded Anthropic engineers to automate trading ledger and client onboarding processes. They didn’t buy an off-the-shelf product; they co-developed, tested, and fine-tuned. Morgan Stanley has been building internal AI tools in partnership with OpenAI since 2021, and today nearly half of its almost 80,000 employees use them daily. Years of continuous work and iteration. But in both cases, they started at the team level, not by trying to transform the entire organisation at once.

Every change takes time. Time that usually doesn’t exist.

When colleagues are stretched thin, when sick leave or vacation is hard to cover, when it’s barely possible to serve one more client: where does the time for change come from? Not just for AI adoption, but for any change.

Listening to team feedback takes someone’s time. Reviewing a process becomes someone’s responsibility. Deciding not to deal with it is also a decision, just an invisible one and usually more expensive than we think. Whatever we choose, it takes time away from operational work.

I often say that if I want to buy a winter coat, I learn from Scandinavians, not Mediterranean people. In the same way, it is worth looking at how a good product manager works. Someone whose job is to manage change on a daily basis. They constantly monitor feedback, regularly stop, look back, and adjust. They scan the market, understand users and read metrics. This is part of their job. Part of their working time.

The actual effort is often underestimated. It’s not enough to buy or build a solution. Software or GenAI, it doesn’t matter. Colleagues have to learn how to use it, and that’s not just watching a training session. They need to understand what it’s good for, when it gives bad results, and how it fits into their daily work. That takes practice, which initially slows work down instead of speeding it up. You have to allow time for this, otherwise the tool will just sit on the shelf. That’s exactly what happens in many places: they introduce it, don’t support it, and then wonder why no one uses it. That’s why it’s important to accept from the start that ROI isn’t only measurable in numbers. There’s direct impact, such as minutes saved and steps reduced, and indirect impact reflected in team workload, error rates, or even job satisfaction.
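As a back-of-the-envelope sketch of the direct side (every figure below is an invented placeholder): small per-task savings compound quickly, while the indirect side resists this kind of arithmetic.

```python
# Back-of-the-envelope direct ROI. All figures are invented placeholders.
minutes_saved_per_task = 6        # e.g. drafting a reply with AI vs. by hand
tasks_per_person_per_week = 40
team_size = 8

hours_saved_per_week = (
    minutes_saved_per_task * tasks_per_person_per_week * team_size / 60
)
print(f"{hours_saved_per_week:.0f} hours/week")  # 32 hours/week, roughly 0.8 FTE

# The indirect impact (workload, error rates, job satisfaction) has no such
# formula; it shows up in team feedback and retrospectives, not spreadsheets.
```

Numbers like these justify the investment on paper; the learning time described above is what makes them real.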

Besides time, there’s another aspect we talk about less. Every choice is also letting go of something. That’s obvious, but sometimes it’s worth reminding ourselves. If we introduce a new way of working, we need to understand what must be abandoned for the new way to succeed. Those who introduce the new but cling to the old will most likely succeed at neither. Chase two rabbits…

And sometimes, the solution isn’t AI at all. Sometimes it’s removing an unnecessary approval step, merging two steps, or finally writing down what previously existed only in people’s heads. If the underlying process is bad, AI won’t fix it. It just adds another layer, and the problem remains underneath.

How can trust be rebuilt?

If you’ve had bad experiences, and six months ago you thought, “AI isn’t suitable for this,” that may no longer be true today. Since then, two new model generations have appeared.

But when restarting, the most important thing is not what AI can do today, but not repeating the same mistakes.

Start small. One team, one task, with clear expectations. Don’t measure whether “the organisation became smarter,” but whether the difference is felt in daily work – in minutes, steps, or error counts.

Don’t choose the technology first. First understand the problem, then look for a solution. You don’t go to the doctor saying, “Operate on me.” You understand the problem together and there may be a simpler solution.

Involve the right people, in the right order. If you do, approval is usually already in place by the time you need it. Bring in an expert or consultant; in a few hours, it becomes clear where and in what direction it’s worth thinking. Then come the colleagues who will use it in their daily work. If they don’t understand why it’s good for them, they won’t use it.

Be realistic. About time, money, and your organisation’s capacity. An AI project isn’t just development, especially not a GenAI project, which deeply affects processes and data. There’s training, testing, fine-tuning, and maintenance, and for all of this to work, you need your colleagues. They’re the ones who can tell whether the AI answers well, where it fails, and when it doesn’t meet expectations. Let’s not forget: we’re not pressing buttons here. With AI, we ask, instruct, delegate. Essentially, we get an assistant that supports our work on a small or large scale. For that, we need to know what we want from it, and we need the space to learn how it works.

The problem is not AI

Most organisations that are disappointed are not disappointed in AI itself. They’re disappointed in what they expected from it and in the way they approached it, which could never have delivered what they expected.

The technology works. It works better and better. But it only creates value if we know what we want to use it for, are willing to change our own processes, and realistically understand what it can and cannot do.

If after the sixth subscription you’re thinking about a seventh, what you don’t need is a seventh subscription, but a different way of thinking.

Rebuilding trust isn’t about AI being better now. It’s about us being more experienced. We need to know where we went wrong and how to do it better.

Not sure whether AI is the right solution for your challenge? A short conversation can save weeks of experimentation.

Our AI Enablement program is designed to help teams clarify their challenges and discover where AI can truly create value.

Get in touch with our team:
https://attrecto.com/contact
