Why 95% of AI Projects Fail, And What Day One Should Actually Look Like



This article originally appeared on Edge Signals – Bart Lehane’s LinkedIn newsletter on customer experience, analytics, and AI. Follow for future insights like this!

Most AI projects fail not because the technology doesn’t work, but because organisations set the wrong expectations and build the wrong way. This article examines why 95% of AI pilots deliver zero measurable revenue impact, why building your own AI makes failure three times more likely, and what genuinely effective AI implementation looks like from day one.

You’ll learn:

✔ Why the AI failure rate has quietly reached crisis levels, and what the data actually says.
✔ How the ease of building prototypes creates a false sense of progress that stalls real deployment.
✔ What separates specialist AI vendors from generic models when it comes to delivering fast, measurable value.
✔ The one question you should ask every AI vendor before signing anything.

A software company came to us not long ago with what looked, on the surface, like a healthy support operation. Metrics were strong across the board: response times, resolution rates, customer satisfaction scores, even first contact resolution rates. They had a slight suspicion that 'product activation' requests were running a little high, but that was it.

On the first day of going live on EdgeTier, our auto-tagging clustered and labelled every product activation-related conversation. We pointed our Spotlight feature at that group and it surfaced a clear pattern: a manual step required to activate the product was generating a high volume of contacts, and it simply shouldn’t have existed. The day-one output gave the company the evidence it needed to make a simple process change, and that change resulted in thousands fewer contacts.

The insight wasn’t hidden in some obscure corner of their data. It was sitting right there, loud and clear. It just needed something that could actually read the conversations, not just count them. When we quantified for the team how much collective time was disappearing into something entirely fixable, they could act. And they did.

That’s what day one should feel like when deploying an AI tool.

The Numbers Are Not Kind: AI Implementation Is Failing at Scale

Most companies don’t get that experience.

MIT’s NANDA initiative analysed over 300 AI deployments and found that 95% of generative AI pilots delivered zero measurable impact on revenue. Not modest returns. Zero. S&P Global surveyed over 1,000 companies and found that 42% had abandoned most of their AI initiatives, up from 17% the year before.

Sit with those numbers for a second. In the space of a year, the share of companies quietly shelving their AI programmes more than doubled.

Closer to home, research published last month found that Irish enterprises have collectively written off an estimated €720 million on AI projects that delivered nothing usable. The average large Irish organisation has lost around €770,000 on initiatives that simply went nowhere, and nearly every one of them (99%) had experienced at least one failed AI project. Most of the people running those projects knew something wasn't working. A lot of them just didn't feel they could say so.

Very little of this is because AI is a fraud. The technology works. The problem is something more fundamental, and more avoidable.

Building Your Own AI Makes Failure Three Times More Likely

MIT's research made this clear: when companies bought AI from specialist vendors, they succeeded roughly 67% of the time. When they built their own, they succeeded only around a third as often. And yet almost everywhere the researchers looked, enterprises were still building their own AI.

The reason the failure rate is so high is straightforward. A generic AI model arrives knowing nothing about your business. It doesn’t know your customers, your language, your edge cases, or why a spike on a Tuesday in October means something completely different in your world than it does anywhere else. Someone has to spend the next 12 to 18 months teaching it all of that, and by the time it’s actually useful, half the organisation has lost faith in it.

Spinning up a prototype has also never been simpler. In a few weeks you can create a convincing demo that will impress a roomful of stakeholders. But a convincing demo and a production-ready tool are two very different things. Prototypes don’t maintain themselves, scale themselves, or handle the hundred edge cases that inevitably emerge. Data scientists want to move on to the next project, and so the impact disappears.

What Day-One Value Actually Looks Like in AI Implementation

At EdgeTier, we have a simple standard: on day one, we’ll tell you something about your customers that you didn’t know. But that’s only possible because EdgeTier already understands what it’s looking at — not “customer experience” in the abstract, but the way your customers actually talk.

Sonar, for example, doesn’t need to be told what to look for. It detects emerging topics by developing a real-world understanding of your customers’ language: the phrases they use, the patterns that recur, the signals that mean something in your operation versus the noise that doesn’t. The difference between a spike worth escalating and one that’s expected isn’t found by counting contacts — it’s found by reading and understanding them.

Most CX teams can spot the obvious fires over time. What tends to slip through are the smaller, persistent anomalies sitting just below whatever threshold anyone is watching. Those are usually the ones that quietly compound into something far more expensive. Would you trust a generic AI, one that arrived knowing nothing about your customers, your language, or your operation, to catch them on day one?

Hold Your AI Vendors to a Higher Standard

EdgeTier sits at an interesting place in the market. We’re a scaling company without the decades of brand recognition that large enterprise vendors have built up, which means we don’t have 12 months of brand credit to burn while a client waits to see results. When we go live with a customer, we need to deliver something meaningful immediately, not because it makes for a good story, but because our business depends on it and our customers deserve it.

That is the standard you should hold all your AI vendors to, whether large or small.

Think about the pilots that never reach production. The tools that were supposed to be transformative, bought by people who are still waiting to feel transformed. The 95% of AI projects that never impacted revenue. Most of this happens not because the technology or its potential is lacking, but because the bar for what counts as "good enough" has been set embarrassingly low.

That needs to change. In the CX space, if an AI tool can’t tell you something meaningful about your customers on day one, it isn’t ready. And you shouldn’t have to wait for it to be.

So when you’re evaluating your next AI vendor, ask them one simple question: what impact will you have on day one? If the answer is vague, the results are likely to follow suit.

Continue Reading & Subscribe

🔗 Read the full article on Bart Lehane’s LinkedIn post for examples, stories, and community discussion.

Bart Lehane is the Co-founder & CCO at EdgeTier and a PhD engineer who’s spent 20+ years building and delivering advanced technology. His background spans applied research, software development, and product management. His interests lie at the intersection of CX, AI, and tech.

Customer-Focused Leaders Trust EdgeTier

  • Berlin Brands Group

    "I specifically liked the flexibility. I liked the can-do attitude. I always felt supported. There hasn’t been any single point in our journey where EdgeTier has said no."

  • Kaizen Gaming

    "EdgeTier is really shining when it comes to responsible gambling. We can proactively track critical issues and take actions, reducing human error."

  • Codere

    "We now have a highly detailed understanding of agent performance, not just on key agent metrics, but also on how customers react to our agents and the emotions our customers feel when talking to our team."


Ready to see results?

Let us help your company go from reactive to proactive customer support.

Unlock AI Insights