Agent QA at a glance: Heat-map analytics
We've just shipped a scorecard heat-map to the Reviews screen in Coach. This release will give you a team-wide view
We’ve shipped a set of improvements to Spotlight that significantly increase the number of conversations it can analyse in a single run, helping you get a more representative view of what’s happening. Alongside this, we've also fine-tuned our Experience Score.

Spotlight is a core AI feature on the EdgeTier platform. Instead of reviewing interactions one by one, Spotlight Summaries let you analyse multiple conversations at once, highlighting recurring issues, agent performance trends, and customer frustrations, without the manual effort.
The AI processes message content in two steps. If you’re interested in more information on how Spotlight works, please refer to our existing documentation page.
Previously, Spotlight typically analysed a sample of 100 interactions. It now analyses roughly 1,500–2,000 interactions per run, so you can capture insight across far more conversations without needing multiple runs.
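EdgeTier hasn't published how Spotlight selects its sample, so as a rough illustration only, here is a minimal Python sketch of capped uniform sampling. The function name, cap, and seeding behaviour are all assumptions, not the actual implementation.

```python
import random


def sample_interactions(interactions, cap=2000, seed=None):
    """Return a random sample of at most `cap` interactions.

    Hypothetical helper for illustration: if the pool fits under the
    cap, everything is analysed; otherwise a uniform random subset is
    drawn so the result stays representative of the whole pool.
    """
    rng = random.Random(seed)
    pool = list(interactions)
    if len(pool) <= cap:
        return pool
    return rng.sample(pool, cap)


# Usage: 10,000 candidate interactions, sampled down to the cap.
ids = range(10_000)
subset = sample_interactions(ids, cap=2000, seed=42)
print(len(subset))  # 2000
```

With a fixed seed the sample is reproducible between runs, which is handy when comparing two Spotlight-style analyses over the same period.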
To support the larger sample size, we rebuilt parts of Spotlight’s processing pipeline to handle more data efficiently and reliably. As a result, you may see a small increase in processing time in some cases — around 4–6 seconds — depending on the size and shape of the dataset.
We’ve also improved the prompts behind Spotlight to surface more detailed and nuanced root-cause information, helping Spotlight better explain the “why” behind trends. This works best when Spotlight is applied to a focused subset of interactions, such as when you’ve already narrowed down by a topic, spike, or trend.
Spotlight still performs best on smaller, targeted sets of conversations, and it can now cover many more interactions within those focused views. If you have any queries about this update or about optimising Spotlight, please reach out to Customer Success.
You can find out more about our Experience Score calculations here.
We’re making a small update to how Experience Score is shown at the individual interaction level: scores on single interactions will now display as whole numbers (e.g. 7 rather than 7.23).
Important: Averages stay the same across the product. We’ll still calculate them the same way and display them to 1 decimal place (e.g. 7.2), so team and agent aggregate reporting retains its granularity and remains easy to compare.

Why: We’ve had customer questions about how to meaningfully interpret small decimal differences on single interactions (e.g. “what’s the difference between 7.23 and 7.66?”), and in practice the difference isn’t significant. Whole numbers align better with how NPS/CSAT-style scores are typically understood.
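The display rule above can be sketched in a few lines of Python. This is an illustration of the rounding behaviour described, not EdgeTier's actual code; the function names are hypothetical, and note that Python's built-in round uses banker's rounding for exact .5 ties.

```python
def display_interaction_score(score: float) -> int:
    """A single interaction's Experience Score shows as a whole number."""
    return round(score)  # note: round(6.5) == 6 (ties round to even)


def display_average_score(scores) -> float:
    """Aggregate averages keep one decimal place of granularity."""
    return round(sum(scores) / len(scores), 1)


scores = [7.23, 7.66, 6.84]
print([display_interaction_score(s) for s in scores])  # [7, 8, 7]
print(display_average_score(scores))                   # 7.2
```

The point of the example: individual displays lose the hard-to-interpret decimals, while the average is still computed from the full-precision values, so aggregate comparisons keep their granularity.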
We've had a busy start to 2026! We've written a full roundup; click the button below to read more.
Up until now, any new phrase tags were only added to interactions that occurred after the tag was created or
"We thought at the time that we were putting the customer at the fore. We thought we were doing things right. But in hindsight, we really weren’t because we had no real-time insights whatsoever into customer issues."
"It has reduced the time for the quality assurance process as it provides clear data and a very robust direction on where to look and what matters the most."
"You’ve got an issue, but you don’t know how many people are affected. You don’t know the scale. You don’t even know if it’s real."



Let us help your company go from reactive to proactive customer support.
Unlock AI Insights