The overlooked risk of using LLMs to make important decisions

It’s tempting to solve the problem of messy text by dropping it into an LLM. Need to digest a 50-page submission pack, or a pile of market research transcripts? Just summarise it. But there’s a risk here most teams overlook. Summarisation is, by design, a process of reduction. It flattens out nuance, context and — critically — the rare, outlier signals that often matter most.

An LLM might give you a neat bullet list of themes. But what gets left out? Maybe it skips the cautious phrase from a broker that correlates with litigation later. Or the subtle shift in healthcare professional (HCP) language that precedes adoption hesitancy.

Large language models are trained to reflect back human expectations. They’re optimised to sound plausible, even pleasing. That makes them fantastic for drafting content, but risky if you want to uncover signals that challenge assumptions or reveal what’s truly predictive.

If you’re only summarising, you’re compressing the very data that could differentiate your strategy. The better move? Use models that statistically test language patterns against real outcomes. That way, instead of getting a surface-level digest, you get evidence on which words, phrases, and tones actually change your results.
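
As a rough illustration only, here is a minimal sketch of what "testing language patterns against outcomes" can look like in practice. It assumes Python with scikit-learn, and the broker notes and litigation flags are entirely hypothetical; a real project would use far more data and more careful modelling.

```python
# Minimal sketch: test which phrases are statistically associated with a known
# outcome, instead of relying on a summary of the text.
# Hypothetical data throughout; assumes scikit-learn is installed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

# One document per broker note, with a known outcome
# (1 = later litigation, 0 = no litigation). Illustrative examples only.
texts = [
    "we are broadly comfortable with the terms as presented",
    "subject to further review of the disclosure schedule",
    "the client has some reservations about the exclusions",
    "happy to bind on the current wording",
]
outcomes = [0, 1, 1, 0]

# Turn raw text into unigram and bigram counts.
vectoriser = CountVectorizer(ngram_range=(1, 2), min_df=1)
X = vectoriser.fit_transform(texts)

# Chi-squared test: which phrases co-occur with the outcome more than chance?
scores, p_values = chi2(X, outcomes)
ranked = sorted(
    zip(vectoriser.get_feature_names_out(), scores, p_values),
    key=lambda item: item[1],
    reverse=True,
)
for phrase, score, p in ranked[:10]:
    print(f"{phrase!r}: chi2={score:.2f}, p={p:.3f}")
```

The point is not this particular test, but the direction of travel: the text is scored against real outcomes, so rare but predictive phrases surface rather than being averaged into a summary.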

Because sometimes, what matters most is what an LLM would have smoothed away.
