“Most organizations today have shadow AI; they just haven’t found it yet.”
That’s Greg Thompson’s warning. A veteran of cybersecurity and risk, and former CISO at Manulife and Scotiabank, he’s seen what happens when employees rush to adopt unapproved AI tools in the name of productivity. “AI is just the newest face of shadow IT,” he says. “It’s not malicious; it’s driven by the same old forces: speed, cost, and convenience.”
And yet, those good intentions can open the door to data leakage, intellectual property exposure, and unreliable outputs that quietly shape decisions and reports.

Thompson points to one striking case: “We saw a government-commissioned report in Australia that had to be refunded because AI-generated errors made their way into official documents. That’s the kind of reputational hit no firm wants.”
But the real risk, he says, lies in what no one talks about: AI platforms that act as new ingress points on corporate networks. “Insecure APIs and third-party connections can create vulnerabilities that traditional controls aren’t yet tuned to catch.”
How to Spot It Before It Hurts You
When Thompson’s previous firm decided to map its own AI usage, the team didn’t start with fancy detection systems. They used their existing outbound web-filtering tools to capture traffic patterns and see which AI sites people were visiting.
“It gave us a snapshot — where AI was being used, how, and by whom,” he says. “We found not all of it was unsanctioned — many were licensed tools that had just never been managed as AI.”
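That kind of first pass can be surprisingly low-tech. As a rough sketch of the idea, the snippet below scans an exported web-filter or proxy log for requests to known AI domains. The CSV column names, the file name, and the domain list here are assumptions; adapt them to whatever your filtering product actually exports.

```python
import csv
from collections import Counter

# Hypothetical watch list of AI-related domains; extend it with
# whichever tools are relevant in your environment.
AI_DOMAINS = {
    "chatgpt.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
    "perplexity.ai",
}

def inventory_ai_usage(log_path: str) -> Counter:
    """Count requests to known AI domains per (user, domain) pair.

    Assumes a CSV export with 'user' and 'host' columns; adjust the
    field names to match your web filter's log schema.
    """
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            # Match the domain itself or any subdomain of it.
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                usage[(row["user"], host)] += 1
    return usage

if __name__ == "__main__":
    # "proxy_export.csv" is a placeholder for your filter's log export.
    for (user, host), count in inventory_ai_usage("proxy_export.csv").most_common(20):
        print(f"{user:20} {host:30} {count}")
```

Even a crude count like this is usually enough to show where AI use is concentrated, which is all a first snapshot needs to do.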
That early discovery helped the team move from reaction to control.
“We didn’t say we were blocking AI,” he adds. “We said we were controlling access to it. That made a huge difference in how people responded.”
Governance Without Fear
For Thompson, AI governance isn’t about adding more red tape — it’s about bringing legal, procurement, and security together early.
“Most of the risk is actually managed by your third-party legal and procurement teams, not by security,” he says. “They’re the ones who make sure the contracts and licenses protect you.”
But that only works if you’re asking the right questions.
“Your third parties will only answer the questions you raise,” he adds. “If your due diligence forms or vendor questionnaires don’t include AI-specific prompts, you’ll never surface those risks.”
His team addressed that gap by updating governance templates to explicitly flag AI use — everything from whether a vendor uses generative models to how they handle model outputs and data retention.
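What those AI-specific prompts look like will vary by organization, but as a hypothetical starter set, they might be captured in a reusable template like the one sketched below. The wording, field names, and structure are illustrative, not Thompson’s actual documents.

```python
# Illustrative AI-specific due-diligence prompts for a vendor
# questionnaire. These are example questions, not a standard;
# tailor them with legal and procurement.
AI_DUE_DILIGENCE = [
    {
        "id": "ai-01",
        "question": "Does your product or service use generative AI models?",
        "follow_up": "If yes, which models, and are they hosted by a third party?",
    },
    {
        "id": "ai-02",
        "question": "Is our data used to train or fine-tune any model?",
        "follow_up": "Describe opt-out mechanisms and contractual guarantees.",
    },
    {
        "id": "ai-03",
        "question": "How are model outputs handled, logged, and retained?",
        "follow_up": "State retention periods and deletion procedures.",
    },
    {
        "id": "ai-04",
        "question": "What external AI APIs or services does your product call?",
        "follow_up": "List third-party connections so new ingress points can be assessed.",
    },
]
```

The specifics matter less than the fact that the questions exist at all; as Thompson notes, third parties only answer what you ask.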
They also built a cross-functional intake process to review each AI use case — not to block it, but to understand the exposure points, especially around intellectual property.
The language mattered. Instead of “you can’t,” the message was “show us what you need — we’ll make it safe.”
Balancing Innovation and Control
When business leaders accuse IT or risk teams of “getting in the way,” Thompson flips the question back:
“How much risk are you introducing by moving ahead without oversight? If you’re aware of it and empowered to accept it, go ahead — but most leaders realize they haven’t thought it through.”
It’s a framing that changes the dynamic. “We’re not here to block innovation,” he says. “We’re here to help you innovate safely. Sometimes that means a bit of friction — but sometimes friction is required.”
What Leaders Should Do in the Next 90 Days
Thompson’s advice for executives trying to get ahead of shadow AI is pragmatic:
- Use what you already have. “Leverage your existing monitoring and data-loss prevention tools to see what’s happening. You’ll likely find more AI use than you expected.”
- Bring legal and procurement into the loop. “Shadow AI isn’t just a security problem; it’s a contracts and liability problem. Make sure your third-party risk questionnaires and governance docs actually ask the right AI-specific questions.”
- Communicate and normalize. “Run awareness campaigns. Encourage self-reporting instead of punishment. You’ll uncover more, and you’ll build trust in the process.”
The Bigger Picture
For Thompson, the shadow AI challenge mirrors a larger societal problem: trust.
“We’re already seeing AI-generated content everywhere — it’s eroding our ability to know what’s real,” he says. “That’s true in enterprises too. If you don’t understand how AI is used, you can’t trust your own outputs.”
Still, he remains optimistic. “AI’s power is enormous — but so is the need for discipline. The goal isn’t to block it. It’s to harness it safely.”
