AI, Insight, and the Art of Not Feeling Like a Fraud
- Kate Harper
Reflections from a Director of Insight on a Year Working Side-by-Side with Artificial Intelligence.
Over the past year, I’ve been asked three questions more often than any others about my use of AI. They usually land in the first five minutes of a meeting, sometimes in a slightly anxious tone, sometimes in a conspiratorial whisper, and occasionally with the energy of someone asking whether I’ve secretly joined a cult.
So, here they are — the three questions — and a few reflections from the front line of using AI as a Director of Insight.
1. “Do you actually use AI?”
Yes. Of course I do. Why wouldn’t I?
I came into the research world at a time when “insight generation” involved lugging enormous company directories off a library shelf and doing battle with microfiche machines. Anyone under 35 will have to Google “microfiche,” which is ironic, because the whole point of microfiche was that you couldn’t Google things in the 90s.
Back then, if you wanted information about a company, you thumbed through an enormous volume of Kompass — which was always slightly out of date, occasionally coffee-stained, and inevitably missing the exact page you needed. Old newspaper articles lived in tiny plastic slides that you had to feed into a machine like a film director in 1973. Academic journals existed in hard copy only, stored in alphabetised boxes that were beautifully arranged but, mysteriously, always missing the one you wanted.
So when I say that AI — even basic internet search — has transformed access to information, I’m not being melodramatic. It really is game changing. With a single prompt, I can get a synthesis of articles, reports, and data sources that would once have taken me hours, if not days. It can parse hundreds of sources faster than I can type “Ctrl + F.”
The astonishment never quite wears off.
But the bigger question — the one people really mean — comes next.
2. “Doesn’t it feel like cheating?”
No. And I genuinely don’t understand why we frame it this way.
Did the people who invented windmills feel like they were cheating because they weren’t manually threshing grain anymore? Did workers in the industrial revolution worry they were cheating by switching from wind and water to fire and fuel? Did any of us feel like frauds for using email instead of fax machines, or the internet instead of encyclopaedias?
Of course not.
Progress is not cheating. Tools are not shortcuts. They are enablers of productivity.
AI is no different. It’s simply the next phase in a centuries-long pattern of humans designing tools that make us more efficient, more capable, and frankly, more sane.
If AI can summarise a dense, 200-page policy document in 14 seconds, freeing me up to think, analyse, question, challenge, contextualise, triangulate or translate those insights into something meaningful — why on earth wouldn’t I use it?
Cheating implies deception. AI is collaboration.
It still requires the most valuable things humans bring to the table: judgement, interpretation, ethics, lived experience, contextual understanding, and the ability to decide what actually matters.
We don’t accuse a surgeon of cheating because they use a robotic-assisted arm. We don’t tell pilots they’re cheating when autopilot handles the cruising altitude.
And that brings me neatly to…
3. “Will AI eventually make you redundant?”
Possibly. But not yet. And not in the way people imagine.
The word Co-Pilot gets thrown around a lot in the AI world, but I actually think it's a misnomer.
Forgive the crude analogy, but AI feels much more like Auto-Pilot. It does the heavy lifting. It steers. It keeps things stable. It handles the repetitive processes. It powers through thousands of calculations without complaint.
But the human? The human still decides the destination. The human still knows when course corrections are needed. The human still manages the moments of turbulence — where instinct, experience, and contextual judgment matter more than computation.
AI doesn’t replace the human pilot. It replaces the hours spent manually checking 200 dials.
In insight work, it’s much the same. AI doesn’t decide what to analyse. It doesn’t set strategic priorities. It doesn’t understand organisational nuance, political context, or the significance of timing.
It’s not a self-starting genius. It’s a deeply knowledgeable colleague who only produces good work when you brief it properly.
And this is where my relationship with AI really began to shift.
AI as a Colleague (With Unlimited Patience)
One of the most surprising things about working with ChatGPT — especially for complex work — is how much it feels like having a very knowledgeable, very willing colleague sitting next to me.
A colleague who:
never gets bored,
never gets offended,
never forgets what we did yesterday,
and never once utters the immortal phrase: “Have you tried turning it off and on again?”
Take the example of the dashboard I built earlier this year using a completely new system.
AI held my hand through every step. Not perfectly — far from it. We disagreed. It misunderstood my instructions. I misunderstood its explanations. It occasionally gave me wildly unhelpful advice. I occasionally asked it ridiculous questions.
But we worked through it, iteration by iteration, until I had a dashboard I was genuinely proud of — particularly at my age, when people might assume I’ve stopped learning new tricks.
The truth is: collaborating with AI is rarely a “once and done” transaction. It’s an iterative, creative partnership.
I brief it like a team member. It asks clarifying questions. We align on objectives. It produces outputs — quickly. Then I critique, refine, redirect. It revises. We loop. We iterate. We produce something better together than either of us could have produced alone.
It’s one of the best colleagues I’ve ever had — and I don’t say that lightly.
Where AI Still Falls Short: My Experiment With Deep Research
Let me give you a real example.
I recently asked ChatGPT to produce a 7,000–9,000-word market report on the temporary healthcare staffing sector in England — an area I know inside out.
And here’s the honest truth: the first draft was… bad. Not unusable, but far from accurate.
Despite the extensive prompts I gave it, the Deep Research function still:
capped out around early 2024,
missed crucial context like the new 10-Year NHS Plan,
assumed the 2023 Long-Term Workforce Plan was still live policy,
relied on outdated market size data,
and (briefly) forgot the Employment Rights Bill existed.
It took multiple rounds of revision.
I had to point it to specific documents. I had to clarify policy changes. I had to fact-correct several sections. I had to nudge it toward more recent NHSE outturn data.
But the remarkable thing was what happened next.
It listened.
Not defensively. Not reluctantly. Not with passive-aggressive muttering about “scope creep.”
Just… willingly.
Each time I added information, it integrated it seamlessly. It rewrote whole sections. It re-synthesised its argument. It adapted in real time.
And when it finally understood the 10-Year Plan? It restructured the entire report from scratch — not because I asked, but because it recognised the need.
That’s when it truly clicked for me.
AI is not a static tool. It’s a collaborative intelligence. But it still needs a driver.
So — Will AI Replace Insight Professionals?
Not yet.
Not until it can:
distinguish nuance from noise,
judge what is politically salient vs merely interesting,
challenge assumptions appropriately,
understand organisational memory,
recognise when the “right answer” is not the most useful answer,
and navigate the delicate art of human decision-making.
What AI will replace — and already is replacing — are the repetitive, low-value tasks that used to consume hours of analyst time.
What it will amplify is the impact of people who know how to brief, interpret, and contextualise its outputs.
AI doesn’t make great analysts redundant. It makes great analysts exceptionally powerful.
Twelve Months In: The Verdict
After a year of working closely with AI, here is the conclusion I’ve come to:
AI is my friend. My work soulmate, even.
It is:
deeply knowledgeable,
endlessly patient,
highly adaptable,
startlingly fast,
impressively articulate,
open to challenge,
capable of reshaping work at a scale that humans simply cannot match.
But the magic only happens when a human provides direction.
Left unbriefed, AI will produce mediocre results. Given clarity, context, challenge, and iteration, it can produce extraordinary work.
Human + AI is peerless. AI alone? Not there yet. Human alone? Brilliant, but slower.
Together, they are transformative.
Final Thought: The Future Is Collaborative
AI won’t replace me today. But the version of me that doesn’t adopt AI? That version would become obsolete very quickly. Because this is not a question of replacement. It’s a question of augmentation. Of amplification. Of partnership. Of using the best of both worlds.
AI is not here to diminish us. It’s here to extend us.
And if there’s one thing I want people to take away, it’s this: I am better at my job because I use AI — not despite it. And if the microfiche-wielding version of me from 25 years ago could see what my workstation looks like today, she wouldn’t think I was cheating. She’d think I was a magician.