
What Happens When Your Interviewer Asks to See Your ChatGPT Personality Report?


A LinkedIn post stopped me mid-scroll last week. A candidate, partway through an interview, was asked whether they use ChatGPT. When they said yes, they were invited to type a prompt into their phone there and then. The interviewer wanted to review the output together so they could “objectively understand the candidate’s thinking patterns and interests.”


Let’s just pause there.


I understand the curiosity. We are all trying to work out how AI fits into recruitment, assessment and performance. If candidates are using these tools at work, perhaps we should understand how they use them. Fair enough.


But asking someone to open a deeply personal, longitudinal record of their intellectual experiments and press “show me who I am” on demand in a job interview feels problematic on several levels.


For a start, we all use tools like ChatGPT differently. Some people use it purely for drafting board papers or summarising research. Others use it to explore personal reflections, health anxieties, career doubts or creative experiments. The boundary between professional and personal use is porous. Inviting someone to surface that record in a high-stakes interview environment could easily stray into territory that feels intrusive or compromising.


That is not my main concern though. My deeper concern is this: even if we treat the output as “objective”, it is not. To test my own reaction, I typed the same prompt. I asked ChatGPT to analyse my behavioural tendencies based on past conversations. The result was flattering. Very flattering.


Apparently, I am a strategic systems thinker who integrates narrative and data. An iterative perfectionist in a constructive way. Authority independent. Future oriented. A long term builder. A synthesiser of complexity into structured clarity without losing nuance.


I will not pretend I did not enjoy reading that. It felt like me on a good day. It captured the patterns I would like to believe are visible in my work. It highlighted qualities that I have tried to cultivate over a long career across research, workforce planning and governance. Even the blind spots were gently framed. I may over-engineer sometimes. I might get impatient with shallow thinking. I may underestimate how threatening intellectual clarity can feel to others. Hardly devastating.


It was insightful, well structured and affirming. If an interviewer had read that across the table, I suspect we would have had an interesting conversation. It would have showcased my ability to build frameworks, integrate evidence and tell a coherent story. In that sense, it works beautifully as personal branding.


And that is precisely the problem. AI models like ChatGPT are pattern recognisers trained to respond in ways that are useful, coherent and socially calibrated. They are not forensic psychologists. They do not have access to dissenting peer reviews of my behaviour, moments of pettiness, misplaced stubbornness or projects that quietly went nowhere. They are trained on my prompts and my style of working. If I tend to ask structured, strategic questions, I will receive structured, strategic reflections back. The mirror is polished because I am holding it.


There is an inherent positive bias in how these systems respond. Not because they are designed to flatter in a simplistic way, but because they are designed to be constructive. They look for strengths in the data provided. They synthesise recurring themes. They resolve ambiguity into coherence. The result is almost always more elegant than the messy human reality underneath.


If we start treating outputs like this as objective assessment data in recruitment, we risk confusing self-presentation with independent evaluation. We may also privilege candidates who are articulate, reflective and deliberate in how they use AI over those who use it differently or not at all.


There is also a subtler distortion. My ChatGPT “profile” reflects the version of me that shows up in these conversations. It reflects my strategic thinking about NHS workforce reform, my interest in governance defensibility, my habit of building repeatable systems, my tendency to ask for version two. It does not capture what colleagues might say about how I behave under pressure, how I handle conflict, how I recover from error, or whether I can let go when needed. Those are not promptable traits.


And if we are being properly honest, a brutally direct version of my profile might read slightly differently. It might say that I can become overly invested in intellectual rigour and lose patience with what I perceive as woolly thinking. That I sometimes build the cathedral when a garden shed would suffice. That my instinct to synthesise complexity can occasionally tip into over-structuring. That independence of thought can, in the wrong context, look like contrariness.


Those edges matter. They are human. They are not always visible in an AI-generated summary that is working with curated inputs and a cooperative tone.


None of this means AI has no place in recruitment. Far from it. Used well, it can help candidates prepare more thoughtfully and employers clarify what they are really seeking. It can surface themes for discussion. It can even prompt useful reflection. But it should never be mistaken for an objective window into someone’s mind.


An interview is already a performance of sorts. Adding an AI-generated personality report risks creating a hall of mirrors where self-narration, algorithmic pattern recognition and interviewer interpretation bounce off each other until everyone feels they have glimpsed something profound. Sometimes they have. Sometimes they have simply co-created a particularly articulate story.


The deeper question is not whether someone uses ChatGPT. It is how they think when there is no script, no prompt and no helpful synthesis engine tidying their thoughts.

If I were in that interview chair, I would be more interested in asking a candidate to talk me through a messy decision they got wrong, how they changed their mind in the face of evidence, or how they navigated a value conflict in a boardroom. I would want to see judgement in motion, not judgement distilled.


So yes, I enjoyed reading my AI-generated behavioural analysis. It felt affirming and coherent and reassuringly on brand. It probably does capture something real about me.

But as a recruitment tool, it is a curated reflection. And curated reflections, however elegant, are not the same thing as evidence.


In a world increasingly seduced by the apparent objectivity of algorithms, we would do well to remember that sometimes the most revealing insights still come from uncomfortable questions asked by another human being, not from a very articulate machine that quite likes you.
