What AI Gets Wrong About You
ChatGPT, Perplexity and the Problem of False Identity
You didn’t write it. You didn’t say it. You’ve never even worked there.
And yet, ask ChatGPT about you, and you might be listed as a board advisor at a firm you’ve never heard of. Or a Perplexity search might pull in a biography you wrote ten years ago and mix it with data from someone else entirely.
This is not theoretical. It’s already happening.
AI hallucinations are rewriting reputations in real time.
Large Language Models (LLMs) like ChatGPT, Gemini and Claude generate responses from patterns in the data they were trained on rather than searching the web live, and even search-backed tools like Perplexity filter their results through an LLM. In doing so, they often invent, conflate, or confidently present false information.
And if you’re a public figure, founder, executive, or advisor, that risk is amplified. Because the more you appear online, the more there is to get wrong.
What Can Go Wrong?
Misattributed roles – AI may list outdated positions, inaccurate titles, or mix you with someone of the same name
Invented bios – LLMs often fill in blanks by guessing or merging unrelated data points
False publications – You may be listed as a contributor to media you’ve never touched
Fabricated quotes – Even your tone or voice can be mimicked without permission
It’s not malicious. But it can be damaging, especially when clients, journalists, investors or recruiters rely on these tools for early research.
How to Monitor and Correct It
Search your name across AI tools: Ask ChatGPT, Perplexity, Claude and Gemini who you are - and note the sources each one cites or the claims it generates.
Flag inaccuracies: Each platform has its own process for correction, often via feedback forms or support emails.
Monitor quarterly: AI-generated content shifts rapidly. What’s accurate today may be wrong in three months.
Correct at the source: Outdated bios, press, or profiles contribute to false generations - start there.
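If you want to make the quarterly check repeatable, one low-tech approach is to keep a small fact sheet and diff each AI-generated bio against it. The sketch below is a minimal, hypothetical example - the names, roles, and sample text are invented for illustration, and real monitoring would involve pasting in (or fetching) the actual output from each tool.

```python
# Minimal sketch of a quarterly "hallucination audit": compare an
# AI-generated bio against a hand-maintained fact sheet.
# All names, facts, and sample text below are hypothetical.

VERIFIED_FACTS = [
    "founder of Example Ltd",   # facts the bio SHOULD contain
    "based in London",
]

KNOWN_FALSE_CLAIMS = [
    "board advisor at Acme Corp",    # a role never held
    "contributor to The Daily Post", # a publication never written for
]

def audit_bio(generated_bio: str) -> dict:
    """Flag known-false claims that appear, and verified facts that are missing."""
    text = generated_bio.lower()
    return {
        "false_claims_found": [c for c in KNOWN_FALSE_CLAIMS if c.lower() in text],
        "facts_missing": [f for f in VERIFIED_FACTS if f.lower() not in text],
    }

if __name__ == "__main__":
    sample = "Jane Doe, founder of Example Ltd, is a board advisor at Acme Corp."
    print(audit_bio(sample))
```

Plain substring matching is deliberately crude: it will miss paraphrases, so treat the report as a prompt for human review, not a verdict.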
How Sablr Supports You
At Sablr, we offer quarterly AI hallucination monitoring. We check what the tools are generating, advise on correction strategy, and keep your narrative consistent - even when the tech moves fast.
It’s quiet, intentional protection. Just the way it should be.
You didn’t ask to be rewritten. But you can choose how you’re represented.