Profiled by Prompt, Illustrated by Model: Audit Your LLM’s Assumptions
“I even look good in a broken mirror” —Lil Wayne (“Mirror”)
As I type this, there is a trend on LinkedIn for people to post caricatures they had their large language model (LLM) of choice create based on what it knows about them. (Typing those words gives me a sense of cognitive dissonance—this is LinkedIn we’re talking about, not TikTok or Instagram! But, alas, the memeification of LinkedIn is a rant for another day.) What I find interesting about this trend/meme/viral thingie of caricatures is that it provides an opportunity to peer inside what many still think of as a black box. We can focus not just on the image it creates but why it includes those things in the image.
I, of course, had to take one for the team and have ChatGPT create one of these silly illustrations of me. The prompt was straightforward enough: “Create a caricature of me and my job based on everything you know about me.” As it worked on the request, I could see it consider some things that I see as integral to my self-image—things that make me me—but that it ultimately rejected. No music in the image, for example. OK, I did emphasize “the job” in the prompt. But ChatGPT did make room for beer, wine, and Bay Area landmarks in its final image, so it was making some judgment calls about what “mattered most.”
Given that, I wanted to know what it thought it knew about me when it created the image, so I just asked it flat out. It gave me a tidy bulleted list:
- Your look: short gray hair, trimmed gray beard, blue/teal round-ish glasses, black turtleneck vibe
- You’re an industry analyst
- Opus Research branding
- Your “briefings are 60 minutes max” rule
- Your Intelliview-style evaluation mindset (“Intelliviews” are Opus Research’s competitive assessment reports)
- Your CX/contact center/AI focus
- Your “AI Agent Control Plane” obsession (key work that Opus Research has been focused heavily on)
- Northern California / Bay Area context
- Wine + beer as your off-hours interests (well, I did recently ask it the ideal temperature for serving a Belgian Quad)
- Spreadsheet/charts/metrics life
- Time boxing/schedule pressure
Most of that list is harmless, and some of it is even useful. If an LLM knows I am an analyst, then it should give me comparisons, evidence, and tradeoffs. If it knows I am allergic to bloated briefings in real life, it should keep things tight in our conversations.
Still, reading the list felt like seeing my search history turned into office decor. The caricature was cute, but the real point was the model’s sense of relevance. It picked a few details and built a story about me from the crumbs I left behind.
That’s why heavy LLM users should run this drill. You are not only learning what the model stores but also what it treats as important. That matters because its idea of you steers every summary, recommendation, and rewrite.
And if you want a nonvisual version, ask it directly.
- Describe my job in one sentence.
- What topics do I return to most often?
- What do you think I value when I evaluate a vendor or a technology?
- What do you assume I want when I ask for help?
- What is one thing you might be wrong about when you describe me?
Then ask for the reasoning behind the answers. That’s where the gold is. You see what it treats as signal, what it ignores as noise, and what it overweights because it was vivid or recent. You also get a chance to correct it, which is less about privacy and more about quality control.
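For readers who talk to models through an API rather than a chat window, the same audit can be scripted. Here is a minimal sketch in Python; the questions are the ones listed above, and the loop simply pairs each with a reasoning follow-up. Note that this is an illustrative outline, not a definitive implementation: `build_audit_prompts` is a hypothetical helper, and actually sending the prompts is left to whatever LLM client you use.

```python
# The audit questions from the column, verbatim.
AUDIT_QUESTIONS = [
    "Describe my job in one sentence.",
    "What topics do I return to most often?",
    "What do you think I value when I evaluate a vendor or a technology?",
    "What do you assume I want when I ask for help?",
    "What is one thing you might be wrong about when you describe me?",
]


def build_audit_prompts(questions=AUDIT_QUESTIONS):
    """Pair each audit question with a follow-up asking for the reasoning,
    since the reasoning is where the model reveals what it treats as signal."""
    prompts = []
    for q in questions:
        prompts.append(q)
        prompts.append(f"Explain the reasoning behind your answer to: {q}")
    return prompts


if __name__ == "__main__":
    # Send each prompt to your LLM client of choice; here we just print them.
    for prompt in build_audit_prompts():
        print("-", prompt)
```

Running the list in one session, rather than piecemeal, makes it easier to spot where the model's answers contradict one another.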
And if your model insists on putting a wine bottle on your caricature’s desk, ask it to add a record player—vinyl is back, after all.
Ian Jacobs is vice president and lead analyst at Opus Research.