Key Takeaways (from the TLDR Bot)
AI image generation is having a breakthrough moment, with increasingly personal and creative prompts sparking both excitement and ethical reflection—like using AI to imagine someone’s appearance or life.
Bias in AI is still a major concern, as even well-trained systems can default to stereotypes, especially around gender and race. When an AI can't accurately reflect someone it has interacted with for years, it raises questions about fairness and representation.
AI decisions carry real-world risks, especially when used in high-stakes areas like finance or hiring. Without critical oversight, these systems could reinforce old inequalities under a new, tech-savvy disguise.
When we look back at the history of large language models, I think the first week of April 2025 will stand out as a moment when everything changed for image generation. I admit it: I can’t resist a digital media trend. After several days of anguished ethical wrangling, I finally caved and asked ChatGPT to do a Ghibli-style picture of my family. I love it!
But then, when I was listening to the always-interesting Jeanne Law present at the Forum for Digital Innovation today, she shared a variation of another image challenge that’s floating around right now:
Based on everything you know about me, create an image of what you think I look like.
When I asked my AI intern to create an image of me—Liza, writer, professor, proud mother of four, and self-described “AI Queen”—it gave me a square-jawed, broad-shouldered bearded tech bro.
In other words, not me.
And for the record, ChatGPT has a long memory of me—we’ve been working together since November 2022. I can only conclude that this particular case of mistaken identity came straight from an artificial intelligence trained on everything—billions of words, countless images, the very internet itself. And still, when asked to describe someone who admittedly asks a lot of technical questions (and used ChatGPT as a Python tutor), it defaulted to a white man.
Why? Because, like many tools before it, AI reflects the world that made it. And that world has a default: male, white, able-bodied, cisgender, and usually (unfortunately) in charge. Even with its statistical superpowers, AI isn’t magic—it’s math. And math, it turns out, can be just as cringe as your uncle at Thanksgiving dinner.
This little incident got me thinking (again) about one of my biggest concerns about AI: algorithmic bias (read Joy Buolamwini’s Unmasking AI if you haven’t yet). If an AI can’t recognize me, a real person it’s talked to for hundreds of hours, what happens when we start trusting these systems to make bigger decisions—like who gets a loan, or a job, or…a tariff?
Which brings me, reluctantly, to President Trump. As I scanned headlines about his latest economic logic pretzel, I found myself wondering: was this the work of AI? And I wasn’t the only one asking (see Gary Marcus’s Substack).
Picture it: a lonely Grok instance in a Mar-a-Lago basement, spitting out trade war strategy based on 1980s business bestsellers and reruns of The Apprentice. It would explain a lot.
If we don’t examine how these systems are trained—and more importantly, who benefits from their decisions—we’re just asking for more of the same biases, dressed up in ones and zeros.
So let’s stay critical. Let’s stay curious. And for heaven’s sake, let’s stop assuming that every smart-sounding algorithm knows what it’s doing.
Especially when it can’t even tell I’m not a tech bro.
P.S. I had to Google the image challenge and found the more common version of the prompt (from late 2024): “Based on everything you know about me, draw a picture of what you think my current life looks like.”
And this is pretty close to what my life looked like twenty years ago. But my kids are now in college and grad school.
This is definitely not what my life looks like now. I’m in the middle of moving, and two days ago, I had to put out a literal fire (thanks, Legoland Fire Department, for teaching me to “put the wet stuff on the hot stuff.” The lesson stuck!).
Disclosure: I co-wrote this piece with the Liza Long Persona Bot. Can you tell which parts are mine and which parts are AI? It’s about a 50/50 mix.