When Seeing Is No Longer Believing

I watched a video recently that made me stop longer than I expected.
It showed Neil deGrasse Tyson saying the Earth is flat.
The statement itself was obviously wrong. But what caught me off guard was who appeared to be saying it. Tyson is one of the most recognizable scientific voices of our time. For a moment, your brain tries to reconcile what you're seeing with what you know to be true.
Then you realize it’s AI.
And that realization brings a different kind of discomfort. Because the video is convincing. Convincing enough that even people trained to be skeptical might hesitate before questioning it.
That hesitation is the signal we should be paying attention to.
We often talk about deepfakes as a future problem. Something we’ll address once the technology matures. But this isn’t theoretical anymore. The tools already exist, and they’re improving quickly.
Most of our faces, voices, and personal information are already online. That part isn’t new. What is new is how easily that information can now be used to fabricate events, spread misinformation, or undermine trust at scale. The barrier to misuse has dropped dramatically.
And trust, especially in healthcare, is not optional.
I spend a lot of time advocating for AI as a tool in medicine. I believe strongly in its potential. But supporting AI does not mean supporting it without limits. In fact, the more powerful it becomes, the more intentional we have to be about how it’s used.
What's important to remember is this: AI may change how we work, but what people need from one another stays the same.
Patients still need to be heard. Clinicians still want to do right by the person in front of them. The problem is that too much of a clinician’s time is spent on work that could be done by a machine, while human connection gets squeezed into smaller and smaller moments.
That’s where AI should come in.
Not to replace doctors. Not to simulate humans. But to take on the background tasks, the administrative weight, the invisible labor that pulls clinicians away from care. Used correctly, AI doesn’t change human behavior. It gives us room to be more human.
That’s also why guardrails matter so much.
Privacy cannot be optional. Consent cannot be assumed. Accountability cannot be vague. Security cannot be something we promise to figure out later. Especially in healthcare, where patients trust us with their bodies, their histories, and their most vulnerable moments.
Companies like OpenAI, with tools like Sora, are shaping how information is created and shared across society. With that influence comes responsibility. Clear boundaries around what these systems should and should not do are not barriers to progress. They're what make progress sustainable.
Innovation without responsibility isn’t innovation. It’s just risk.
I remain optimistic about AI’s role in healthcare. But that optimism depends on us being honest about both its power and its limits. If we guide it thoughtfully, AI can free clinicians from the work that never required a human in the first place and give that time back to patients.
When seeing is no longer believing, trust becomes our most valuable asset. And protecting that trust should be the starting point for every conversation about AI.

Join the Mission

Stay Ahead in Healthcare
