Robotics

What Surgical Robots Can't Feel — And Why That Should Worry All of Us
There is a moment in every operation that no camera captures and no monitor displays.
It happens when a surgeon's fingers find a vessel that shouldn't be there, or feel a tissue plane that resists in a way the imaging didn't predict, or sense — through nothing more than the pressure traveling up through a pair of forceps — that something is different from what the picture showed. It lasts less than a second. It changes everything.
That moment is called tactile feedback. And in the most advanced surgical robotic systems in use today, it is almost entirely absent.
What We Gained. What We Gave Up.
The da Vinci Surgical System, CMR Surgical's Versius, and Medtronic's Hugo system, among others, have defined robotic surgery for a generation. These machines are a genuine marvel. They offer tremor filtration, motion scaling, instrument articulation beyond the range of the human wrist, and a stereoscopic 3D view of the operative field that no unassisted human eye can match. Surgeons who trained on these systems will tell you they changed what they believed was possible inside the body.
But the machine will tell you nothing about what you're touching.
The console sits away from the table. The instruments are extensions of the machine. And while the visual information flowing back to the surgeon is extraordinary, the tactile information (the pressure, the resistance, the texture, the tension) has been almost entirely severed in the translation. What the robot's instruments feel, the surgeon does not.
This was, in many ways, an engineering trade-off made out of necessity. Recreating the sensation of touch through a robotic interface is an extraordinarily complex problem. Haptic feedback systems simulate the sense of touch through mechanical forces, vibrations, or motions. They exist in consumer devices, in rehabilitation medicine, and in simulation training environments. But translating that fidelity into a sterile, precise, real-time surgical instrument, at the scale and reliability required for the OR, has remained one of the field's most stubborn unsolved problems.
So the field moved forward without it. And we adapted.
How Surgeons Compensate
Ask any experienced robotic surgeon how they manage without haptic feedback and they will give you a version of the same answer: you learn to use your eyes differently.
You watch for visual cues: the way tissue blanches under pressure, the way a structure deforms before it tears, the way bleeding begins at the edges of a dissection plane. You develop a mental model of tissue behavior based on years of open and laparoscopic surgery, which you then translate, imperfectly, into the robotic context. You become, in effect, a translator between two sensory worlds.
This works. Experienced robotic surgeons achieve extraordinary outcomes. The adaptation is real and the skill is genuine.
But it is a workaround. And workarounds have costs that don't always show up in the data.
They show up in the learning curve, the significantly longer period a surgeon needs to develop competency in robotic surgery compared to what the visual interface alone would suggest. They show up in the specific error patterns of robotic procedures, which differ from open surgery in ways that researchers are still mapping. And they show up in the subtle, difficult-to-quantify moments where a surgeon's experience and intuition are working overtime to compensate for information that simply isn't there.
What's Coming
The haptic feedback problem is not unsolved because nobody is working on it. It is unsolved because it is genuinely hard — and because the field has been moving fast enough in other directions that the absence of touch has been tolerable.
That is beginning to change.
Research groups at institutions including Stanford, Imperial College London, and ETH Zurich have developed prototype haptic feedback systems for surgical robotics that can transmit meaningful tactile information back to the operating surgeon in real time. Some of these systems are entering early clinical evaluation. The engineering challenges around latency, sterilization, miniaturization, and signal fidelity are being addressed with tools, including AI, that didn't exist a decade ago.
Perhaps more significantly, AI-assisted force estimation is emerging as a parallel solution. Rather than physically transmitting haptic data from instrument to console, these systems use machine learning to predict tissue behavior and force parameters based on visual data, instrument telemetry, and pre-operative imaging, then present that information to the surgeon in a usable form. It is not the same as feeling. But it is moving in that direction.
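To make the idea concrete, here is a deliberately minimal sketch of what force estimation from indirect signals can look like. Everything in it is illustrative, not drawn from any real surgical system: it fits a toy linear tissue-stiffness model (force proportional to deformation) from a synthetic "telemetry log," then predicts grip force from observed deformation alone, standing in for the far more sophisticated learned models the research actually uses.

```python
# Hypothetical sketch of AI-assisted force estimation.
# Assumption: we have logged pairs of tissue deformation (seen visually)
# and measured instrument force, and fit a simple elastic model F = k * x.
# Real systems use deep models over video, telemetry, and imaging;
# this is only the simplest possible stand-in for that idea.

def fit_stiffness(deformations_mm, forces_n):
    """Least-squares slope through the origin for F = k * x."""
    num = sum(x * f for x, f in zip(deformations_mm, forces_n))
    den = sum(x * x for x in deformations_mm)
    return num / den

def estimate_force(k, deformation_mm):
    """Predict force (N) from observed deformation (mm)."""
    return k * deformation_mm

# Synthetic telemetry: deformation in mm vs. measured force in newtons.
deform = [0.5, 1.0, 1.5, 2.0]
force = [0.26, 0.49, 0.77, 1.01]

k = fit_stiffness(deform, force)
print(f"estimated stiffness: {k:.3f} N/mm")
print(f"predicted force at 1.2 mm: {estimate_force(k, 1.2):.2f} N")
```

The point of even this toy version is the architecture it implies: the surgeon never receives a physical sensation, only an inferred force value derived from what the system can already observe.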
The teams working on this problem are doing some of the most consequential work in surgical technology right now. When haptic feedback reaches clinical-grade robotic surgery at scale, it will not be an incremental improvement. It will be a fundamental expansion of what robotic surgery is capable of, and a restoration of something that should never have been removed from the surgeon's toolkit in the first place.
The Larger Question
The haptic feedback gap is worth understanding not only for what it tells us about surgical robotics, but for what it tells us about how we build medical technology in general.
We move fast. We optimize for what we can measure. We accept trade-offs that are invisible in the data because the people absorbing those trade-offs are skilled enough to compensate for them. And we sometimes call the result progress without fully accounting for what was left on the table.
Touch is not a luxury in surgery. It is a primary information channel that evolution spent millions of years developing for exactly the kind of high-stakes, ambiguous, real-time decision-making that an operating room demands. The fact that we built surgical robots without it was a practical constraint. But treating that constraint as permanent, or as acceptable, is a choice we should examine carefully.
The most sophisticated surgical tools ever built cannot tell a surgeon what they are feeling. The researchers working to change that are not adding a feature. They are closing a gap that should have been a priority from the beginning.
What else did we leave behind?
——————————————————————————————————————

Dr. Rafael Grossmann is a trauma surgeon, digital health innovator, and global keynote speaker. He speaks on AI in medicine, surgical robotics, physician burnout, and the future of patient care. To book Rafael for your next event, visit rafaelgrossmann.com

Join the Mission

Stay Ahead in Healthcare
