The Smart Tissue Autonomous Robot (STAR) autonomously performed laparoscopic surgery in a live animal for the first time in 2020.
Here’s a scene from the not-too-distant future. In a bright, high-tech operating room, a sleek robotic arm stands poised next to the operating table. The autonomous robot won’t operate completely alone, but it will assist in the upcoming procedure, performing key tasks independently with enhanced precision and reduced risk.
Its patient is one of the more than 150,000 people diagnosed with colon cancer in the United States each year. The only curative treatment is to remove the diseased part of the colon—ideally in a minimally invasive laparoscopic procedure, performed with surgical tools and a thin camera inserted through small incisions. But the surgery tends to be challenging. The surgeon’s skills, experience, and technique are the most important factors influencing surgical outcomes and complications, which occur in up to 16 percent of cases. These complications can diminish the patient’s quality of life and increase the risk of death. The hope is that an autonomous surgical robot will improve these odds.
See the Smart Tissue Autonomous Robot (STAR) in action in this video demonstrating how the system laparoscopically sutures a piece of small intestine.
During surgery, this robot will perform tasks that require the utmost accuracy. The surgeon will first control its motions by hand to remove the cancerous tissue, then supervise the robot’s motion as it independently sews the remaining healthy colon back together. Using several forms of imaging and real-time surgical planning, the robot will place each stitch with submillimeter precision, a feat not possible with human hands. The resulting suture line will be stronger and more uniform, making it less likely to leak, a dangerous complication that can occur when the connection doesn’t heal properly.
While autonomous robots aren’t yet being used to operate on people in the way we’ve just described, we now have the tools capable of this futuristic style of surgery, with more autonomy on the way. Our team, centered around coauthor Axel Krieger’s robotics lab at Johns Hopkins University, in Baltimore, is dedicated to developing robots that can perform complex, repetitive tasks more consistently and accurately than the best surgeons. Before too long, a patient may expect to hear a new version of the familiar greeting: “The robot will see you now.”
Robot-assisted surgery dates back to 1985, when a team of surgeons at Long Beach Memorial Medical Center, Calif., used an adapted industrial robot arm to guide a needle into a brain for a biopsy. Although the procedure went well, Westinghouse, the robot’s manufacturer, halted further surgeries. The company argued that because the robot was designed for industrial applications, it lacked necessary safety features. Despite this hitch, surgical robots continued to evolve. In 1994, U.S. regulators approved the first surgical robot: the Automated Endoscopic System for Optimal Positioning (AESOP), a voice-controlled robotic arm for laparoscopic camera positioning. The year 2000 saw the introduction of the da Vinci robot, a teleoperated system that enables surgeons to have fine control over tiny instruments.
The first version of STAR sutured a piece of small intestine pulled up through an incision. Ryan Decker
Surgeons are a cautious bunch, and so were initially slow to adopt the technology. In 2012, less than 2 percent of surgeries in the United States involved robots, but by 2018, that number rose to about 15 percent. Surgeons found that robots offered clear advantages for certain procedures, such as the removal of the prostate gland—today, more than 90 percent of such procedures in the United States are robot-assisted. But the benefits for many other surgeries remain uncertain. The robots are expensive, and the human surgeons who use them require specialized training, leading some experts to question the overall utility of robotic assistance in surgeries.
However, autonomous robotic systems, which can handle discrete tasks on their own, could potentially demonstrate better performance with less human training required. Surgery requires spectacular precision, steady hands, and a high degree of medical expertise. Learning how to safely perform specialized procedures takes years of rigorous training, and there is very little room for human error. With autonomous robotic systems, the high demand for safety and consistency during surgery could more easily be met. These robots could manage routine tasks, prevent mistakes, and potentially perform full operations with little human input.
The need for innovation is clear: The number of surgeons around the world is quickly decreasing, while the number of people who need surgery continues to increase. A 2024 report by the Association of American Medical Colleges predicted a U.S. shortage of up to 19,900 surgeons by the year 2036. These robots present a way for millions of people to gain access to high-quality surgery. So why aren’t autonomous surgeries being performed yet?
Typically, when we think of robots in the workplace, we imagine them carrying out factory tasks, like sorting packages or assembling cars. Robots have excelled in such environments, with their controlled conditions and the relatively small amount of variation in tasks. For example, in an auto factory, robots in the assembly line install the exact same parts in the exact same place for every car. But the complexity of surgical procedures—characterized by dynamic interactions with soft tissues, blood vessels, and organs—does not easily translate to robotic automation. Unlike controlled factory settings, each surgical scenario presents unexpected situations that require making decisions in real time. This is also why we don’t yet see robots in our day-to-day lives; the world around us is full of surprises that require adapting on the fly.
Developing robots capable of navigating the intricacies of the human body is a formidable challenge that requires sophisticated mechanical design, innovative imaging techniques, and most recently, advanced artificial-intelligence algorithms. These algorithms must be capable of processing real-time data in order to adapt to the unpredictable environment of the human body.
The year 2016 marked a major milestone for our field: One of our team’s robotic systems performed the first autonomous soft-tissue surgery in a live animal. Called the Smart Tissue Autonomous Robot, or STAR, it sewed together tissue in the small intestine of a pig using a commercially available robot arm while supervised by a human surgeon. The robot moved independently between suturing locations along the tissue edge and waited for the surgeon’s approval before autonomously placing the stitches. This control strategy, called supervised autonomy, is commonly used to make sure surgeons stay engaged when automating a critical task.
STAR’s suturing was the first time a robot had demonstrated autonomous surgical performance that was objectively better than the standard of care: Compared with the performance of human surgeons, STAR achieved more consistent suture spacing, which creates a stronger and more durable suture line. And a stronger stitch line can withstand higher pressures from within the intestine without leaking, compared with sutures done by the manual laparoscopic technique. We consider this a groundbreaking achievement, as such leaks are the most dreaded complication for patients receiving any kind of gastrointestinal surgery. Up to 20 percent of patients receiving surgery to reconnect the colon develop a leak, which can cause life-threatening infections and may require additional surgery.
The 2016 STAR system sutures the small intestine with a single robotic arm. Behind the robot, a screen shows near-infrared and 3D imaging side by side. Ryan Decker
Before this 2016 surgery, autonomous soft-tissue surgery was considered a fantasy of science fiction. Because soft tissue constantly shifts and contorts, the surgical field changes each time the tissue is touched, and it’s impossible to use presurgical imaging to guide a robot’s motion. We had also been stymied by the state of surgical imaging. The best cameras that were compatible with surgical scopes—the long, thin tubes used to view internal surgeries—lacked the quantifiable depth information that autonomous robots need for navigation.
Critical innovations in surgical tools and imaging made the STAR robot a success. For instance, the system sutured with a curved needle, simplifying the motion needed to pass a needle through tissue. Additionally, a new design allowed a single robotic arm to both guide the needle and control the suture tension, so there was no risk of tools colliding in the surgical field.
But the most important innovation that made STAR possible was the use of a novel dual-camera system that enabled real-time tracking of the intestine during surgery. The first camera provided color images and quantifiable three-dimensional information about the surgical field. Using this information, the system created surgical plans by imaging the intestinal tissue and identifying the optimal locations for the stitches to yield the desired suture spacing. But at the time, the imaging rate of the system was limited to five frames per second—not fast enough for real-time application.
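The spacing-based planning step can be sketched in a few lines of code. The sketch below places stitch targets at even arc-length intervals along a detected tissue edge; the polyline representation, units, and function names are our own illustration, not STAR's actual planner.

```python
import math

def plan_sutures(edge_points, spacing):
    """Place stitch targets at (approximately) even arc-length
    intervals along a detected tissue edge.

    edge_points: ordered list of (x, y, z) points tracing the edge
    spacing: desired distance between stitches (same units as points)

    Illustrative sketch only, not STAR's actual planning code.
    """
    # Cumulative arc length along the polyline.
    dists = [0.0]
    for p0, p1 in zip(edge_points, edge_points[1:]):
        dists.append(dists[-1] + math.dist(p0, p1))

    total = dists[-1]
    # Round the stitch count so spacing never exceeds the target.
    n_stitches = max(2, int(total // spacing) + 1)
    targets = []
    for i in range(n_stitches):
        s = i * total / (n_stitches - 1)  # target arc length
        # Find the segment containing s and interpolate linearly.
        j = max(k for k in range(len(dists)) if dists[k] <= s)
        j = min(j, len(edge_points) - 2)
        seg = dists[j + 1] - dists[j]
        t = 0.0 if seg == 0 else (s - dists[j]) / seg
        p0, p1 = edge_points[j], edge_points[j + 1]
        targets.append(tuple(p0[k] + t * (p1[k] - p0[k]) for k in range(3)))
    return targets
```

For a straight 40-millimeter edge and a 5-millimeter target spacing, this yields nine evenly spaced stitch locations.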
To solve this limitation, we introduced a second, near-infrared camera that took about 20 images per second to track the positions of near-infrared markers placed on the target tissue. When the position of a given marker moved too much from one frame to the next, the system would pause and update the surgical plan based on data from the slower camera, which produced three-dimensional images. This strategy enabled STAR to track the soft-tissue deformations in two-dimensional space in real time, updating the three-dimensional surgical plan only when tissue movement jeopardized its success.
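The pause-and-replan logic amounts to a small supervisory loop: track fast 2D marker positions, and fall back to the slow 3D camera only when the tissue has shifted too far. The camera interfaces, marker format, and pixel threshold below are assumptions for illustration, not STAR's actual software.

```python
def track_and_replan(nir_frames, threshold, replan):
    """Supervisory loop sketching STAR's dual-camera strategy.

    nir_frames: iterable of dicts mapping marker id -> (u, v) pixel
        position from the fast near-infrared camera (~20 Hz)
    threshold: maximum allowed per-frame marker motion, in pixels
    replan: callback that re-images with the slow 3D camera and
        returns an updated surgical plan
    """
    plan = replan()  # initial 3D plan from the slow camera
    prev = None
    n_replans = 0
    for markers in nir_frames:
        if prev is not None:
            moved = any(
                (markers[m][0] - prev[m][0]) ** 2
                + (markers[m][1] - prev[m][1]) ** 2 > threshold ** 2
                for m in markers if m in prev
            )
            if moved:
                # Tissue shifted too much: pause and update the 3D plan.
                plan = replan()
                n_replans += 1
        prev = markers
    return plan, n_replans
```

The design point is that the expensive 3D replan runs only on the frames where 2D tracking reports excessive motion, so the loop keeps up with the 20-frames-per-second marker stream.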
This version of STAR could place a suture at the correct location on the first try a little more than half the time. In practice, this meant that the STAR system needed a human to move the suture needle—after it had already pierced the tissue—once every 2.37 stitches. That rate was nearly on par with how frequently human surgeons have to correct the needle position when manually controlling a robot: once every 2.27 stitches. The number of stitches applied per needle adjustment is a critical metric for quantifying how much collateral tissue is damaged during a surgery. In general, the fewer times tissue is pierced during surgery (which corresponds to a higher number of sutures per adjustment), the better the surgical outcomes for the patient.
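The metric itself is simple to compute from a log of surgical events; the event format here is hypothetical, purely to make the arithmetic concrete.

```python
def stitches_per_adjustment(events):
    """Compute the stitches-per-needle-adjustment metric from a log
    of surgical events ("stitch" or "adjust"). Illustrative only;
    the event representation is a made-up example.
    """
    stitches = events.count("stitch")
    adjustments = events.count("adjust")
    # With no adjustments the metric is unbounded; report the stitch count.
    return stitches / adjustments if adjustments else float(stitches)
```

A log of seven stitches interrupted by two needle adjustments, for example, scores 3.5 stitches per adjustment.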
For its time, the STAR system was a revolutionary achievement. However, its size and limited dexterity hindered doctors’ enthusiasm, and it was never used on a human patient. STAR’s imaging system was much bigger than the cameras and endoscopes used in laparoscopic surgeries, so it could perform intestinal suturing only through an open surgical technique in which the intestine is pulled up through a skin incision. To modify STAR for laparoscopic surgeries, we needed another round of innovation in surgical imaging and planning.
In 2020 (results published in 2022), the next generation of STAR set another record in the world of soft-tissue surgery: the first autonomous laparoscopic surgery in a live animal (again, intestinal surgery in a pig). The system featured a new endoscope that generates three-dimensional images of the surgical scene in real time by illuminating tissue with patterns of light and measuring how the patterns are distorted. What’s more, the endoscope’s dimensions were small enough to allow the camera to fit within the opening used for the laparoscopic procedure.
The autonomy afforded by the 2020 STAR system allows surgeons to take a step back from the surgical field [top]. Axel Krieger [bottom] takes a close look at STAR’s suturing. Max Aguilera Hellweg
Adapting STAR for a laparoscopic approach affected every part of the system. For instance, these procedures take place within limited workspace in the patient’s abdomen, so we had to add a second robotic arm to maintain the proper tension in the suturing thread—all while avoiding collisions with the suturing arm. To help STAR autonomously manipulate thread and to keep the suture from tangling with completed stitches, we added a second joint to the robot’s surgical tools, which enabled wristlike motions.
Now that the intestine was to be sutured laparoscopically, the tissue had to be held in place with temporary sutures so that STAR’s endoscope could visualize it—a step commonly done in the nonrobotic equivalent of this procedure. But because the intestine was anchored to the abdominal wall, the tissue moved with each breath of the animal. To compensate for this movement, we used machine learning to detect and measure the motions caused by each breath, then direct the robot to the right suture location. In these procedures, STAR generated options for the surgical plan before the first stitch, detected and compensated for motion within the abdomen, and completed most suturing motions in the surgical plan without surgeon input. This control strategy, called task autonomy, is a fundamental step toward the full surgical autonomy we envision for future systems.
While the original STAR’s method of tissue detection still relied on the use of near-infrared markers, recent advancements in deep learning have enabled autonomous tissue tracking without these markers. Machine-learning techniques in image processing also made it possible to shrink the endoscope to 10 millimeters in diameter while enabling simultaneous three-dimensional imaging and real-time tissue tracking, with the same accuracy as STAR’s earlier cameras.
All these advances enabled STAR to make fine adjustments during an operation, which reduced the number of corrective actions needed from the surgeon. In practice, this new STAR system can autonomously complete an average of 5.88 stitches before a surgeon needs to adjust the needle position—a much better outcome than what a surgeon can achieve when operating a robot manually for the entire procedure, guiding the needle through every stitch. By comparison, when human surgeons perform laparoscopic surgery without any robotic assistance, they adjust their needle position after almost every stitch.
AI and machine learning methods will likely continue to play a prominent role as researchers push the boundaries of what surgical jobs can be completed using task automation. Eventually, these methods could lead to a more complete type of automation that has eluded surgical robots—so far.
With each technical advance, autonomous surgical robots inch closer to the operating room. But to make these robots more usable in clinical settings, we’ll need to equip the machines with the tools to see, hear, and maneuver more like a human. Robots can use computer vision to interpret visual data, natural-language processing to understand spoken instructions, and advanced motor control for precise movements. Integrating these systems will mean that a surgeon can verbally instruct the robot to “grasp the tissue on the left” or “tie a knot here,” for instance. In traditional robotic surgery systems, by contrast, each action has to be described using complex mathematical equations.
Specialized imaging enables STAR’s laparoscopic suturing. The purple dots here show the system’s proposed suture locations. Hamed Saeidi
To build such robots, we’ll need general-purpose robotic controllers capable of learning from vast datasets of surgical procedures. These controllers will observe expert surgeons during their training and learn how to adapt to unpredictable situations, such as soft-tissue deformation during surgery. Unlike the consoles used in today’s robotic surgeries, which give human surgeons direct control, this future robot controller will use AI to autonomously manage the robot’s movements and decision-making during surgical tasks, reducing the need for constant human input—while keeping the robot under a surgeon’s supervision.
Surgical robots operating on human patients will gather a vast amount of data and, eventually, the robotic systems can train on that data to learn how to handle tasks they weren’t explicitly taught. Because these robots operate in controlled environments and perform repetitive tasks, they can continuously learn from new data, improving their algorithms. The challenge, however, is in gathering this data across various platforms, as medical data is sensitive and bound by strict privacy regulations. For robots to reach their full potential, we’ll need extensive collaboration across hospitals, universities, and industries to train these intelligent machines.
As autonomous robots make their way into the clinical world, we’ll face increasingly complex questions about accountability when something goes wrong. The surgeon is traditionally accountable for all aspects of the patient’s care, but if a robot acts independently, it’s unclear whether liability would fall on the surgeon, the manufacturer of the robotic hardware, or the developers of the software. If a robot’s misinterpretation of data causes a surgical error, for example, is the surgeon at fault for not intervening, or does the blame lie with the technology providers? Clear guidelines and regulations will be essential to navigate these scenarios and ensure that patient safety remains the top priority. As these technologies become more prevalent, it’s also important that patients be fully informed about the use of autonomous systems, including the potential benefits and the associated risks.
A scenario in which patients are routinely greeted by a surgeon and an autonomous robotic assistant is no longer a distant possibility, thanks to the imaging and control technologies being developed today. And when patients begin to benefit from these advancements, autonomous robots in the operating room won’t just be a possibility but a new standard in medicine.