Artificial Intelligence and its Role in Surgical Creativity

Darryl S. Weiman, M.D., J.D.
Associate Program Director, General Surgery Residency
Baptist Memorial Medical Education
Baptist Health Sciences University

Stephen W. Behrman, M.D.
Professor of Surgery and Chair
Baptist Memorial Medical Education
Baptist Health Sciences University

Department of Surgery, Baptist Medical Center, Memphis, TN

“It is tough to make predictions, especially about the future.” Yogi Berra, Hall of Fame catcher for the New York Yankees

              Based on numerous recent news accounts and several publications in the surgical literature, it is clear that “artificial intelligence” (AI) is positioned to make significant contributions to surgical care and training. AI uses algorithms that allow computers to make predictions, i.e., to solve problems based on recognized words, features detected in images, collected data, and applied statistics.

              Several companies are investing billions of dollars to solidify their spot in the AI market. Microsoft, Apple, Nvidia, Google (Alphabet Inc.), OpenAI, Amazon (Alexa), IBM, and xAI (Grok) are just a few of the companies that have made significant investments as the potential financial return is tremendous.

              The programming of the algorithms used in AI is not publicly known, and it is unlikely that it will be made available for review, as this intellectual property is very valuable. Absent a change in the United States Constitution, these algorithms will be protected for some time.[1]

              What do we know about AI? The powerful computers in the AI realm absorb data from huge databases and use those databases to formulate predictions. But sometimes the answers provided (predictions?) are not true. It may be that the databases used are faulty, and thus the computer’s predictions are faulty. Garbage in, garbage out, so to speak.

              There are several articles that confirm AI platforms are making mistakes. Hiltzik described AI that provided lawyers with false precedents which the lawyers used in supporting their cases. When the judges found the precedents had not been checked by the lawyers, fines and other punishments were levied.[2] Also, a Texas professor recently flunked his whole class when an AI program erroneously accused all the students of plagiarism.[3]

              AI has already proven useful in diagnostic specialties where it can learn to recognize patterns and detect things by analyzing vast libraries of visual images and videos. Specialties such as radiology, pathology, and dermatology have shown that AI can review images and pick up on things that the physician may have missed. Could AI eventually be used to replace human physicians?

              We recently raised this question with a retired IBM executive. She assured us that Watson, the IBM AI representative, is meant to help us do our jobs better.[4] In the diagnostic specialties, this seemed a reasonable answer. We then asked whether overreliance on AI guidance could ever blunt a surgeon’s creativity. She was reluctant to make predictions about AI and creativity.

              We then asked this question to Google, and this was the answer we were given:

                             “Concerns that AI could stifle a surgeon’s creative training by removing complex problem-solving are valid, though current research suggests a more nuanced outcome. AI is expected to serve as a supplementary tool in surgical training, automat[ing] standard tasks and providing realistic simulations, which can help accelerate skill acquisition. However, the human aspects of surgical creativity—including the ability to respond to unexpected intraoperative events and innovate new techniques—will remain critical for developing a surgeon’s full expertise.”[5]

              The issue of AI harming the knowledge, creativity, and skills of a surgeon was recently raised by Abiodun Adegbesan et al. In their letter to the editor, the group states, “there is a danger that surgeons may become passive operators which can potentially lead to a reduction in their surgical dexterity, clinical expertise and overall problem-solving abilities.”[6]

              ChatGPT is an advanced AI language model developed by OpenAI. It is a Generative Pre-trained Transformer that “learns” from internet data to perform tasks such as answering questions, summarizing information, and writing papers.

In a recent article by Bryan, Platz, Naunheim, and Ferguson, four popular chatbots were tested against 21 board-certified thoracic surgeons on ten clinical scenarios. The surgeons performed at a significantly higher level than the chatbots. The authors concluded that “[a]lthough they are becoming increasingly sophisticated, chatbots do not yet perform at the level of a practicing thoracic surgeon when faced with complex clinical scenarios.”[7] It would be interesting to see how the chatbots perform against thoracic surgical residents who have not yet garnered the experience of the certified surgeons.

In a world that has already seen computers beat human opponents at Jeopardy (IBM’s Watson)[8] and a chess grandmaster (IBM’s Deep Blue)[9], it is somewhat surprising that several chatbots were not able to outperform board-certified thoracic surgeons in vignettes relating to well-known clinical scenarios. It is likely just a matter of time before the computer can surpass surgeons in making diagnoses and formulating treatment plans. But can the computer work with a robot to do operations independent of human control?

At this time, it is unlikely that a robot can be programmed to do operations as well as surgeons because robot arms and graspers are limited in their physical ability. Human hands are superior to any known robot platforms, but this difference is being challenged. At Northwestern University’s Center for Robotics and Biosystems, researchers are working on improving tactile sensing and flexibility of robotic hands.

Kevin Lynch, who leads Northwestern’s team working on robotic hands, says the team “has set a 10-year goal to achieve dexterity sufficient for basic humanlike tasks.”[10]

Engineers at Tesla are also working to improve their humanoid robot, Optimus, so that it will be capable of “performing the small, precise motions that define most skilled labor.”[11] As Elon Musk told the Wall Street Journal, “In order to have a useful generalized robot, you do need an incredible hand.”[12]

But what about the creativity element that is essential for any surgeon who may face a rapidly changing and challenging environment in the operating room? Can creativity be programmed into the AI platform?

Surgeons are not the only ones worried that AI may be harmful in training people whose creativity is paramount for job performance. In a military context, war gaming is essential in training intelligence professionals. A quote by President Dwight Eisenhower is on point, “Plans are worthless, but planning is everything.”[13]

              In a recent article from the Combating Terrorism Center at West Point, Nicholas Clark raised the issue that artificial intelligence may result in overreliance by Special Operators who need to be creative and quickly responsive to sudden changes on the battlefield. “While generative AI may assist in automating routine tasks, it lacks the capacity for nuanced judgment, uncertainty quantification, and dynamic responsiveness critical to effective CT work.”[14]

              “The use of generative AI for operational planning may, in fact, make our planners worse by removing the real benefits of the planning process and limit the CT forces’ ability to respond dynamically to branches and sequels.”[15]

              A recent study examined brain activity in people who used ChatGPT to help write essays. It found that users became more dependent on the computer as the study progressed; by the end, the final papers had become largely a copy-and-paste exercise, with little remaining creative input from the human writers.[16]

              Surgeons are very much like Special Operators. They must study and train constantly to keep up with the specialty; it is a learned profession. The main difference is that the surgeon knows he is likely to go home alive later in the day.

              But what about doing operations without human control? Could surgical robots with AI platforms be programmed to do operations by themselves? Robotic operations are performed by humans around the world daily, with surgeons at a console controlling the robot arms. So far, the critical difference is that the surgeon controls the robot arms and has hands, which the robot does not. If things go badly, the surgeon can abort the robotic procedure, open the patient, and do the operation in the conventional way. But that difference may be changing.

              Can the computer be programmed to recognize when it is in over its head and abandon the robotic procedure? If faced with circumstances that are not answerable with the database provided (i.e., aberrant anatomy, arterial bleeding, hollow viscus injury, etc.), could the computer be creative and provide a solution? How can creativity be programmed? This is a difficult question because we do not know how to define “creativity,” and we do not understand the process of being creative in the first place.

              “ChatGPT runs on something called an artificial neural network, which is a type of AI modeled on the human brain. Instead of having a bunch of rules explicitly coded in like a traditional computer program, this kind of AI learns to detect and predict patterns over time…[But] because systems like this essentially teach themselves, it’s difficult to explain precisely how they work or what they’ll do. Which can lead to unpredictable and even risky scenarios as these programs become more ubiquitous…[AI is] trained…by basically doing autocomplete.”[17]
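For readers unfamiliar with the “autocomplete” idea in the quotation above, the following toy example (our own greatly simplified sketch in Python; commercial systems are vastly more complex and use neural networks, not simple counting) shows what it means for a program to learn next-word prediction purely from patterns in text rather than from explicitly coded rules:

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows which in a tiny corpus,
# then predict the most frequently observed successor. No rule about
# grammar or meaning is ever coded in; the program only tallies patterns.
corpus = "the surgeon controls the robot the surgeon opens the patient".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in successors:
        return None  # the model has no basis for a prediction
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "surgeon" follows "the" most often in this corpus
```

The point of the sketch is the limitation the quotation describes: the program can only echo patterns present in its training data, and for a word it has never seen followed by anything, it has nothing to offer.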

              When circumstances in the operating room change, the surgeon (at least for now) generally has the knowledge, education, experience, and skills to adjust appropriately. He may need to call in a colleague, and that is part of being a professional. Could AI act professionally and be creative if the circumstances call for it? At our present state of knowledge, if creativity is required, it is unlikely that a computer can replace a human surgeon. As AI platforms continue to improve, they may enhance simulation exercises, but this should be extrapolated to live surgery with caution. As the retired IBM executive stated, AI computers are meant to help us, not replace us.

              Medical education and surgery are advancing at a rapid pace. Being creative and using judgment to adapt to rapidly changing circumstances is often the difference between life and death. AI should be used only when its strengths outweigh its weaknesses. We must continue to train our surgeons to be creative and resourceful to help our profession grow and to keep us at least one step ahead of AI and robots.

              The only part of this article that was AI generated was the answer to the question asked of Google above.



[1] United States Constitution, Article I, Section 8, Clause 8: “The Congress shall have Power…To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries[.]”

[2] Runco MA. AI can only produce artificial creativity. Journal of Creativity 33 (2023) 100063.

[3] Id.

[4] Personal communication with IBM executive (anonymous).

[5] Response given by Google AI to the question on surgeon’s creativity being hampered by overreliance on AI.

[6] Adegbesan A, Akingbola A, Aremu O, et al. From Scalpels to Algorithms: The Risk of Dependence on Artificial Intelligence in Surgery. Journal of Medicine, Surgery, and Public Health 3 (2024) 100140.

[7] Bryan DS, Platz JJ, Naunheim KS, Ferguson MK. How soon will surgeons become mere technicians? Chatbot performance in managing clinical scenarios. The Journal of Thoracic and Cardiovascular Surgery 170(4):1179-1184, 2025.

[8] Watson beat Brad Rutter and Ken Jennings to win a $1 million prize in 2011.

[9] Deep Blue beat Garry Kasparov in 1997. Kasparov felt that cheating was involved since some of the computer’s moves seemed nonsensical. It turned out there were flaws in the programming which have since been fixed. Even with the programming errors, the computer still won.

[10] Jacobs S. Engineering the perfect robotic hand could unlock a $5 trillion humanoid market. Wall Street Journal, October 26, 2025.

[11] Id.

[12] Id.

[13] Eisenhower, D. Remarks at the National Defense Executive Reserve Conference, November 14, 1957.

[14] Clark N. Commentary: The Dangers of Overreliance on Generative AI in the CT Fight. CTC Sentinel, p. 15-19, August 2025.

[15] Id. p. 16.

[16] Kosmyna N, et al. “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” arXiv, June 10, 2025.

[17] Runco, supra note 2, p. 5.
