Shan Jegatheeswaran, Johnson & Johnson’s global head of medtech digital, said surgeons are excited about technology but frustrated with how it works in practice.
Citing a survey commissioned by J&J of nearly 700 clinicians across 15 countries, Jegatheeswaran said most believe better surgical software would improve care, but have trouble accessing the information they need due to fragmented data.
J&J’s survey found that hospitals that have deployed surgical software currently only use it in about half of cases due to “cumbersome experiences and insufficient connectivity.”
The results highlight the importance of data standards, interoperability and security, Jegatheeswaran said.
MedTech Dive spoke with Jegatheeswaran about what the results mean for J&J’s approach to AI and software in medical devices, and what types of solutions the company envisions being used in the operating room.
This interview has been edited for length and clarity.
MEDTECH DIVE: What is the data fragmentation problem with surgeries?
SHAN JEGATHEESWARAN: We wanted to understand, how do we accelerate the innovation of AI into the OR and have it used ubiquitously in a trusted and consistent fashion? Because we weren’t seeing the uptake and the speed that we expected.
Based on feedback from respondents, in any given OR there are, on average, about seven software applications running, about four device manufacturers operating and about five unique data streams. All of that coming together has to be simplified, consistent and interoperable for us to come up with analytics that support better outcomes from a surgery perspective.
The analogy that I’ll use is your living room. Fifteen years ago in your living room, you probably had a sound system that had its own remote control and its own connectivity. Then you had a TV that had its own remote control, and then you had a DVD player. So on your coffee table, you had 10 remote controls and everything doing its own thing.
Fast forward to today, you likely have your remote control on your iPhone. Everything seamlessly connects out of the box with technologies like Bluetooth. And that’s the vision for us from an OR perspective — out of the box, seamless integration.
How can AI be used in an operating room?
We just came back from the largest robotics surgery conference, SRS [the Society of Robotic Surgery’s annual meeting]. The starting point of a lot of AI today is surgical video. We see a lot of use cases there, starting with education.
There’s a lot of value in learning from really complex cases, learning from mistakes that happen, or learning from where things went well so you can replicate that. This can be a very manual exercise, and today, a lot of the time, it is human-driven, but where we see the industry going is AI managing all of that from a pattern recognition perspective and from a learning perspective.
Another theme is intraoperative: during a procedure, giving a surgeon real-time guidance. Identification of critical structures, identification of go/no-go zones as you’re in the procedure. If there is a bleeding event, the system can provide best steps for mitigation.
What was also exciting is not just laparoscopic video, but ambient data from the OR itself. That’s where you can unlock use cases around workflow efficiency: the number of movements it took during the procedure, how long procedures took.
So now you start to get variables that AI can pattern-match on efficiency, actions that relate to good outcomes and actions that relate to not-so-good outcomes. Now you start to get a data set that’s really rich, that has impact not only clinically, but also operationally.
What kind of feedback are you getting from surgeons? Are they excited about the use of AI in surgery or more skeptical?
I think the nature of your question is, is there a concern around too much oversight or data being used in the wrong way? Of course, there’s always going to be a diversity of opinion, but generally speaking, surgeons are enthusiastic about using video and other data sets in the OR from a performance perspective.
Surgery is a team sport, right? What did the patient do before the procedure that helped or hurt the outcome? What were the clinical teams doing during the procedure? What did the surgeon do? Post-procedure, how is that handled? That longitudinal data set is really valuable. Almost all surgeons that I speak with recognize the value of that.
We are developing the solutions at J&J with input and feedback from surgeons themselves. And so what I see is, when you involve folks in the solution, there’s obviously a trust and credibility to that solution. And when you don’t, and you show up with a black box of an AI algorithm, then of course, there’s going to be some level of concern or anxiety around usage.
Are you building these features into J&J’s surgical robots?
For Ottava, we haven’t publicly shared what will go live in the robot. The way I would characterize it is, the devices that we build will be connected and intelligent. Arguably, there’s going to be software that’s running on the robot that drives the intelligence and its performance. And then there’s going to be software that’s running on Polyphonic [a digital surgery feature that J&J announced last year] that is pre- / post-procedure. That unification of data sets, unification of insights, is going to be the most valuable for the teams. We want to arm the teams with as much analytics and insights as possible, and let them choose what’s effective and what’s important for them.
The robot, along with the other devices we sell, the robotic instruments and the capital equipment we have, is a very important part of that value chain, but not the only part of that value chain.