395- When AI Makes Healthcare Mistakes Lives Are Lost w/Sandeep Shenoy

Phil Howard & Sandeep Shenoy


THE IT LEADERSHIP PODCAST
EPISODE 395



Sandeep Shenoy

ON THIS EPISODE

Sandeep Shenoy has spent seventeen years integrating AI into medical devices at Viant Medical. He's not anti-AI. He builds it every day. But he's watched the industry rush to deploy systems trained on data sets that were never designed to be fair.

The problem isn't the technology. It's the history baked into the data. "When AI makes a mistake in finance, you lose money. When it makes a mistake in healthcare, you could lose a life." Early fitness trackers failed to read female heart rate patterns because of biased training data. That already happened.

We get into bias audits as a continuous process, fairness-by-design principles, and accountability frameworks. Plus why ethics needs to move out of the compliance box and into business value.

The uncomfortable truth: patients have zero say in how these devices work. They just know a doctor told them they need it. By then, discrimination might already be built in.

Show Notes


Navigate through key moments in this episode with timestamped highlights, from initial introductions to deep dives into real-world use cases and implementation strategies.

[00:00:09] Introduction — Sandeep's role at Viant Medical

[00:00:30] Seventeen Years in Healthcare — Digital transformation in medtech

[00:01:27] AI in Medical Devices — Learning from data to support doctors

[00:02:33] Real World Examples — FDA-approved autonomous AI systems

[00:04:05] Ethical Considerations — When AI mistakes cost lives

[00:05:42] Gender Bias Problem — Historical data overrepresents men

[00:06:44] Fitness Tracker Failures — Female heart rate pattern misreads

[00:07:06] Cybersecurity Concerns — Protecting patient safety and trust

[00:10:13] Regulatory Frameworks — FDA's Good Machine Learning Practice

[00:11:47] Ongoing Validation — Continuous testing requirements

[00:13:32] Bias Detection Methods — Fairness-aware algorithms and audits

[00:15:36] Real World Failures — Oncology AI unsafe treatment recommendations

[00:18:36] Global Standards — EU AI Act and regulatory alignment

[00:20:31] Organizational Implementation — Building ethics into deployment stages

[00:23:30] Business Case for Ethics — Moving out of compliance box

[00:26:05] Future of AI in Healthcare — Intelligence, trust, and collaboration

[00:27:58] AI Amplifies Humanity — Not replacing humans but assisting them

[00:29:47] Closing Thoughts — Building responsible intelligent machines

KEY TAKEAWAYS

Bias audits must be a continuous process, not a one-time checkbox
Ethics protects innovation by building trust and avoiding regulatory fines
AI accountability requires defining who owns decisions upfront

TRANSCRIPT

395 -  Sandeep Shenoy (Host Michael Moore)

00:00:09 Michael Moore: Hi, I'm Michael Moore, hosting this podcast. I'm here with Sandeep Shenoy, IT manager for Viant Medical. Welcome to the program, Sandeep.

00:00:17 Sandeep Shenoy: Thanks, Michael. It's great to be here.

00:00:19 Michael Moore: Awesome. Let's talk a little bit about you, your experience, and Viant Medical. So you're in healthcare? Yep. How long have you been in healthcare?

00:00:30 Sandeep Shenoy: Yeah, so I have been in healthcare for over seventeen years. I've been a digital transformation leader for those seventeen years, specifically in healthcare and medical device manufacturing.

00:00:42 Michael Moore: It's tough enough to be in healthcare, and you've been there for seventeen years. And on top of that, you've been doing digital transformation. Tell us a little bit about the digital transformation.

00:00:53 Sandeep Shenoy: Yeah. I specialize in integrating AI, blockchain, and the Internet of Things in medtech environments. We manufacture medical devices, and our job is to make sure all these systems are connected. We build algorithms based on the data, and then we secure them through blockchain and everything. So it's all about innovation in medical devices. That's what we do. And as a manager, I look after team members from the quality perspective and the digital transformation perspective.

00:01:27 Michael Moore: Well, let's dive into this a little bit. AI and medical devices: let's break that down and figure out what that actually means.

00:01:35 Sandeep Shenoy: Yeah. So in simple terms, AI in medical devices enables machines to learn from data, supporting doctors, engineers, and manufacturers in making faster and more accurate decisions. It could mean, for example, an imaging system that detects a tumor that even a trained specialist might miss, or a wearable device that predicts heart failure before the symptoms appear. In our recent work, which again is in medical device manufacturing, I've seen AI models improve precision manufacturing by predicting what might go off track before the devices are shipped to patients. So that's what AI in medical devices is in simple terms: learning from historical data, making more meaning out of it, and implementing that in devices that support the clinical industry.

00:02:33 Michael Moore: Wow. I mean, there are so many different ways racing around in my brain right now that you could use that to transform business and transform healthcare, right? Yeah. Give us some more examples.

00:02:47 Sandeep Shenoy: Yeah, I can point out some major developments in this area. The first FDA-approved autonomous AI system detects diabetic retinopathy without a doctor's input, and AI in MRI imaging has accelerated scan times by thirty to fifty percent, which improved efficiency and patient throughput. Right. So there are some real-world use cases in which AI has been really transformative.

00:03:17 Michael Moore: Wow, that's huge. It's not just from the business angle, but also from the healthcare angle, merging them all: technology, healthcare, and business. I like it. Yeah. It does bring up a good question, though. AI in itself can be pretty controversial in some regards, definitely in how it's used, and especially with some of the safety that may be missing from it right now. Especially in healthcare, which has some of the strictest guidelines around cybersecurity, and which has strict guidelines of its own, not even talking about technology. Let's talk a little bit about the ethical considerations that are floating around when it comes to medical devices and AI.

00:04:05 Sandeep Shenoy: Yeah, sure. The world is turning towards AI, and AI in healthcare, as the world knows, is truly transformational. But it forces us to ask some uncomfortable yet necessary questions about fairness, privacy, and accountability. I believe the implications of healthcare AI are extremely significant. When AI makes a mistake in finance, you lose money. When it makes a mistake in healthcare, you could lose a life, right? The conversation around AI ethics really started gaining momentum when we saw real-world failures like biased algorithms in hospital systems and privacy breaches in health apps. And now there is a social dimension to this issue as well. As you know, Michael, AI works based on historical data and trends, and the data we have collected historically reflects human biases. For instance, healthcare data sets have often overrepresented men, due to historical job roles, research participation, or the dominance of men in the past. So if the AI builds a trend or a pattern based on history, and the history is biased, it's going to give a biased outcome as well. That imbalance means algorithms can unintentionally perform worse for women or underrepresented groups, and when it comes to healthcare, that's a big concern, right?

00:05:42 Michael Moore: That's a great point, and I can't underscore it enough. I mean, that's already a big problem right now; it's been a big problem for a long time. And all the training material that you're going to use for AI is built on these books that were all built around males. That is a huge thing to point out. And it's accurate: whatever data goes in is what gets sustained. Garbage in, garbage out. Right.

00:06:08 Sandeep Shenoy: Exactly. Take the gender bias in healthcare monitoring devices. Due to biased training data, early fitness trackers failed to accurately interpret female heart rate patterns. Things are changing now because there is synthetic data feeding that tries to balance this out. But AI ethics has become a discussion point, and there's a lot of pullback on AI because of the initial push. And as you clearly said, it's based on the data. If the history is bad, the future is going to be bad as well, unless you implement it carefully.
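As a rough illustration of the rebalancing Sandeep mentions, the Python sketch below (hypothetical column names, and a deliberately crude duplication-based oversample rather than any vendor's actual synthetic-data pipeline) shows how a training set that underrepresents one group could be evened out before model training:

    import pandas as pd

    def oversample_group(df, group_col, target_group, random_state=0):
        """Duplicate rows of an underrepresented group until it matches the largest group.

        A crude stand-in for the synthetic-data balancing described above; real
        pipelines generate new synthetic records instead of copying existing ones.
        """
        counts = df[group_col].value_counts()
        deficit = counts.max() - counts.get(target_group, 0)
        if deficit <= 0:
            return df
        extra = (df[df[group_col] == target_group]
                 .sample(n=deficit, replace=True, random_state=random_state))
        return pd.concat([df, extra], ignore_index=True)

    # Hypothetical heart-rate training set where female records are underrepresented.
    train = pd.DataFrame({
        "sex":        ["M"] * 8 + ["F"] * 2,
        "heart_rate": [72, 75, 70, 68, 80, 77, 74, 71, 82, 79],
        "label":      [0, 0, 1, 0, 1, 0, 0, 1, 1, 0],
    })
    balanced = oversample_group(train, group_col="sex", target_group="F")
    print(balanced["sex"].value_counts())  # M and F now appear 8 times each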

00:06:44 Michael Moore: I think you're right. And I think there are other concerns as well, right? You're in the healthcare field, and I said right off the bat, cybersecurity is a huge deal. HIPAA is a huge deal. Patient confidentiality is a huge deal. And we've seen recently that AI has been failing in this regard, right? Yep. What are your thoughts on that?

00:07:06 Sandeep Shenoy: Yeah. So cybersecurity, as I said, is one of the major concerns with AI, and the ethical part of AI includes privacy. That's a really important topic, Michael, because in healthcare, cybersecurity isn't just about protecting data, it's about protecting patient safety and their trust, because you're getting so much information from them. AI cannot work without data, so you need data, and as all that data gets collected, it's about securing it. The attack surface is massive. Hospitals, clinics, production lines for medical devices, all of it is connected now, because for all these systems to work, they have to be connected, right? Legacy systems, medical devices, all these IoT sensors, they were not designed with modern security in mind. So we see a lot of ransomware attacks and data breaches, and they are rising fast. Healthcare has become an easy target because we have so much data, and cyberattacks are a common thing these days.

00:09:10 Michael Moore: You nailed it. And I think that security is like the foundation of a house, right? If you don't build it at the beginning, it's really, really difficult, next to impossible, to truly get security working, just like it would be to build a foundation under an already existing house. So I think you've brought up a good point here: trying to inject security afterwards makes it very difficult and leaves you very open to vulnerabilities. I'm hoping that in the future they'll improve the foundation a bit when they come out with products geared towards AI. I do want to double back on the medical devices. The medical devices themselves have strict guidelines on how they're built and how they're tested. Are there any rules around implementing AI into these devices? Do they follow the same medical device testing guidelines? Are they able to skirt them? I don't know the answer to that.

00:10:13 Sandeep Shenoy: Very good question. So now we have frameworks, with all these regulators pushing hard on several acts. The FDA has GMLP, Good Machine Learning Practice, which is a guide for developers to ensure transparency and human oversight throughout the AI lifecycle. It's not like in the past, when it was relaxed. The AI/ML action plan for software as a medical device emphasizes ongoing validation, not just pre-market approval; it expects that you recurrently review your outcomes. And in Europe, I just want to highlight that the AI Act classifies medical AI as high risk, requiring fairness testing, documentation of model logic, and real-world performance monitoring. So there's a lot of push. We are also seeing, in the medical device industry, that the ISO frameworks have begun to integrate AI governance into quality and risk management, meaning ethical compliance is now embedded in the product life cycle. In our companies we have added an AI quality check, or AI audit, as an operation sequence in our production routings. So a product goes through an AI audit team to make sure it passes certain validations before we ship it. There is a lot of push from global regulators, and also within companies these days.

00:11:47 Michael Moore: Great answer to the question, and I'm actually glad to hear that they are starting to implement this. I was struck by the ongoing testing. I think that is a wonderful idea, and actually not just a wonderful idea; I think security would fail if it wasn't done that way, right? These things change so quickly, and the fact that it's evolving means you have to have an evolving process to get it to work. So that makes sense to me. I think they're absolutely right, and I'm glad you pointed out that a key difference from other security measures is that this one needs to be constantly watched. Good observation. I think all the cybersecurity folks of the world are going to be wrestling with this for the foreseeable future, as it only gets quicker and harder to maintain.

00:12:44 Sandeep Shenoy: I agree.

00:12:45 Michael Moore: We've already gone over so much stuff, and there's so much more to go over here. I really want to dive back into the bias for a minute, and I'm just going to circle back to it, because I was thinking about all the different bias that already happens in the medical industry anyway. And then I was thinking about AI and the constant validation checks that you mentioned. Have they implemented continuous ethical evaluation checks? Is that a thing? Have they implemented any continuous bias checks or regulations?

00:13:32 Sandeep Shenoy: That's a good question. It all starts with data diversity, right? So how do we detect or reduce bias as an ongoing process? The first thing we should ask is who is represented in the data set and who is missing. In my work, we use fairness-aware algorithms that can detect imbalances in the model training data. For example, if a heart rate data set underrepresents older women, we retrain the model using synthetic data balancing techniques. We call these bias audits, where we test model outcomes across gender, ethnicity, geography, and so on. And it isn't just a one-time fix, it's a continuous process, not just a set of instructions; it's an improvement cycle, much like quality assurance in manufacturing. I can give you a real-world example: Google Health's dermatology AI was retrained with over sixty-five thousand diverse images to improve its accuracy and fairness across skin tones and conditions. So it's an ongoing process; it's all about conducting repeated audits. And the FDA now requires demographic performance reporting for AI/ML-based medical devices, which is also pushing for greater transparency and equality. So yeah, we are seeing some thorough steps, as continuous improvement, to make this better.
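To make the bias-audit idea concrete, here is a minimal Python sketch of reporting model performance per demographic group and flagging gaps. The column names, data, and the five-point threshold are hypothetical; this illustrates the practice Sandeep describes rather than any actual FDA or Viant tooling.

    import pandas as pd

    def demographic_performance_report(df, group_col, label_col="y_true", pred_col="y_pred"):
        """Per-group sensitivity, specificity, and positive-prediction rate."""
        rows = []
        for group, g in df.groupby(group_col):
            tp = int(((g[label_col] == 1) & (g[pred_col] == 1)).sum())
            fn = int(((g[label_col] == 1) & (g[pred_col] == 0)).sum())
            tn = int(((g[label_col] == 0) & (g[pred_col] == 0)).sum())
            fp = int(((g[label_col] == 0) & (g[pred_col] == 1)).sum())
            rows.append({
                group_col: group,
                "n": len(g),
                "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
                "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
                "positive_rate": (tp + fp) / len(g),
            })
        return pd.DataFrame(rows)

    # Hypothetical audit sample: true outcomes vs. model predictions, tagged by sex.
    audit = pd.DataFrame({
        "sex":    ["F", "F", "F", "F", "M", "M", "M", "M"],
        "y_true": [1,   1,   0,   1,   1,   0,   1,   0],
        "y_pred": [1,   0,   0,   0,   1,   0,   1,   0],
    })
    report = demographic_performance_report(audit, group_col="sex")
    print(report)

    # Flag the model for rebalancing if the sensitivity gap exceeds 5 percentage points.
    gap = report["sensitivity"].max() - report["sensitivity"].min()
    if gap > 0.05:
        print(f"Sensitivity gap of {gap:.2f} between groups: retrain with balanced data.")

Run as a recurring job against fresh production data, a report like this is what turns a one-time check into the continuous audit described above.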

00:15:12 Michael Moore: I'm glad they're moving on this so quickly, though. I actually didn't expect we'd have some of this stuff going so far, so I'm feeling a little bit better about this. I know there's a long way to go. Can you share some more real-world examples for us? I know we talked a little bit about the Google Health one, but I think if you could share some more examples like that, it would be super helpful.

00:15:36 Sandeep Shenoy: Yeah, I can share some examples from my case studies and work that impacted the world because of AI in medical devices in particular. An early oncology AI program, run by one of the leading companies, taught us one of the most important lessons in healthcare AI: it recommended unsafe treatments because it was trained on synthetic data, not on clinical data, which was a major ethical oversight. Another example is an AI imaging algorithm that performed well in lab settings but failed in actual hospital environments; regulators later halted the approval until the developers could explain how the model reached its decisions. Another example is a major health chatbot which misdiagnosed women due to gender bias in its training data. And in a parallel world, outside healthcare, a major credit card provider was accused of gender discrimination in its credit limit algorithm. So there are a lot of failed real-world examples of AI that was not ethically implemented.

00:16:58 Michael Moore: I'm glad that we're talking about this, because this happens more often than it's caught.

00:17:05 Sandeep Shenoy: Yeah.

00:17:06 Michael Moore: And my concern is that there are companies that fix this only when it's caught, rather than trying to be proactive about it. I think it's a really big point as to why we need these regulations and why we need this set up this way, because there's no advocacy here from the patient's side. They're going to use this device because a doctor prescribes it or says that they need it, and they don't have any say over the device. They don't understand how it works. They just know that they need to have it because of a medical reason.

00:17:42 Sandeep Shenoy: Yep. You're right.

00:17:44 Michael Moore: And they will never know if they were discriminated against by the AI, right? Unless it's pointed out, and by that time, it's kind of too late, right?

00:17:56 Speaker 3: So you're right. You're right. Yeah.

00:17:58 Sandeep Shenoy: So yeah, I think that's where all these global regulators are playing a great role now. Instead of being reactive, there are now a lot of proactive initiatives to protect things before they happen.

00:18:13 Michael Moore: What I will say is, like you mentioned, these global regulators really have a lot on their plate. Yeah. Do you think they are going to end up arriving at standards, or do you think it's going to end up kind of like privacy, where it's not one rule for everybody but depends on your region and how they interpret it, and we're going to end up with a patchwork of different AI regulations?

00:18:36 Sandeep Shenoy: Yeah, there are a few regulatory moves that we can see. The EU AI Act is expected to be enforced by twenty twenty-six, which sets a new standard for fairness and explainability in healthcare. Health Canada and the UK's MHRA regulatory board are also drafting aligned frameworks to ensure global consistency. Another important thing is that AI devices now require algorithmic transparency reports during their 510(k) submission process, before they get launched. They need to make sure there is a transparency report, and the documentation must also prove the absence of demographic bias, ensuring equality and all that. So there is already a lot of push from the auditors and the global regulators, and they also have a lot of changes to the frameworks coming.

00:19:33 Michael Moore: I hope they're able to adopt a global standard, but I do feel like even if they do, in places with stricter privacy you're going to have different interpretations of that rule, or you might even have stricter guidelines regionally that are put on top of those rules, whereas in some places you would not have that. Right.

00:19:55 Speaker 3: Yep I agree. Very cool.

00:19:56 Michael Moore: Interesting. Well, let me ask you a question. Obviously, while we wait for the global standard to come out and for everything to fall into place, everyone knows this is a journey, not a destination; it's going to constantly change and update, especially because AI moves so fast. But what can specific organizations do to make sure they're building AI ethics into their organizations and don't fall prey to problems like the ones that credit card company ran into?

00:20:31 Sandeep Shenoy: Yeah, outside of the regulators pushing all this, there are organizations that are taking proactive steps. It all starts with intention: ethics needs to be built into every stage of an AI deployment, not added later as a compliance checklist, which is what used to happen in the past. The recommendation is a few practical layers. First, fairness by design, which means using diverse data sets and applying fairness metrics from day one to avoid bias. Then use transparency tools. I don't know if you have seen it, but in the past, when you used chatbots like ChatGPT, they never used to tell you why; now there is reasoning behind it, and it tells you why it is giving you that answer. Transparency tools like LIME and SHAP, these are technical terms, are techniques used to make model reasoning visible and understandable to humans, so when the model gives you a result, you know why it gave you that result. Another point is the accountability framework. When AI gives you a result and it doesn't work, you don't know who to blame, right? So it's important to clearly define who owns the decision. Is it the doctor? Is it the coder? Is it the manufacturer? It cannot be nobody, and it cannot be AI; AI cannot just be the owner of everything. So accountability becomes part of it. Then, of course, privacy protection methods that ensure sensitive health data remains secure and decentralized, so that people can trust these innovations. And organizational culture is where it all stands. We encourage teams to create AI ethics committees within their research and development groups that are responsible for these deployments, and to put in place ethical KPIs, such as a bias reduction rate and a transparency score, to measure the progress of their initiatives. So it's about proactively putting all these practices in place, and also putting a KPI on how tightly we are aligned with the ethics cycle.
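For the transparency layer, teams typically reach for LIME or SHAP. As a self-contained stand-in for the same idea, the sketch below uses scikit-learn's permutation importance on a hypothetical risk model to surface which inputs drive its decisions; the feature names and data are invented for illustration.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    feature_names = ["age", "resting_heart_rate", "bmi"]  # hypothetical inputs

    # Synthetic stand-in data: risk is driven mostly by the first two features.
    X = rng.normal(size=(1000, 3))
    y = ((X[:, 0] + 0.7 * X[:, 1] + rng.normal(scale=0.5, size=1000)) > 0).astype(int)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Shuffle one feature at a time and measure the accuracy drop: a simple,
    # model-agnostic view of which inputs the model actually relies on.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for name, score in sorted(zip(feature_names, result.importances_mean),
                              key=lambda pair: -pair[1]):
        print(f"{name:>20}: {score:.3f}")

Dedicated explainers like SHAP go further by attributing each individual prediction to its inputs, which is what makes a single flagged case reviewable by a clinician.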

00:22:54 Michael Moore: So how do we get companies to get on board, though? This is the same problem you have with cybersecurity: investing in it and implementing things. Because cybersecurity, when implemented correctly, can be an innovator. AI is certainly an innovator, and ethical AI can be an innovator as well; you can do amazing things if you're getting the right data and putting it out there. So there are lots of reasons to implement it. But how do you get businesses to come around on that?

00:23:30 Sandeep Shenoy: That's a great question, Michael, and honestly, it's one that I ask myself a lot. I think the first step is to move AI ethics out of the compliance box. It should be built into the business value. Companies adopt ethics faster when they realize it's not slowing innovation, it's actually protecting it. For example, a transparent AI model builds clinician trust. A fair algorithm avoids damage to the brand name and avoids regulatory fines and penalties. A secure data pipeline protects the brand. So ethics isn't just the right thing to do; it is smart risk management and smart business to avoid liability. Second, leadership has to set the tone. When the CFO or the CTO makes ethics part of the innovation culture, not just an audit step, I think it naturally flows down to the engineers, the data scientists, and the product managers. So it's small wins. I believe if it's part of the principles, it's eventually going to become part of company policy.

00:24:43 Michael Moore: I love that answer. We've got to get it out of the compliance box. That rings true for cybersecurity, too. One of the things I truly believe is that when people think they have to do something only because they're required to, it stays a box to check. But if you're able to actually explain the reasons, like you mentioned, to protect the brand, as a risk management piece, that's when people want to do those things, because they realize the benefits. I think that's a great answer: we've got to get it out of the compliance box. It's very true, and I think that's a good way to do it. So we've reached the point where we talk about the IT crystal ball. This segment is meant to go over the future of IT, and in true fashion, I think we should mold this question to be: what's the future of AI in medical devices? I know that AI is already there, but I would like to hear your interpretation of what the future holds for AI in medical devices and see where that conversation goes.

00:26:05 Sandeep Shenoy: Yeah, that's a fitting closing question, Michael. I think the future of IT, especially in healthcare, will be defined by intelligence, trust, and collaboration, not just innovation. It's going to be built on several factors. AI will become more human-aware, not just predicting numbers or outcomes but actually understanding the context, the why behind every medical decision and every result we get. We will see systems that explain themselves, learn ethically, and adapt responsibly, almost like digital colleagues, not just digital tools. In medical IT, AI will move deeper into preventative care, which is already happening: spotting disease risks early, optimizing hospital operations, ensuring that every patient gets timely and personalized treatment. But the real future isn't just smarter technology; it's responsible technology. As I mentioned earlier, it's going to be built on intelligence, trust, ethical principles, and collaboration, more than just innovation.

00:27:23 Michael Moore: Yeah. You're painting a very rosy picture of the future of medical IT. I hope that's how it goes.

00:27:29 Sandeep Shenoy: And I also would like to add something else. I would say AI is not replacing humans; it's going to amplify humanity. That's where the real revolution will happen. It's going to assist us, not replace us. There is a lot of fear in people that it's going to replace them, but it's going to help people out. We just need to reskill, adapt, and be more accountable for the results. Right. So, yeah.

00:27:58 Michael Moore: I like that you said that, mainly because I think this message needs to go out to a lot of business leaders right now. And that message is: if you're investing in AI to replace people, what you don't understand is that you're going to need people to help with that AI, right? Yeah. AI is not something you can just put in and let go, unfettered and unmonitored. Right? Yep. AI is something that needs to be monitored, adjusted, and tweaked. And I think you're right about the reskilling.

00:28:36 Sandeep Shenoy: Right, right.

00:28:38 Michael Moore: It's a big deal, because there are lots of people working towards this, and if they don't do it, the consequences are going to be dire.

00:28:47 Sandeep Shenoy: You're right.

00:28:48 Michael Moore: That's right. I mean, that's what we've talked about this entire time. The consequences: this is all about risk management, this is all about fairness, and if you don't build that into your brand, then your brand is going to suffer. We've seen this happen time and time again, not just with AI, but with people who don't build this culture into their brands. I think that's huge. I do marketing on the side as well, so I know all about brands and all about protecting them, and it really would scare me if someone told me that my brand could go belly up by not doing the right thing. It should scare companies, I think.

00:29:33 Sandeep Shenoy: Yep, yep. Totally agree.

00:29:35 Michael Moore: So I guess, as a closing point, you stand by this rosy picture, and you think that we're going to be able to make it through and build this into our systems and our culture?

00:29:47 Sandeep Shenoy: Yep. I truly believe that the real AI innovation isn't just about building intelligent machines; it's about building responsible ones. So I truly believe in it, as long as we understand AI and the ethical part of AI together.

00:30:04 Michael Moore: I'm Michael Moore. I've been hosting this podcast, and I've been talking with Sandeep Shenoy, the IT manager of Viant Medical, all about AI in medical devices and a future that looks like it's going to be a rosy picture. Sandeep, thank you so much for joining the program and bringing one of the coolest, most in-depth talks we've had regarding AI.

00:30:30 Sandeep Shenoy: Thank you. Michael, it's been a great discussion.



You’ve Been Heard

You’ve Been Heard is where IT leaders stop being sidelined and start being amplified. We’re the triple-threat platform: podcast, community and vendor-neutral advisory that elevates your voice, your value, and your influence because when IT leaders rise, so does everything else.

© 2025 You've Been Heard. All rights reserved.