Ep. 199: AI Demystified: What It Is - and Isn’t
Show notes
Moderator: Raphael Bernard-Valnet (Lausanne, Switzerland)
Guests: Roland Wiest (Bern, Switzerland) and Monica Moroni (Trento, Italy)
In this episode, Raphael Bernard-Valnet speaks with Roland Wiest and Monica Moroni about the fundamentals of artificial intelligence in clinical neurology and its practical implications for neurologists. They discuss key applications such as imaging analysis, diagnostic and decision-support tools, and emerging use in wearables, while addressing interpretability, standardisation, and current barriers to routine clinical implementation.
Show transcript
00:00:00: Welcome to eanCast, your weekly source for education, research, and updates from the European Academy of Neurology.
00:00:17: I'm very honored to have as guests Professor Roland Wiest and Dr.
00:00:40: Monica Moroni. Roland Wiest is Professor of Advanced Neuroimaging at the University of Bern.
00:00:47: He has an interest in AI in radiology, including stroke analysis, MS lesion segmentation, brain tumor imaging, and epilepsy.
00:00:59: Monica Moroni is a senior researcher at the Data Science for Health Unit at the Bruno Kessler Foundation in Trento.
00:01:05: Her work focuses on projects to integrate AI for predictive, preventive, and personalized medicine in Parkinson's disease and multiple sclerosis, as well as machine learning models that can predict falls in Parkinson's patients.
00:01:22: And today we'd like to answer a very simple question: what should a neurologist actually know about AI, and how to use it?
00:01:31: So maybe my first question will be to Monica.
00:01:35: When we say AI in medicine, what are we talking about?
00:01:39: Can you explain this to us, or to colleagues who haven't taken any computer science course yet?
00:01:47: Yes, sure!
00:01:49: I would say that the term AI in medicine is like an umbrella term which includes many different things.
00:01:57: So, from a technical point of view, we talk about AI in medicine every time
00:02:02: we have a system or solution that has learned autonomously some patterns and structures from data, and uses these patterns to make sense of other information.
00:02:15: In other words, an AI system has learned and is able to provide a suggestion or support without being explicitly told step-by-step instructions on how to do that.
00:02:30: From a practical point of view, if we want to think of some examples, AI in medicine can refer to many different things.
00:02:37: For example, we have support tools that assist in technical tasks, such as long note summarization, but also, for example, image or volume segmentation.
00:02:50: We can have diagnostic or prognostic models that support making a diagnosis based on the data and information that we have, or predicting the risk or progression of some disease.
00:03:05: All these are just some examples of implementations of AI in medicine.
00:03:11: Okay, thank you so much.
00:03:14: Maybe a more specific question to you, Roland: can you explain how this MRI software performs all these segmentations that Monica mentioned?
00:03:25: How does it work? Sometimes we have the impression that it could be a black box.
00:03:30: Does it really work?
00:03:32: What is the math behind it, and the system behind it?
00:03:36: Thank you, Monica,
00:03:37: first, because you've already elaborated on the topic, but these are indeed great questions.
00:03:42: I think we need to ask two different questions.
00:03:44: The first is: what's happening in the machine? And then: what is happening thereafter?
00:03:50: Because there are two major domains at the moment where AI is implemented. One of the key applications currently is the acceleration of imaging.
00:03:59: That means making imaging techniques faster in order to have an earlier diagnosis, but also so that more patients can be examined.
00:04:09: And there, we simply have AI tools that try to solve one problem:
00:04:13: to generate a high-resolution image from low-resolution input.
00:04:17: There are several technologies; I don't want to go too much into detail, but what is usually done is sparse sampling, because not all data points in an image contain relevant information, and the image is then reconstructed.
00:04:35: The opportunity is that we can acquire these images faster,
00:04:38: but it also generates some challenges.
00:04:41: And these challenges are related to artifacts, because artifacts get pronounced by these filtering and reconstruction techniques.
00:04:51: Artifacts that are already in the image suddenly become visible.
00:04:55: That means we need to have some control techniques that enable us to be sure whether this added information really consists of artifacts or not.
00:05:05: So this is one of the challenges we currently face.
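To make the sparse-sampling idea concrete for readers of these notes, here is a minimal sketch (not any vendor's reconstruction pipeline): it simulates undersampled MRI k-space with NumPy and shows the aliasing artifacts that an AI reconstruction model would be trained to suppress. The phantom, sampling pattern, and function name are invented for illustration.

```python
import numpy as np

def undersample_kspace(image, keep_every=2):
    """Illustrative sketch only: simulate sparse MRI sampling by discarding
    k-space lines, then reconstruct by naive zero-filling."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    mask = np.zeros_like(kspace, dtype=bool)
    mask[::keep_every, :] = True          # keep every n-th phase-encode line
    recon = np.fft.ifft2(np.fft.ifftshift(kspace * mask))
    return np.abs(recon)

# Toy "phantom": a bright square on a dark background.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

recon = undersample_kspace(img, keep_every=2)
# 2x undersampling folds an aliased ghost of the square into the image --
# exactly the kind of artifact a learned reconstruction must suppress.
```

Zero-filling is the naive baseline here; deep-learning reconstruction methods aim to recover the missing lines without introducing the ghosting this sketch makes visible.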
00:05:08: The other one, and it has been nicely elaborated already by Monica, is a completely different topic, because this is mainly the domain of learning imaging features with models that have been trained on large datasets, either annotated or unannotated.
00:05:27: And you have mentioned already, Monica, the decision support tools.
00:05:30: So they give us some advice on how we can read these images, for example whether there is a lesion that is spotted or not.
00:05:38: There are other technologies that I call opportunistic screening techniques. They usually run in the background in order to inform the radiologist, or also the clinician, if there is a certain abnormality in the images, and this can then be checked by the reader.
00:05:56: But there is also a very important domain, which is quantitative imaging.
00:05:59: So we get information, for example, from volumetry of MS lesions.
00:06:05: You get information about brain volumetry, and it helps us not only to make presumptions about certain diagnoses based on an atrophy pattern; it also shows how a disease evolves over time.
00:06:19: So overall, I think there are many very interesting applications that are already used in the clinic.
00:06:26: Okay, but maybe following up on that: we sometimes see fancy papers on AI in neurology, but sometimes you have the impression that it is not implemented into daily practice.
00:06:40: So could you comment?
00:06:42: Yes, I love this topic, because we talk all the time about how AI helps us to make, let's say, procedures faster, or maybe also
00:06:54: to make our work easier.
00:06:56: But I think this is not the key element of AI.
00:06:58: We should understand AI as a complementary reader, and maybe we should step back into how humans usually reason, because under daily pressure we have to analyze a lot of data all the time.
00:07:12: So, on average, for every slice in an image, a radiologist has roughly one to two seconds.
00:07:18: So what do we use?
00:07:19: We use heuristic decision-making; that is, we create shortcuts and profit from what we have learned in the past,
00:07:27: but we don't go completely into analytical thinking processes anymore.
00:07:32: All of us read the newspaper in the morning; when you're in a hurry, you just read it part by part.
00:07:43: And this is what we usually do in our daily work; only if we have some kind of divergence in the analytics or the outcome
00:07:51: do we activate our complex, analytic, and systematic thinking system.
00:07:56: And there I see the main role for AI: just like a complementary second reader that provides us with information.
00:08:03: It helps to prevent overlooking, or let's say not spotting, something that is maybe obscured somewhere in the images or at the borders of the images.
00:08:15: So the key element and the key added value is definitely this kind of complementary reading as a second reader in the workflow.
00:08:25: But just to be precise: for instance, at the Inselspital,
00:08:29: do you have some of these tools already implemented?
00:08:33: Yes, thank you for that question.
00:08:36: We're really pretty advanced.
00:08:38: We have a platform system where we have integrated fourteen different AI systems that are running in the background.
00:08:45: So, it means a human reader does not need to activate these AI tools.
00:08:49: We've structured and systematically implemented these kinds of so-called relevancy rules.
00:08:56: It is automatically detected which protocol needs which kind of AI processing.
00:09:03: Then the data are automatically sent into the cloud, analyzed there in an anonymized way, and the results are sent back.
00:09:11: Then these post-processed data appear in a reading window for the reader.
00:09:17: And then there is human decision making.
00:09:19: So you have to accept or reject the results of this post-processing, depending on whether you determine the decision making to be proper or improper.
00:09:29: But everything runs in the background.
00:09:31: It's fully automated.
00:09:33: We're currently processing roughly three thousand AI cases per month.
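The routing Roland describes, protocol in, matching AI pipelines out, can be sketched as a simple lookup; the rule table, protocol names, and pipeline names below are hypothetical placeholders, not the actual Bern configuration.

```python
# Hedged sketch of "relevancy rules": map an incoming study's protocol
# to the AI pipelines it should be sent to. All names are invented.
RELEVANCY_RULES = {
    "FLAIR": ["ms_lesion_segmentation"],
    "DWI": ["stroke_detection"],
    "T1_MPRAGE": ["brain_volumetry"],
}

def route_study(protocol: str) -> list[str]:
    """Return the AI pipelines matching this protocol; empty if none apply,
    so studies with no relevant rule are simply passed through untouched."""
    return RELEVANCY_RULES.get(protocol, [])
```

The point of such a table is that the human reader never has to trigger anything: the rules fire automatically per protocol, and unmatched studies bypass AI processing entirely.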
00:09:38: Okay, that's really great!
00:09:40: Maybe stepping back from neuroimaging, and maybe a question for you, Monica: where do you see that AI could be applicable in clinical neurology?
00:09:51: Maybe, I don't know, in wearables, EEG reading, or anything else... Where is the most promising application of AI now?
00:10:01: Thank you, this is a very good question.
00:10:04: So I would say imaging is probably the field that is most mature; there we already have implemented AI solutions.
00:10:15: Other very promising fields, I think, are wearables and sensors, both for processing the data and because of the amount of data available with these tools.
00:10:30: With AI, you could think of continuous monitoring that is feasible in terms of resources.
00:10:38: And this is going to open new avenues also for the monitoring of diseases or patients, so I would say wearables and sensors are promising data types for AI.
00:10:54: Regarding other data types, and also the integration of different modalities, there is work to be done in research settings before clinical application, because the results that we have now are still not so stable.
00:11:16: So before we implement them in clinical practice,
00:11:21: we probably need more work on robustness. Nevertheless, I think it's fundamental, also in these more research-oriented settings, to involve clinicians in the development of AI tools,
00:11:35: so that we know that what we are developing, as models or types of data integration, is something that could be meaningful for clinics and hospitals.
00:11:50: Okay, but from what I hear, on your side it is still at the research stage and not really implemented.
00:11:59: But maybe... I would be interested if both of you could comment on the limitations for neurologists using these tools in everyday practice.
00:12:15: So if I may start... I think the key element, as I tried to touch on previously, is really the level of standardization.
00:12:23: You need to develop concepts that clearly standardize your workflows: which kind of data shall be fed into the algorithm,
00:12:33: how they are composed, and how the whole processing pipeline is described.
00:12:39: It's not enough to have a tool; if it sits on a computer in another room, nobody will use it
00:12:48: while people are doing their work and reporting their analyses.
00:12:52: That means you need to provide access to this system.
00:12:55: You also have to ensure that the output of these results is available to the individual reader without extra effort and extra tasks.
00:13:07: And then of course, and this is the key element: it needs to be monitored.
00:13:11: The quality needs to be monitored.
00:13:14: That needs a quality assurance and governance process in place, so everybody can rely on these kinds of tools when using them.
00:13:23: That's how we can integrate them.
00:13:27: We also hand out integration guidelines for our residents on how they should deal with results produced by the AI, so we really have structured and complete reporting standardization.
00:13:45: Yes, I totally agree!
00:13:47: And I would also add another aspect to which we should pay attention: how people will use these tools,
00:14:01: and whether the output of those tools
00:14:07: is suitable for the application.
00:14:11: Let me try to explain better: we don't want the support given by a decision-support system to be too effective in convincing people.
00:14:24: It should be something used by clinicians, something to be integrated into their clinical reasoning.
00:14:33: So the way we present these results is going to affect how this integration works, and that is crucial for usability, fairness, and proper use of the tools.
00:14:50: And this is double-sided: both in terms of the development of a tool or solution, and also in terms of education for the people, the users, in the interpretation of that tool.
00:15:03: So it should be something key in the introduction and use of AI in medicine: how we present our results,
00:15:16: and how we teach users to interpret the tools and where they should pay more or less attention.
00:15:26: Okay, thank you so much!
00:15:28: And maybe along with your point, Monica: I know you work with wearables also.
00:15:35: Do you think that patients will agree to use these new tools? Because neurologists are one side, but there are also the patients.
00:15:46: Yeah,
00:15:47: that's also a good question.
00:15:49: So I think if the advantages are evident, then there will be no strong opposition to the use of these tools.
00:16:05: We should also think in terms of fairness and of the possibility of using these tools with some patients or with all; it is a complex framework, but the results in research settings are promising.
00:16:24: So I would say that if we find a way to ensure they are integrated into the workflow and can be used safely, also with respect to fairness and regulatory frameworks, then patients will have no problem using them.
00:16:46: Okay, thank you so much.
00:16:48: And maybe regarding the limitations: we spoke about a gap between research and clinic, but another gap would be how we neurologists understand what is behind these tools.
00:17:03: I know that we will have another episode on the black box question, but I'd still like to touch on it briefly and ask both of you: what do we do if a neurologist has difficulty understanding
00:17:15: why the AI suggested
00:17:17: this rather than something else?
00:17:19: Maybe Roland, you can comment on what you do when... You described that you get this feedback from AI in your neuroimaging tools.
00:17:27: That would be interesting to hear.
00:17:30: What happens if you disagree with the AI?
00:17:32: Yeah
00:17:33: I think there are two different aspects we need to take into account.
00:17:37: So if it really comes to a treatment decision, then interpretability is a key element.
00:17:43: That means we should understand: how did the machine come to its decision?
00:17:47: If it's just about, let's say, spotting a lesion or spotting an abnormality, then we can still handle it through the principle of... it's a term from US law called vicarious liability.
00:18:01: This is simply the principle that,
00:18:02: at the end, a human has to decide whether something is true or not.
00:18:06: It is the same for a senior consultant if you have a resident: at the end,
00:18:11: you need to judge whether the information is reliable or not. And as I said, there are certain things where interpretability doesn't play an important role: if you have a large vessel occlusion detector and this detector alerts, you have to check it.
00:18:32: You don't have to understand how a complex CNN has come to the decision; it's simply your task to verify or to falsify it.
00:18:41: If you get into a therapy recommendation, then it's much more difficult. We've seen examples in the literature several times that instead rely on something very simple, like a knowledge-based or rule-based implementation acting according to guidelines.
00:19:01: You can nicely combine these things: for spotting an abnormality,
00:19:05: we can use a complex system that you need not understand in all details, but if it comes down to a treatment recommendation, then everything needs to be
00:19:16: understood.
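Roland's contrast, an opaque detector versus transparent recommendation logic, can be illustrated with a toy rule-based check; the criteria and cut-offs below are invented for illustration and are not actual clinical guidelines.

```python
# Toy illustration of a rule-based (interpretable) recommendation step.
# The criteria and thresholds are invented, NOT real clinical guidelines.
def thrombectomy_candidate(onset_hours: float, nihss: int,
                           lvo_detected: bool) -> bool:
    """Every condition is explicit, so the reasoning can be audited --
    unlike the complex CNN that might have produced `lvo_detected`."""
    return lvo_detected and onset_hours <= 6.0 and nihss >= 6
```

The design point is the combination Roland describes: a black-box detector may supply an input such as `lvo_detected`, which the clinician verifies, while the treatment-recommendation step itself stays fully inspectable.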
00:19:18: Okay thank you so much!
00:19:20: And maybe, following from that, we know that AI models are trained on specific populations.
00:19:31: How do you think that this biases them, and could we really compare, for instance, some type of software between the US and Europe?
00:19:42: I don't know if you can both comment on this.
00:19:45: Happy to!
00:19:46: We learn a lot at the moment.
00:19:48: Something that has become common sense is that AI in real life never behaves as AI does in studies.
00:19:56: So there are many confounders: data drift, scanner differences. And I would even stress that it's not
00:20:05: simply a difference between the US and Europe, because we will have differences between Lausanne and Bern, for example, if the scanners have a different design, different field strengths, or different protocols.
00:20:18: Then, you know, this may heavily impact the quality.
00:20:22: I can just give an example for segmentation: results may differ if you switch between a 1.5 and a 7 Tesla scanner, and the differences may be as large as
00:20:33: the difference between, let's say, a healthy twenty-year-old person and an elderly person suffering from severe Alzheimer's disease.
00:20:41: So we need to be aware of that.
00:20:45: We come back again to this mantra, to the really key element: standardization is key.
00:20:51: So if you do this kind of analytics, then make sure these patients are examined at the same field strength, in the best case with the same scanner and protocol.
00:21:05: So I think what's really important is that we need to know about data drifts and rejection rates.
00:21:12: If they increase in your own center, then you should get alerted; you should be concerned!
00:21:18: The best thing, of course, would be to have more than one tool, to control for the deviation of one with another and to see whether you have converging results.
00:21:28: These can, of course, be two different CE-marked tools, but it's also possible, for example, if you just want to invest in one CE-marked tool, to use others that are publicly available in order to check your data. But you need to have some kind of quality assurance process that monitors the post-deployment efficacy of your system.
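The monitoring Roland calls for, watching rejection rates for post-deployment drift, can be sketched as a simple statistical alert; the baseline rate, the 3-sigma rule, and the function name below are generic illustrative choices, not part of any CE-marked product or regulatory requirement.

```python
# Hedged sketch of post-deployment monitoring: flag when the reader
# rejection rate for an AI tool drifts above its historical baseline.
from math import sqrt

def rejection_alert(baseline_rate: float, rejected: int, total: int) -> bool:
    """Alert if the observed rejection rate exceeds the baseline by more
    than 3 binomial standard deviations (a generic control-chart rule)."""
    if total == 0:
        return False
    rate = rejected / total
    sigma = sqrt(baseline_rate * (1 - baseline_rate) / total)
    return rate > baseline_rate + 3 * sigma

# e.g. 5% historical rejections; 18 rejections in the last 100 cases
alert = rejection_alert(0.05, 18, 100)
```

In a real quality-assurance process, such a check would run continuously over the accept/reject decisions readers already make in the reading window, so monitoring adds no extra work.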
00:21:52: Yes, if I can further comment on this from the technical perspective: what can also be done is to integrate these systems with some confidence scores, or some hints that users can use to know how much they can trust the results and support.
00:22:18: And there are cases where the system already knows that its confidence is too low, or that the specific image or specific sample seen at this moment is not similar to the training cohorts,
00:22:36: so to the data distribution where it performs well.
00:22:41: In some cases it could also be useful
00:22:43: if the system does not give a solution, but just says: for this data I cannot comment or give you support, for this specific type of data.
00:22:55: So that's something that could be integrated into these systems, and, as was said, continuous monitoring is also fundamental.
00:23:06: So during and after deployment, the success rate or the usefulness of the specific tool in that specific setting should be checked and monitored.
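The abstaining behaviour Monica describes can be sketched as a thin wrapper around any classifier; the confidence threshold, the out-of-distribution score, and the function name below are hypothetical stand-ins, not a specific clinical system.

```python
# Hedged sketch of "abstain when unsure": return no prediction when model
# confidence is low or the input looks unlike the training distribution.
from typing import Optional

def predict_with_abstention(confidence: float, ood_score: float,
                            label: str,
                            min_conf: float = 0.8,
                            max_ood: float = 0.5) -> Optional[str]:
    """Return the label only when confidence is high and the input is
    close enough to the training distribution; otherwise abstain (None)."""
    if confidence < min_conf or ood_score > max_ood:
        return None  # "for this data I cannot give you support"
    return label
```

Surfacing `None` instead of a forced answer is one way to keep a decision-support tool from being "too effective in convincing people": the clinician sees explicitly when the model is outside its comfort zone.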
00:23:21: Yeah, but sometimes monitoring may be more difficult for some questions, I don't know, especially for clinical diagnostics
00:23:31: or something like that: how do you find a real gold standard
00:23:34: for what the answer should have been?
00:23:37: Yeah, that's true.
00:23:39: In those cases, maybe already having a quantitative estimate of the uncertainty or confidence of the model could be a solution, to know how much I can trust that support.
00:24:02: Thank you so much!
00:24:04: I think we could conclude here, as over the coming months there will be other episodes of the podcast on this topic that go a bit deeper into the different questions we touched on rapidly today.
00:24:16: Maybe just as a final point: maybe you each have one sentence or comment on which misconception about AI you would like neurologists to let go of.
00:24:30: What is for you the biggest misconception regarding the clinical use of AI?
00:24:37: It's clear for me: it's this notion of the replacement of human specialists.
00:24:42: This is not the case!
00:24:45: We will reach an intensified human-machine cooperation. There are nice models out there, like for example centaur systems, where
00:24:55: the algorithms and the humans do the tasks they can each solve best.
00:24:59: So it means final decision making is with the humans; the analytical processes
00:25:04: are with the algorithm.
00:25:06: My take-home message is
00:25:10: that AI is not something magic.
00:25:12: It's something that can be comprehended also by non-technicians.
00:25:18: And in order to bring value to medicine, and to avoid this gap between research and clinical application,
00:25:28: we need to overcome this idea of something magic or non-comprehensible,
00:25:34: and work together to ensure those solutions are developed in a collaborative manner that is useful also for the end users of the solution.
00:25:47: Okay.
00:25:47: Thank you so much, and thanks again for your comments; we will have other opportunities to discuss this in detail.
00:26:17: You can also listen to this and all of our previous episodes on the EAN Campus.
00:26:48: eanCast: Weekly Neurology is your unbiased and independent source for educational, research-related neurological content.
00:26:54: Although all contents are provided by experts in their field, they should not be considered official medical advice.