Ep. 171: AI: A Virtual Elephant in the Room or a Game-Changer in Neuroscience?

Show notes

Moderator: Alice Accorroni (Geneva, Switzerland)

Guests: James Teo (London, UK), Giuseppe Jurman (Trento, Italy)

In this special episode, Dr Alice Accorroni is joined by Professor James Teo and Professor Giuseppe Jurman to discuss the impact of artificial intelligence in neurology. They analyse the actual improvements provided by its adoption; the factors hindering that adoption, especially in the clinical setting; the potential future landscape clinicians and data scientists will be facing; and how neurologists' attitudes can be oriented more favourably towards new AI solutions.

The Task Force on Artificial Intelligence in Clinical Neurology is cordially inviting listeners to take our survey: Artificial Intelligence in Clinical Neurology Survey.

Show transcript

00:00:00: Welcome to EANcast, your weekly source for education, research and updates from the European Academy of Neurology.

00:00:15: Hello and welcome to EANcast Weekly Neurology.

00:00:18: I am Alice Accorroni, Consultant Neurologist at the Geneva Memory Center and the co-chair with Dr.

00:00:23: Malaguti of the EAN Task Force on Artificial Intelligence in Clinical Neurology.

00:00:29: Artificial intelligence is rapidly transforming medicine and neurology is no exception.

00:00:34: AI

00:00:34: holds great promise from improving diagnosis to predicting disease trajectories.

00:00:39: But adoption in daily practice remains limited, with barriers ranging from evidence gaps to ethical and regulatory concerns.

00:00:47: To address these challenges, the EAN has created a dedicated task force on AI in clinical neurology, bringing together clinicians, data scientists, ethicists and regulators.

00:00:57: And this podcast series is one of our educational initiatives.

00:01:01: So we recently launched our first episode, "AI: Does It Really Concern Neurologists?", which was moderated by Michael Lee Graffel Wurm, who is also part of the task force.

00:01:10: And today we continue the conversation with our second episode, "AI: A Virtual Elephant in the Room or a Game-Changer in Neuroscience?"

00:01:18: I'm delighted to welcome our two guests today.

00:01:21: Professor James Teo is a consultant neurologist at King's College and is also the chief medical officer at the London AI Centre.

00:01:29: Welcome, James.

00:01:31: Pleasure to be here.

00:01:34: We also have Professor Giuseppe Jurman.

00:01:36: He's a mathematician, currently associate professor at Humanitas University in Milan, Italy.

00:01:42: And he's also the head of the Data Science for Health Unit at the Bruno Kessler Foundation in Trento.

00:01:48: Thank you for joining us, Giuseppe.

00:01:49: It's

00:01:50: a pleasure for me too.

00:01:53: So let's get started.

00:01:55: We know that AI in neurology is already moving beyond pilot projects, with applications that range from stroke and epilepsy to dementia and neuro-oncology.

00:02:06: However, its integration into daily clinical practice remains quite variable across the different sub-specialties.

00:02:13: So I have the first question for James from a clinical perspective.

00:02:18: What are, in your opinion, the most significant ways AI has already influenced practice in neurology?

00:02:25: And I don't know whether you have any specific examples from your experience in your practice.

00:02:31: Thanks, Alice.

00:02:32: I guess the first thing to say is that with the AI revolution, we're actually in the third generation of AI products and systems.

00:02:40: In the early twenty-tens there were all the machine learning algorithms and such; then from around twenty-fifteen onwards there was generative AI; and now it's all chatbots and LLMs, and there will be fourth generations and more beyond that.

00:02:56: I think what we're seeing now being implemented are things which are from the previous generation.

00:03:01: So the first two generations, and how they are integrated within the systems, will really be determined by which systems are most mature, which infrastructure is most mature.

00:03:12: Radiology is relatively digitalized across most organizations in most countries, and so the parts of neurology which interact with radiology

00:03:23: are the first to use this, with stroke being one of the classic examples: there are many, many stroke AI products out there for reading scans, for perfusion imaging, for various angiography and such.

00:03:40: So these are what we would call diagnostic support, and they obviously relay the information back to the clinician, and they are changing modes of care.

00:03:50: So in situations where there aren't enough neuroradiologists or people to support interpretation, these AI systems can make interpretation a lot faster.

00:04:00: I think most people nowadays, when you think about AI, they think about LLMs and GPT models.

00:04:05: Many of those are at various stages of maturity.

00:04:09: I think that regulatory space is still evolving for those.

00:04:13: And so many of them should be classed as software as a medical device, because of their role in decision making and decision support.

00:04:22: And even if there is a human in the loop, there is a whole piece of work around how much of that is integrated, how much of it is driven by industry and how much of it is driven by clinicians.

00:04:37: Yeah, absolutely.

00:04:38: And we're going to talk later on about the need to communicate to have close connection with different stakeholders in the field of AI and its integration in clinical practice.

00:04:50: But before that, I would like to move a little bit more to the technical side.

00:04:55: So we'll address the next question to Giuseppe.

00:04:59: We know that neuroscience and neurology present incredible opportunities, but also a lot of challenges, related to the complexity of brain data, multimodal imaging, and the need for interpretability.

00:05:11: So from a technical standpoint, what makes neuroscience and neurology such compelling or difficult fields for AI applications?

00:05:23: Pretty much, there are three main reasons that make neurology a challenging field for AI.

00:05:32: One is the complexity and the heterogeneity of all diseases.

00:05:37: We have a wide spectrum of motor symptoms, especially in Parkinson's, that impact both patients' quality of life and clinical management.

00:05:47: And so the standard diagnostic criteria lack the granularity needed for individualized care.

00:05:53: So they create a need for a more advanced diagnostic process.

00:05:59: On the other side, there is also the need for early diagnosis and personalized treatment.

00:06:04: This is a very big challenge, and both are critical for improving patient outcomes.

00:06:11: Nevertheless, they are very difficult to achieve because of the nature of diseases I mentioned before.

00:06:18: And AI may offer the potential for earlier and more precise diagnosis and personalized choices, but we are still on the way to reaching this goal.

00:06:31: And last but not least, the heterogeneity of data that we have to face.

00:06:36: I mean, the field generates diverse data sets, from imaging, clinical data and omics, and also digital data coming from wearable sensors, patient-generated data if you want.

00:06:49: And the integration and analysis of these complete data sets may enhance predictive accuracy, but they come at the cost of developing stronger, more robust models, which we are working on now.

00:07:09: Absolutely.

00:07:10: And they also come with some ethical questions, related to recording so much data continuously, on a daily basis.

00:07:20: James mentioned before the development of LLMs, like ChatGPT, but we also have foundation models.

00:07:27: So again, a question for Giuseppe: what are the most

00:07:31: recent technical advances that have made AI particularly relevant to neuroscience, and that could maybe help in the future to overcome the challenges you started to mention in replying to my previous question?

00:07:44: The real breakthrough comes through generative AI for sure.

00:07:49: So generative AI is multifaceted, and there are many ways it can help neurology when implementing this kind of solution.

00:08:03: For instance, the generation of synthetic data,

00:08:07: to cope with the fact that sometimes we have to face data poverty.

00:08:16: So having the possibility of generating synthetic data that are similar but not identical to real data may offer the possibility of augmenting the data set we are working with and enhancing the training of our models.

00:08:34: This holds not only for omics data, for instance, but also for all the data represented by LLMs and other foundation models, which are the other side of the coin, if you want, for generative AI.
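The synthetic-data idea Giuseppe describes can be sketched in a few lines. The example below is a deliberately minimal, hypothetical illustration: the cohort, the feature meanings and the `synthesize_gaussian` helper are all invented here, and real synthetic-data pipelines would use far richer generative models (GANs, VAEs, diffusion) plus privacy checks:

```python
import numpy as np

def synthesize_gaussian(real, n_samples, rng=None):
    """Draw synthetic patient rows from a Gaussian fitted to `real`.

    A deliberately simple stand-in for heavier generative models: the
    synthetic rows share the real cohort's feature means and covariance
    but are not copies of any individual patient."""
    rng = np.random.default_rng(rng)
    mean = real.mean(axis=0)
    cov = np.cov(real, rowvar=False)      # feature-by-feature covariance
    return rng.multivariate_normal(mean, cov, size=n_samples)

# Toy "real" cohort: 50 patients x 3 invented features
# (think age, cognitive score, regional brain volume).
rng = np.random.default_rng(0)
real = rng.normal([70.0, 25.0, 1.2], [8.0, 4.0, 0.1], size=(50, 3))

synthetic = synthesize_gaussian(real, n_samples=200, rng=1)
print(synthetic.shape)                    # (200, 3)
print(np.round(real.mean(0), 1))          # feature means of the real cohort
print(np.round(synthetic.mean(0), 1))     # synthetic means track them closely
```

The augmented set (real plus synthetic rows) can then be used to train a model where the real cohort alone is too small, which is the "data poverty" case mentioned above.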

00:08:53: And last but not least, all the opportunities provided by, for instance, the new brain-computer interfaces.

00:09:00: Okay, these represent the new frontier, both in terms of hardware and software, and the embedding of AI in these solutions may represent a real breakthrough in what is provided to patients at the moment.

00:09:20: So I would say, on the challenges of AI, obviously there are the regulations.

00:09:27: But the technical challenges of AI are actually challenges of data.

00:09:32: AI is just one product in the pipeline.

00:09:36: The problems of data are, as you just described, heterogeneity.

00:09:41: And they're heterogeneous because the data is what we would call semantically not interoperable.

00:09:47: The same term doesn't mean the same thing between different data sets and between different health systems.

00:09:54: What that also means is that when you make something in a lab, it's very difficult to scale, to reuse in a different environment or to generalize, because the infrastructure is not ready.

00:10:05: And finally, of course, there's bias, inherent bias in the data, because there's missingness.

00:10:11: It's usually not an issue of overrepresentation, but it's an issue of underrepresentation.

00:10:18: And most neurological information, obviously there's radiological and omics information, but a lot is in free language, free text.

00:10:26: Because that is how neurologists and clinicians think.

00:10:29: We think in language and in free narrative.

00:10:31: And that narrative needs to be converted into clean data, interoperable data; the European Health Data Space is working on that.

00:10:41: I think synthetic data will be one way of addressing some of this missingness.

00:10:45: But a big element is that a lot of these consumer large language models are not designed for health care.

00:10:53: They've been trained on social media and internet data, which have their own inherent biases and missingness.

00:11:02: And obviously the multilinguality is a factor as well.

00:11:06: Most of them, obviously, are classically Anglo-Saxon, and that affects how they behave and how they represent information.

00:11:16: And so what you end up with is a scenario where the AIs we are making now, whether they're foundational or not, will deceptively perform okay when you test them on your own phone or whatever, but when you actually deploy them in populations, you will find these biases scale out.

00:11:39: And I think the hope will be the next generation of AI where AI spends more time on curating the quality of the data that goes to train it.

00:11:52: Absolutely, I totally agree with you, and we cannot forget that it's still a tool and that we need to be aware, as you mentioned, of the biases and the missingness that we already have in the data, especially when we talk about clinical data; because when we talk about research, maybe in that context it's easier to have more homogeneous data and indicators.

00:12:14: Whereas it is quite different, as you mentioned, when we speak about clinical data, where you have more of that narrative style; in general, as neurologists, we write long reports rather than having in mind, when we write something, the possibility of using

00:12:30: that information for potential studies, or for AI analysis, for analysis in general.

00:12:38: And I'm interested in what you said about the LLMs, because you know, chatbots are available everywhere, basically.

00:12:45: And patients sometimes come to clinicians saying, I've already discussed things and asked Dr.

00:12:52: ChatGPT, and I have this information.

00:12:54: But as you mentioned, these models are trained on data that could be biased and that does not really come from guidelines or papers or recommendations from clinicians or scientists.

00:13:07: So how can we overcome this issue and try to also inform the population, the general population and patients, about the possible problems that they can encounter

00:13:21: and that we can encounter, because clinicians also sometimes use ChatGPT for clinical questions.

00:13:28: So how can we improve that?

00:13:29: And what are the issues related to, for example, bias?

00:13:32: You already started to mention data privacy, so how can we improve on those, and what are the keys to improving this situation and really moving all together

00:13:43: towards this AI revolution?

00:13:47: The privacy aspect, I think, is actually a semi-solved problem, because you can have local LLMs that don't send data anywhere.

00:13:59: And it's all run on device, on either a computer in front of you or wherever, but perhaps the general public aren't aware of that, because a lot of the larger technology companies give the impression that the only way to use AI is to use their cloud service.

00:14:17: So on privacy: many countries and many hospitals have sovereign clouds or closed environments which will run AI.

00:14:25: So I don't think that's necessarily an issue.

00:14:29: But the public may not fully appreciate that.

00:14:32: In terms of how do you address a patient coming forward to you with an answer already provided by a chatbot?

00:14:40: Well, I think this is a problem that we faced before.

00:14:45: when Google came out, when Wikipedia came out.

00:14:48: And this is obviously a much more sophisticated version of it.

00:14:54: But it's the same issue, and you address it in the same way.

00:15:00: You highlight the failings, you highlight the inaccuracies, and you expose people to it.

00:15:07: And so the clinicians need to be a lot more capable of reasoning with patients.

00:15:14: about what the chatbots are saying and how they are reasoning with these reasoning models and such.

00:15:22: And also, I think a big element is actually our human connection with the patient, building that trust versus a chatbot.

00:15:31: I think those are the elements and how one would deal with it.

00:15:34: I think the solution in the next generation is this: these chatbots are prone to highlighting rare conditions.

00:15:43: They are prone to suggest abnormalities rather than normality, because normality doesn't show up.

00:15:48: That's the form of missingness.

00:15:51: And training AI for commonality, for the fact that common things are common, is a big element there.

00:15:59: And that requires working on real patient data, not internet data or research data sets.

00:16:08: Yeah.

00:16:08: And I would like to hear some thoughts from Giuseppe on this topic too: chatbots, building trust, and how to face challenges like data privacy, bias and reproducibility in the use of real data in AI for neurology.

00:16:26: I'm not an expert in chatbots, but I can give my two cents on the other aspects.

00:16:35: Reproducibility is probably the biggest problem that we face in AI for neurology so far, and we have experienced this also in seeing what has been published in the literature at the moment.

00:16:49: Many papers lack an independent validation set, and this is probably one of the biggest factors preventing AI from being widespread in neurology.

00:17:03: The other part that we need to face is what is called dataset shift.

00:17:12: Many studies suffer from the problem that they have been trained with data that belong to a very specific part of the overall distribution, and then they are tested on data that don't belong to the same distribution.

00:17:34: This implies that these models perform much worse on real data,

00:17:42: because they were trained on a different kind of data.

00:17:47: So until we solve this problem, having an independent validation set and being able to better cover the distribution of the training data, we will not be able to exploit the full potential of AI.
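Dataset shift of the kind Giuseppe describes can be checked for numerically before a model is deployed. A common monitoring statistic is the Population Stability Index (PSI); the sketch below, with invented cohorts and an illustrative `psi` helper, flags a deployment population whose age distribution differs from the training one:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a feature's training-time
    distribution (`expected`) and the same feature at deployment
    (`actual`). A common monitoring rule of thumb: < 0.1 stable,
    0.1-0.25 moderate shift, > 0.25 serious dataset shift."""
    # Bin edges come from training-data quantiles; searchsorted assigns bins.
    inner = np.quantile(expected, np.linspace(0, 1, bins + 1))[1:-1]
    e = np.bincount(np.searchsorted(inner, expected), minlength=bins) / len(expected)
    a = np.bincount(np.searchsorted(inner, actual), minlength=bins) / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)   # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
train_age = rng.normal(62, 10, 5000)   # cohort the model was trained on
same_age = rng.normal(62, 10, 5000)    # deployment data, same distribution
shifted = rng.normal(75, 8, 5000)      # older real-world population

print(psi(train_age, same_age) < 0.1)  # True: same population, no alarm
print(psi(train_age, shifted) > 0.25)  # True: clear dataset shift flagged
```

Running such a check per feature, alongside an independent validation set, is one concrete way to catch the train/test distribution mismatch before the model "performs much worse on real data".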

00:18:08: And so it will be even harder to involve clinicians, to stimulate them into using these kinds of tools for the general public.

00:18:19: I completely agree, Giuseppe, that the issue is the data set.

00:18:24: I think that is an inherent issue when the AI is built

00:18:29: and monitored outside of real-world systems.

00:18:33: In other industries, they are building what we call MLOps, or monitoring of AI systems, so that the AI is trained and tested and monitored in the same environment.

00:18:46: So a lot of our work is infrastructure related, so that when you train on the data and then new data arrives or time changes, there's a pipeline that does it automatically,

00:19:00: or maybe not fully automatically, but in a much more streamlined way, rather than having to manually move it somewhere else to validate.

00:19:08: And this process, I think, is where healthcare is going slower.

00:19:13: Neurology is going slower because we haven't built those pipelines.

00:19:17: And if we haven't built those pipelines, then we are essentially carrying coal or firewood up the hill.

00:19:24: without any wheels.

00:19:25: Every time we want to do something, we have to move it up manually.

00:19:29: And these pipelines are how other industries are managing AI through building systems to monitor and track the data flows.
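The monitoring pipelines James describes can be reduced to a very small core: track the model's correctness on newly labelled cases inside the deployment environment and raise a flag when it degrades, instead of manually exporting data to revalidate. The class below is a hypothetical, minimal sketch; the window size and threshold are illustrative, not recommendations:

```python
from collections import deque

class PerformanceMonitor:
    """Rolling-window monitor: record whether each new prediction matched
    the eventual label, and flag the model for review when accuracy over
    a full window drops below a threshold."""

    def __init__(self, window=100, min_accuracy=0.85):
        self.window = deque(maxlen=window)
        self.min_accuracy = min_accuracy

    def record(self, prediction, label):
        self.window.append(prediction == label)

    @property
    def accuracy(self):
        return sum(self.window) / len(self.window) if self.window else None

    def needs_review(self):
        # Only alert once the window is full, to avoid noisy early alarms.
        return (len(self.window) == self.window.maxlen
                and self.accuracy < self.min_accuracy)

mon = PerformanceMonitor(window=10, min_accuracy=0.8)
for pred, label in zip("AAABABAAAB", "AAABABAAAB"):   # 10 correct cases
    mon.record(pred, label)
print(mon.needs_review())   # False: accuracy 1.0 over a full window
for pred, label in zip("BBBBB", "AAAAA"):             # 5 wrong cases arrive
    mon.record(pred, label)
print(mon.needs_review())   # True: rolling accuracy fell to 0.5
```

In a real MLOps setup this loop would run continuously on the live data flow, which is exactly the "wheels" the transcript says healthcare has not yet built.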

00:19:41: Yeah, sure.

00:19:42: The problem is that for the healthcare sector this is harder, because of privacy issues.

00:19:47: I mean, you can do it by federated learning if you want, or you can do it by anonymization or pseudonymization.
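The federated learning Giuseppe mentions can be sketched in a FedAvg-like toy: each hospital updates the model on its own private data, and only the weights, never the patient records, are averaged centrally. Everything below (the three "hospitals", the linear model, the learning rate) is invented for illustration:

```python
import numpy as np

def local_step(weights, X, y, lr=0.1):
    """One gradient step of linear regression on a site's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(weights, sites, lr=0.1):
    """FedAvg-style round: each hospital trains locally; only the updated
    weights are sent back and averaged, weighted by sample count."""
    updates = [local_step(weights, X, y, lr) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for n in (200, 300, 150):                      # three hypothetical hospitals
    X = rng.normal(size=(n, 2))
    sites.append((X, X @ true_w + rng.normal(0, 0.05, n)))

w = np.zeros(2)
for _ in range(200):                           # 200 communication rounds
    w = federated_round(w, sites)
print(np.round(w, 2))                          # close to true_w, i.e. [2, -1]
```

The privacy gain is that raw `X, y` never leave each site; the MLOps cost Giuseppe alludes to is that monitoring and validation now have to work across sites too.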

00:19:54: But I mean, it makes things even harder, if you want, even in terms of MLOps.

00:20:01: Yeah, and it's true also that we now have new regulations, like for example the EU AI Act, which looks deeply into and tries to effectively protect

00:20:10: data privacy, making the integration of AI into clinical practice trustworthy.

00:20:20: But there are also some questions regarding its limitations, related to the fact that the Act may be too strict in controlling the use of AI and may to some extent limit development; there are contrasting voices on this topic.

00:20:41: We discussed some challenges related to the integration of AI, specifically in neurology and medical care.

00:20:49: I would like now to ask you to look at the future.

00:20:52: So if we project ourselves in five to ten years ahead, what role do you think that AI could realistically play in everyday neurological care?

00:21:03: And do you think that AI could actually replace clinicians, or replace certain aspects of clinical decision-making, or will it remain primarily supportive?

00:21:12: So I will start with James and maybe if Giuseppe has also comments.

00:21:17: I think the direction of travel will be that there will be a lot of what we call horizontal integration across the specialties, because there isn't enough data within one domain

00:21:29: for a good clinical reasoning model.

00:21:31: And we all know patients who have more than one disease.

00:21:35: And so at the moment, what I know, we in the UK as well as other countries and companies around the world are building what we call foundation models of patients.

00:21:45: So rather than foundation models of language, where you are trying to complete a sentence or write reasoning, these are models of patients' whole lives ahead of them.

00:21:56: And so it combines, it's like a risk score, not for stroke, not for multiple sclerosis, not for osteoarthritis, but for all of them at once.

00:22:07: These act as what we call digital twins, which then would be able to do more predictive stuff.

00:22:14: These twins, I think, will be unique to each country or health system.
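The "risk score for all conditions at once" that James describes can be illustrated with a drastically simplified toy: one model trained jointly to output several disease risks per patient, rather than one regulated tool per indication. The cohort, outcomes and `train_multi_risk` helper below are all invented; real foundation models of patients are vastly larger and multimodal:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_multi_risk(X, Y, lr=1.0, epochs=300):
    """Toy joint risk model: one weight matrix producing a risk score per
    condition (one output column each), trained together by gradient
    descent on the mean cross-entropy."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(epochs):
        P = sigmoid(X @ W)
        W -= lr * X.T @ (P - Y) / len(X)
    return W

# Synthetic cohort: 400 patients x 5 features, 3 hypothetical outcomes
# (think stroke / MS / dementia risk), all fabricated for illustration.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
true_effects = rng.normal(size=(5, 3))
Y = (X @ true_effects > 0).astype(float)

W = train_multi_risk(X, Y)
risk = sigmoid(X @ W)              # one score per patient per condition
acc = ((risk > 0.5) == Y).mean()
print(acc > 0.85)                  # True: the three risks are fit jointly
```

The digital-twin idea extends this: instead of three fixed outcome columns, the model predicts a patient's whole trajectory, which is why it needs the horizontal, cross-specialty data James mentions.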

00:22:19: And I think that's the direction of travel.

00:22:22: There may be inclusion of more omics data.

00:22:25: more detailed multimodal data as well, i.e.

00:22:27: images and biomarkers and such.

00:22:30: But those, I think, will depend on infrastructure readiness, which I think we are quite a long way from at this stage, whereas the first bit is a more tractable problem.

00:22:44: In terms of how that will change clinical care, I think that will be a regulatory question.

00:22:49: I think regulators do not know how to regulate a general purpose device.

00:22:54: They are very good.

00:22:55: The software as a medical device framework is very good for regulating narrow use cases.

00:23:00: One tool for one indication.

00:23:03: Very general indications, very hard.

00:23:06: So I think the innovation in this space will be how clinicians influence regulators on how they handle this, and how much clinicians will be involved in that process.

00:23:19: I totally agree with you, and I think the other key message is that it's quite important that we continue to communicate, and that clinicians get involved in discussions related to areas they haven't historically been involved in, like regulatory practices and connections with data scientists and ethicists.

00:23:43: So communication will be key, as you mentioned, to ensure the right balance between enabling innovation on one side and, on the other, protecting data and protecting patients.

00:23:54: I don't know whether Giuseppe, you wanted to add something more.

00:23:59: I would add constructing models with the clinician in the loop as standard, so having a stronger relation with the clinician along all parts of the pipeline, from data collection to the construction of the model and to the interpretation of the model.

00:24:18: I mean, adding even more explainability to your model will help clinicians understand more and trust more the model you're building, and transmit this trust to the patient.

00:24:30: Moreover, another key aspect will be multimodal integration of very different kinds of data, maybe data we don't even imagine now that will be available in ten years, data provided by the patient from wearables they wear at home or other things like that; I mean, it will integrate and complement information and help to extract much better insights

00:24:58: that at the moment are probably beyond our capability; but I'm pretty sure that building more trust, providing more data and keeping the clinical professionals integrated will help a lot the development of AI in neurology.

00:25:22: Absolutely.

00:25:24: And this leads to my last question.

00:25:27: So we know that many neurologists and neuroscientists remain curious about the use of AI, but they're still quite skeptical about it.

00:25:35: And as you mentioned, it's important to build trust and confidence.

00:25:38: So what is one piece of advice that you would give to neurologists and neuroscientists who are interested, but hesitant, about incorporating AI into their work, and who maybe feel underprepared for that?

00:25:50: So what advice would you give us?

00:25:54: I'll try to give a short answer.

00:25:56: I would say use it so that you know its weaknesses and its strengths.

00:26:00: Obviously use it with a skeptical mind.

00:26:03: There are many ways that it's very, very helpful.

00:26:07: I've mainly been using it for administrative tasks rather than necessarily clinical tasks.

00:26:13: I also use it for hypothesis generation.

00:26:17: It's a very useful thing for that, to help me think through things or think of new ideas that I just bring to it to see.

00:26:26: And the last thing is, when you're searching through huge amounts of documents, it helps me interact and find bits of information, when you're dealing with, you know, a hundred-page clinical guideline, for example.

00:26:39: It would take too long to read the entirety and you'd forget the beginning by the time you reach the end.

00:26:44: So you need some way of kind of handling that.

00:26:47: And I would say that many clinicians, even if you're not technical, to actually use it and also to also be aware that this, this is a consumer product.

00:26:58: And so there are many other products.

00:27:00: Everyone hears about ChatGPT, but there are at least a dozen different companies, as well as free ones.

00:27:08: So I would encourage people, as consumers, to shop around and learn to be skeptical, to be a good consumer.

00:27:19: Thank you.

00:27:19: And Giuseppe?

00:27:21: Well, I mean, I would say for the current neurologist: be aware of what can be done and what cannot be done.

00:27:30: Okay, so try to think about AI as a tool, as an ordinary tool, if you want.

00:27:37: And as James said, I mean, keep using it as much as you can.

00:27:41: For the future neurologists: stimulate the universities and training centers to integrate data science and artificial intelligence as much as they can in their learning process.

00:27:55: I mean, AI and data science should not be something complementary to medicine, but should become a pillar of medicine for the future neurologist.

00:28:08: Okay, so they don't see it as something external to their profession.

00:28:12: We need to forge a new professional figure of clinician in every area of medicine, and, as you were saying, neurology is no exception.

00:28:26: Yeah, absolutely.

00:28:27: I agree with you.

00:28:27: And we need to continue to have the clinician in the loop, as you mentioned, and to be, as James was saying as well, skeptical and always aware of the biases, the difficulties and the problems with reproducibility that are somehow inherent to the data, but also related to the models that we're using.

00:28:48: I would really like to thank our guests for this very interesting and exciting EANcast, for sharing their insights on a topic that we find quite exciting for the future.

00:29:01: So to wrap up, in today's episode we've explored the advances, but also the barriers and challenges, related to the integration of AI into neurology and neuroscience.

00:29:11: And our guests highlighted where AI is already making a difference, but also the barriers that clinicians and developers are still trying to overcome, and they gave us a future outlook on what AI integration could look like in future practice.

00:29:26: So thank you so much to our guests for joining us.

00:29:29: Thank you to the listeners of EANcast, and stay tuned for more exciting activities from the Task Force on Artificial Intelligence in Clinical Neurology.

00:29:40: Thank you so much.

00:29:49: This has been EANcast Weekly Neurology.

00:29:52: Thank you for listening.

00:29:53: Be sure to follow us on Apple Podcasts, Spotify or your preferred podcatcher for weekly updates from the European Academy of Neurology.

00:30:01: You can also listen to this and all of our previous episodes on the EAN campus to gain points and become an EAN expert in any of our twenty-nine neurological specialties.

00:30:12: Simply become an EAN individual member to gain access.

00:30:15: For more information, visit ean.org/membership.

00:30:19: That's ean.org/membership.

00:30:23: Thanks for listening!

00:30:25: EANcast Weekly Neurology is your unbiased and independent source for educational and research-related neurological content.

00:30:33: Although all content is provided by experts in their field, it should not be considered official medical advice.
