Yasser Iturria-Medina, PhD, is an Associate Professor in the Department of Neurology and Neurosurgery at À¦°óSMÉçÇø and a Principal Investigator at the Ludmer Centre for Neuroinformatics and Mental Health. Based at the Montreal Neurological Institute and Hospital, Prof. Iturria-Medina leads the Neuroinformatics for Personalized Medicine lab, where he develops multiscale and multifactorial models for understanding neurological disorders and identifying effective personalized interventions. In this interview, we had the chance to demystify his research and explore his future directions.
What was your initial draw into neuroscience and researching neurodegenerative diseases?
My background is in nuclear engineering. Towards the end of my engineering degree, I attended a presentation on the mathematical challenges in neuroimaging. I was surprised at the gaps in knowledge about the brain and its disorders. Compared to cancer or cardiovascular research, we are decades behind. We have some understanding of correlation and down- or upregulation pathways, but we don’t know the causes with certainty. We don’t even understand the molecular basis of neuronal activity.
Until very recently, most neuroscientists focused on a very specific field, such as behaviour, genetics, or neuroimaging. For someone with an engineering background, that didn’t make any sense. We can’t separate these domains when they all interact with one another. We needed a way to integrate all these modalities and disciplines.
I became very interested in bringing my engineering point of view to study the brain and associated disorders. I was motivated by creating an integrative model of different biological information at the population and the personalized levels. One of the most amazing accomplishments of humanity was sending a spaceship into outer space and bringing it safely back to earth. Managing such a feat means we really understand that system. That spaceship level of accuracy is what I hope we arrive at one day: designing a treatment for someone, knowing that it will work and to what extent it will work.
How does your approach advance our understanding of the brain and neurodegeneration?
Everything that happens at the cognitive and clinical levels is the result of interactions between many biological processes, including, but not limited to, genes, proteins, the environment, lifestyle, and family history. For example, when looking at a specific gene, we must also consider its interactions with other genes, protein synthesis, and epigenetic effects. If our focus is too narrow, we risk overlooking the links between these layers and limiting our understanding. Most treatments are designed to treat the diagnosis but fail to acknowledge that we can get the same disease phenotype from very different biological mechanisms. And just as we see comorbidity in psychiatric and mood disorders, there is a strong overlap between neurodegenerative disorders. In our lab, we analyze each individual’s particularities and compare the similarities and differences with the population across different biological levels. This allows us to look at similar clinical diagnoses with different underlying biological roots.
How does artificial intelligence (AI) play a role in your computational models?
There are two major branches of computational models: empirical models and mechanistic models.
Empirical models allow us to make predictions. An example of this is image recognition: with enough input, a model can tell you whether an image shows a cat, a dog, a house, and so on. In medical applications, empirical models can use brain images to predict whether a person is affected by Alzheimer’s disease, or even whether the disease is likely to progress rapidly. This is very useful and, so far, when we talk about AI, we are talking about empirical models. But they cannot yet uncover the underlying cause-effect relationships.
That’s where mechanistic models come into play. They provide a cause-effect explanation of the phenomenon being studied. On the flip side, they are not always very good at making predictions. Currently, everything related to AI falls into one category or the other, and both have limitations. We need both, and we need to build bridges between the two.
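To make the distinction concrete, here is a minimal, purely illustrative sketch in Python. It is not code from the lab or from NeuroPM-box; the synthetic imaging-like features and the toy production-clearance equation are assumptions made only for illustration. The empirical model learns to predict a label without explaining why, while the mechanistic model encodes an assumed cause-effect process and fits interpretable parameters to data.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Empirical model: predicts a diagnosis-like label from synthetic,
# imaging-like features, with no explanation of why the mapping holds.
X = rng.normal(size=(200, 5))                                  # hypothetical features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)
classifier = LogisticRegression().fit(X, y)
print("empirical model accuracy:", classifier.score(X, y))

# Mechanistic model: a toy cause-effect process, dp/dt = production - clearance * p,
# describing accumulation of a protein; the fitted parameters are interpretable.
def protein_dynamics(t, p, production, clearance):
    return production - clearance * p

t_obs = np.linspace(0.0, 10.0, 20)
truth = solve_ivp(protein_dynamics, (0.0, 10.0), [0.1],
                  args=(1.0, 0.3), t_eval=t_obs).y[0]
observed = truth + rng.normal(scale=0.05, size=t_obs.size)     # noisy observations

def fit_error(theta):
    sim = solve_ivp(protein_dynamics, (0.0, 10.0), [0.1],
                    args=tuple(theta), t_eval=t_obs).y[0]
    return float(np.mean((sim - observed) ** 2))

result = minimize(fit_error, x0=[0.5, 0.5], bounds=[(0.0, 5.0), (0.0, 5.0)])
print("estimated production and clearance rates:", result.x)
```

Bridging the two categories, as described above, would mean models that predict well while also exposing interpretable cause-effect parameters.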
What are the biggest challenges in your approach?
One challenge is data accessibility. In Canada, we have access to high-quality data, but there are costly neuroimaging techniques and scanners that are not accessible in low- and middle-income countries. We try to make our models accessible and inclusive so they can be used in all contexts.
Another challenge is translating sophisticated models from research to clinical settings, where we must work with the few data points and brief observations we have from a patient. To build these predictive models, we rely on huge datasets, including, but not limited to, post-mortem tissue. Ideally, the best models would be constructed from observations of many people over multiple years, which we simply cannot do in a clinical setting. At best, we have access to a few peripheral samples and data modalities. With post-mortem data, we build realistic models that can then be translated into clinical settings. For that, we’re working on translating our findings into peripheral data, such as blood samples, cerebrospinal fluid, or cognitive data. For example, we try to build predictive models that identify specific brain characteristics and disease prognosis, and how these are expressed in blood samples or in brain images. It’s feasible, but it’s a big undertaking.
We also need to create bridges between researchers and medical professionals. It’s common to interact with doctors who don’t see the value in complex computational models, or even in integrating dissimilar types of data.
How can open science and collaborations lead to improvements and refinement of your models?
Collaborations are incredibly important because we receive feedback and become aware of our limitations. We have a toolbox called NeuroPM-box, where we release all our models after they are tested and validated. Some groups use it independently, and others work with us directly. Our collaborators help us integrate more data, which leads us to refine the models almost on a weekly basis. Even though the number of users is not very high, the users we interact with share our multivariate view of understanding and targeting diseases.
How does the Ludmer Centre’s Single-Cell Genomic Brain Initiative impact your work?
I’m very happy with this new extension of the Ludmer Centre. Our goal is to integrate as much data as we can into our models to better understand underlying disease mechanisms. We already have models that include single-cell data and propose new techniques for analysis. I’m currently collaborating with Yashar Zeighami, PhD, at the Douglas Research Institute, who works closely with another Principal Investigator at the Ludmer Centre, Corina Nagy, PhD, and there is a lot of potential to work with the new researchers joining this initiative.
How do you hope your work will lead to the development of more effective therapeutic strategies?
Upon completion, a clinical trial is declared a success or a failure. But there is so much heterogeneity in the population and in disease pathogenesis, which can determine whether someone responds or does not respond to an intervention. These conclusions are an average drawn from the sample and can mask the trial’s effects in subgroups of patients. We know that certain treatments work on some individuals and not on others. In my lab, we stratify patients based on biological characteristics to better predict response to a specific treatment. This is already done in cancer research to predict treatment outcomes based on the stratification of tumours and their surrounding areas. Our models try to predict precisely what makes a person more or less likely to benefit from a specific treatment, and could significantly improve the planning of clinical trials by selectively enrolling patients. This could also lead us to revisit or repurpose treatments that have been declared failures and to refine them at a personalized level.
What do we know about the misfolded amyloid-β (Aβ) protein, one of the neuropathological hallmarks of Alzheimer’s?
We all have the capacity to produce misfolded proteins. As we age, this happens more often, and the process is less well controlled by our defence systems. There are mechanisms that produce misfolded proteins and defence mechanisms that clear them. In addition, there is a phenomenon called ‘cognitive resilience,’ which is the ability to maintain cognitive function despite the accumulation of misfolded proteins. So, we have these three factors and their combination: producing misfolded proteins, clearing or defending against them, and being resilient to them. Using neuroimaging, my postdoctoral research with Alan Evans, PhD, suggested that patients with late-stage Alzheimer’s were clearing these misfolded proteins significantly less than subjects without this diagnosis.
Although this is an important discovery, neurodegenerative disorders are likely the result of a combination of factors related to aging, DNA repair failures, the production mechanisms of misfolded proteins, the vascular and metabolic systems, and deficiencies in immune system function, among other causes. We are part of a new movement of researchers investigating the concept of cognitive resilience. The traditional pharmaceutical approach focuses on targeting protein synthesis or the removal of misfolded proteins. In contrast, the cognitive resilience approach aims to identify how an individual with brain damage can maintain their memory and cognitive functions. We shouldn’t care only about whether a person has misfolded proteins in the brain. Ultimately, what matters is that we can identify and target the underlying protective mechanisms.
What are you most excited about next year?
I mentioned that we are stratifying participants based on different biological pathways leading to the same clinical outcome. Two years after the initial release of that model, we’re already at the second version and incorporating single-cell data. With every release, we make the model more translatable and address its limitations. I’m excited about the next round of validations. If successful, this will have strong clinical implications.
Single-cell genomics also gives us the possibility of understanding how different cell types support neuronal activity and how this changes in neurodegeneration. We know about neuronal death and abnormal activity. But we don’t know to what extent neurons are working abnormally compared with other cell types, or the molecular basis for these malfunctions. Until recently, there were data limitations and accessibility concerns; now, we have the data that could help us ask these questions. And I’m very excited about that.
What do you want to cultivate in your lab and in the next generation of researchers?
I’ve recruited students from diverse backgrounds: from different cultures, disciplines, and training, but also with different ways of thinking. More than anything, I ask my students to step out of the box and to be integrative in their thinking. I try to motivate them to learn new techniques from other labs. I also apply this to my teaching and encourage my students to think about what is missing in the field, to be creative, and to think robustly by integrating as many sources of knowledge as they can.
I remember when scientists embarked on the Human Genome Project in the 1990s. Speeches from that time promised that once the project was completed, we could solve every medical question and be free of disease. How bold we were to make such claims! If there is one thing I hope we’ve learned, it’s that there is no single modality or scale that can give us all the answers or solutions we seek.