By Charlotte Plestant, Louis Jeay, Suliman Bouizaguen, David Guet - 26 Jul 2021
The low success rate of drug candidates, especially in oncology, has made the concept of "one-size-fits-all" treatments obsolete. Immunotherapy has significantly improved survival in several cancer types, yet its efficacy remains limited to small subpopulations. Every patient is different and has a unique genetic makeup: we are now in the era of personalized medicine.
Efficient personalized medicine relies heavily on understanding the tumor microenvironment (TME). The TME is defined by the interplay between tumor cells, immune cells, blood vessels, fibroblasts and the extracellular matrix. Investigation of the TME has revealed that the type, density, localisation and organisation of immune cells within solid tumors - defined as the immune contexture - could help predict treatment response and clinical outcome [1,2].
Establishing the cellular profile of this environment and the spatial distribution of its cells is crucial to deciphering the immune contexture. The diversity of cell populations and states is reflected in the expression of a large panel of biomarkers. For example, PD-L1 is the best-known biomarker to have received FDA approval for use as a companion and complementary diagnostic for two checkpoint inhibitors, pembrolizumab and nivolumab [3,4].
Integrating immune contexture with clinical outcome will help to discover new prognostic and predictive biomarkers.
Multiplex fluorescence IHC (mf IHC) allows the simultaneous analysis of a large number of biomarkers while preserving the spatial distribution information.
Indeed, unlike methods such as flow cytometry, staining is performed on intact FFPE (Formalin Fixed Paraffin Embedded) tissues. After staining and scanning, the generated multiplexed images contain precious information such as tissue morphology, cell morphology and cell phenotypes.
Their analysis offers a visual translation of the molecular and cellular processes taking place.
High-dimensional mf IHC image of a tonsil section [5].
Albeit extremely informative, the process is time consuming and prone to errors. For example, weakly expressed proteins generate a low signal that is difficult to detect above the background. Many artifacts, resulting from events such as necrosis or fibrosis, can also hamper signal detection and analysis. Moreover, the technologies used to generate the images, from tissue sampling to staining and acquisition, are not globally standardized: the images used for the analyses are therefore technology-dependent, adding another variable to this complex equation.
Considering the number of biomarkers that can be studied simultaneously, powerful image analysis tools are needed to surface the crucial information buried in this haystack of data. These challenges cannot be met with standard analysis tools, nor can they rely solely on the ability of the human eye to accurately interpret such high-dimensional data.
For decades, Artificial Intelligence (AI)-automated image analysis has enabled objective quantification of images. By mimicking human behaviour, it extracts data from images better and faster. Traditional AI approaches such as Machine Learning require training the algorithms on handcrafted features, which are fairly complex to define.
Deep Learning (DL) is another category of AI that can overcome this problem: it is an end-to-end trainable system in which feature extraction is learned. The direct benefits are time savings and freedom from having to define the right features to train the algorithms. DL discovers meaningful features on its own while remaining more robust to signal heterogeneity, whether in intensity or morphology.
In the context of multiplexed image analysis, the goals are to retrieve complex information such as single-cell interactions, distance analysis, biomarker expression and cell infiltration within the tumor.
WORKFLOW FOR MULTIPLEXED IMAGE ANALYSIS
How can Deep Learning technology provide such data? After locating every cell within the tissue, most often through DAPI staining, positivity for each biomarker is assessed. Some cells co-express several biomarkers, revealing phenotypes of interest.
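To make the workflow concrete, here is a minimal sketch of those three steps (cell location from DAPI, per-marker positivity, phenotype combination). It uses a deliberately simplified, threshold-based positivity call as a stand-in for the Deep Learning step described below; the library choices (numpy, scikit-image), channel names and thresholds are assumptions for illustration, not a description of Keen Eye's pipeline.

```python
# Simplified sketch: locate cells via DAPI, call per-marker positivity,
# then combine markers into phenotypes. Thresholds and libraries are
# illustrative assumptions, not the Deep Learning approach used in practice.
import numpy as np
from skimage import filters, measure

def segment_nuclei(dapi):
    """Label nuclei in the DAPI channel with a simple Otsu threshold."""
    mask = dapi > filters.threshold_otsu(dapi)
    return measure.label(mask)

def per_cell_positivity(labels, channels, thresholds):
    """Return, for each marker, a boolean array (one entry per cell)."""
    positivity = {}
    for marker, image in channels.items():
        means = np.array([r.mean_intensity
                          for r in measure.regionprops(labels, intensity_image=image)])
        positivity[marker] = means > thresholds[marker]
    return positivity

# Usage (dapi, cd3, cd8 are 2-D numpy arrays, one per fluorescence channel):
# labels = segment_nuclei(dapi)
# pos = per_cell_positivity(labels, {"CD3": cd3, "CD8": cd8},
#                           {"CD3": 120, "CD8": 100})
# cytotoxic_t = pos["CD3"] & pos["CD8"]   # co-expression defines a phenotype
```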
Deep Learning algorithms developed at Keen Eye rely on convolutional neural networks, which achieve state-of-the-art results in most image classification tasks. The algorithms thus learn both the intensity and the shape of each biomarker signal to identify positive cells.
Indeed, shape is important not only to distinguish noisy signals from true positive cells, but also to detect low-intensity positive cells that sit just above the background yet exhibit the right shape.
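As a minimal illustration of this idea, the sketch below defines a small convolutional classifier that labels cell-centered patches of a single biomarker channel as positive or negative, learning intensity and shape jointly from the pixels. The architecture, patch size (32x32) and framework (PyTorch) are assumptions for the example, not the networks used in production.

```python
# Illustrative CNN that classifies cell-centered patches of one biomarker
# channel as marker-positive or marker-negative. Architecture and patch
# size are assumptions chosen to keep the example short.
import torch
import torch.nn as nn

class MarkerPatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, 2)            # negative / positive

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# A batch of 32x32 single-channel patches, one per detected cell.
patches = torch.randn(8, 1, 32, 32)
logits = MarkerPatchClassifier()(patches)
predicted_positive = logits.argmax(dim=1)   # 1 = marker-positive (by convention here)
```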
Deep Learning algorithms provide a high level of accuracy for every biomarker, which is essential: because a phenotype combines several biomarkers, per-marker errors multiply during the quantification steps.
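A back-of-the-envelope example shows why this matters: if each marker is called independently, the per-cell errors compound roughly multiplicatively across the markers that define a phenotype. The accuracy figures below are illustrative assumptions, not measured values.

```python
# Illustrative only: assumed per-marker accuracy, independent errors.
per_marker_accuracy = 0.95
markers_in_phenotype = 4
phenotype_accuracy = per_marker_accuracy ** markers_in_phenotype
print(f"{phenotype_accuracy:.2f}")  # ~0.81: four 95%-accurate calls drop to ~81% per cell
```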
Examples of spatial biology analysis outcomes. A. Tumor infiltration analysis of CD68+ cells (tumor in pink, stroma in blue). B. Resolution of the spatial localization of biomarkers: tSNE plots for individual biomarkers, cytokeratin and PD-L1. C. Analysis of the probability of biomarker expression with a phenograph.
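As a rough sketch of the kind of spatial readout shown above, the snippet below computes, for each CD68+ cell, the distance to its nearest tumor (cytokeratin+) cell, and a simple infiltration fraction. The coordinates are simulated, and the distance cutoff and use of scipy/numpy are assumptions for illustration only.

```python
# Sketch of two spatial readouts from phenotyped cell coordinates:
# nearest tumor-cell distance per CD68+ cell and an infiltration fraction.
# Simulated coordinates and an assumed 25-micron cutoff, for illustration.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
cd68_xy = rng.uniform(0, 1000, size=(200, 2))   # CD68+ cell centroids (microns)
tumor_xy = rng.uniform(0, 1000, size=(500, 2))  # cytokeratin+ (tumor) cell centroids

# Distance from each CD68+ cell to its nearest tumor cell.
nearest_dist, _ = cKDTree(tumor_xy).query(cd68_xy)

# Infiltration: fraction of CD68+ cells within 25 microns of a tumor cell,
# used here as a simple proxy for "inside the tumor compartment".
infiltration_fraction = float(np.mean(nearest_dist < 25))
print(f"median distance: {np.median(nearest_dist):.1f} um, "
      f"infiltrated: {infiltration_fraction:.0%}")
```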
References:
Authors:
Charlotte Plestant - Scientific Content Manager
Louis Jeay - Data Scientist
Suliman Bouizaguen - Data Scientist
David Guet - Product Application Specialist