March-April 2018
ADULT BRAIN

A Multiparametric Model for Mapping Cellularity in Glioblastoma Using Radiographically Localized Biopsies


Peter Chang

With current advances in MR techniques, rich, high-resolution imaging datasets are now routinely acquired in clinical practice; however, much of this detailed information at the voxel level remains unused, even in research settings, where there are many practical and logistical barriers to voxel-level radiographic-pathologic correlation. In this project, we wanted to solve many of these basic problems, from automated whole-slide histologic analysis to biopsy-image coregistration and modeling of MR signal intensity. In doing so, we hoped not only to demonstrate the feasibility of this type of analysis, but also to show that subtle changes in voxel-level MR signal do in fact reflect important underlying histologic processes.

Using the proposed biopsy-calibrated model, we were able to approximate the margins of infiltrative nonenhancing tumors in patients with high-grade gliomas. In the preoperative setting, this type of information can be invaluable in guiding extended total resection, providing a noninvasive adjunct to other in vivo techniques such as intraoperative fluorescein staining. In the postoperative setting, estimates of residual tumor cellularity, especially at the resection margins, can be used as additional biomarkers for assessing treatment response and tumor progression, the determination of which is currently based primarily on the appearance of the enhancing tumor.
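Concretely, one simple way to use such a map (a minimal sketch; the threshold value, units, and array shapes below are illustrative assumptions, not values from the paper) is to flag voxels whose predicted cellularity exceeds a cutoff but that lie outside the enhancing tumor:

```python
import numpy as np

def estimate_infiltrative_margin(cellularity_map, enhancing_mask, threshold=2000.0):
    """Flag voxels with high predicted cellularity that fall outside the
    enhancing tumor mask (threshold and units are hypothetical)."""
    return (cellularity_map > threshold) & ~enhancing_mask

# Synthetic example: a 3D predicted-cellularity map and an enhancing-tumor mask.
rng = np.random.default_rng(0)
cellularity_map = rng.uniform(0, 4000, size=(64, 64, 32))
enhancing_mask = np.zeros((64, 64, 32), dtype=bool)
enhancing_mask[20:40, 20:40, 10:20] = True

margin = estimate_infiltrative_margin(cellularity_map, enhancing_mask)
print("Voxels flagged as infiltrative, nonenhancing tumor:", int(margin.sum()))
```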

Machine learning in medical imaging and diagnostics has received tremendous attention in recent years. In this project, a combination of traditional and deep-learning techniques enabled us to develop the rigorous pipeline needed for voxel-level radiographic-pathologic correlation. For histologic analysis, we developed a deep-learning algorithm for cell counting that operates on a single high-power field (HPF). By tiling through hundreds of HPFs per H&E slide, we were able to assess hundreds of tissue specimens automatically and quantitatively. To determine biopsy locations, we used traditional computer vision techniques to coregister the 2D crosshairs generated by stereotactic neuronavigational software to the 3D MRI volumes. For the final model, a simple multiple linear regression was used to predict tumor cellularity from MR signal intensity. These varied approaches highlight the strengths and weaknesses of different machine-learning techniques given the constraints of dataset size and the end goal. Since the publication of this paper, we have worked with many other collaborators to integrate various portions of the pipeline into existing research protocols.
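To make the pipeline concrete, the sketch below illustrates two of these steps under stated assumptions: tiling an H&E slide into HPFs (with a placeholder standing in for the trained cell-counting network) and fitting a multiple linear regression from biopsy-site MR signal intensities to measured cellularity. The sequence choices, array shapes, and numbers are illustrative assumptions, not the published data or model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def tile_slide(slide, hpf_size=512):
    """Yield non-overlapping high-power-field (HPF) tiles from a whole-slide image array."""
    h, w = slide.shape[:2]
    for y in range(0, h - hpf_size + 1, hpf_size):
        for x in range(0, w - hpf_size + 1, hpf_size):
            yield slide[y:y + hpf_size, x:x + hpf_size]

def count_cells(hpf):
    """Stand-in for the per-HPF deep-learning cell counter; returns a dummy value here."""
    return float(hpf.mean())

def slide_cellularity(slide):
    """Mean cell count over all HPFs tiled from one H&E slide."""
    return float(np.mean([count_cells(t) for t in tile_slide(slide)]))

# Tile a synthetic "slide" and summarize its per-HPF counts.
slide = np.random.default_rng(2).uniform(0, 255, size=(2048, 2048))
print("Mean per-HPF count for one slide:", slide_cellularity(slide))

# Multiple linear regression: biopsy-site MR intensities -> measured cellularity.
# Rows are biopsies; columns are normalized intensities from co-registered
# sequences (e.g., T1 post-contrast, FLAIR, ADC -- an assumed feature set).
X = np.array([
    [0.62, 0.81, 0.40],
    [0.55, 0.90, 0.35],
    [0.30, 0.60, 0.70],
    [0.25, 0.55, 0.75],
])
y = np.array([3200.0, 3500.0, 1400.0, 1100.0])  # cellularity per biopsy (made-up values)

model = LinearRegression().fit(X, y)

# Apply the fitted model voxel-wise to produce a whole-brain cellularity map.
volume = np.random.default_rng(1).uniform(0, 1, size=(64, 64, 32, 3))
cellularity_map = model.predict(volume.reshape(-1, 3)).reshape(64, 64, 32)
print("Coefficients:", model.coef_, "Intercept:", model.intercept_)
```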

We are currently finishing a second model to estimate the relative proportions of radiation necrosis and recurrent tumor in posttreatment high-grade glioma, also calibrated with localized biopsies. Instead of relying on MR signal intensity alone, the model is based on neural networks that also learn to incorporate spatial and textural features from a small field of view (several millimeters) around each biopsy location. In addition to conventional MR imaging acquisitions, we are actively incorporating advanced functional imaging techniques, including resting-state BOLD, to improve model accuracy.
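Purely as an architectural sketch (the channel set, patch size, and layers below are assumptions, not the model described above), a small patch-based network of this general form could map a multichannel MR patch centered on a biopsy site to an estimated recurrent-tumor fraction:

```python
import torch
import torch.nn as nn

class PatchNet(nn.Module):
    """Toy CNN mapping a multichannel MR patch around a biopsy site to the
    estimated fraction of recurrent tumor (vs. radiation necrosis)."""

    def __init__(self, in_channels=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32, 1),
            nn.Sigmoid(),  # output in [0, 1], read as recurrent-tumor fraction
        )

    def forward(self, x):
        return self.head(self.features(x))

# Example batch: 8 patches, 4 MR channels (an assumed channel set, e.g.,
# T1 post-contrast, FLAIR, ADC, a BOLD-derived map), 16 x 16 voxels in-plane.
patches = torch.randn(8, 4, 16, 16)
tumor_fraction = PatchNet()(patches)
print(tumor_fraction.shape)  # torch.Size([8, 1])
```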

Some of this newer work was presented at annual meetings of the American Society of Neuroradiology (ASNR; Long Beach, CA in April 2017) and the Society for Neuro-Oncology (SNO; San Francisco, CA in November 2017). A follow-up manuscript is currently being prepared.

Read this article at AJNR.org …