
PathCNN: interpretable convolutional neural networks for survival prediction and pathway analysis applied to glioblastoma

Jung Hun Oh, Wookjin Choi, Euiseong Ko, Mingon Kang, Allen Tannenbaum, Joseph O Deasy

The authors wish it to be known that, in their opinion, Jung Hun Oh and Wookjin Choi should be regarded as Joint First Authors.

https://academic.oup.com/bioinformatics/article/37/Supplement_1/i443/6319702

Figure: An illustration of biological interpretation. (A) The Grad-CAM procedure used to generate class activation maps; the two images at the bottom left show example class activation maps for one sample in the cohort, generated by the Grad-CAM procedure. (B) Statistical analysis to identify significantly different pathways between the LTS and non-LTS groups. LTS, long-term survival; CNN, convolutional neural network; ReLU, rectified linear unit.
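For readers who want a concrete picture of part (A), below is a minimal Grad-CAM sketch in PyTorch. This is not the actual PathCNN implementation (see the repository linked in the abstract); it assumes a generic image classifier and a chosen convolutional layer. The idea: pool the class-score gradients into per-channel weights, take the weighted sum of the activation maps, apply a ReLU and upsample to the input size.

```python
# Minimal Grad-CAM sketch (hypothetical model/layer; the PathCNN code may differ).
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_class, conv_layer):
    """Compute a Grad-CAM heatmap for one input tensor x of shape (1, C, H, W)."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, out):
        activations["a"] = out

    def bwd_hook(_, grad_in, grad_out):
        gradients["g"] = grad_out[0]

    h1 = conv_layer.register_forward_hook(fwd_hook)
    h2 = conv_layer.register_full_backward_hook(bwd_hook)
    try:
        model.zero_grad()
        score = model(x)[0, target_class]   # class score for the target class
        score.backward()
    finally:
        h1.remove()
        h2.remove()

    # Channel weights: global-average-pool the gradients over the spatial dims
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
    # Weighted sum of activation maps, then ReLU to keep positive evidence only
    cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
    # Upsample to the input's spatial size and normalize to [0, 1]
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze().detach()
```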

Abstract

Motivation

Convolutional neural networks (CNNs) have achieved great success in the areas of image processing and computer vision, handling grid-structured inputs and efficiently capturing local dependencies through multiple levels of abstraction. However, a lack of interpretability remains a key barrier to the adoption of deep neural networks, particularly in predictive modeling of disease outcomes. Moreover, because biological array data are generally represented in a non-grid structured format, CNNs cannot be applied directly.

Results

To address these issues, we propose a novel method, called PathCNN, that constructs an interpretable CNN model on integrated multi-omics data using a newly defined pathway image. PathCNN showed promising predictive performance in differentiating between long-term survival (LTS) and non-LTS when applied to glioblastoma multiforme (GBM). The adoption of a visualization tool coupled with statistical analysis enabled the identification of plausible pathways associated with survival in GBM. In summary, PathCNN demonstrates that CNNs can be effectively applied to multi-omics data in an interpretable manner, resulting in promising predictive power while identifying key biological correlates of disease.

Availability and implementation

The source code is freely available at: https://github.com/mskspi/PathCNN.
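As a rough illustration of the "pathway image" idea described in the Results, the sketch below builds, for each sample, a pathways × principal-components × omics grid by running PCA on each pathway's member genes within each omics type. All names, shapes and the number of PCs here are illustrative assumptions; consult the repository above for the exact construction.

```python
# Hedged sketch of pathway-image construction; details differ from PathCNN proper.
import numpy as np
from sklearn.decomposition import PCA

def build_pathway_images(omics_data, gene_index, pathways, n_pcs=5):
    """omics_data: dict omics name -> (n_samples, n_genes) array (shared gene order).
    gene_index: dict gene symbol -> column index into those arrays.
    pathways: dict pathway name -> list of member gene symbols.
    Returns an array of shape (n_samples, n_pathways, n_pcs, n_omics)."""
    omics_names = sorted(omics_data)
    n_samples = next(iter(omics_data.values())).shape[0]
    images = np.zeros((n_samples, len(pathways), n_pcs, len(omics_names)))
    for p, genes in enumerate(pathways.values()):
        cols = [gene_index[g] for g in genes if g in gene_index]
        if not cols:
            continue  # pathway has no measured genes; leave its row as zeros
        for o, name in enumerate(omics_names):
            sub = omics_data[name][:, cols]             # samples x pathway genes
            k = min(n_pcs, sub.shape[0], sub.shape[1])  # guard small pathways
            images[:, p, :k, o] = PCA(n_components=k).fit_transform(sub)
    return images
```

Each sample's grid can then be fed to a standard 2D CNN, which is what makes grid-style convolution applicable to otherwise non-grid omics data.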


Fourth-Place Winner in the AI Tracks at Sea Challenge

We won fourth place in the Artificial Intelligence (AI) Tracks at Sea Challenge (https://www.challenge.gov/challenge/AI-tracks-at-sea/), a national competition organized by the U.S. Navy.

VSU TrojanOne Team: Jose Diaz, Curtrell Trott; Advisors: Ju Wang, Wookjin Choi

The $200,000 prize was distributed among five winning teams, which submitted full working solutions, and three runners-up, which submitted partial working solutions. The monetary prize will be awarded to the school that each winning team attends.

Challenge Winners

Teams participating in the AI Tracks at Sea Challenge spanned collegiate institutions from the east coast to the west coast of the U.S., from both public and private colleges and universities. Collectively, the student submissions for the challenge represent various types of STEM research institutions, including Ivy League schools, Historically Black Colleges and Universities (HBCUs) and Hispanic-Serving Institutions (HSIs). Of the challenge teams, 26% were composed of students from HBCUs and 16% attended HSIs.

“With 94% of the competitors attending colleges and universities outside of California, this challenge served as an avenue to make broader impacts in STEM,” said Yolanda Tanner, Naval Information Warfare Systems Command (NAVWAR) STEM Federal Action Officer and NIWC Pacific Internship and Fellowship project manager. “It was also a means by which students could further develop their STEM skills while working collaboratively to solve a real-world naval problem.”

Florida, North Carolina and Texas had the largest numbers of participating collegiate teams.


Automatic motion tracking system for analysis of insect behavior

Darrin Gladman, Jehu Osegbe, Wookjin Choi*, and Joon Suk Lee, "Automatic motion tracking system for analysis of insect behavior", Proc. SPIE 11510, Applications of Digital Image Processing XLIII, 115102W (21 August 2020); https://doi.org/10.1117/12.2568804

*Corresponding author

Abstract

We present a multi-object tracking system for small insects such as ants and bees. Motion-based object tracking recognizes the movements of objects in a video using information extracted from the video frames. We applied several computer vision techniques, such as blob detection and appearance matching, to track ants. We also discussed different object detection methodologies and investigated the various challenges of object detection, such as illumination variations and blob merging/splitting. The proposed system effectively tracked multiple objects in various environments.
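The sketch below gives a minimal example of the kind of motion-based pipeline the abstract describes, using OpenCV (version 4) background subtraction, contour-based blob detection, and greedy nearest-centroid matching across frames. The video file name and thresholds are placeholders; the published system additionally handles appearance matching and blob merge/split, which this sketch does not.

```python
# Hedged, minimal motion-based blob tracking sketch; not the paper's full system.
import cv2
import numpy as np

cap = cv2.VideoCapture("ants.mp4")  # hypothetical input video
backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
prev_centroids = []

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = backsub.apply(frame)
    # Morphological opening removes single-pixel foreground noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) < 20:  # ignore tiny noise blobs (placeholder threshold)
            continue
        m = cv2.moments(c)
        centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    # Greedy nearest-centroid association with the previous frame
    for cx, cy in centroids:
        if prev_centroids:
            px, py = min(prev_centroids,
                         key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
            cv2.line(frame, (int(px), int(py)), (int(cx), int(cy)), (0, 255, 0), 1)
    prev_centroids = centroids
    cv2.imshow("tracks", frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```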


Reproducible and Interpretable Spiculation Quantification for Lung Cancer Screening

Choi, W., Nadeem, S., Alam, S. R., Deasy, J. O., Tannenbaum, A., & Lu, W. (2020). Reproducible and Interpretable Spiculation Quantification for Lung Cancer Screening. Computer Methods and Programs in Biomedicine, 105839. https://doi.org/10.1016/j.cmpb.2020.105839

Highlights

  • A novel, interpretable spiculation feature is presented, computed using the area distortion metric from spherical conformal (angle-preserving) parameterization.
  • A simple one-step feature extraction and prediction model is introduced that uses only our interpretable features (size, spiculation, lobulation, vessel/wall attachment) and has the added advantage of using weakly labeled training data.
  • A semi-automatic segmentation algorithm is also introduced for more accurate and reproducible segmentation of the lung nodule and of vessel/wall attachments. This leads to more accurate spiculation quantification, because the attachments can be excluded from the spikes detected on the lung nodule surface (triangular mesh).
  • Using just our interpretable features (size, attachment, spiculation, lobulation), we achieved AUC = 0.82 on the public LIDC dataset and AUC = 0.76 on the public LUNGx dataset (the previous LUNGx best being AUC = 0.68).
  • State-of-the-art correlation is achieved between our spiculation score (the number of spiculations, Ns) and the radiologists' spiculation score (ρ = 0.44).

Abstract

Spiculations, spikes on the surface of pulmonary nodules, are important predictors of lung cancer malignancy. In this study, we propose an interpretable and parameter-free technique to quantify spiculation using the area distortion metric obtained from a conformal (angle-preserving) spherical parameterization. We exploit the insight that for an angle-preserving spherical mapping of a given nodule, the corresponding negative area distortion precisely characterizes the spiculations on that nodule. We introduce novel spiculation scores based on the area distortion metric and spiculation measures. We also semi-automatically segment the lung nodule (for reproducibility) as well as vessel and wall attachments, to differentiate real spiculations from lobulations and attachments. A simple pathological malignancy prediction model is also introduced. We used pathologist (strong-label) and radiologist (weak-label) ratings from the publicly available LIDC-IDRI dataset to train and test radiomics models containing this feature, and then externally validated the models. We achieved AUC = 0.80 and 0.76, respectively, with models trained on the 811 weakly labeled LIDC cases and tested on the 72 strongly labeled LIDC and 73 LUNGx cases; the previous best model for LUNGx had AUC = 0.68. The number-of-spiculations feature was found to be highly correlated with the radiologists' spiculation score (Spearman's rank correlation coefficient ρ = 0.44). We developed a reproducible, interpretable, parameter-free technique for quantifying spiculations on nodules. The spiculation quantification measures were then applied to the radiomics framework for pathological malignancy prediction, with reproducible semi-automatic segmentation of the nodule. Using only our interpretable features (size, attachment, spiculation, lobulation), we were able to achieve higher performance than previous models. In the future, we will exhaustively test our model for lung cancer screening in the clinic.
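To make the area-distortion idea concrete, here is a hedged NumPy sketch. It assumes a nodule surface mesh and its precomputed conformal spherical parameterization are given (computing the parameterization itself is the hard part and is not shown), and it computes a per-vertex log area-distortion; the paper's exact normalization and derived spiculation scores may differ.

```python
# Hedged sketch: per-vertex area distortion of a spherical parameterization.
import numpy as np

def triangle_areas(verts, faces):
    """Areas of the triangles of a mesh; verts is (n, 3), faces is (m, 3) ints."""
    a = verts[faces[:, 1]] - verts[faces[:, 0]]
    b = verts[faces[:, 2]] - verts[faces[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1)

def area_distortion(verts, sphere_verts, faces):
    """Per-vertex log area-distortion; strongly negative values flag spikes
    (spiculations) whose area shrinks under the angle-preserving spherical map."""
    orig = triangle_areas(verts, faces)
    mapped = triangle_areas(sphere_verts, faces)
    orig = orig / orig.sum()        # normalize total surface area to 1
    mapped = mapped / mapped.sum()
    tri_dist = np.log(mapped / orig)  # per-triangle distortion
    # Average the distortion of each vertex's incident triangles
    dist = np.zeros(len(verts))
    count = np.zeros(len(verts))
    for i in range(3):
        np.add.at(dist, faces[:, i], tri_dist)
        np.add.at(count, faces[:, i], 1)
    return dist / np.maximum(count, 1)
```

In this framing, thresholding the most negative distortion values picks out candidate spike apexes, which is consistent with the paper's insight that negative area distortion characterizes spiculation.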