Medical Image Concept Detection Using Full Scale VGG-like Shallow and Transfer Learning Networks

Farhat Ullah Khan, Izzatdin Aziz, Nordin Zakaria

Abstract

Over the last two decades, medical imaging examinations and technologies have grown exponentially. With the increased demand for medical examinations, the demand for medical imaging experts has also risen. Manual identification and annotation of biomedical concepts tend to be laborious and error-prone due to the varied knowledge of imaging experts, so there is a critical need for automated medical concept detection methods. Finding the relevant biomedical concepts present in a medical image holds the key to solving many automated clinical diagnosis problems, building machine learning pipelines for medical information retrieval, and addressing related issues such as creating and managing legacy or cloud-based descriptive digital repositories. Appropriate mapping from biomedical image concepts to a precise textual summary depends heavily on the efficiency of medical concept detection techniques. A novel clustering technique is presented as a complementary data preconditioning step to reach high concept detection results. The authors grouped 8767 Concept Unique Identifiers (CUIs) into 970 clusters (reducing label size by approximately 26% while using 97.7% of the images in the dataset). The main objective of this research is to examine state-of-the-art convolution-based deep learning models, both pre-trained and trained at full scale, for the task of multi-label classification of medical concepts from medical image input. The work evaluates the performance of the transfer learning networks InceptionV3, Xception, Dense Convolutional Network (DenseNet) 121, VGG-16, and MobileNet, and also presents one full-scale-training CNN architecture for identifying the relevant biomedical concepts present in medical images. The transfer learning approach using the Xception model achieved the highest F1 score of 36.29. The shallow VGG-like full-scale-training architecture also showed a promising result, with an F1 score of 20.018.
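The abstract does not detail the clustering algorithm used to collapse 8767 CUIs into 970 cluster labels. As a hedged illustration only, the sketch below groups CUIs that share identical occurrence patterns across images into a single cluster label; the authors' actual preconditioning technique may differ, and the CUI codes and image identifiers here are hypothetical.

```python
from collections import defaultdict


def cluster_cuis(image_labels):
    """Group CUIs that occur in exactly the same set of images.

    image_labels: dict mapping image id -> set of CUI strings.
    Returns a list of clusters (frozensets of CUIs). Hypothetical
    illustration; the paper's clustering method is not specified here.
    """
    # Invert the mapping: CUI -> set of images it occurs in.
    occurrences = defaultdict(set)
    for image_id, cuis in image_labels.items():
        for cui in cuis:
            occurrences[cui].add(image_id)

    # CUIs with identical occurrence sets collapse into one cluster label.
    clusters = defaultdict(set)
    for cui, images in occurrences.items():
        clusters[frozenset(images)].add(cui)
    return [frozenset(c) for c in clusters.values()]


labels = {
    "img1": {"C0040405", "C0817096"},  # e.g., CT and chest concepts
    "img2": {"C0040405", "C0817096"},
    "img3": {"C0024485"},              # e.g., an MRI concept
}
print(len(cluster_cuis(labels)))  # 2 cluster labels instead of 3 CUIs
```

Collapsing co-occurring CUIs in this way reduces the number of output units a multi-label classifier must predict, which is the preconditioning effect the abstract describes.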
The obtained results reflect a significant improvement over previous experiments, offering state-of-the-art performance and setting a new data preconditioning precedent for highly variable and complex datasets.
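The reported F1 scores compare sets of predicted concepts against ground-truth concept sets per image. A minimal sketch of this kind of per-image F1 averaging is shown below, assuming an ImageCLEF-style scoring convention; the official evaluation script may handle edge cases differently, and the concept identifiers are placeholders.

```python
def sample_f1(true_sets, pred_sets):
    """Mean per-image F1 between ground-truth and predicted concept sets.

    Hedged sketch of ImageCLEF-style concept detection scoring.
    """
    scores = []
    for truth, pred in zip(true_sets, pred_sets):
        if not truth and not pred:
            scores.append(1.0)  # both empty: vacuously perfect
            continue
        tp = len(truth & pred)  # correctly detected concepts
        precision = tp / len(pred) if pred else 0.0
        recall = tp / len(truth) if truth else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores.append(f1)
    return sum(scores) / len(scores)


truth = [{"C1", "C2"}, {"C3"}]
pred = [{"C1"}, {"C3", "C4"}]
print(round(sample_f1(truth, pred), 3))  # 0.667
```

Each image contributes one F1 value, so missing a rare concept on a single image dilutes the mean only slightly, which is one reason label-space reduction can help scores on highly variable datasets.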


Keywords: concept detection, concept annotation, deep learning, medical image processing, neural networks, machine learning.


