The coronavirus outbreak continues to spread around the world, and no one knows when it will end. Since the virus was first identified in Wuhan, China, scientists have therefore launched numerous research projects to understand its nature, to detect it, and to find effective treatments that help and protect patients. In particular, rapid diagnostic and detection systems are a priority for stopping the spread of COVID-19, and medical imaging techniques have been used for this purpose. Current research focuses on exploiting backbones such as VGG, ResNet, and DenseNet, or combinations of them, to detect COVID-19. These backbones, however, leave important aspects of the images unanalyzed, such as their spatial and contextual information, even though this information can support more robust detection. In this paper, we use a 3D representation of the data as input to the proposed 3DCNN-based deep learning model. The process first applies the Bi-dimensional Empirical Mode Decomposition (BEMD) technique to decompose each original image into intrinsic mode functions (IMFs) and then builds a video from these IMF images. The resulting video is used as input to the 3DCNN model to classify and detect the COVID-19 virus. The 3DCNN model consists of a 3D VGG-16 backbone followed by Context-Aware Attention (CAA) modules and fully connected layers for classification. Each CAA module takes the feature maps of a different block of the backbone, which allows the model to learn from feature maps at several levels. In our experiments, we used 6,484 X-ray images: 1,802 COVID-19 positive cases, 1,910 normal cases, and 2,772 pneumonia cases. The results showed that the proposed technique achieved the desired results on the selected dataset, and that combining the 3DCNN with contextual information processing through the CAA modules led to better performance.
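
The BEMD-to-video step can be illustrated with a short sketch. This is a minimal example rather than the authors' exact preprocessing: `bemd_decompose` stands in for any BEMD implementation that maps a 2D image to a stack of IMF images, and `max_imfs` and the per-frame normalization are assumptions chosen here so that every X-ray yields a fixed-size 3D input.

```python
import numpy as np

def imf_video(image: np.ndarray, bemd_decompose, max_imfs: int = 8) -> np.ndarray:
    """Stack the IMFs of one X-ray into a (D, H, W) volume (the 'video').

    `bemd_decompose` is a hypothetical placeholder for any BEMD routine
    that maps a 2D image to an (n_imfs, H, W) array of IMF images.
    """
    imfs = bemd_decompose(image)              # (n_imfs, H, W)
    imfs = imfs[:max_imfs]                    # keep a fixed number of frames
    if imfs.shape[0] < max_imfs:              # pad with zero frames if needed
        pad = np.zeros((max_imfs - imfs.shape[0], *image.shape), imfs.dtype)
        imfs = np.concatenate([imfs, pad], axis=0)
    # per-frame min-max normalization so all IMFs share the same scale
    lo = imfs.min(axis=(1, 2), keepdims=True)
    hi = imfs.max(axis=(1, 2), keepdims=True)
    return (imfs - lo) / np.maximum(hi - lo, 1e-8)
```

Each normalized volume can then be fed to the 3D network as a single-channel input.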
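
The abstract does not specify the internals of the CAA module, so the following PyTorch sketch should be read as one plausible instantiation under stated assumptions: the backbone mirrors the VGG-16 block layout with 3D convolutions, pooling acts only on the spatial axes so the few IMF frames are preserved, and the CAA module is approximated by squeeze-and-excitation-style channel attention over the global 3D context. All names here (`VGG3D_CAA`, `CAA`, `vgg_block`) are hypothetical.

```python
import torch
import torch.nn as nn

class CAA(nn.Module):
    """Channel attention driven by the global 3D context (an assumption;
    the paper's exact CAA design is not given in the abstract)."""
    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, D, H, W)
        ctx = x.mean(dim=(2, 3, 4))             # global context vector (B, C)
        w = self.fc(ctx).view(x.size(0), -1, 1, 1, 1)
        return x * w                            # context-weighted features

def vgg_block(in_ch: int, out_ch: int, n_convs: int) -> nn.Sequential:
    """VGG-style 3D block; pooling halves H and W only, keeping the IMF axis."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv3d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool3d(kernel_size=(1, 2, 2)))
    return nn.Sequential(*layers)

class VGG3D_CAA(nn.Module):
    """3D VGG-16-style backbone with one CAA module per block; the attended
    feature maps of all blocks are pooled and fused for classification."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        cfg = [(1, 64, 2), (64, 128, 2), (128, 256, 3),
               (256, 512, 3), (512, 512, 3)]
        self.blocks = nn.ModuleList(vgg_block(i, o, n) for i, o, n in cfg)
        self.caa = nn.ModuleList(CAA(o) for _, o, _ in cfg)
        self.head = nn.Sequential(
            nn.Linear(sum(o for _, o, _ in cfg), 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):                       # x: (B, 1, D, H, W) IMF video
        pooled = []
        for block, caa in zip(self.blocks, self.caa):
            x = block(x)
            pooled.append(caa(x).mean(dim=(2, 3, 4)))   # pool attended maps
        return self.head(torch.cat(pooled, dim=1))      # 3-way class logits

model = VGG3D_CAA(num_classes=3)
logits = model(torch.randn(2, 1, 8, 64, 64))    # batch of 8-frame IMF videos
```

In a full pipeline, `logits` would be trained with a cross-entropy loss over the three classes listed in the abstract (COVID-19, normal, pneumonia).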
Copyright © 2021. Published by Elsevier Ltd.
