In December 2019, a local outbreak of pneumonia of unknown origin was reported in the city of Wuhan, China.1 The source of the disease was soon discovered to be a new strain of “Coronavirus” (CoV), termed “Severe Acute Respiratory Syndrome Coronavirus 2” (SARS-CoV-2) by the International Committee on Taxonomy of Viruses. The disease caused by the virus was named Coronavirus Disease-2019 (COVID-19) by the World Health Organisation (WHO) in February 2020. COVID-19 is a highly contagious respiratory syndrome with more than 7 million confirmed infections in about 191 countries as of June 16, 2020.2 The disease has since been held responsible for over a million deaths across the globe, with the highest tolls in countries such as the United States, India, Russia, Brazil, the United Kingdom, Italy, Spain, and France. The situation is especially alarming in developing and underdeveloped nations, where the number of deaths and infections is rising daily and health systems are very fragile.
COVID-19 can cause sore throat, cough, fever, runny nose, fatigue, and muscle pain, and in some extreme cases it may lead to severe pneumonia, difficulty in breathing, multi-organ failure, and death.3 Pneumonia is an infection that inflames the lungs’ air sacs, which are responsible for oxygen exchange, and makes breathing difficult. The inflammation, which corresponds to the infiltration of host immune cells at the site of infection, results in the release of small signaling molecules known as cytokines.4 These cytokines recruit more immune cells to the site, which triggers a cascade of further cytokine release (a cytokine storm) at the site of injury. The increased concentration of specific inflammatory cytokines such as IFN-γ, IL-4, and TNF-α results in inflammation of the underlying tissue (the alveolar sacs).5-7 This phenomenon reduces the amount of oxygen the lungs can take up and results in shortness of breath. Further, inflammation of the lungs limits the oxygen supply to other organs such as the heart and brain, causing tissue hypoxia and leading to multi-organ failure.3 Besides SARS-CoV-2, pneumonia can also be caused by other pathogens, namely bacteria, fungi, and other viruses.
Recent research indicates that combining radiologic image features with pathological results may greatly help in the early detection of COVID-19 cases.8-10 Recent studies report that visible changes can be observed in the chest X-ray (CXR) and computed tomography (CT) images of COVID-19 patients even before the onset of disease symptoms.11 Researchers have found that COVID-19 patients’ chest scans show traceable marks or hazy darkened spots, which can serve to differentiate COVID-19 patients from the healthy population.9, 12 Studies carried out on COVID-19 patients’ chest imaging data report not only a range of opacities (viz., single nodular opacity, ground-glass opacities [GGO], or mixed GGO) in one or both lungs, but also consolidation and vascular dilation in the lesion.10, 13, 14 Figure 1 shows CXR images taken on the first, fourth, fifth, and seventh days from a 50-year-old COVID-19 patient with pneumonia. No significant changes can be observed in the CXR image captured on the first day (Figure 1A). However, patches, ill-defined bilateral alveolar consolidation, and progressive radiological worsening are clearly evident in Figure 1B–D, showing the disease progression over time.
CXR imaging is inexpensive, quick, exposes the patient to less harmful radiation, and is a more widespread imaging technique than CT, which makes it very useful and relevant in the current COVID-19 scenario.14 Many significant studies published in the recent past advocate the use of machine learning (ML) and deep learning models for COVID-19 detection from CXR images.14, 16-18 For instance, Mahmud et al. proposed a novel deep neural network architecture based on depth-wise dilated convolutions named “CovXNet” and efficiently detected COVID-19 and other types of pneumonia with distinctive localization from CXRs.19 In another work, Chandra et al. developed an automatic COVID screening (ACoS) system for the detection of COVID-19 patients using a majority-voting-based ensemble classifier.20 Das et al. developed an automated deep-transfer-learning-based method of COVID-19 detection using the extreme version of the Inception (Xception) model.21 Alakus and Turkoglu carried out a comparative study among different deep learning models to predict COVID-19 infection.22 A detailed review of the ML and artificial intelligence (AI) methods developed for COVID-19 detection using CXR and CT scans is presented in Refs. 23, 24. It is evident that such a diagnostic solution can be developed as a cheap, accurate, and fast analysis tool that supports the examination of multiple scans in parallel. Owing to the ability of ML models in CXR image classification, the current study aims to develop an ML-based analytical framework for early detection of COVID-19 and other types of pneumonia (caused by multiple pathogens) using patients’ CXR scans.
The rest of this article is organized as follows. Section 2 explains the dataset used to train and validate the ML models for disease identification. Section 3 describes the methodology developed for CXR image feature extraction and the ML-based image classification framework (COVID-19, viral and bacterial pneumonia patients, and healthy persons). The criterion used for model selection is presented in Section 4. Section 5 presents the results and outcomes of the proposed study; the discussion follows in Section 6. The outcomes and findings of the present work are summarized in the conclusion section.
In ML applications, the training data play a pivotal role in model building. Since COVID-19 is a novel disease, no suitably sized CXR image dataset was available for model building and validation. In the present work, with a view to developing an analytical framework for automatic detection of COVID-19 from CXR images, a dataset was created by collecting images from three different publicly available databases. The created image dataset includes CXR images of four categories/classes, captured from normal/healthy persons, pneumonia (bacterial and viral) patients, and COVID-19 patients. To ensure the quality of the images fed into the framework and the credibility of the results, all CXR images were passed through extensive checks for irregularities.
A check was run on the collected CXR images to detect any duplication among images gathered from multiple sources, which helped keep the image dataset free of repeats. Manual interventions were also carried out iteratively to remove anomalous images, such as those with arrow markings. These pre-processing steps led to an image dataset of appropriate size, 4325 images, categorized into four classes. The created dataset helped to achieve better generalization ability and diagnostic performance. A brief summary of the image dataset created in the present work is shown in Table 1. The trained model was also validated, with different feature sets, on 100 CXR images of COVID-19 patients captured at a radiology lab in India. The CXR images were procured from X-Ray Centre (Bansal), Sangrur, Punjab, India (with individual identities concealed). The CXR images were inspected and interpreted by an expert, Dr. Amandeep Aggarwal, Sangrur, Punjab, India.
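The duplicate check described above can be implemented, for instance, by hashing the raw byte payload of each image and keeping only the first occurrence of each digest. The sketch below is illustrative (the file names and the choice of SHA-256 are assumptions, not the authors' documented procedure):

```python
import hashlib

def dedupe(images):
    # images: iterable of (name, payload_bytes) pairs drawn from the
    # source databases. Keep the first occurrence of each distinct
    # payload, drop byte-identical repeats.
    seen, unique = set(), []
    for name, payload in images:
        digest = hashlib.sha256(payload).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(name)
    return unique
```

Near-duplicates (re-encoded or resized copies of the same scan) would escape an exact-hash check, which is one reason the manual review pass mentioned above remains necessary.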
| Type of images | No. of images (at the time of collection) | No. of images (selected) | Reference |
| --- | --- | --- | --- |
| Normal/healthy | 1341 | 1058 | Rahman et al.25 (https://www.kaggle.com/tawsifurrahman/covid19-radiography-database/data) |
| Viral pneumonia | 1345 | 591 | Rahman et al.25; Cohen et al.26 (https://github.com/ieee8023/covid-chestxray-dataset) |
| Bacterial pneumonia | 2782 | 2116 | Rahman et al.25 |
3 FEATURE EXTRACTION
We performed extensive data pre-processing, followed by two rotationally invariant texture feature extraction techniques, namely Haralick features and Hu moments. Haralick features, introduced by Robert Haralick, are a set of 14 statistics computed from gray-level co-occurrence matrices evaluated at four orientations (0°, 45°, 90°, and 135°).28, 29 Each element of the gray-level co-occurrence matrix gives the probability that a pixel of one gray level is adjacent to a pixel of another gray level. We consider a total of 13 features in our proposed approach, since the 14th feature, the max correlation coefficient, was found to be unstable in most tests and not suitable for computation. While the Haralick features serve for texture analysis, we employ another set of statistical features, the Hu moments,30 to perform an extensive analysis of the shape patterns in the X-ray images.
The pre-processing helped us identify the focus areas in the images and narrow the analysis to the lungs alone. Feature extraction was performed by collecting the textural characteristics of the X-ray images using the Haralick features, and the statistical characteristics of shapes and patterns in the lung X-ray images using the Hu moments. From the results achieved by selecting individual feature sets, we observe that each of the two feature extraction techniques plays an important role in the diagnosis of COVID-19 X-ray images. Table 2 lists the 14 Haralick features considered in the present work for ML model building.
| Feature set | Features |
| --- | --- |
| Haralick | Angular Second Moment; Contrast; Correlation; Sum of Squares or Variance; Inverse Difference Moment; Sum Average; Sum Variance; Sum Entropy; Entropy; Difference Variance; Difference Entropy; Info. Measure of Correlation 1; Info. Measure of Correlation 2; Max Correlation Coefficient |
Haralick features are a set of 14 statistics computed from gray-level co-occurrence matrices evaluated at four orientations, that is, 0°, 45°, 90°, and 135°. Each element of the gray-level co-occurrence matrix gives the probability that a pixel of one gray level is adjacent to a pixel of another gray level. In the present work, 13 Haralick features have been used in framework building, since the 14th feature, the max correlation coefficient, was found to be unstable in most test cases and not suitable for computation.
While Haralick features play an important role in image texture analysis, we employ another approach to carry out an extensive analysis of the patterns in CXR images: a second set of statistical features, called Hu moments, is estimated in the present work.30 The Hu moments are a set of seven features, of which the first six are invariant to image transformations such as rotation, scaling, reflection, and translation. The seventh feature is skew invariant, which helps to distinguish mirror images. These seven feature values are estimated from three kinds of image moments, that is, spatial moments, central moments, and normalized central moments. The features are used to describe shapes in the CXR images. The combined features (Haralick + Hu moments) emerged as a useful and effective feature extraction methodology for the proposed ML-based COVID-19 detection framework.
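As a concrete illustration of the two feature families, the sketch below builds a normalized gray-level co-occurrence matrix with two of the Haralick statistics, and derives the first two of the seven Hu invariants from normalized central moments. This is a simplified pure-NumPy sketch for exposition; the study itself used the mahotas and OpenCV implementations:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    # Normalized co-occurrence matrix: P[i, j] is the probability that a
    # pixel of gray level i has a neighbour of level j at offset (dx, dy).
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                P[img[y, x], img[y2, x2]] += 1
    return P / P.sum()

def angular_second_moment(P):
    # High for homogeneous textures (Haralick feature 1).
    return float((P ** 2).sum())

def contrast(P):
    # Weighted by squared gray-level difference; high for coarse textures.
    i, j = np.indices(P.shape)
    return float(((i - j) ** 2 * P).sum())

def hu_first_two(img):
    # First two Hu invariants from normalized central moments (eta).
    y, x = np.indices(img.shape).astype(float)
    img = img.astype(float)
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00
    def eta(p, q):
        mu = (((x - cx) ** p) * ((y - cy) ** q) * img).sum()
        return mu / m00 ** (1 + (p + q) / 2)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2
```

Rotating an image by 90° leaves the Hu invariants unchanged, which is exactly the rotation invariance these shape features are chosen for.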
3.1 Proposed COVID-19 detection framework
To distinguish among the four classes of CXR images, that is, healthy, pneumonia (bacterial), pneumonia (viral), and COVID-19, a multi-stage ensemble ML framework is proposed in the present work. The developed framework is a three-stage ML model that distinguishes among patients (COVID-19, pneumonia [viral and bacterial]) and the healthy population by analyzing input CXR images. Figure 2 shows the overall block diagram of the proposed multi-model framework. The first stage of the framework includes four ML models, that is, Support Vector Machine (SVM), Decision Tree (DT), Gaussian Naive Bayes (gNB), and k-Nearest Neighbor (kNN). The second stage entails two ML models, AdaBoost and XGBoost, and the third stage comprises a single ML model, the Random Forest (RF) tree classifier. Extensive experimentation was carried out to guide the selection of the set of ML methods at each phase/stage of the proposed framework. The ML models applied at the different stages of the diagnostic framework are presented in Figure 3.
During training, the input image dataset features are supplied to Stage-1 of the proposed framework, which produces four confusion matrices at the output. These confusion matrices are used to identify ambiguously classified images with an ambiguity check module. The image data that do not consistently pass the binary classification test from each of these models, that is, the images that are not correctly classified by all of them, are forwarded to the next phase/stage of the framework. The unambiguously predicted images, however, are bypassed through Path II and prevented from entering the remaining processing stages, as depicted in Figure 2. The same processing steps are applied at the output of Stage-2 as well. The final prediction for the images failing the classification tests at Stage-1 and Stage-2 is generated at the output of Stage-3. Once all the models of the framework are completely trained, the trained framework is tested for its ability to classify different types of input CXR images.
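The ambiguity check can be thought of as an agreement test over one stage's model outputs. The paper does not spell out the exact rule, so the sketch below assumes full agreement among the stage's models as the criterion for an "unambiguous" sample; everything else is forwarded to the next stage:

```python
import numpy as np

def ambiguity_check(preds):
    # preds: (n_models, n_samples) array of class predictions from the
    # models of one stage. Samples on which every model agrees are
    # unambiguous and exit via Path II; the rest are forwarded onward.
    unambiguous = (preds == preds[0]).all(axis=0)
    final_labels = preds[0][unambiguous]      # agreed-upon class labels
    forward_idx = np.flatnonzero(~unambiguous)  # indices sent to next stage
    return final_labels, forward_idx
```

With four Stage-1 models predicting on four samples, for example, only the columns where all four predictions coincide leave the pipeline early; the disagreeing columns continue to Stage-2.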
The framework is thus tuned automatically according to the validation scores achieved by the constituent algorithms during training, with stage-wise ambiguity checks refining the final predictions. Once the model is completely trained, the resulting model is used for testing and validation.
4 MODEL SELECTION
In the first step, 70% of the labeled image dataset is passed through the individual ML models for training and 30% is used for testing. Based on the performance of each tuned ML model, the models are assigned, from lowest to highest performing, to the first through third stages of the proposed framework, as depicted in Figure 3. The highest-performing classifier found in the experimental analysis (i.e., the RF tree classifier) is selected as the third and final stage of the framework. Keeping the best-performing classification model at the final stage enables the framework to classify the instances that received multiple wrong predictions in the earlier stages.
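The stage assignment described above can be sketched as a simple ranking by validation score. The model names and score values below are illustrative placeholders; only the split of four, two, and one model follows Figure 3:

```python
def assign_stages(val_scores):
    # val_scores: {model name: validation accuracy}. The four
    # lowest-scoring models form Stage-1, the middle pair Stage-2,
    # and the single best model Stage-3.
    ranked = sorted(val_scores, key=val_scores.get)  # low -> high
    return {"stage1": ranked[:4], "stage2": ranked[4:6], "stage3": ranked[6:]}
```

This ordering is what lets the strongest classifier arbitrate the hardest samples, since only instances mispredicted at both earlier stages ever reach it.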
The model is implemented in Python 3.6 with a TensorFlow backend and trained on the basic configuration of Google Colab, running on a dual-core Intel(R) Xeon(R) CPU @ 2.30 GHz with 13 GB RAM and 100 GB storage. Using this configuration, the model took approximately 3 min to train on the complete dataset using the 10-fold cross-validation learning scheme. The opencv (cv2) library was used for processing the X-ray images and extracting the Hu moment features, the mahotas library for computing the Haralick features, and imblearn for balancing the multiclass samples and data filtering. The sklearn and xgboost libraries supplied the ML algorithms, hyperparameter tuning, and metric computation modules.
Out of a total of 4325 sample images, we used 3027 (~70%) for training and validation of the model, and 1298 (~30%) for testing. Given the limited size of the clinical data, we followed a 10-fold cross-validation mechanism. In each fold, 2724 (~90% of 3027) image samples were randomly selected for training and the remaining ~10% were used for validation of the framework, as represented in Figure 4. This strategy means that, by the completion of the 10th fold, the model has been trained on essentially all 3027 images, while every image has also been used for validation in some fold.
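The 10-fold split amounts to a shuffled partition of the 3027 training indices. scikit-learn's KFold does the same thing; a self-contained sketch is shown here so the fold sizes can be checked directly:

```python
import numpy as np

def kfold_splits(n_samples, k=10, seed=0):
    # Shuffle the indices once, cut them into k nearly equal folds, and
    # yield (train, validation) index pairs, one pair per fold.
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, folds[i]
```

For n_samples = 3027 and k = 10, each validation fold holds 302 or 303 images, consistent with the ~303 samples per fold reported in Table 5.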
Once the complete framework is trained with the given image dataset, the 1298 test image samples are used to check whether the trained framework predicts outcomes well or suffers from over- or under-fitting on the data samples. In the present work, the proposed framework is trained and tested on 20 and on 10 features, as shown in Table 3, which reports the classification accuracies of the proposed framework. It is observed from Table 3 that the proposed multi-stage COVID-19 detection framework classifies the four classes of the CXR image dataset with a classification accuracy of ~87.75%–95.01%. It is also evident that reducing the number of features adversely affects the performance of the framework. The detailed results, including true negatives, true positives, false negatives, false positives, sensitivity, specificity, precision, and F-score, are presented in Table 4.
| Image samples | Number of features | Features | Classification accuracy (training, testing, validation) |
| --- | --- | --- | --- |
| Training samples = 3027 | 20 | 13 Haralick + 7 Hu Moments | |
| | 10 | Correlation, Sum of Squares/Variance, Sum Average, Sum Variance, Difference Variance, Information Measure of Correlation 2, Hu Moments [feature index = 2, 5, 6, 7] | |
- Abbreviations: Acc, accuracy; FN, false negative; FP, false positive; FS, F-1 score; MC, misclassification; Pre, precision; Sen, sensitivity; Spe, specificity; TN, true negative; TP, true positive.
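The per-class counts reported in Table 4 (TP, TN, FP, FN) reduce to the listed metrics through the standard definitions; a minimal sketch:

```python
def classification_metrics(tp, tn, fp, fn):
    # Standard binary metrics derived from the confusion-matrix counts.
    sensitivity = tp / (tp + fn)                  # a.k.a. recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f_score = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, precision, f_score, accuracy
```

In the multi-class setting used here, these are computed one class at a time, treating that class as positive and the remaining three as negative.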
One of the important elements of the proposed model is the ambiguity check block (refer to Figure 2), whose behavior we analyze next. Table 5A,B and Figures 5 and 6 show the performance of the ambiguity check block for the 20- and 10-feature counts, respectively. Table 5A,B gives the number of samples identified as ambiguous at ambiguity check blocks 1 and 2. It is clear from Table 5A,B and Figures 5 and 6 that the percentage of unambiguous samples passed directly to the output for the validation data closely follows that for the test data. This indicates that, although the data were limited, our training strategies ensured that the model is neither over-fitting nor under-fitting.
| Feature count | Total samples* | Ambiguous samples: Layer 1 | Ambiguous samples: Layer 2 | Ambiguity checks (Training/Testing/Validation) |
| --- | --- | --- | --- | --- |
| | 303 | 122 | 46 | Validation – fold 1 |
| | 303 | 107 | 33 | Validation – fold 2 |
| | 303 | 115 | 39 | Validation – fold 3 |
| | 303 | 110 | 32 | Validation – fold 4 |
| | 303 | 140 | 42 | Validation – fold 5 |
| | 303 | 107 | 40 | Validation – fold 6 |
| | 303 | 139 | 42 | Validation – fold 7 |
| | 303 | 103 | 26 | Validation – fold 8 |
| | 303 | 129 | 49 | Validation – fold 9 |
| | 303 | 117 | 36 | Validation – fold 10 |
| | 303 | 91 | 24 | Validation – fold 1 |
| | 303 | 98 | 31 | Validation – fold 2 |
| | 303 | 107 | 43 | Validation – fold 3 |
| | 303 | 110 | 46 | Validation – fold 4 |
| | 303 | 98 | 33 | Validation – fold 5 |
| | 303 | 101 | 33 | Validation – fold 6 |
| | 303 | 91 | 39 | Validation – fold 7 |
| | 303 | 110 | 42 | Validation – fold 8 |
| | 303 | 117 | 51 | Validation – fold 9 |
| | 303 | 100 | 36 | Validation – fold 10 |
* Number of ambiguous samples passed to the next stage for further processing. The samples used for validation are independent of each other and are selected randomly from the training set.
Table 6 presents the classification accuracies of the proposed framework for the training, testing, and validation datasets. The classification performance using all 20 features exceeds that using 10 features, indicating the importance of all the estimated features and their independence. Table 7 presents the validation accuracy of the multi-stage framework on 100 COVID-19 CXR images (using 20, 13, and 7 image features). The validation results show that the proposed multi-stage framework is capable of diagnosing COVID-19 patients even with a significantly smaller number of image features. In the validation study, the Haralick features were found to be very promising for COVID-19 diagnosis. The validation accuracy could be increased further with a balanced image sample set.
| Ensemble accuracy (20 features) | Feature count | Ensemble accuracy (10 features) | Feature count | Data type |
| --- | --- | --- | --- | --- |

| Image samples | Number of features | Features | Validation accuracy (COVID-19 images) |
| --- | --- | --- | --- |
| Training samples (COVID-19: 560, Bacterial Pneumonia: 2780, Viral Pneumonia: 1341, Normal: 1341) = 6022; Validation samples (COVID-19) = 99 | 20 | 13 Haralick + 7 Hu Moments | 63.63 |
| | 7 | 7 Hu Moments | |
The graphs in Figures 5 and 6 show the ambiguous and unambiguous image samples (refer to Figure 2) passed from Stage-1 onward. The numbers of ambiguous samples for the test, training, and validation data follow each other for the 10- and 20-feature cases, respectively, indicating the excellent performance of the trained model on test and validation data. The same holds for the unambiguous image samples. Table A1 (in the appendix) presents the test performance of the proposed framework using different sets of features, that is, 13 Haralick features, 7 Hu moments, and all 20 (13 Haralick + 7 Hu moment) features. The test results demonstrate the efficacy of the computed features in the COVID-19 detection task; the classification performance remains significant even with only seven image features (73.65%).
The primary concern with COVID-19 is its high transmission rate.35, 36 The SARS-CoV-2 virus is easily transmitted within the community by direct (touch and body-fluid transmission) or indirect (droplets from the sneeze, cough, or breathing of an infected person) contact. The rapid growth of COVID-19 cases has brought the healthcare system to the point of collapse in many advanced countries.37 The healthcare system has been impacted adversely and faces shortages of ventilators, ICUs, and testing kits.16 Many countries had to impose lockdowns to break the chain of transmission and advised their people to avoid movement and public gatherings. However, these preventive measures have been of only limited help in curbing the COVID-19 pandemic, owing to the unavailability of specific therapeutics to combat the disease.
Considering the prevailing scenario, early detection of COVID-19 is a crucial step in fighting the disease. Early diagnosis and correct treatment may prevent disease progression to severe, fatal levels. Early isolation of such patients also helps control the transmission rate and reduces the stress on the healthcare system. Currently, the most common and reliable testing method available for COVID-19 detection is the real-time reverse transcription-polymerase chain reaction (rRT-PCR) test.38 The test is conducted on upper and lower respiratory specimens collected from the patient, and the results are generally produced within a few hours to 2 days.16 It is worth mentioning that the SARS-CoV-2 RNA is typically detectable in a respiratory specimen only during the acute phase of infection, and the rRT-PCR test has a relatively low sensitivity (60%–70%). A serological (antibody) test also exists for viral detection, but its limitation is that antibodies develop only 7–10 days after viral infection. In such cases, chest radiological imaging, namely CT and X-ray, can be used as an alternative to the rRT-PCR and serological tests for early diagnosis of COVID-19, with the viral symptoms investigated through critical examination of the patient’s scans.39-41 Owing to the efficacy of CT scans in COVID-19 detection, Hu et al. proposed weakly supervised deep learning for COVID-19 detection and classification using subjects’ chest CT scans.41 In addition to exploring ML model efficacy in the COVID-19 detection task, researchers have also discussed the limitations of such models for COVID-19 diagnosis and prognostication.42, 43 Following this earlier work on chest CT and X-ray scans, the present work developed an ML-based analytical framework for early detection of COVID-19 and other types of pneumonia (caused by multiple pathogens) from patients’ CXR scans, which achieved a classification accuracy of ~88%–92%.
The proposed multi-stage COVID-19 detection framework is able to differentiate accurately between the patient categories and the healthy population using CXR images. The proposed model was trained, tested, and validated on the publicly available dataset. The classification accuracies achieved in training, validation, and testing are 95.01%, 92.05%, and 89.21%, respectively, for the four-class classification task. The classification results presented in Tables 3, 4, and 6 elucidate the efficacy of the proposed methodology and the importance and independence of the estimated features. Table 8 shows a comparative study of the proposed COVID-19 diagnostic framework against recently reported methods for COVID-19 detection. The performance is compared with studies carried out on three-class (i.e., COVID-19, pneumonia, and healthy/no findings) and four-class (i.e., COVID-19, bacterial pneumonia, viral pneumonia, and healthy/no findings) classification problems. It can be observed from Table 8 that the proposed framework works well for the four-class classification and exhibits the highest classification accuracy among the recently proposed COVID-19 detection methodologies. The proposed automated COVID-19 diagnosis framework can play a major role in building an autonomous disease diagnosis system that can be installed in hospitals with limited resources for patient testing. Healthcare personnel are the most exposed and vulnerable to the infection due to direct contact with patients during testing and treatment. An autonomous mechanism would greatly help in limiting such infections, since fewer healthcare personnel would be involved in conducting patient scans and physical distance could be maintained during the scans.
| Study | Number of classes/number of chest X-rays | Proposed architecture | Accuracy (%) | Remarks |
| --- | --- | --- | --- | --- |
| Ozturk et al.14 | 3-class/125 COVID-19 + 500 pneumonia + 500 no finding | DarkCovidNet | 87.02 | |
| Civit-Masot et al.44 | 3-class/132 COVID-19 + 132 pneumonia + 132 healthy | VGG16-based deep learning model | 86 | AUCs on ROC curves are greater than 0.9 |
| Khan et al.16 | 4-class/76 COVID-19 + 93 healthy + 87 viral pneumonia + 94 bacterial pneumonia | CoroNet | 89.6 | 4-fold classification accuracy |
| Wang et al.45 | 4-class/not available | COVID-Net | 83.5 | COVID-Net performance analysis by Khan et al.16 |
| Mahmud et al.19 | 4-class/305 COVID-19 + 305 healthy + 305 viral pneumonia + 305 bacterial pneumonia | CovXNet | 90.3 | |
| Proposed framework | 4-class/random selection among 1298 chest X-rays (COVID-19 + healthy + viral pneumonia + bacterial pneumonia) | Multi-stage framework | 92.4 | 10-fold classification accuracy |
The present work proposed a novel multi-stage ML framework to distinguish among COVID-19, viral and bacterial pneumonia patients, and healthy persons using CXR images. When tested and validated on the test and validation image datasets, respectively, the trained framework reproduced significantly higher classification accuracy than other recently proposed methods. The classification accuracies achieved in training, validation, and testing are 92.4%, 88.24%, and 87.13%, respectively. It was observed in the study that limiting the number of features adversely affects the performance of the framework; therefore, it is recommended to use all 20 features for disease diagnosis. Once more image data of COVID-19 patients become available, the current framework will be extended into a deep learning model that selects the best-fitting features for increased performance.
The authors are grateful to the X-Ray Centre (Bansal), Sangrur, Punjab, India for providing COVID-19 chest X-ray images. Also, we are grateful to Dr. Amandeep Aggarwal, Sangrur, Punjab, India for helping in analysis and interpretation of the acquired X-ray images.
CONFLICT OF INTEREST
The authors declare that there is no conflict of interest.
Shikhar Johri: Methodology, Software, Validation, Compilation of Results and figures. Mehendi Goyal: Methodology, Validation, Compilation of Results and figures. Sahil Jain: Collection of Chest X-ray Images, Results Compilation. Manoj Baranwal: Conceptualization, Investigation, Manuscript writing-review and editing. Vinay Kumar: Conceptualization, Investigation, Manuscript writing-review and editing. Rahul Upadhyay: Conceptualization, Investigation, Manuscript writing-review and editing.
| Data | Features | Class | TP | TN | FP | FN | Acc | MC | Sen | Spe | Pre | FS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Test | 20 (all) | Bacterial Pneumonia | 568 | 611 | 52 | 67 | 0.91 | 0.09 | 0.89 | 0.92 | 0.92 | 0.91 |
| Test | 13 (Haralick features) | Bacterial Pneumonia | 554 | 569 | 94 | 81 | 0.87 | 0.13 | 0.87 | 0.86 | 0.86 | 0.87 |
| Test | 7 (Hu moments features) | Bacterial Pneumonia | 563 | 448 | 215 | 72 | 0.78 | 0.22 | 0.89 | 0.68 | 0.68 | 0.77 |

- Abbreviations: Acc, accuracy; FN, false negative; FP, false positive; FS, F-1 score; MC, misclassification; Pre, precision; Sen, sensitivity; Spe, specificity; TN, true negative; TP, true positive.
Data openly available in a public repository.
- 1 WHO. WHO statement regarding cluster of pneumonia cases in Wuhan. WHO online release. 2020. https://www.who.int/china/news/detail/09-01-2020-who-statement-regardingcluster-of-pneumonia-cases-in-wuhan-china. Accessed February 11, 2020.
- 2 WHO. Coronavirus disease (COVID-19) situation report. WHO online release. 2020. https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200616-covid-19-sitrep-148-draft.pdf?sfvrsn=9b2015e9_2. Accessed June 17, 2020.
- 3. Clinical characteristics of coronavirus disease 2019 in China. N Engl J Med. 2020; 382: 1708–1720. https://doi.org/10.1056/nejmoa2002032.
- 4. Inflammatory responses and inflammation-associated diseases in organs. Oncotarget. 2018; 9: 7204–7218. https://doi.org/10.18632/oncotarget.23208.
- 5. Rationale for targeting complement in COVID-19. EMBO Mol Med. 2020; 12(8): e12642.
- 6. Severe COVID-19: a multifaceted viral vasculopathy syndrome. Ann Diagn Pathol. 2021; 50: 151645.
- 7. Toward understanding molecular bases for biological diversification of human coronaviruses: present status and future perspectives. Front Microbiol. 2020; 11: 2016.
- 8. Coronavirus disease 2019 (COVID-19): role of chest CT in diagnosis and management. Am J Roentgenol. 2020; 214: 1280–1286. https://doi.org/10.2214/ajr.20.22954.
- 9. Radiological findings from 81 patients with COVID-19 pneumonia in Wuhan, China: a descriptive study. Lancet Infect Dis. 2020; 24: 425–434. https://doi.org/10.1016/s1473-3099(20)30086-4.
- 10. Relation between chest CT findings and clinical conditions of coronavirus disease (COVID-19) pneumonia: a multicenter study. Am J Roentgenol. 2020; 214: 1072–1077. https://doi.org/10.2214/ajr.20.22976.
- 11. A familial cluster of pneumonia associated with the 2019 novel coronavirus indicating person-to-person transmission: a study of a family cluster. Lancet. 2020; 395: 514–523. https://doi.org/10.1016/s0140-6736(20)30154-9.
- 12. Sensitivity of chest CT for COVID-19: comparison to RT-PCR. Radiology. 2020; 296: E115–E117. https://doi.org/10.1148/radiol.2020200432.
- 13. Chest radiographic and CT findings of the 2019 novel coronavirus disease (COVID-19): analysis of nine patients treated in Korea. Korean J Radiol. 2020; 21: 494–500. https://doi.org/10.3348/kjr.2020.0132.
- 14. Automated detection of COVID-19 cases using deep neural networks with X-ray images. Comput Biol Med. 2020; 121: 103792. https://doi.org/10.1016/j.compbiomed.2020.103792.
- 15. COVID-19 pneumonia – evolution over a week. 2020. https://radiopaedia.org/cases/covid-19-pneumonia-evolution-over-a-week-1
- 16. CoroNet: a deep neural network for detection and diagnosis of COVID-19 from chest X-ray images. Comput Methods Programs Biomed. 2020; 196: 105581. https://doi.org/10.1016/j.cmpb.2020.105581.
- 17. COVID-19 identification in chest X-ray images on flat and hierarchical classification scenarios. Comput Methods Programs Biomed. 2020; 194: 105532. https://doi.org/10.1016/j.cmpb.2020.105532.
- 18. Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: results of 10 convolutional neural networks. Comput Biol Med. 2020; 121: 103795. https://doi.org/10.1016/j.compbiomed.2020.103795.
- 19. CovXNet: a multi-dilation convolutional neural network for automatic COVID-19 and other pneumonia detection from chest X-ray images with transferable multi-receptive feature optimization. Comput Biol Med. 2020; 122: 103869. https://doi.org/10.1016/j.compbiomed.2020.103869.
- 20. Coronavirus disease (COVID-19) detection in chest X-ray images using majority voting based classifier ensemble. Expert Syst Appl. 2021; 165: 113909.
- 21. Automated deep transfer learning-based approach for detection of COVID-19 infection in chest X-rays. IRBM. 2020.
- 22. Comparison of deep learning approaches to predict COVID-19 infection. Chaos Solitons Fractals. 2020; 140: 110120.
- 23. Applications of machine learning and artificial intelligence for COVID-19 (SARS-CoV-2) pandemic: a review. Chaos Solitons Fractals. 2020; 139: 110059.
- 24. Machine learning for COVID-19 detection and prognostication using chest radiographs and CT scans: a systematic methodological review. 2020. arXiv preprint arXiv:2008.06388.
- 25. COVID-19 Radiography Database. Kaggle; 2020. https://www.kaggle.com/tawsifurrahman/covid19-radiography-database/data. Accessed July 12, 2020.
- 26. COVID-19 image data collection. 2020. arXiv:2003.11597. https://github.com/ieee8023/covid-chestxray-dataset. Accessed July 12, 2020.
- 27. Chest X-ray images (pneumonia). https://www.kaggle.com/paultimothymooney/chest-xray-pneumonia. Accessed July 12, 2020.
- 28. Statistical and structural approaches to texture. Proc IEEE. 1979; 67: 786–804. https://doi.org/10.1109/proc.1979.11328.
- 29. An innovative neural network framework to classify blood vessels and tubules based on Haralick features evaluated in histological images of kidney biopsy. Neurocomputing. 2017; 228: 143–153. https://doi.org/10.1016/j.neucom.2016.09.091.
- 30. Automatic tuberculosis screening using chest radiographs. IEEE Trans Med Imaging. 2014; 33: 233–245. https://doi.org/10.1109/tmi.2013.2284099.
- 31. Comparison of random forest, k-nearest neighbor, and support vector machine classifiers for land cover classification using Sentinel-2 imagery. Sensors. 2018; 18: 18. https://doi.org/10.3390/s18010018.
- 32. Application of S-transform for automated detection of vigilance level using EEG signals. J Biol Syst. 2016; 24(1): 1–27. https://doi.org/10.1142/S0218339016500017.
- 33. Classification of drowsy and controlled EEG signals. In: 2012 IEEE Nirma University International Conference on Engineering (NUiCONE); 2012: 1–4. https://doi.org/10.1109/NUICONE.2012.6493289.
- 34. Boosting algorithms: regularization, prediction and model fitting. Stat Sci. 2007; 22: 516–522. https://doi.org/10.1214/07-sts242rej.
- 35. Influenza-associated pneumonia as reference to assess seriousness of coronavirus disease (COVID-19). Eurosurveillance. 2020; 25: 1–5. https://doi.org/10.2807/1560-7917.es.2020.25.11.2000258.
- 36. Mutational frequencies of SARS-CoV-2 genome during the beginning months of the outbreak in USA. Pathogens. 2020; 9(7): 565. https://doi.org/10.3390/pathogens9070565.
- 37. Transmission routes of 2019-nCoV and controls in dental practice. Int J Oral Sci. 2020; 12: 1–6. https://doi.org/10.1038/s41368-020-0075-9.
- 38. Detection of SARS-CoV-2 in different types of clinical specimens. JAMA. 2020; 323: 1843–1844. https://doi.org/10.1001/jama.2020.3786.
- 39. Essentials for radiologists on COVID-19: an update-Radiology Scientific Expert Panel. Radiology. 2020; 296: E113–E114. https://doi.org/10.1148/radiol.2020200527.
- 40. Chest CT for typical coronavirus disease 2019 (COVID-19) pneumonia: relationship to negative RT-PCR testing. Radiology. 2020; 296: E41–E45. https://doi.org/10.1148/radiol.2020200343.
- 41. Weakly supervised deep learning for COVID-19 infection detection and classification from CT images. IEEE Access. 2020; 8: 118869–118883.
- 42. Machine learning for COVID-19 diagnosis and prognostication: lessons for amplifying the signal while reducing the noise. Radiol Artif Intell. 2020; e210011. https://pubs.rsna.org/doi/pdf/10.1148/ryai.2021210011.
- 43. Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans. Nat Mach Intell. 2021; 3(3): 199–217.
- 44. Deep learning system for COVID-19 diagnosis aid using X-ray pulmonary images. Appl Sci. 2020; 10(13): 4640.
- 45. COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. 2020. arXiv:2003.09871. https://arxiv.org/abs/2003.09871.