{"database":"biostudies-literature","file_versions":[],"scores":null,"additional":{"submitter":["Li C"],"funding":["Natural Science Foundation of Guangdong Province","National Natural Science Foundation of China","Shenzhen Fundamental Research Program"],"pagination":["4140-4151"],"full_dataset_link":["https://www.ebi.ac.uk/biostudies/studies/S-EPMC8904133"],"repository":["biostudies-literature"],"omics_type":["Unknown"],"volume":["25(11)"],"pubmed_abstract":["The coronavirus disease 2019 (COVID-19) has become a severe worldwide health emergency and is spreading at a rapid rate. Segmentation of COVID lesions from computed tomography (CT) scans is of great importance for supervising disease progression and further clinical treatment. As labeling COVID-19 CT scans is labor-intensive and time-consuming, it is essential to develop a segmentation method based on limited labeled data to conduct this task. In this paper, we propose a self-ensembled co-training framework, which is trained by limited labeled data and large-scale unlabeled data, to automatically extract COVID lesions from CT scans. Specifically, to enrich the diversity of unsupervised information, we build a co-training framework consisting of two collaborative models, in which the two models teach each other during training by using their respective predicted pseudo-labels of unlabeled data. Moreover, to alleviate the adverse impacts of noisy pseudo-labels for each model, we propose a self-ensembling strategy to perform consistency regularization for the up-to-date predictions of unlabeled data, in which the predictions of unlabeled data are gradually ensembled via moving average at the end of every training epoch. We evaluate our framework on a COVID-19 dataset containing 103 CT scans. Experimental results show that our proposed method achieves better performance in the case of only 4 labeled CT scans compared to the state-of-the-art semi-supervised segmentation networks."],"journal":["IEEE journal of biomedical and health informatics"],"pubmed_title":["Self-Ensembling Co-Training Framework for Semi-Supervised COVID-19 CT Segmentation."],"pmcid":["PMC8904133"],"funding_grant_id":["JCYJ20200109110208764","U1813204","JCYJ20200109110420626","61802385","2021A1515012604"],"pubmed_authors":["Li C","Zhang K","Feng Z","Lin F","Deng Z","Dong L","Dou Q","Si W","Heng PA","Deng X"],"additional_accession":[]},"is_claimable":false,"name":"Self-Ensembling Co-Training Framework for Semi-Supervised COVID-19 CT Segmentation.","description":"The coronavirus disease 2019 (COVID-19) has become a severe worldwide health emergency and is spreading at a rapid rate. Segmentation of COVID lesions from computed tomography (CT) scans is of great importance for supervising disease progression and further clinical treatment. As labeling COVID-19 CT scans is labor-intensive and time-consuming, it is essential to develop a segmentation method based on limited labeled data to conduct this task. In this paper, we propose a self-ensembled co-training framework, which is trained by limited labeled data and large-scale unlabeled data, to automatically extract COVID lesions from CT scans. Specifically, to enrich the diversity of unsupervised information, we build a co-training framework consisting of two collaborative models, in which the two models teach each other during training by using their respective predicted pseudo-labels of unlabeled data. Moreover, to alleviate the adverse impacts of noisy pseudo-labels for each model, we propose a self-ensembling strategy to perform consistency regularization for the up-to-date predictions of unlabeled data, in which the predictions of unlabeled data are gradually ensembled via moving average at the end of every training epoch. We evaluate our framework on a COVID-19 dataset containing 103 CT scans. Experimental results show that our proposed method achieves better performance in the case of only 4 labeled CT scans compared to the state-of-the-art semi-supervised segmentation networks.","dates":{"release":"2021-01-01T00:00:00Z","publication":"2021 Nov","modification":"2024-11-19T16:35:10.059Z","creation":"2024-11-19T16:35:10.059Z"},"accession":"S-EPMC8904133","cross_references":{"pubmed":["34375293"],"doi":["10.1109/JBHI.2021.3103646"]}}