An X-ray computed micro-tomography dataset for oil removal from carbonate porous media.
ABSTRACT: This study reveals the pore-scale details of oil mobilisation and recovery from a carbonate rock upon injection of aqueous nanoparticle (NP) suspensions. X-ray computed micro-tomography (µCT), a non-destructive imaging technique, was used to acquire a dataset that includes: (i) 3D images of the sample collected at the end of fluid injection steps, and (ii) 2D radiogram series collected during fluid injections. The latter allows fluid flow dynamics to be monitored at time resolutions down to a few seconds using a laboratory-based µCT scanner. By making this dataset publicly available we enable (i) new image reconstruction algorithms to be tested on large images, (ii) further development of image segmentation algorithms based on machine learning, and (iii) new models for multi-phase fluid displacements in porous media to be evaluated using images of a dynamic process in a naturally occurring and complex material. This dataset is comprehensive in that it offers a series of images captured before, during, and after the immiscible fluid injections.
Project description: This work provides new insights into the dynamics of silica nanoparticle-based removal of organic fluids (here oil) from naturally occurring porous media. We used 4D (time-resolved 3D) imaging at the pore scale with the X-ray computed micro-tomography (µCT) technique. The captured 3D tomographic time-series data reveal the dynamics of immiscible oil displacement from a carbonate rock upon injection of nanoparticle (NP) suspensions (0.06 and 0.12 wt% SiO2 in deionised water). Our analysis shows significant pore-scale remobilisation of initially trapped oil upon injection of the NP suspensions, particularly at the higher concentration. Our data show that oil clusters become significantly smaller, with larger fluid/fluid interfaces, as a result of the higher-concentration NP injection. This paper demonstrates that use of 2D radiograms collected during fluid injections allows flow dynamics to be monitored at time resolutions down to a few seconds using conventional laboratory-based µCT scanners. Here, as an underlying mechanism for oil remobilisation, we present the first 4D evidence of in-situ formation of a nanoparticle-induced oil-in-water emulsion.
Project description: Background: Infiltrations of 18F-fluorodeoxyglucose (FDG) injections affect positron emission tomography/computed tomography (PET/CT) image quality and quantification. A device using scintillation sensors (Lucerno Dynamics, Cary, NC) provides dynamic measurements acquired during FDG uptake to identify and characterize radioactivity near the injection site prior to patient imaging. Our aim was to compare sensor measurements against dynamic PET image acquisition, our proposed reference in assessing injection quality during the uptake period. Methods: Subjects undergoing routine FDG PET/CT imaging were eligible for this Institutional Review Board-approved prospective study. After providing informed consent, subjects had sensors topically placed on their arms. FDG was injected into subjects' veins directly on the PET imaging table. Dynamic images of the injection site were acquired during 45 min of the uptake period. These dynamic image acquisitions and subjects' routine standard static images were evaluated by nuclear medicine physicians for abnormal FDG accumulation near the injection site. Sensor measurements were interpreted independently by Lucerno staff. Dynamic image acquisition interpretations were compared to the sensor measurement interpretations and to static image interpretations. Results: Twenty-four subjects were consented and enrolled. Data from 21 subjects were gathered. During dynamic image acquisition review, physicians interpreted 4 subjects as having no FDG accumulation at the injection site, whereas 17 showed evidence of accumulation. In 10 of the 17 cases that showed FDG accumulation, the FDG presence at the injection site resolved completely during uptake, corresponding to venous stasis, the temporary sequestration of blood from circulation. Static image interpretation agreed with dynamic image interpretation in 11/21 (52%) subjects.
Sensor measurement interpretations agreed with the dynamic image interpretations in 18/21 (86%) subjects. Conclusions: Sensor measurements can be an effective way to identify and characterize infiltrations and venous stasis. Like an infiltration, venous stasis may produce spurious and clinically meaningful measurement bias and possibly even scan misinterpretation. Since the quality and quantification of PET/CT studies are of clinical importance, sensor measurements acquired during FDG uptake may prove to be a useful quality control measure to reduce infiltration rates and potentially improve patient care. Registration: Clinicaltrials.gov, Identifier: NCT03041090.
Project description: PURPOSE: We aimed to evaluate the quality of chest computed tomography (CT) images obtained with low-dose CT using three iterative reconstruction (IR) algorithms. METHODS: Two 64-detector spiral CT scanners (HDCT and iCT) were used to scan a chest phantom containing 6 ground-glass nodules (GGNs) at 11 radiation dose levels. CT images were reconstructed by filtered back projection or three IR algorithms. Reconstructed images were analyzed for CT values, average noise, contrast-to-noise ratio (CNR) values, subjective image noise, and diagnostic acceptability of the GGNs. Repeated-measures analysis of variance was used for statistical analyses. RESULTS: Average noise decreased and CNR increased with increasing radiation dose when the same reconstruction algorithm was applied. Average image noise was significantly lower when reconstructed with MBIR than with iDose4 at the same low radiation doses. The two radiologists showed good interobserver consistency in image quality (kappa = 0.83). A significant relationship was found between image noise and diagnostic acceptability of the GGNs. CONCLUSION: All three IR algorithms are able to reduce the image noise and improve the image quality of low-dose CT. At the same radiation dose, low-dose CT images reconstructed with the MBIR algorithm are of better quality than those from the other IR algorithms.
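The CNR values analyzed above are derived from ROI statistics. A minimal sketch of one common CNR definition follows; the study does not state which variant or ROI placement it used, so the function and its inputs are illustrative only:

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio from two ROIs of CT numbers (HU).

    CNR = |mean(signal) - mean(background)| / SD(background).
    One common definition; the exact variant used in the study is not stated.
    """
    signal_roi = np.asarray(signal_roi, float)
    background_roi = np.asarray(background_roi, float)
    contrast = signal_roi.mean() - background_roi.mean()
    noise = background_roi.std()  # image noise taken from the uniform ROI
    return abs(contrast) / noise
```

With this definition, raising the dose lowers the background SD and therefore raises the CNR, matching the trend reported above.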
Project description: Objective: Digital pathology is today a widely used technology, and the digitalization of microscopic slides into whole slide images (WSIs) allows the use of machine learning algorithms as a tool in the diagnostic process. In recent years, "deep learning" algorithms for image analysis have been applied to digital pathology with great success. The training of these algorithms requires a large volume of high-quality images and image annotations. These large image collections are a potent source of information, and to use and share the information, standardization of the content through a consistent terminology is essential. The aim of this project was to develop a pilot dataset of exhaustively annotated WSIs of normal and abnormal human tissue and link the annotations to appropriate ontological information. Materials and Methods: Several biomedical ontologies and controlled vocabularies were investigated with the aim of selecting the most suitable ontology for this project. The selection criteria required an ontology that covered anatomical locations, histological subcompartments, histopathologic diagnoses, histopathologic terms, and generic terms such as normal, abnormal, and artifact. WSIs of normal and abnormal tissue from 50 colon resections and 69 skin excisions, diagnosed 2015-2016 at the Department of Clinical Pathology in Linköping, were randomly collected. These images were manually and exhaustively annotated at the level of major subcompartments, including normal or abnormal findings and artifacts. Results: Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) was chosen, and the annotations were linked to its codes and terms. Two hundred WSIs were collected and annotated, resulting in 17,497 annotations, covering a total area of 302.19 cm2, equivalent to 107.7 gigapixels. Ninety-five unique SNOMED CT codes were used. The time taken to annotate a WSI varied from 45 s to over 360 min, for a total time of approximately 360 h.
Conclusion: This work resulted in a dataset of 200 exhaustively annotated WSIs of normal and abnormal tissue from the colon and skin, and it has informed plans to build a comprehensive library of annotated WSIs. SNOMED CT was found to be the best ontology for annotation labeling. This project also demonstrates the need for future development of annotation tools in order to make the annotation process more efficient.
Project description: To develop a quality assurance (QA) workflow by using a robust, curated, manually segmented anatomic region-of-interest (ROI) library as a benchmark for quantitative assessment of different image registration techniques used for head and neck radiation therapy-simulation computed tomography (CT) with diagnostic CT coregistration. Radiation therapy-simulation CT images and diagnostic CT images in 20 patients with head and neck squamous cell carcinoma treated with curative-intent intensity-modulated radiation therapy between August 2011 and May 2012 were retrospectively retrieved with institutional review board approval. Sixty-eight reference anatomic ROIs with gross tumor and nodal targets were then manually contoured on images from each examination. Diagnostic CT images were registered with simulation CT images rigidly and by using four deformable image registration (DIR) algorithms: atlas based, B-spline, demons, and optical flow. The resultant deformed ROIs were compared with manually contoured reference ROIs by using similarity coefficient metrics (i.e., Dice similarity coefficient) and surface distance metrics (i.e., 95% maximum Hausdorff distance). The nonparametric Steel test with control was used to compare different DIR algorithms with rigid image registration (RIR), with the post hoc Wilcoxon signed-rank test for stratified metric comparison. A total of 2720 anatomic and 50 tumor and nodal ROIs were delineated. All DIR algorithms showed improved performance over RIR for anatomic and target ROI conformance, as shown for most comparison metrics (Steel test, P < .008 after Bonferroni correction). The performance of different algorithms varied substantially with stratification by specific anatomic structures or category and simulation CT section thickness. Development of a formal ROI-based QA workflow for registration assessment demonstrated improved performance with DIR techniques over RIR.
After QA, DIR implementation should be the standard for head and neck diagnostic CT and simulation CT alignment, especially for target delineation.
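The two comparison metrics named above can be sketched in a few lines. The following is an illustrative implementation, assuming binary ROI masks for the Dice coefficient and surface point arrays for the 95% Hausdorff distance (all names are ours, not from the study); production contours would use a KD-tree or distance transform rather than this brute-force pairwise approach:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1 = inside ROI)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def hd95(pts_a, pts_b):
    """95th-percentile symmetric Hausdorff distance between two surfaces.

    pts_a, pts_b: (N, 3) arrays of surface coordinates. The 95th percentile
    of nearest-surface distances makes the metric robust to a few outliers,
    unlike the classical maximum Hausdorff distance.
    """
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # nearest distance from each point of A to B
    b_to_a = d.min(axis=0)
    return max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95))
```

A deformed ROI that matches its reference perfectly yields a Dice coefficient of 1 and an hd95 of 0.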
Project description: Fusion of anatomic information in computed tomography (CT) and functional information in 18F-FDG positron emission tomography (PET) is crucial for accurate differentiation of tumor from benign masses, designing radiotherapy treatment plans, and staging of cancer. Although current PET and CT images can be acquired from a combined 18F-FDG PET/CT scanner, the two acquisitions are scanned separately and take a long time, which may introduce global and local positional errors caused by respiratory motion or organ peristalsis. Registration (alignment) of whole-body PET and CT images is therefore a prerequisite for their meaningful fusion. The purpose of this study was to assess the performance of two multimodal registration algorithms for aligning PET and CT images. The proposed gradient of mutual information (GMI)-based demons algorithm, which incorporated the GMI between two images as an external force to facilitate the alignment, was compared with the point-wise mutual information (PMI) diffeomorphic-based demons algorithm, whose external force was modified by replacing the image intensity difference in the diffeomorphic demons algorithm with the PMI to make it appropriate for multimodal image registration. Eight patients with esophageal cancer were enrolled in this IRB-approved study. Whole-body PET and CT images were acquired from a combined 18F-FDG PET/CT scanner for each patient. The modified Hausdorff distance (d_MH) was used to evaluate the registration accuracy of the two algorithms. Across all patients, the mean values and standard deviations (SDs) of d_MH were 6.65 (± 1.90) and 6.01 (± 1.90) voxels after the GMI-based demons and the PMI diffeomorphic-based demons registration algorithms, respectively. Preliminary results on oncological patients showed that the respiratory motion and organ peristalsis in PET/CT esophageal images could not be neglected, although a combined 18F-FDG PET/CT scanner was used for image acquisition.
The PMI diffeomorphic-based demons algorithm was more accurate than the GMI-based demons algorithm in registering PET/CT esophageal images.
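The modified Hausdorff distance used as the evaluation metric replaces the maximum in the directed Hausdorff distance with a mean over nearest-neighbour distances, which makes it robust to outlier points. A brute-force sketch, assuming small (N, d) point-set inputs (e.g., voxel coordinates of matched anatomy); the variable names are ours:

```python
import numpy as np

def modified_hausdorff(a, b):
    """Modified Hausdorff distance between two point sets.

    a, b: (N, d) coordinate arrays. The directed distance from A to B is
    the mean nearest-neighbour distance, and the symmetric metric is the
    maximum of the two directed distances. Brute-force for illustration.
    """
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1).mean()  # mean nearest distance A -> B
    d_ba = d.min(axis=0).mean()  # mean nearest distance B -> A
    return max(d_ab, d_ba)
```

A lower d_MH indicates a better alignment, which is why the PMI diffeomorphic-based result (6.01 voxels) is the more accurate of the two reported above.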
Project description: Image quality is a key issue in radiology, particularly in a clinical setting where it is important to achieve accurate diagnoses while minimizing radiation dose. Some computed tomography (CT) manufacturers have introduced algorithms that claim significant dose reduction. In this study, we assessed CT image quality produced by two reconstruction algorithms provided with GE Healthcare's Discovery 690 Elite positron emission tomography (PET) CT scanner. Image quality was measured for images obtained at various doses with both conventional filtered back-projection (FBP) and adaptive statistical iterative reconstruction (ASIR) algorithms. A standard CT dose index (CTDI) phantom and a pencil ionization chamber were used to measure the CT dose at 120 kVp and an exposure of 260 mAs. Image quality was assessed using two phantoms. CT images of both phantoms were acquired at a tube voltage (kV) of 120 with exposures ranging from 25 mAs to 400 mAs. Images were reconstructed using FBP and ASIR ranging from 10% to 100%, then analyzed for noise, low-contrast detectability, contrast-to-noise ratio (CNR), and modulation transfer function (MTF). Noise was 4.6 HU in water phantom images acquired at 260 mAs/FBP 120 kV and 130 mAs/50% ASIR 120 kV. The large objects (frequency < 7 lp/cm) retained fairly acceptable image quality at 130 mAs/50% ASIR, compared to 260 mAs/FBP. The application of ASIR for small objects (frequency > 7 lp/cm) showed poor visibility compared to FBP at 260 mAs and even worse for images acquired at less than 130 mAs. ASIR blending more than 50% at low dose tends to reduce contrast of small objects (frequency > 7 lp/cm). We concluded that dose reduction and ASIR should be applied with close attention if the objects to be detected or diagnosed are small (frequency > 7 lp/cm). Further investigations are required to correlate the small objects (frequency > 7 lp/cm) to patient anatomy and clinical diagnosis.
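The percentage ASIR settings above are commonly described as a linear blend of the fully iterative reconstruction with FBP; the vendor's actual processing is proprietary, so the sketch below is only an illustrative model of that blending, with hypothetical inputs:

```python
import numpy as np

def blend_asir(fbp_img, ir_img, percent):
    """Linear blend of an iteratively reconstructed image with FBP.

    percent=0 returns the pure FBP image, percent=100 the fully iterative
    one; intermediate settings trade noise reduction against the loss of
    small-object contrast noted in the study. Illustrative model only.
    """
    w = percent / 100.0
    return w * np.asarray(ir_img, float) + (1.0 - w) * np.asarray(fbp_img, float)
```

Under this model, "50% ASIR" averages the two reconstructions voxel by voxel, which is consistent with its reported noise level matching FBP at double the exposure.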
Project description: Medical image biomarkers of cancer promise improvements in patient care through advances in precision medicine. Compared to genomic biomarkers, image biomarkers provide the advantages of being non-invasive and of characterizing a heterogeneous tumor in its entirety, as opposed to the limited tissue available via biopsy. We developed a unique radiogenomic dataset from a Non-Small Cell Lung Cancer (NSCLC) cohort of 211 subjects. The dataset comprises Computed Tomography (CT) and Positron Emission Tomography (PET)/CT images, semantic annotations of the tumors as observed on the medical images using a controlled vocabulary, and segmentation maps of tumors in the CT scans. Imaging data are also paired with results of gene mutation analyses, gene expression microarrays and RNA sequencing data from samples of surgically excised tumor tissue, and clinical data, including survival outcomes. This dataset was created to facilitate the discovery of the underlying relationship between tumor molecular and medical image features, as well as the development and evaluation of prognostic medical image biomarkers.
Project description: An important step in PET brain kinetic analysis is the registration of functional data to an anatomical MR image. Typically, PET-MR registrations in nonhuman primate neuroreceptor studies used PET images acquired early post-injection (e.g., 0-10 min) to closely resemble the subject's MR image. However, a substantial fraction of these registrations (~25%) fail due to the differences in kinetics and distribution for various radiotracer studies and conditions (e.g., blocking studies). The Multi-Transform Method (MTM) was developed to improve the success of registrations between PET and MR images. Two algorithms were evaluated, MTM-I and MTM-II. The approach involves creating multiple transformations by registering PET images of different time intervals, from a dynamic study, to a single reference (i.e., the MR image) (MTM-I) or to multiple reference images (i.e., the MR image and PET images pre-registered to the MR) (MTM-II). Normalized mutual information was used to compute the similarity between the transformed PET images and the reference image(s) to choose the optimal transformation. This final transformation is used to map the dynamic dataset into the animal's anatomical MR space, as required for kinetic analysis. The chosen transforms from MTM-I and MTM-II were evaluated using visual rating scores to assess the quality of spatial alignment between the resliced PET and reference images. One hundred twenty PET datasets involving eleven different tracers from 3 different scanners were used to evaluate the MTM algorithms. Studies were performed with baboons and rhesus monkeys on the HR+, HRRT, and Focus-220. Successful transformations increased from 77.5% to 85.8% and 96.7% using the 0-10 min method, MTM-I, and MTM-II, respectively, based on visual rating scores. The Multi-Transform Methods proved to be a robust technique for PET-MR registrations for a wide range of PET studies.
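The normalized mutual information used above to pick the optimal transformation can be computed from a joint intensity histogram of the two aligned images. A minimal sketch follows; the bin count and other implementation details are our assumptions, not the study's:

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=32):
    """Normalized mutual information, NMI = (H(A) + H(B)) / H(A, B).

    Computed from a joint intensity histogram of two spatially aligned
    images; a higher score indicates better correspondence. NMI ranges
    from 1 (independent) up to 2 (one image fully predicts the other).
    """
    joint, _, _ = np.histogram2d(np.ravel(img_a), np.ravel(img_b), bins=bins)
    p = joint / joint.sum()          # joint intensity distribution
    p_a, p_b = p.sum(axis=1), p.sum(axis=0)  # marginal distributions

    def entropy(q):
        q = q[q > 0]
        return -np.sum(q * np.log(q))

    return (entropy(p_a) + entropy(p_b)) / entropy(p)
```

In a registration loop, each candidate transformation's resliced PET image would be scored against the reference with this measure and the highest-scoring transform kept.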
Project description: Despite the ongoing interest in the fusion of multi-band images for surveillance applications and a steady stream of publications in this area, there is only a very small number of static registered multi-band test images (and a total lack of dynamic image sequences) publicly available for the development and evaluation of image fusion algorithms. To fill this gap, the TNO Multiband Image Collection provides intensified visual (390-700 nm), near-infrared (700-1000 nm), and longwave infrared (8-12 µm) nighttime imagery of different military and surveillance scenarios, showing different objects and targets (e.g., people, vehicles) in a range of different (e.g., rural, urban) backgrounds. The dataset will be useful for the development of static and dynamic image fusion algorithms, color fusion algorithms, multispectral target detection and recognition algorithms, and dim target detection algorithms.