Project description: This paper evaluates the degree of saliency of text in natural scenes using visual saliency models. A large-scale scene image database with pixel-level ground truth is created for this purpose. Using this database and five state-of-the-art models, visual saliency maps representing the degree of saliency of objects are calculated. The receiver operating characteristic (ROC) curve is employed to evaluate the saliency of scene text as computed by the visual saliency models. A visualization of the distribution of scene text and non-text in the space spanned by three kinds of saliency maps, calculated using Itti's visual saliency model with intensity, color and orientation features, is given. This visualization indicates that text characters are more salient than their non-text neighbors and can be separated from the background; scene text can therefore be extracted from scene images. With this in mind, a new visual saliency architecture, the hierarchical visual saliency model, is proposed. It is based on Itti's model and consists of two stages. In the first stage, Itti's model is used to calculate a saliency map, and Otsu's global thresholding algorithm is applied to extract the salient region of interest. In the second stage, Itti's model is applied to that salient region to calculate the final saliency map. An experimental evaluation demonstrates that the proposed model outperforms Itti's model in terms of captured scene text.
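The two-stage idea is easy to prototype. Below is a minimal sketch, assuming OpenCV with the contrib saliency module; the spectral-residual detector stands in for Itti's model (which the paper actually uses), and "scene.jpg" is a hypothetical input image.

```python
import cv2
import numpy as np

def stage_saliency(img):
    # Spectral-residual saliency as a stand-in for Itti's model.
    sal = cv2.saliency.StaticSaliencySpectralResidual_create()
    _, sal_map = sal.computeSaliency(img)  # float map in [0, 1]
    return (sal_map * 255).astype(np.uint8)

def hierarchical_saliency(img):
    # Stage 1: coarse saliency map over the whole image.
    s1 = stage_saliency(img)
    # Otsu's global threshold isolates the salient region of interest.
    _, mask = cv2.threshold(s1, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return s1
    x0, x1, y0, y1 = xs.min(), xs.max(), ys.min(), ys.max()
    # Stage 2: recompute saliency restricted to the salient region.
    s2 = np.zeros_like(s1)
    s2[y0:y1 + 1, x0:x1 + 1] = stage_saliency(img[y0:y1 + 1, x0:x1 + 1])
    return s2

final_map = hierarchical_saliency(cv2.imread("scene.jpg"))  # hypothetical image
```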
Project description: The automatic computerized detection of regions of interest (ROIs) is an important step in medical image processing and analysis. The reasons are many: the growing volume of available medical imaging data, inter-observer and inter-scanner variability, and the need for accurate automatic detection to help doctors reach faster, timelier diagnoses. A novel algorithm based on visual saliency is developed here for identifying tumor regions in MR images of the brain. The GBM saliency detection model is designed by taking a cue from the concept of visual saliency in natural scenes. A visually salient region is typically rare in an image and contains highly discriminating information, so attention is immediately focused upon it. Although color is typically considered the most important feature in a bottom-up saliency detection model, we circumvent this issue in the inherently grayscale MR framework by developing a novel pseudo-coloring scheme based on three MRI sequences, viz. FLAIR, T2 and T1C (contrast-enhanced with gadolinium). A bottom-up strategy, based on a new pseudo-color distance and the spatial distance between image patches, is defined for highlighting the salient regions in the image. This multi-channel representation of the image and the saliency detection model help to automatically and quickly isolate the tumor region for subsequent delineation, as is necessary in medical diagnosis. The effectiveness of the proposed model is evaluated on MRI of 80 subjects from the BRATS database in terms of saliency map values. Using ground truth of the tumor regions for both high- and low-grade gliomas, the results are compared with four widely cited saliency detection models from the literature. In all cases the AUC scores from the ROC analysis are found to be more than 0.999 ± 0.001 over different tumor grades, sizes and positions.
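The patch-based, bottom-up strategy can be illustrated in a few lines. The sketch below is a generic patch-contrast saliency computation on the FLAIR/T2/T1C pseudo-color stack; the exact pseudo-color and spatial distance definitions in the paper differ, and the patch size and distance weighting here are illustrative assumptions.

```python
import numpy as np

def patch_saliency(flair, t2, t1c, patch=8):
    # Stack the three sequences as a pseudo-colour image.
    img = np.stack([flair, t2, t1c], axis=-1).astype(float)
    H, W, _ = img.shape
    gh, gw = H // patch, W // patch
    feats, centers = [], []
    for i in range(gh):
        for j in range(gw):
            p = img[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            feats.append(p.mean(axis=(0, 1)))          # mean pseudo-colour of patch
            centers.append([(i + 0.5) * patch, (j + 0.5) * patch])
    feats, centers = np.array(feats), np.array(centers)
    col = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)    # colour distance
    pos = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    pos /= pos.max() + 1e-12
    sal = (col / (1.0 + 3.0 * pos)).mean(axis=1)       # distant patches count less
    sal = (sal - sal.min()) / (np.ptp(sal) + 1e-9)
    return sal.reshape(gh, gw)
```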
Project description: Background: Hip fracture occurs when an applied force exceeds the force that the proximal femur can support (the fracture load or "strength") and can have devastating consequences with poor functional outcomes. Proximal femoral strength for specific loading conditions can be computed by subject-specific finite element analysis (FEA) using quantitative computed tomography (QCT) images. However, the radiation dose and limited availability of QCT restrict its clinical usability. Alternative low-dose and widely available measurements, such as dual-energy X-ray absorptiometry (DXA) and genetic factors, would be preferable for bone strength assessment. The aim of this paper is to design a deep learning-based model to predict proximal femoral strength using multi-view information fusion. Results: We developed new models using a multi-view variational autoencoder (MVAE) for feature representation learning and a product of experts (PoE) model for multi-view information fusion. We applied the proposed models to the in-house Louisiana Osteoporosis Study (LOS) cohort of 931 male subjects, including 345 African Americans and 586 Caucasians. We performed genome-wide association studies (GWAS) to select the 256 genetic variants with the lowest p-values for each proximal femoral strength measure, and integrated whole genome sequence (WGS) features and DXA-derived imaging features to predict proximal femoral strength. The best prediction model for fall fracture load was obtained by integrating WGS features and DXA-derived imaging features. The designed models achieved mean absolute percentage errors of 18.04%, 6.84% and 7.95% for predicting proximal femoral fracture loads using linear models of fall loading, nonlinear models of fall loading, and nonlinear models of stance loading, respectively. Conclusion: The proposed models are capable of predicting proximal femoral strength using WGS features and DXA-derived imaging features. Though this tool is not a substitute for FEA using QCT images, it would make improved assessment of hip fracture risk more widely available while avoiding the increased radiation exposure from QCT.
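The product of experts (PoE) fusion step has a simple closed form for Gaussian experts: the joint posterior precision is the sum of the per-view precisions (plus a standard-normal prior expert), and the joint mean is the precision-weighted average of the per-view means. A minimal sketch, with the views standing in for the WGS and DXA branches of the MVAE:

```python
import numpy as np

def poe(mus, logvars):
    """Fuse per-view Gaussian posteriors q(z|x_v).

    mus, logvars: lists of (latent_dim,) arrays, one per view (e.g. WGS, DXA).
    """
    # One expert per view plus a standard-normal prior expert (mean 0, precision 1).
    prec = [np.exp(-lv) for lv in logvars]
    total_prec = np.sum(prec, axis=0) + 1.0
    joint_mu = np.sum([m * p for m, p in zip(mus, prec)], axis=0) / total_prec
    joint_logvar = -np.log(total_prec)
    return joint_mu, joint_logvar
```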
Project description: Biological tissues have complex 3D collagen fiber architectures that cannot be fully visualized by conventional second harmonic generation (SHG) microscopy due to electric dipole considerations. We have developed a multi-view SHG imaging platform that successfully visualizes all orientations of collagen fibers. This is achieved by rotating the tissue relative to the excitation laser's plane of incidence; the complete fibrillar structure is then visualized following registration and reconstruction. We evaluated high-frequency and Gaussian-weighted fusion reconstruction algorithms, and found that the former performs better in terms of the resulting resolution. The new approach is a first step toward SHG tomography.
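A high-frequency fusion of this kind can be sketched as a locally sharpness-weighted average of the registered views: each voxel is drawn preferentially from the view that is locally sharpest. The sigma and the weighting scheme below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hf_fusion(views, sigma=2.0):
    """views: list of co-registered 3D arrays, one per tissue rotation angle."""
    weights = []
    for v in views:
        hf = v - gaussian_filter(v, sigma)               # high-frequency residual
        weights.append(gaussian_filter(hf ** 2, sigma))  # local HF energy
    w = np.stack(weights)
    w /= w.sum(axis=0, keepdims=True) + 1e-12            # normalize per voxel
    return np.sum(w * np.stack(views), axis=0)
```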
Project description: Motivation: Cancer genetic heterogeneity analysis has critical implications for tumour classification, response to therapy and the choice of biomarkers to guide personalized cancer medicine. However, existing heterogeneity analyses based solely on molecular profiling data usually suffer from a lack of information and have limited effectiveness. Many biomedical and life-science databases have accumulated a substantial volume of meaningful biological information. They can provide additional information beyond molecular profiling data, yet pose challenges arising from potential noise and uncertainty. Results: In this study, we aim to develop a more effective heterogeneity analysis method with the help of prior information. A network-based penalization technique is proposed that innovatively incorporates a multi-view of prior information from multiple databases, accommodating heterogeneity attributed to both differential genes and differential gene relationships. To account for the fact that the prior information might not be fully credible, we propose a weighted strategy, where the weight is determined in a data-dependent manner and ensures that the model is not excessively disturbed by incorrect information. Simulation and analysis of The Cancer Genome Atlas glioblastoma multiforme data demonstrate the practical applicability of the proposed method. Availability and implementation: R code implementing the proposed method is available at https://github.com/mengyunwu2020/PECM. The data that support the findings in this paper are openly available from TCGA (The Cancer Genome Atlas) at https://portal.gdc.cancer.gov/. Supplementary information: Supplementary data are available at Bioinformatics online.
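As a rough illustration of a weighted network penalty of this kind, the sketch below penalizes differences between coefficients of genes linked in the prior network, with each edge scaled by a data-dependent credibility weight. The actual penalty in the paper (implemented in the linked PECM R code) differs in form; this only shows the general construction.

```python
import numpy as np

def network_penalty(beta, edges, weights, lam=1.0):
    """beta: (p,) regression coefficients; edges: list of (j, k) gene pairs from
    the prior network; weights: data-dependent credibility in [0, 1] per edge."""
    pen = 0.0
    for (j, k), w in zip(edges, weights):
        # Encourage smoothness only along edges the data deem credible.
        pen += w * (beta[j] - beta[k]) ** 2
    return lam * pen
```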
Project description: Meat analogue is a food product made mainly of plant proteins. It is considered a sustainable food and has attracted considerable interest in recent years. Hybrid meat is a next-generation meat analogue prepared by co-processing plant and animal protein ingredients at different ratios, and is considered nutritionally superior to the currently available plant-only meat analogues. Three-dimensional (3D) printing technology is becoming increasingly popular in food processing. Three-dimensional food printing involves the modification of food structures, which enables the creation of soft foods. Currently, there is no published research on 3D printing of meat analogues. This study was carried out to create plant- and animal-protein-based formulations for 3D printing of hybrid meat analogues with soft textures. Pea protein isolate (PPI) and chicken mince were selected as the main plant protein and meat sources, respectively, for the 3D printing tests. Rheology and forward extrusion tests were then carried out on the selected samples to obtain a basic understanding of their potential printability. Afterwards, extrusion-based 3D printing was conducted to print a 3D chicken-nugget shape. Adding 20% chicken mince paste to the PPI-based paste achieved better printability and fibre structure.
Project description: Predicting the equilibrium solubility of organic crystalline materials at all relevant temperatures is crucial to the digital design of manufacturing unit operations in the chemical industries. The work reported in our current publication builds upon the limited number of recently published quantitative structure-property relationship studies that modelled the temperature dependence of aqueous solubility. One set of models was built to directly predict temperature-dependent solubility, including for materials with no solubility data at any temperature; we propose that a modified cross-validation protocol is required to evaluate these models. Another set of models was built to predict the related enthalpy of solution, which can be used to estimate solubility at one temperature from solubility data for the same material at another temperature. We investigated whether various kinds of solid-state descriptors improved the models obtained with a variety of molecular descriptor combinations: lattice energies or 3D descriptors calculated from crystal structures, or melting point data. We found that none of these greatly improved the best direct predictions of temperature-dependent solubility or of the related enthalpy of solution endpoint. This finding is surprising because the importance of the solid-state contribution to both endpoints is clear. We suggest our findings may, in part, reflect limitations in the descriptors calculated from crystal structures and, more generally, the limited availability of polymorph-specific data. We present curated temperature-dependent solubility and enthalpy of solution datasets, integrated with molecular and crystal structures, for future investigations.
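The role of the enthalpy of solution endpoint can be made concrete with a van't Hoff-type extrapolation, which assumes ΔHsol is constant over the temperature range of interest; the numbers in the usage line are invented for illustration.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def solubility_at(T2, T1, S1, dH_sol):
    """Extrapolate solubility from T1 to T2 (both in K) via van't Hoff:
    ln S(T2) = ln S(T1) - (dH_sol / R) * (1/T2 - 1/T1).
    S1: solubility at T1; dH_sol: enthalpy of solution in J/mol."""
    lnS2 = math.log(S1) - (dH_sol / R) * (1.0 / T2 - 1.0 / T1)
    return math.exp(lnS2)

# e.g. solubility at 310 K given data at 298 K and a predicted dH_sol of 20 kJ/mol
S310 = solubility_at(310.0, 298.0, S1=1.2e-3, dH_sol=2.0e4)
```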
Project description: Recovering the 3D motion of the heart from cine cardiac magnetic resonance (CMR) imaging enables the assessment of regional myocardial function and is important for understanding and analyzing cardiovascular disease. However, 3D cardiac motion estimation is challenging because the acquired cine CMR images are usually 2D slices, which limits accurate estimation of through-plane motion. To address this problem, we propose a novel multi-view motion estimation network (MulViMotion) that integrates 2D cine CMR images acquired in short-axis and long-axis planes to learn a consistent 3D motion field of the heart. In the proposed method, a hybrid 2D/3D network is built to generate dense 3D motion fields by learning fused representations from multi-view images. To ensure that the motion estimation is consistent in 3D, a shape regularization module is introduced during training, in which shape information from multi-view images provides weak supervision for the 3D motion estimation. We extensively evaluate the proposed method on 2D cine CMR images from 580 subjects of the UK Biobank study for 3D motion tracking of the left ventricular myocardium. Experimental results show that the proposed method quantitatively and qualitatively outperforms competing methods.
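The hybrid 2D/3D idea can be sketched as 2D encoders on the short-axis and long-axis frame pairs whose fused features are lifted to 3D and decoded into a dense motion field. The PyTorch toy model below uses illustrative layer sizes and feature lifting; it is not MulViMotion's actual architecture.

```python
import torch
import torch.nn as nn

class HybridMotionNet(nn.Module):
    def __init__(self, ch=16, depth=16):
        super().__init__()
        self.depth = depth
        self.enc2d = nn.Conv2d(2, ch, 3, padding=1)  # frames t and t+1 stacked
        self.dec3d = nn.Sequential(
            nn.Conv3d(2 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 3, 3, padding=1),          # (dx, dy, dz) per voxel
        )

    def forward(self, sax, lax):
        # sax, lax: (B, 2, H, W) image pairs from the two view planes.
        f_sax = self.enc2d(sax).unsqueeze(2).expand(-1, -1, self.depth, -1, -1)
        f_lax = self.enc2d(lax).unsqueeze(2).expand(-1, -1, self.depth, -1, -1)
        # Fuse the lifted multi-view features and decode a dense 3D motion field.
        return self.dec3d(torch.cat([f_sax, f_lax], dim=1))  # (B, 3, D, H, W)
```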
Project description: Phenotypic analysis of cassava root crowns (CRCs) has so far been limited to visual inspection and a few manual measurements because of the laborious field workflow. Here, we developed a platform for acquiring 3D CRC models using close-range photogrammetry for phenotypic analysis. The acquisition setup is low-cost and easy to assemble, requiring only a background sheet, a reference object and a camera, and is therefore compatible with field experiments in remote areas. We tested different software packages with CRC samples; Agisoft and Blender proved the most suitable for generating high-quality 3D models and for data analysis, respectively. We optimized the workflow by testing different numbers of images for 3D reconstruction and found that a minimum of 25 images per CRC provides high-quality 3D models. Up to ten traits can be extracted, including 3D crown volume, 3D crown surface, root density, surface-to-volume ratio, root number, root angle, crown diameter, cylinder soil volume, CRC compactness and root length, providing novel parameters for studying cassava storage roots. We applied this platform to partially inbred cassava populations and demonstrated that it provides reliable 3D CRC modelling for phenotypic analysis, analysis of genetic variance and support of breeding selection.
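Several of the listed traits follow directly from the reconstructed mesh. A minimal sketch using the trimesh library, where compactness is taken as mesh volume over convex-hull volume (one common definition; the paper's exact trait definitions may differ) and "crc_model.ply" is a hypothetical exported photogrammetry model:

```python
import trimesh

mesh = trimesh.load("crc_model.ply")   # hypothetical exported mesh
# Volume is only meaningful for a watertight mesh; repair or check first.
volume = mesh.volume                   # 3D crown volume
surface = mesh.area                    # 3D crown surface
sv_ratio = surface / volume            # surface-to-volume ratio
compactness = volume / mesh.convex_hull.volume
print(volume, surface, sv_ratio, compactness)
```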
Project description: A feature-point-based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of each fish is obtained by constructing two feature point models based on its appearance characteristics. After the feature points are classified as occluded or non-occluded, matching and association are performed for each type. Finally, each fish's 3D motion trajectory is obtained by integrating the multi-view tracking results. Experimental results show that the proposed method can simultaneously track the 3D motion trajectories of up to 10 fish accurately and robustly.
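The final integration step amounts to triangulating matched feature points from calibrated views. A minimal sketch using OpenCV's DLT triangulation, where P_top and P_side are hypothetical 3x4 projection matrices obtained from camera calibration:

```python
import cv2
import numpy as np

def track_3d(points_top, points_side, P_top, P_side):
    """points_*: (N, 2) matched image coordinates of one fish over N frames."""
    pts4 = cv2.triangulatePoints(P_top, P_side,
                                 points_top.T.astype(float),
                                 points_side.T.astype(float))
    # Convert homogeneous 4xN output to an (N, 3) trajectory in 3D space.
    return (pts4[:3] / pts4[3]).T
```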