Project description: Unmanned aerial vehicles have an immense capacity for remote imaging of plants in agronomic field research trials. Traits extracted from the plots can describe plant development: coverage, growth, flowering status, and related phenomena. An important prerequisite for obtaining such information is to find the exact position of the plots so that they can be extracted from an orthomosaic image. Extraction of plots using tools that assume uniform spacing is often erroneous because the plots may be neither perfectly aligned nor equally distributed in a field. A novel approach is proposed that uses an image-based optimization algorithm to find the alignment of plots. The method begins with a uniformly spaced grid of plots, which is iteratively aligned with regions of high vegetation index, i.e., the underlying plots. The approach is validated and tested on two orthomosaic images of fields containing wheat plots with simulated and real alignment problems, respectively. The result of the alignment is compared to manually located ground-truth plot positions, and the errors are analyzed quantitatively. The effectiveness of the proposed method is confirmed by its accurate estimation of the phenotypic trait of canopy coverage compared to the common extraction methods based on uniform or trimmed grids. The software developed in this study is available on SourceForge at https://sourceforge.net/projects/phenalysis/.
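The iterative alignment step could be sketched as follows. This is a minimal illustration of the idea, not the published algorithm; the search-window half-width and iteration count are hypothetical parameters:

```python
import numpy as np

def align_plots(vi, centers, half=5, n_iter=10):
    """Shift each plot center toward the vegetation-index centroid
    inside a small search window (hypothetical parameters)."""
    h, w = vi.shape
    centers = np.asarray(centers, dtype=float).copy()
    for _ in range(n_iter):
        for k, (r, c) in enumerate(centers):
            # clip the search window to the image bounds
            r0, r1 = int(max(r - half, 0)), int(min(r + half + 1, h))
            c0, c1 = int(max(c - half, 0)), int(min(c + half + 1, w))
            win = vi[r0:r1, c0:c1]
            if win.sum() <= 0:
                continue  # no vegetation signal; leave the center as-is
            rr, cc = np.mgrid[r0:r1, c0:c1]
            # recenter the plot on the weighted centroid of the window
            centers[k, 0] = (rr * win).sum() / win.sum()
            centers[k, 1] = (cc * win).sum() / win.sum()
    return centers
```

Each plot is repeatedly recentered on the local vegetation-index centroid, so an initially uniform grid drifts toward the underlying plots.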
Project description: Breast cancer is the most diagnosed cancer worldwide and the fifth leading cause of cancer mortality globally. It is a highly heterogeneous disease that comprises various molecular subtypes, often diagnosed by immunohistochemistry. This technique is widely employed in basic, translational and pathological anatomy research, where it can support oncological diagnosis, therapeutic decisions and biomarker discovery. Nevertheless, its evaluation is often qualitative, raising the need for accurate quantitation methodologies. We present BreastAnalyser, a valuable and reliable software tool that automatically measures the area of 3,3'-diaminobenzidine tetrahydrochloride (DAB)-brown-stained proteins detected by immunohistochemistry. BreastAnalyser also automatically counts cell nuclei and classifies them according to their DAB-brown-staining level. This is performed using sophisticated segmentation algorithms that account for intrinsic image variability and save image normalization time. BreastAnalyser has a clean, friendly and intuitive interface that allows users to supervise the quantitations, annotate images and unify the experts' criteria. BreastAnalyser was validated on representative human breast cancer immunohistochemistry images detecting various antigens. In the automatic processing, the DAB-brown area was recognized almost perfectly, with the average difference between the true and computed DAB-brown percentages below 0.7 percentage points for all sets. Nucleus detection allowed the brown signal to be properly normalized by cell density for comparisons between patients. BreastAnalyser obtained a score of 85.5 on the System Usability Scale questionnaire, meaning that the tool is perceived as excellent by the experts.
In the biomedical context, the connexin43 (Cx43) protein was found to be significantly downregulated in human core needle invasive breast cancer samples compared to normal breast tissue, with a trend to decrease as subtype malignancy increased. Higher Cx43 protein levels were significantly associated with lower cancer recurrence risk in Oncotype DX-tested luminal B HER2- breast cancer tissues. BreastAnalyser and the annotated images are publicly available at https://citius.usc.es/transferencia/software/breastanalyser for research purposes.
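The core DAB-brown area measurement could be approximated with standard Ruifrok-Johnston color deconvolution, as follows. This is a simplified sketch rather than BreastAnalyser's actual algorithm; the optical-density threshold is a hypothetical cutoff:

```python
import numpy as np

# Standard Ruifrok-Johnston stain vectors for H-DAB deconvolution
# (the same values used by common image-analysis toolkits).
STAINS = np.array([[0.65, 0.70, 0.29],   # hematoxylin
                   [0.07, 0.99, 0.11],   # eosin (unused here)
                   [0.27, 0.57, 0.78]])  # DAB
UNMIX = np.linalg.inv(STAINS)

def dab_area_percent(rgb, thresh=0.15):
    """Percentage of pixels whose DAB optical density exceeds `thresh`
    (a hypothetical cutoff)."""
    # convert RGB intensities to optical density
    od = -np.log10((rgb.astype(float) + 1.0) / 256.0)
    # unmix the stains and keep the DAB channel
    dab = od.reshape(-1, 3) @ UNMIX[:, 2]
    return 100.0 * float((dab > thresh).mean())
```

Thresholding the unmixed DAB channel yields the stained-area fraction, which can then be normalized by the detected cell density as described above.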
Project description: Automatic target recognition that relies on rapid feature extraction of real-time targets from photo-realistic imagery will enable efficient identification of target patterns. To achieve this objective, Cross-plots of binary patterns are explored as potential signatures for the observed target, capturing the crucial spatial features at high speed with minimal computational resources. Target recognition was implemented based on the proposed pattern recognition concept and tested rigorously for its precision and recall performance. We conclude that Cross-plotting is able to produce a digital fingerprint of a target that correlates efficiently and effectively with the signatures of matching patterns in a target repository.
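The precision and recall evaluation reduces to comparing the set of retrieved matches against a ground-truth set of relevant targets; a minimal sketch (the identifiers are illustrative):

```python
def precision_recall(retrieved, relevant):
    """Precision and recall of retrieved target matches against
    the ground-truth set of relevant targets."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)  # true positives
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall
```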
Project description: Above-ground biomass (AGB) is a trait with much potential for exploitation within wheat breeding programs and is closely linked to canopy height (CH). However, collecting phenotypic data for AGB and CH within breeding programs is labor intensive and, in the case of AGB, destructive and prone to assessment error. As a result, measuring these traits is seldom a priority for breeders, especially at the early stages of a selection program. LiDAR has been demonstrated as a sensor capable of collecting three-dimensional data from wheat field trials, and is potentially suitable for providing objective, non-destructive, high-throughput estimates of AGB and CH for use by wheat breeders. The current study investigates the deployment of a LiDAR system on a ground-based high-throughput phenotyping platform in eight wheat field trials across southern Australia for the non-destructive estimation of AGB and CH. LiDAR-derived measurements were compared to manual measurements of AGB and CH collected at each site and assessed for their suitability for use within a breeding program. Correlations between AGB and LiDAR Projected Volume (LPV) were generally strong (up to r = 0.86), as were correlations between CH and LiDAR Canopy Height (LCH) (up to r = 0.94). The heritability (H2) of LPV (H2 = 0.32-0.90) was greater than, or similar to, that of AGB (H2 = 0.12-0.78) for the majority of measurements. A similar level of heritability was observed for LCH (H2 = 0.41-0.98) and CH (H2 = 0.49-0.98). Furthermore, measurements of LPV and LCH were shown to be highly repeatable when collected from either the same or the opposite direction of travel. LiDAR scans were collected at a rate of 2,400 plots per hour, with the potential to increase throughput to 7,400 plots per hour. 
This research demonstrates the capability of LiDAR sensors to collect high-quality, non-destructive, repeatable measurements of AGB and CH suitable for use within both breeding and research programs.
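As an illustration of the H2 values reported above, broad-sense heritability on an entry-mean basis can be computed from a one-way ANOVA of replicated plot measurements. This is a simplified sketch; the study itself likely used a mixed model accounting for the field design:

```python
import numpy as np

def broad_sense_h2(values):
    """Broad-sense heritability on an entry-mean basis from a
    genotype x replicate matrix (rows = genotypes, cols = replicates)."""
    values = np.asarray(values, dtype=float)
    g, r = values.shape
    grand = values.mean()
    # between-genotype and residual mean squares (one-way ANOVA)
    ms_g = r * ((values.mean(axis=1) - grand) ** 2).sum() / (g - 1)
    ms_e = ((values - values.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (r - 1))
    # genetic variance component; H2 = Vg / (Vg + Ve / r)
    var_g = max((ms_g - ms_e) / r, 0.0)
    return var_g / (var_g + ms_e / r)
```

When the genotype means differ and the replicate noise is small, H2 approaches 1; when all variation is residual, it approaches 0.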
Project description: Given the capacity of Optical Coherence Tomography (OCT) imaging to display structural changes in a wide variety of eye diseases and neurological disorders, the need for OCT image segmentation and the corresponding data interpretation is now felt more than ever before. In this paper, we address this need by designing a semi-automatic software program for reliable segmentation of 8 macular layers as well as for outlining retinal pathologies such as diabetic macular edema. The software implements a novel graph-based semi-automatic method, called "Livelayer", designed for straightforward segmentation of retinal layers and fluids. This method is chiefly based on Dijkstra's Shortest Path First (SPF) algorithm and the Live-wire function, together with preprocessing operations on the images to be segmented. The software is well suited for obtaining detailed segmentation of layers, exact localization of clear or unclear fluid objects and the ground truth, demanding far less effort than a common manual segmentation method. It is also valuable as a tool for calculating the irregularity index in deformed OCT images. The time (in seconds) that Livelayer required for segmentation of the Inner Limiting Membrane, Inner Plexiform Layer-Inner Nuclear Layer and Outer Plexiform Layer-Outer Nuclear Layer was much less than that for manual segmentation: 5 s for the ILM (minimum) and 15.57 s for the OPL-ONL (maximum). The unsigned errors (in pixels) between the semi-automatically labeled and gold standard data were on average 2.7, 1.9 and 2.1 for the ILM, IPL-INL and OPL-ONL, respectively. The Bland-Altman plots indicated perfect concordance between Livelayer and the manual algorithm, suggesting that they could be used interchangeably. The repeatability error was around one pixel for the OPL-ONL and below one pixel for the other two. 
The unsigned errors between Livelayer and the manual algorithm were 1.33 for the ILM and 1.53 for the Nerve Fiber Layer-Ganglion Cell Layer in peripapillary B-scans. The Dice scores for comparing the two algorithms, and for assessing repeatability on the segmentation of fluid objects, were at acceptable levels.
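The shortest-path idea behind Livelayer can be illustrated with a dynamic-programming special case, in which the minimum-cost left-to-right boundary is traced through a cost image (e.g., a gradient map). Livelayer itself uses Dijkstra's algorithm and Live-wire, so this is only a sketch of the principle:

```python
import numpy as np

def min_cost_boundary(cost):
    """Minimum-cost left-to-right path through a cost image, moving to
    one of the 3 neighboring rows in each next column."""
    h, w = cost.shape
    acc = cost.astype(float)        # accumulated cost
    back = np.zeros((h, w), dtype=int)  # backtracking pointers
    for c in range(1, w):
        for r in range(h):
            lo, hi = max(r - 1, 0), min(r + 2, h)
            k = int(np.argmin(acc[lo:hi, c - 1]))
            back[r, c] = lo + k
            acc[r, c] += acc[lo + k, c - 1]
    # backtrack from the cheapest endpoint in the last column
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(w - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]  # row index of the boundary in each column
```

A layer boundary with low cost (e.g., a strong intensity gradient) attracts the path across the whole B-scan.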
Project description: Background: In recent years, high-throughput microscopy has emerged as a powerful tool to analyze cellular dynamics at unprecedentedly high resolution. The amount of data generated, for example in long-term time-lapse microscopy experiments, requires automated methods for processing and analysis. Available software frameworks are well suited for high-throughput processing of fluorescence images, but they often do not perform well on bright field image data that varies considerably between laboratories, setups, and even single experiments. Results: In this contribution, we present a fully automated image processing pipeline that is able to robustly segment and analyze cells with ellipsoid morphology from bright field microscopy in a high-throughput, yet time-efficient manner. The pipeline comprises two steps: (i) image acquisition is adjusted to obtain optimal bright field image quality for automatic processing; (ii) a concatenation of fast image processing algorithms robustly identifies single cells in each image. We applied the method to a time-lapse movie consisting of ∼315,000 images of differentiating hematopoietic stem cells over 6 days. We evaluated the accuracy of our method by comparing the number of identified cells with manual counts. Our method is able to segment images with varying cell density and different cell types without parameter adjustment, and clearly outperforms a standard approach. By computing population doubling times, we were able to identify three growth phases in the stem cell population throughout the whole movie, and we validated our result with cell cycle times from single-cell tracking. Conclusions: Our method allows fully automated processing and analysis of high-throughput bright field microscopy data. 
The robustness of cell detection and the fast computation time will support the analysis of high-content screening experiments, online analysis of time-lapse experiments, and the development of methods to automatically track single-cell genealogies.
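The population doubling times mentioned above follow from the exponential-growth model; a minimal sketch assuming two cell-count observations:

```python
import math

def doubling_time(t1, n1, t2, n2):
    """Population doubling time under exponential growth, from cell
    counts n1 at time t1 and n2 at time t2 (requires n2 > n1)."""
    return (t2 - t1) * math.log(2) / math.log(n2 / n1)
```

Computing this over a sliding window of automatic cell counts is one simple way to reveal distinct growth phases in a long time-lapse movie.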
Project description: In an urban context, tree data are used in city planning, in locating hazardous trees and in environmental monitoring. This study focuses on developing an innovative methodology to automatically estimate the most relevant individual structural parameters of urban trees sampled by a Mobile LiDAR System at city level. These parameters include the Diameter at Breast Height (DBH), which was estimated by RANSAC circle fitting of the points belonging to different height bins. For non-circular trees, DBH is calculated as the maximum distance between extreme points. Tree sizes were extracted through a connectivity analysis. Crown Base Height, defined as the height from the ground to the bottom of the live crown, was calculated by voxelization techniques. Canopy Volume was estimated using mesh generation and α-shape methods. Tree location coordinates were obtained by means of Principal Component Analysis. The workflow was validated on 29 trees of different species along a 750 m stretch of road in Delft (The Netherlands) and tested on a larger dataset containing 58 individual trees. The validation was done against field measurements. The DBH estimates reached an R2 of 0.92 for the 20 cm height bin, which provided the best results. Moreover, the influence of the number of points used for DBH estimation, considering different height bins, was investigated. The assessment of the other inventory parameters yielded correlation coefficients higher than 0.91. The quality of the results confirms the feasibility of the proposed methodology and its scalability to a comprehensive analysis of urban trees.
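The RANSAC circle-fitting step for DBH could be sketched as follows, on a 2-D slice of trunk points from one height bin. The iteration count and inlier tolerance are hypothetical simplifications of the published workflow:

```python
import numpy as np

def circle_from_3(p):
    """Circle through 3 points via x^2 + y^2 + D x + E y + F = 0."""
    A = np.c_[p[:, 0], p[:, 1], np.ones(3)]
    b = -(p[:, 0] ** 2 + p[:, 1] ** 2)
    D, E, F = np.linalg.solve(A, b)
    cx, cy = -D / 2.0, -E / 2.0
    r = float(np.sqrt(cx * cx + cy * cy - F))
    return cx, cy, r

def ransac_dbh(points, n_iter=200, tol=0.01, seed=0):
    """Stem diameter from trunk points (N x 2, meters): RANSAC over
    3-point circle fits, keeping the circle with the most inliers."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        idx = rng.choice(len(points), 3, replace=False)
        try:
            cx, cy, r = circle_from_3(points[idx])
        except np.linalg.LinAlgError:
            continue  # collinear sample; skip
        d = np.abs(np.hypot(points[:, 0] - cx, points[:, 1] - cy) - r)
        inliers = int((d < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = (cx, cy, r), inliers
    return 2.0 * best[2]  # DBH = twice the fitted radius
```

The random 3-point sampling makes the fit robust to branches and noise points that would bias a direct least-squares circle.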
Project description: We report the results of a novel protocol for running online experiments using an online experimental platform in parallel with web-conferencing software in two formats, with and without subject webcams, to improve subjects' attention and engagement. We compare the results of our online sessions with the offline (lab) sessions of the same experiment. We find that both online formats yield subject characteristics and performance comparable to the offline (lab) experiment. However, the webcam-on protocol produces less noisy data, and hence better statistical power, than the protocol without a webcam. The webcam-on protocol can detect reasonable effect sizes with a sample size comparable to that of the offline (lab) protocol. Supplementary Information: The online version contains supplementary material available at 10.1007/s40881-021-00112-w.
Project description: A fast and robust interpolation filter based on a finite-difference thin plate spline (TPS) is proposed in this paper. The proposed method employs the discrete cosine transform to efficiently solve the linear system of TPS equations for gridded data, and a pre-defined weight function of the simulation residuals to reduce the effect of outliers and misclassified non-ground points on the accuracy of the reference ground surface. Fifteen groups of benchmark datasets, provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) commission, were employed to compare the performance of the proposed method with that of the multi-resolution hierarchical classification method (MHC). The results indicate that, with respect to the kappa coefficient and total error, the proposed method is on average more accurate than MHC: specifically, it is 1.03 and 1.32 times as accurate as MHC in terms of kappa coefficient and total error, respectively. More importantly, the proposed method is on average more than 8 times faster than MHC. In comparison with some recently developed methods, the proposed algorithm also achieves good performance.
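The two accuracy criteria used above, total error and the kappa coefficient, can be computed directly from ground/non-ground labels; a minimal sketch (1 denotes ground, 0 non-ground):

```python
def filter_errors(truth, pred):
    """Total error and Cohen's kappa for ground (1) / non-ground (0)
    classification results against reference labels."""
    n = len(truth)
    tp = sum(t == 1 and p == 1 for t, p in zip(truth, pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(truth, pred))
    po = (tp + tn) / n                 # observed agreement
    total_error = 1.0 - po             # misclassified fraction
    # chance agreement from the marginal label frequencies
    pe = (sum(truth) / n) * (sum(pred) / n) \
         + (1 - sum(truth) / n) * (1 - sum(pred) / n)
    kappa = (po - pe) / (1 - pe) if pe < 1 else 1.0
    return total_error, kappa
```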
Project description: Background: Precision agriculture is an emerging research field that relies on monitoring and managing field variability in phenotypic traits. An important phenotypic trait is biomass, a comprehensive indicator that can reflect crop yields. However, non-destructive biomass estimation at fine levels remains largely unexplored and challenging due to the lack of accurate, high-throughput phenotypic data and algorithms. Results: In this study, we evaluated the capability of terrestrial light detection and ranging (lidar) data to estimate field maize biomass at the plot, individual-plant, leaf-group, and individual-organ (i.e., individual leaf or stem) levels. Terrestrial lidar data of 59 maize plots with more than 1000 maize plants were collected and used to calculate phenotypes through a deep learning-based pipeline, which were then used to predict maize biomass through simple regression (SR), stepwise multiple regression (SMR), artificial neural network (ANN), and random forest (RF) models. The results showed that terrestrial lidar data were useful for estimating maize biomass at all levels (R2 greater than 0.80 at each level), and biomass estimation at the leaf-group level was the most precise (R2 = 0.97, RMSE = 2.22 g) among the four levels. All four regression techniques performed similarly at all levels. However, considering the transferability and interpretability of the model itself, SR is the suggested method for estimating maize biomass from terrestrial lidar-derived phenotypes. 
Moreover, height-related variables proved to be the most important and robust predictors of maize biomass from terrestrial lidar at all levels, and some two-dimensional variables (e.g., leaf area) and three-dimensional variables (e.g., volume) showed great potential as well. Conclusion: We believe that this study is a unique effort in evaluating the capability of terrestrial lidar for estimating maize biomass at different levels, and it can provide a useful resource for selecting the phenotypes and models required to estimate maize biomass in precision agriculture practices.
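The suggested SR approach amounts to fitting a single lidar-derived predictor, such as a height variable, against measured biomass; a minimal sketch with R2 and RMSE (the variable names are illustrative):

```python
import numpy as np

def fit_biomass_sr(height, biomass):
    """Simple (one-predictor) linear regression of biomass on a
    lidar-derived height variable; returns slope, intercept, R2, RMSE."""
    height = np.asarray(height, dtype=float)
    biomass = np.asarray(biomass, dtype=float)
    slope, intercept = np.polyfit(height, biomass, 1)
    pred = slope * height + intercept
    resid = biomass - pred
    rmse = float(np.sqrt((resid ** 2).mean()))
    ss_res = float((resid ** 2).sum())
    ss_tot = float(((biomass - biomass.mean()) ** 2).sum())
    return slope, intercept, 1.0 - ss_res / ss_tot, rmse
```

A model this simple transfers between sites by refitting just two coefficients, which is the transferability argument made above for preferring SR.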