Project description: Metabolomics commonly relies on one-dimensional (1D) 1H NMR spectroscopy or liquid chromatography-mass spectrometry (LC-MS) to derive scientific insights from large collections of biological samples. Both NMR and MS approaches to metabolomics require, among other components, a data processing pipeline. Quantitative assessment of the performance of these software platforms is challenged by a lack of standardized data sets with "known" outcomes. To resolve this issue, we created a novel simulated LC-MS data set with known peak locations and intensities, defined metabolite differences between groups (i.e., fold change > 2, coefficient of variation ≤ 25%), and different amounts of added Gaussian noise (0, 5, or 10%) and missing features (0, 10, or 20%). This data set was developed to improve benchmarking of existing LC-MS metabolomics software and to validate the updated version of our MVAPACK software, which added gas chromatography-MS and LC-MS functionality to its existing 1D and two-dimensional NMR data processing capabilities. We also included two experimental LC-MS data sets acquired from a standard mixture and Mycobacterium smegmatis cell lysates, since a simulated data set alone may not capture all the unique characteristics and variability of real spectra needed to properly assess software performance. Our simulated and experimental LC-MS data sets were processed with the MS-DIAL and XCMS Online software packages and our MVAPACK toolkit to showcase the utility of our data sets for benchmarking MVAPACK against community standards. Our results demonstrate the enhanced objectivity and clarity of software assessment achievable when both simulated and experimental data are employed, since distinctly different software performances were observed with the simulated and experimental LC-MS data sets. We also demonstrate that the performance of MVAPACK is equivalent to or exceeds that of existing LC-MS software programs while providing a single platform for processing and analyzing both NMR and MS data sets.
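To make the simulation design concrete, the sketch below generates a toy feature-intensity table in the same spirit: a known subset of features carries a fold change greater than 2 between two groups, proportional Gaussian noise is added, and a fraction of values is dropped at random. The dimensions, distributions, and parameter values here are illustrative assumptions, not the authors' actual generator.

    import numpy as np

    rng = np.random.default_rng(42)
    n_features, n_samples = 500, 20   # hypothetical dataset dimensions
    n_changed = 50                    # features with a known group difference

    # Baseline "true" intensities shared by both groups
    base = rng.lognormal(mean=10, sigma=1, size=n_features)

    # Apply a fold change > 2 to a known subset of features in group B
    fold = np.ones(n_features)
    fold[:n_changed] = rng.uniform(2.0, 4.0, size=n_changed)

    group_a = np.tile(base, (n_samples, 1))
    group_b = np.tile(base * fold, (n_samples, 1))
    data = np.vstack([group_a, group_b])

    # Add proportional Gaussian noise (here 5%), keeping the CV well below 25%
    noise_level = 0.05
    data *= rng.normal(1.0, noise_level, size=data.shape)

    # Randomly drop a fraction of values (here 10%) to mimic missing features
    missing_rate = 0.10
    data[rng.random(data.shape) < missing_rate] = np.nan

Because the true peak intensities, fold changes, and missingness pattern are all recorded before corruption, any processing pipeline run on such a table can be scored against known outcomes.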
Project description: Finding the 3D structure of proteins and their complexes has several applications, such as developing vaccines that effectively target viral proteins. Methods such as cryogenic electron microscopy (cryo-EM) have improved in their ability to capture high-resolution images; applied to a purified sample containing copies of a macromolecule, they can produce a high-quality snapshot of different 2D orientations of the macromolecule, which can be combined to reconstruct its 3D structure. Instead of purifying a sample so that it contains only one macromolecule, a process that can be difficult, time-consuming, and expensive, a cell sample containing multiple particles can be imaged directly and separated into its constituent particles using computational methods. Previous work, SLICEM, separated 2D projection images of different particles into their respective groups using two methods for clustering a graph with edges weighted by pairwise similarities of the common lines of the 2D projections. In this work, we develop DeepSLICEM, a pipeline that clusters rich representations of 2D projections, obtained by combining graphical features from a similarity graph based on common lines with additional image features extracted from a convolutional neural network. DeepSLICEM explores 6 pretrained convolutional neural networks and one supervised Siamese CNN for image representation, 10 pretrained deep graph neural networks for similarity-graph node representations, and 4 methods for clustering, along with 8 methods for directly clustering the similarity graph. On 6 synthetic and experimental datasets, the DeepSLICEM pipeline finds 92 method combinations that achieve better clustering accuracy than the previous methods in SLICEM. Thus, in this paper, we demonstrate that deep neural networks have great potential for accurately separating mixtures of 2D projections of different macromolecules computationally.
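As a rough illustration of the representation-combination step, the following sketch concatenates standardized per-projection image embeddings with node embeddings from a common-line similarity graph and clusters the result with k-means. The random stand-in features, embedding dimensions, and cluster count are hypothetical placeholders, not DeepSLICEM's actual extractors or settings.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_projections = 300

    # Stand-ins for real embeddings: in practice these would come from a
    # pretrained CNN (images) and a GNN run on the common-line similarity graph.
    img_feats = rng.normal(size=(n_projections, 128))
    graph_feats = rng.normal(size=(n_projections, 32))

    # Standardize each modality, then concatenate into one rich representation
    combined = np.hstack([
        StandardScaler().fit_transform(img_feats),
        StandardScaler().fit_transform(graph_feats),
    ])

    # Cluster the combined representations; k = assumed number of particle species
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(combined)
    print(np.bincount(labels))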
Project description: Cryo-electron microscopy reconstruction methods are uniquely able to reveal structures of many important macromolecules and macromolecular complexes. EMDataBank.org, a joint effort of the Protein Data Bank in Europe (PDBe), the Research Collaboratory for Structural Bioinformatics (RCSB) and the National Center for Macromolecular Imaging (NCMI), is a global 'one-stop shop' resource for deposition and retrieval of cryoEM maps, models and associated metadata. The resource unifies public access to the two major archives containing EM-based structural data: EM Data Bank (EMDB) and Protein Data Bank (PDB), and facilitates use of EM structural data of macromolecules and macromolecular complexes by the wider scientific community.
Project description: The growing body of work in the epidemiology literature focused on G-computation includes theoretical explanations of the method but very few simulations or examples of application. The small number of G-computation analyses in the epidemiology literature relative to other causal inference approaches may be partially due to a lack of didactic explanations of the method targeted toward an epidemiology audience. The authors provide a step-by-step demonstration of G-computation that is intended to familiarize the reader with this procedure. The authors simulate a data set and then demonstrate both G-computation and traditional regression to draw connections and illustrate contrasts between their implementation and interpretation relative to the truth of the simulation protocol. A marginal structural model is used for effect estimation in the G-computation example. The authors conclude by answering a series of questions to emphasize the key characteristics of causal inference techniques and the G-computation procedure in particular.
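For readers who want the procedure in executable form, here is a minimal G-computation sketch in Python with hypothetical coefficients, not the authors' own protocol: simulate a confounded data set whose true marginal risk difference is 0.2 by construction, fit an outcome model, predict every individual's outcome under both exposure regimes, and contrast the standardized means. For brevity it standardizes predictions from an outcome regression rather than fitting the marginal structural model used in the article.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 100_000

    # Confounder L, exposure A (more likely when L = 1), outcome Y (depends on A and L)
    L = rng.binomial(1, 0.5, n)
    A = rng.binomial(1, 0.3 + 0.4 * L, n)
    Y = rng.binomial(1, 0.1 + 0.2 * A + 0.3 * L, n)
    df = pd.DataFrame({"L": L, "A": A, "Y": Y})

    # Step 1: fit an outcome model (linear, matching the additive data-generating process)
    fit = sm.OLS.from_formula("Y ~ A + L", data=df).fit()

    # Step 2: predict each individual's outcome under both exposure regimes
    y1 = fit.predict(df.assign(A=1))  # everyone exposed
    y0 = fit.predict(df.assign(A=0))  # everyone unexposed

    # Step 3: contrast the standardized means -> marginal risk difference (~0.2 by design)
    print("G-computation risk difference:", round(y1.mean() - y0.mean(), 3))

Unlike a conditional regression coefficient, the contrast in step 3 is a marginal effect standardized over the observed confounder distribution, which is exactly the quantity G-computation targets.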
Project description: Importance: Surgical data scientists lack video data sets that depict adverse events, which may affect model generalizability and introduce bias. Hemorrhage may be particularly challenging for computer vision-based models because blood obscures the scene. Objective: To assess the utility of the Simulated Outcomes Following Carotid Artery Laceration (SOCAL) data set, a publicly available surgical video data set of hemorrhage complication management with instrument annotations and task outcomes, to provide benchmarks for surgical data science techniques, including computer vision instrument detection, instrument use metrics and outcome associations, and validation of a SOCAL-trained neural network using real operative video. Design, setting, and participants: For this quality improvement study, a total of 75 surgeons with 1 to 30 years' experience (mean, 7 years) were filmed from January 1, 2017, to December 31, 2020, managing catastrophic surgical hemorrhage in a high-fidelity cadaveric training exercise at nationwide training courses. Videos were annotated from January 1 to June 30, 2021. Interventions: Surgeons received expert coaching between 2 trials. Main outcomes and measures: Hemostasis within 5 minutes (task success, dichotomous), time to hemostasis (in seconds), and blood loss (in milliliters) were recorded. Deep neural networks (DNNs) were trained to detect surgical instruments in view. Model performance was measured using mean average precision (mAP), sensitivity, and positive predictive value. Results: SOCAL contains 31 443 frames with 65 071 surgical instrument annotations from 147 trials with associated surgeon demographic characteristics, time to hemostasis, and recorded blood loss for each trial. Computer vision-based instrument detection methods using DNNs trained on SOCAL achieved an mAP of 0.67 overall and 0.91 for the most common surgical instrument (suction). Hemorrhage control challenges standard object detectors: detection of some surgical instruments remained poor (mAP, 0.25). On real intraoperative video, the model achieved a sensitivity of 0.77 and a positive predictive value of 0.96. Instrument use metrics derived from the SOCAL video were significantly associated with performance (blood loss). Conclusions and relevance: Hemorrhage control is a high-stakes adverse event that poses unique challenges for video analysis, but no data sets of hemorrhage control exist. SOCAL, the first data set to depict hemorrhage control, allows benchmarking of data science applications, including object detection, performance metric development, and identification of metrics associated with outcomes. In the future, SOCAL may be used to build and validate surgical data science models.
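As an illustration of the reported detector metrics, the snippet below computes sensitivity and positive predictive value from ground-truth and predicted bounding boxes matched at an IoU threshold. The boxes, the 0.5 threshold, and the greedy one-to-one matching rule are illustrative assumptions, not the SOCAL evaluation code.

    import numpy as np

    def iou(box_a, box_b):
        """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
        x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
        x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter)

    def detection_counts(preds, truths, iou_thresh=0.5):
        """Greedy matching: each ground-truth box matches at most one prediction."""
        matched, tp = set(), 0
        for p in preds:
            for i, t in enumerate(truths):
                if i not in matched and iou(p, t) >= iou_thresh:
                    matched.add(i)
                    tp += 1
                    break
        return tp, len(preds) - tp, len(truths) - tp  # tp, fp, fn

    tp, fp, fn = detection_counts(
        preds=[[10, 10, 50, 50], [60, 60, 90, 90]],  # toy detections
        truths=[[12, 11, 48, 52]],                   # toy annotation
    )
    sensitivity = tp / (tp + fn)  # recall over annotated instruments
    ppv = tp / (tp + fp)          # precision of the detector
    print(f"sensitivity={sensitivity:.2f}, PPV={ppv:.2f}")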
Project description: In January 2020, a workshop was held at EMBL-EBI (Hinxton, UK) to discuss data requirements for deposition and validation of cryoEM structures, with a focus on single-particle analysis. The meeting was attended by 47 experts in data processing, model building and refinement, validation, and archiving of such structures. This report describes the workshop's motivation and history, the topics discussed, and consensus recommendations resulting from the workshop. Some challenges for future methods-development efforts in this area are also highlighted, as is the implementation to date of some of the recommendations.
Project description: With larger, higher-speed detectors and improved automation, individual CryoEM instruments are capable of producing a prodigious amount of data each day, which must then be stored, processed, and archived. While it has become routine to use lossless compression on raw counting-mode movies, the averages that result after correcting these movies no longer compress well. These averages could be considered sufficient for long-term archival, yet they are conventionally stored with 32 bits of precision despite high noise levels. Derived images are similarly stored with excess precision, providing an opportunity to decrease project sizes and improve processing speed. We present a simple argument based on propagation of uncertainty for safe bit truncation of flat-fielded images combined with lossless compression. The same method can be used for most derived images throughout the processing pipeline. We test the proposed strategy on two standard, data-limited CryoEM data sets, demonstrating that these limits are safe for real-world use. We find that 5 bits of precision is sufficient for virtually any raw CryoEM data and that 8-12 bits is sufficient for intermediate averages or final 3-D structures. Additionally, we detail and recommend specific rules for discretization of data as well as a practical compressed data representation that is tuned to the specific needs of CryoEM.
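A minimal sketch of the truncation-plus-compression idea follows; it is illustrative only, not the authors' implementation. The premise: when per-pixel noise dominates, mantissa bits below the noise floor carry no information, so one can quantize with a step size small relative to the noise and hand the resulting integers to a generic lossless compressor. The noise scale, bit budget, and use of zlib here are assumptions for demonstration.

    import numpy as np
    import zlib

    def truncate_bits(img, keep_bits):
        """Quantize a float image to `keep_bits` of precision relative to its
        noise level (estimated here as the global std), then compress losslessly."""
        sigma = img.std()
        step = sigma / (2 ** keep_bits)  # quantization error stays far below noise
        quantized = np.round(img / step).astype(np.int32)
        return quantized, step, zlib.compress(quantized.tobytes(), level=9)

    rng = np.random.default_rng(1)
    img = rng.normal(100.0, 4.0, size=(512, 512)).astype(np.float32)

    quantized, step, blob = truncate_bits(img, keep_bits=5)
    restored = quantized.astype(np.float32) * step

    print("compressed size (bytes):", len(blob))
    print("max quantization error / sigma:", np.abs(restored - img).max() / img.std())

With noisy data like this, the integer representation compresses far better than the original 32-bit floats, while the quantization error remains a small fraction of the noise standard deviation.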
Project description: Background: Low-density lipoprotein (LDL) particles, the major carriers of cholesterol in the human circulation, have a key role in cholesterol physiology and in the development of atherosclerosis. The most prominent structural components in LDL are the core-forming cholesteryl esters (CE) and the particle-encircling single copy of a large, non-exchangeable protein, apolipoprotein B-100 (apoB-100). The shape of native LDL particles and the conformation of native apoB-100 on the particles remain incompletely characterized at the physiological human body temperature (37 °C). Methodology/principal findings: To study native LDL particles, we applied cryo-electron microscopy to calculate 3D reconstructions of LDL particles in their hydrated state. Images of the particles vitrified at 6 °C and 37 °C resulted in reconstructions at ~16 Å resolution at both temperatures. 3D variance map analysis revealed rigid and flexible domains of the lipids and apoB-100 at both temperatures. The reconstructions showed less variability at 6 °C than at 37 °C, which reflected increased order of the core CE molecules rather than decreased mobility of apoB-100. Compact molecular packing of the core and order in a lipid-binding domain of apoB-100 were observed at 6 °C, but not at 37 °C. At 37 °C we were able to highlight features in the LDL particles that are not clearly separable in the 3D maps at 6 °C. Segmentation of the apoB-100 density, fitting of the lipovitellin X-ray structure, and antibody mapping jointly revealed the approximate locations of the individual domains of apoB-100 on the surface of native LDL particles. Conclusions/significance: Our study provides the molecular background for further understanding of the link between the structure and function of native LDL particles at physiological body temperature.
Project description: Sweet potato feathery mottle virus (SPFMV) and Sweet potato mild mottle virus (SPMMV) are members of the genera Potyvirus and Ipomovirus, respectively, in the family Potyviridae; they share Ipomoea batatas as a common host but are transmitted by aphids and whiteflies, respectively. Virions of family members consist of flexuous rods with multiple copies of a single coat protein (CP) surrounding the RNA genome. Here we report the generation of virus-like particles (VLPs) by transient expression of the CPs of SPFMV and SPMMV in the presence of a replicating RNA in Nicotiana benthamiana. Analysis of the purified VLPs by cryo-electron microscopy gave structures at resolutions of 2.6 and 3.0 Å, respectively, showing a similar left-handed helical arrangement of 8.8 CP subunits per turn, with the C-terminus at the inner surface and a binding pocket for the encapsidated ssRNA. Despite their similar architecture, thermal stability studies reveal that SPMMV VLPs are more stable than those of SPFMV.