Project description: The Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2), is NASA's latest reanalysis for the satellite era (1980 onward) using the Goddard Earth Observing System, version 5 (GEOS-5), Earth system model. MERRA-2 provides several improvements over its predecessor (MERRA-1), including aerosol assimilation for the entire period. MERRA-2 assimilates bias-corrected aerosol optical depth (AOD) from the Moderate Resolution Imaging Spectroradiometer and the Advanced Very High Resolution Radiometer instruments. Additionally, MERRA-2 assimilates (non-bias-corrected) AOD from the Multiangle Imaging SpectroRadiometer over bright surfaces and AOD from Aerosol Robotic Network sunphotometer stations. This paper, the second of a pair, summarizes the efforts to assess the quality of the MERRA-2 aerosol products. First, MERRA-2 aerosols are evaluated using independent observations. It is shown that the MERRA-2 absorption aerosol optical depth (AAOD) and ultraviolet aerosol index (AI) compare well with Ozone Monitoring Instrument observations. Next, aerosol vertical structure and surface fine particulate matter (PM2.5) are evaluated using available satellite, aircraft, and ground-based observations. While MERRA-2 generally compares well to these observations, the assimilation cannot correct for all deficiencies in the model (e.g., missing emissions). Such deficiencies explain many of the biases relative to observations. Finally, a focus is placed on several major aerosol events to illustrate successes and weaknesses of the AOD assimilation: the Mount Pinatubo eruption, a Saharan dust transport episode, the California Rim Fire, and an extreme pollution event over China. The article concludes with a summary that points to best practices for using the MERRA-2 aerosol reanalysis in future studies.
Project description: With increasing reports of resistance to artemisinins and artemisinin-combination therapies, targeting the Plasmodium proteasome is a promising strategy for antimalarial development. We recently reported a highly selective Plasmodium falciparum proteasome inhibitor with antimalarial activity in the humanized mouse model. To balance the permeability of the series of macrocycles with other drug-like properties, we conducted further structure-activity relationship (SAR) studies on a biphenyl ether-tethered macrocyclic scaffold. Extensive SAR studies around the P1, P3, and P5 groups and the peptide backbone identified compound TDI-8414. TDI-8414 showed nanomolar antiparasitic activity, no toxicity to HepG2 cells, high selectivity for the Plasmodium proteasome over the human constitutive proteasome and immunoproteasome, improved solubility and PAMPA permeability, and enhanced metabolic stability in microsomes and plasma of both humans and mice.
Project description: Studies aimed at elucidating the reaction mechanism of farnesyltransferase (FTase), which catalyzes the prenylation of many cellular signaling proteins including Ras, have been an active area of research. Much is known regarding substrate binding and the impact of various catalytic-site residues on catalysis. However, the molecular-level details of the conformational rearrangement of farnesyl diphosphate (FPP), which structural analysis and mutagenesis studies suggest occurs prior to the chemical step, are still poorly understood. Following on our previous computational characterization of the resting state of the FTase ternary complex, the thermodynamics of the conformational rearrangement step in the absence of magnesium was investigated for the wild-type FTase and the Y300Fbeta mutant complexed with the peptide CVIM. In addition, we explored the target dependence of the conformational activation step by mutating the isoleucine to a leucine (CVLM). The calculated free energy profiles of the proposed conformational transition confirm the presence of a stable intermediate state, which was identified only when the diphosphate is monoprotonated (FPP2-). The farnesyl group in the computed intermediate state assumes a conformation similar to that of the product complex, particularly for the first two isoprene units. We found that Y300beta can readily form hydrogen bonds with either of the phosphates of FPP. Removing the hydroxyl group on Y300beta does not significantly alter the thermodynamics of the conformational transition but shifts the location of the intermediate 0.5 Å farther away from the nucleophile, which suggests that Y300beta facilitates the reaction by stabilizing the chemical step. Our results also showed an increased transition barrier height for CVLM (1.5 kcal/mol higher than that of CVIM).
Although this increase is qualitatively consistent with the findings from the recent kinetic isotope experiments by Fierke and co-workers, its magnitude is not large enough to affect the rate-limiting step.
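To put the 1.5 kcal/mol barrier difference in perspective, a back-of-the-envelope transition-state-theory conversion gives the corresponding fold change in rate. This sketch is illustrative only: the temperature and the assumption that the barrier difference maps directly onto a rate ratio via exp(-ΔΔG‡/RT) are ours, not taken from the study.

```python
import math

# Illustrative only (not from the paper's own analysis): estimate how a
# 1.5 kcal/mol increase in barrier height slows the conformational step.
R = 1.987e-3        # gas constant, kcal/(mol*K)
T = 298.15          # assumed temperature, K

delta_delta_G = 1.5  # kcal/mol, CVLM barrier minus CVIM barrier (from the text)

# Ratio of rate constants, k(CVLM)/k(CVIM) = exp(-ΔΔG‡ / RT)
rate_ratio = math.exp(-delta_delta_G / (R * T))
print(f"CVLM proceeds at ~{rate_ratio:.2f}x the CVIM rate "
      f"(~{1 / rate_ratio:.0f}-fold slower)")
```

A roughly tenfold slowdown of a step that is not rate-limiting would indeed leave the overall kinetics largely unchanged, consistent with the statement above.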
Project description: To better understand proteostasis in health and disease, determination of protein half-lives is essential. We improved the precision and accuracy of peptide-ion intensity-based quantification in order to enable accurate determination of protein turnover in non-dividing cells using dynamic SILAC. This enabled precise and accurate determination of protein half-lives ranging from 10 to more than 1000 hours. We achieved good proteomic coverage, ranging from four to six thousand proteins in several types of non-dividing cells and corresponding to a total of 9699 unique proteins over the entire dataset. Good agreement in half-lives was observed between B cells, natural killer cells, and monocytes, while hepatocytes and mouse embryonic neurons showed substantial differences. Our comprehensive dataset enabled extension and statistical validation of the previous observation that subunits of protein complexes tend to have coherent turnover. Furthermore, we observed architecture-dependent turnover within the proteasome and the nuclear pore complex. Our method is broadly applicable and might be used to investigate protein turnover in various cell types.
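The half-lives in a dynamic-SILAC experiment come from the decay of pre-existing ("light") protein after the switch to heavy medium. A minimal sketch of the underlying fit, using hypothetical intensities and a simple log-linear least-squares estimate rather than the authors' actual pipeline:

```python
import math

# Minimal sketch (hypothetical data): in dynamic SILAC the pre-existing
# light-labeled protein decays as L(t) = L0 * exp(-k*t), so a log-linear
# fit of the light fraction versus chase time yields the degradation
# rate k and the half-life t1/2 = ln(2) / k.

times = [0.0, 24.0, 48.0, 96.0]            # hours after medium switch
light_fraction = [1.00, 0.71, 0.50, 0.25]  # light / (light + heavy), illustrative

# Ordinary least squares on ln(fraction) vs. time: slope = -k
ys = [math.log(f) for f in light_fraction]
n = len(times)
mean_x = sum(times) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(times, ys)) \
        / sum((x - mean_x) ** 2 for x in times)

k = -slope
half_life = math.log(2) / k
print(f"degradation rate k = {k:.4f} /h, half-life = {half_life:.1f} h")
```

With these made-up intensities the fit recovers a half-life of roughly 48 hours, comfortably inside the 10-1000 hour range reported above.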
Project description: The mouse spot test, an in vivo mutation assay, has been used to assess a number of chemicals. It is at present the only in vivo mammalian test system capable of detecting somatic gene mutations according to OECD guidelines (OECD guideline 484). It is, however, a rather insensitive, animal-consuming, and expensive test. More recently, several assays using transgenic animals have been developed. From data in the literature, the present study compares the results of in vivo testing of over twenty chemicals in the mouse spot test with results from the two transgenic mouse models with the best database available: the lacI model (commercially available as the Big Blue® mouse) and the lacZ model (commercially available as the Muta™ Mouse). The results agreed for the majority of substances. No differences were found between the transgenic animal assays and the mouse spot test in their predictivity for carcinogenicity. However, from the limited data available, it seems that the transgenic mouse assays have several advantages over the mouse spot test and may be a suitable replacement for detecting gene, but not chromosome, mutations in vivo.
Project description: The combinatorial complexity of histone samples is manifold higher than what is usually encountered in proteomics. Consequently, a considerably larger fraction of the acquired MS/MS spectra remains unannotated to date. Adapted search parameters can dig deeper into the dark histone ion space, but the lack of false discovery rate (FDR) control and the high level of ambiguity when searching combinatorial PTMs make it very hard to assess whether the newly assigned ions are informative. Therefore, we use an easily adoptable time-lapse enzymatic deacetylation (HDAC1) of a commercial histone extract as a quantify-first strategy that allows isolating ion populations of interest, e.g., histone acetylation, that currently remain in the dark. By adapting search parameters to address potential issues in sample preparation, data acquisition, and data analysis, we stepwise managed to double the portion of annotated precursors of interest from 10.5% to 21.6%. This strategy is intended to make up for the lack of validated FDR control and has led to several adaptations of our current workflow that will reduce the dark histone ion space in the future.
Project description: Background: Proton pencil beam (PB) dose calculation algorithms have limited accuracy within heterogeneous tissues of lung cancer patients, which may be addressed by modern commercial Monte Carlo (MC) algorithms. We investigated clinical pencil beam scanning (PBS) dose differences between PB- and MC-based treatment planning for lung cancer patients. Methods: With IRB approval, a comparative dosimetric analysis between the RayStation MC and PB dose engines was performed on ten patient plans. PBS gantry plans were generated using a single-field optimization technique to maintain target coverage under range and setup uncertainties. Dose differences between PB-optimized (PBopt), MC-recalculated (MCrecalc), and MC-optimized (MCopt) plans were recorded for the following region-of-interest metrics: clinical target volume (CTV) V95, CTV homogeneity index (HI), total lung V20, total lung VRX (relative lung volume receiving the prescribed dose or higher), and global maximum dose. The impact of PB-based and MC-based planning on robustness to systematic perturbation of range (±3% density) and setup (±3 mm isotropic) was assessed. Pairwise differences in dose parameters were evaluated through non-parametric Friedman and Wilcoxon signed-rank testing. Results: In this ten-patient sample, CTV V95 decreased significantly from 99-100% for PBopt to 77-94% for MCrecalc and recovered to 99-100% for MCopt (P<10⁻⁵). The median CTV HI (D95/D5) decreased from 0.98 for PBopt to 0.91 for MCrecalc and increased to 0.95 for MCopt (P<10⁻³). CTV D95 robustness to range and setup errors improved under MCopt (ΔD95 = −1%) compared to MCrecalc (ΔD95 = −6%, P=0.006). No changes in lung dosimetry were observed for large volumes receiving low to intermediate doses (e.g., V20), while differences between PB-based and MC-based planning were noted for small volumes receiving high doses (e.g., VRX).
Global maximum patient dose increased from 106% for PBopt to 109% for MCrecalc and 112% for MCopt (P<10⁻³). Conclusions: MC dosimetry revealed a reduction in target dose coverage under PB-based planning that was regained under MC-based planning, along with improved plan robustness. MC-based optimization and dose calculation should be integrated into clinical planning workflows for lung cancer patients receiving actively scanned proton therapy.
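For readers unfamiliar with the dose-volume metrics used above, the following sketch computes V95, D95, D5, and the homogeneity index HI = D95/D5 from a per-voxel dose distribution. The Gaussian dose model is purely illustrative, not clinical data:

```python
import random

# Illustrative sketch with a synthetic dose distribution (not clinical
# data): region-of-interest metrics computed from per-voxel doses inside
# a structure, expressed in % of the prescription dose.
random.seed(0)
ctv_dose = sorted(random.gauss(100.0, 2.0) for _ in range(10_000))

def dose_at_volume(sorted_doses, volume_pct):
    """Dxx: minimum dose received by the hottest xx% of the volume."""
    idx = int(round(len(sorted_doses) * (1 - volume_pct / 100.0)))
    return sorted_doses[min(idx, len(sorted_doses) - 1)]

d95 = dose_at_volume(ctv_dose, 95)  # dose covering 95% of the CTV
d5 = dose_at_volume(ctv_dose, 5)    # dose covering the hottest 5%

# Vxx: percentage of the volume receiving at least xx% of prescription
v95 = 100.0 * sum(d >= 95.0 for d in ctv_dose) / len(ctv_dose)

hi = d95 / d5  # homogeneity index as defined in the text (D95/D5)
print(f"CTV V95 = {v95:.1f}%, HI (D95/D5) = {hi:.3f}")
```

An HI close to 1 indicates a uniform target dose; the drop from 0.98 (PBopt) to 0.91 (MCrecalc) reported above thus reflects increased dose heterogeneity revealed by the MC recalculation.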
Project description: Many visual representations, such as volume-rendered images and metro maps, feature a noticeable amount of information loss due to a variety of many-to-one mappings. At a glance, there seem to be numerous opportunities for viewers to misinterpret the data being visualized, hence undermining the benefits of these visual representations. In practice, there is little doubt that these visual representations are useful. The recently proposed information-theoretic measure for analyzing the cost-benefit ratio of visualization processes can explain the usefulness experienced in practice and postulates that viewers' knowledge can reduce the potential distortion (e.g., misinterpretation) due to information loss. This suggests that viewers' knowledge can be estimated by comparing the potential distortion without any knowledge and the actual distortion with some knowledge. However, the existing cost-benefit measure contains an unbounded divergence term, making the numerical measurements difficult to interpret. This is the second part of a two-part paper, which aims to improve the existing cost-benefit measure. Part I of the paper provided a theoretical discourse about the problem of unboundedness, reported a conceptual analysis of nine candidate divergence measures for resolving the problem, and eliminated three from further consideration. In this Part II, we describe two groups of case studies for evaluating the remaining six candidate measures empirically. In particular, we obtained instance data for (i) supporting the evaluation of the remaining candidate measures and (ii) demonstrating their applicability in practical scenarios for estimating the cost-benefit of visualization processes as well as the impact of human knowledge in the processes. The real-world data about visualization provides practical evidence for evaluating the usability and intuitiveness of the candidate measures.
The combination of the conceptual analysis in Part I and the empirical evaluation in this part allows us to select the most appropriate bounded divergence measure for improving the existing cost-benefit measure.
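The unboundedness problem that motivates both parts of the paper can be seen with a generic example. None of the paper's nine candidate measures is implemented here; the Jensen-Shannon divergence simply stands in as a familiar bounded alternative. KL divergence grows without bound as Q assigns vanishing probability to an outcome that P considers likely, whereas the Jensen-Shannon divergence (in nats) never exceeds ln 2:

```python
import math

# Generic illustration of bounded vs. unbounded divergence measures
# (not the paper's specific candidates).

def kl(p, q):
    """Kullback-Leibler divergence D(P||Q) in nats; unbounded."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    """Jensen-Shannon divergence in nats; bounded by ln 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.5, 0.5]
q = [1.0 - 1e-9, 1e-9]  # Q nearly rules out the second outcome

print(f"KL(P||Q) = {kl(p, q):.2f}")  # grows without bound as q[1] -> 0
print(f"JS(P,Q)  = {js(p, q):.4f} <= ln 2 = {math.log(2):.4f}")
```

A bounded term keeps the overall cost-benefit measurement on an interpretable scale, which is precisely the property the candidate measures are evaluated for.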