Project description: Background: Monte Carlo simulation is considered the most accurate method for dose calculation in radiotherapy. PRIMO is a Monte Carlo program with a user-friendly graphical interface. Material and methods: A VitalBeam linac with 6 MV and 6 MV flattening-filter-free (FFF) beams, equipped with the Millennium 120 multileaf collimator (MLC), was simulated in PRIMO. The initial energy, energy full width at half maximum (FWHM), focal-spot FWHM, and beam divergence were adjusted to match measurements, which were acquired in a water tank with an ionization chamber. Percentage depth dose (PDD) and off-axis ratio (OAR) curves were evaluated with the gamma passing rates (GPRs) implemented in PRIMO: PDDs were matched for standard square fields of several widths, and OARs were matched at five depths. The MLC transmission factor and the dosimetric leaf gap (DLG) were also simulated; the DLG was measured with an electronic portal imaging device using a sweeping-gap method. Results: For the 2%/2 mm, 1%/2 mm and 1%/1 mm criteria, the GPRs of the 6 MV PDDs were 99.33-100%, 99-100%, and 99-100%, respectively; the GPRs of the 6 MV FFF PDDs were 99.33-100%, 98.99-99.66%, and 97.64-98.99%, respectively; the GPRs of the 6 MV OARs were 96.4-100%, 90.99-100%, and 85.12-98.62%, respectively; and the GPRs of the 6 MV FFF OARs were 95.15-100%, 89.32-100%, and 87.02-99.74%, respectively. The calculated DLG matched the measurement well (6 MV: 1.36 mm vs. 1.41 mm; 6 MV FFF: 1.07 mm vs. 1.03 mm, simulation vs. measurement), and the transmission factors were similar (6 MV: 1.25% vs. 1.32%; 6 MV FFF: 0.8% vs. 1.12%, simulation vs. measurement). Conclusion: The calculated PDDs, OARs, DLGs and transmission factors all agreed well with measurements. PRIMO is an accurate Monte Carlo tool that is independent of the analytical dose calculation algorithms used in treatment planning.
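To make the gamma criteria above concrete (e.g. 2%/2 mm means a 2% dose tolerance combined with a 2 mm distance-to-agreement tolerance), the following Python sketch computes a global 1D gamma passing rate for a pair of dose curves. It is a minimal illustration of the standard gamma-index definition, not PRIMO's implementation; the curve data and 1 mm grid are invented for the example.

```python
import numpy as np

def gamma_passing_rate(x, d_ref, d_eval, dose_tol=0.02, dist_tol=2.0):
    """Global 1D gamma analysis on a common grid x (mm).

    d_ref:    reference (measured) dose profile
    d_eval:   evaluated (simulated) dose profile
    dose_tol: dose criterion as a fraction of the global maximum (0.02 = 2%)
    dist_tol: distance-to-agreement criterion in mm (2.0 = 2 mm)
    """
    d_max = d_ref.max()                      # global normalization
    gammas = np.empty(len(x))
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        # squared gamma contribution of every point on the evaluated curve
        g2 = ((x - xi) / dist_tol) ** 2 + ((d_eval - di) / (dose_tol * d_max)) ** 2
        gammas[i] = np.sqrt(g2.min())        # gamma = min over the evaluated curve
    return 100.0 * np.mean(gammas <= 1.0)    # percentage of points passing

# Example: two nearly identical PDD-like curves on a 1 mm grid
x = np.arange(0.0, 300.0, 1.0)
pdd_meas = np.exp(-x / 150.0)
pdd_sim = pdd_meas * (1.0 + 0.005 * np.sin(x / 20.0))
print(gamma_passing_rate(x, pdd_meas, pdd_sim))  # ~100% at 2%/2 mm
```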
Project description: A computational method is developed to carry out explicit-solvent simulations of complex molecular systems under conditions of constant pH. In constant-pH simulations, preidentified ionizable sites are allowed to spontaneously protonate and deprotonate as a function of time in response to the environment and the imposed pH. The method, based on a hybrid scheme originally proposed by H. A. Stern (J. Chem. Phys. 2007, 126, 164112), consists of carrying out short nonequilibrium molecular dynamics (neMD) switching trajectories to generate physically plausible configurations with changed protonation states, which are subsequently accepted or rejected according to a Metropolis Monte Carlo (MC) criterion. To ensure microscopic detailed balance for such nonequilibrium switches, the atomic momenta are altered according to the symmetric two-ends momentum-reversal prescription. To achieve higher efficiency, the original neMD-MC scheme is separated into two steps, reducing the need to generate a large number of unproductive and costly nonequilibrium trajectories. In the first step, the protonation state of a site is randomly assigned via a Metropolis MC process on the basis of an intrinsic pKa; an attempted nonequilibrium switch is generated only if this change in protonation state is accepted. This hybrid two-step inherent-pKa neMD-MC simulation method is tested with single amino acids in solution (Asp, Glu, and His) and then applied to turkey ovomucoid third domain and hen egg-white lysozyme. Because the computational cost increases only linearly with the number of titratable sites, the present method can naturally treat extremely large systems.
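A minimal Python sketch of the two-step acceptance logic described above. The Henderson-Hasselbalch-based first step is our reading of the inherent-pKa filter, and `run_nemd_switch` is a hypothetical placeholder for launching the short nonequilibrium switching trajectory and returning its work; neither is the authors' code.

```python
import math
import random

def inherent_pka_step(protonated: bool, pka: float, ph: float) -> bool:
    """Step 1: accept/reject a protonation-state flip from the intrinsic pKa
    alone (Henderson-Hasselbalch statistics), with no dynamics run yet."""
    # log10 of the Metropolis acceptance ratio for the proposed flip
    log_ratio = (ph - pka) if protonated else (pka - ph)
    return random.random() < min(1.0, 10.0 ** log_ratio)

def nemd_mc_move(protonated, pka, ph, run_nemd_switch, beta):
    """Two-step inherent-pKa neMD-MC move for one titratable site."""
    if not inherent_pka_step(protonated, pka, ph):
        return protonated                 # cheap rejection: no trajectory run
    # Step 2: only now pay for a short neMD switching trajectory;
    # run_nemd_switch is a hypothetical hook returning the switching work W
    # (the environment-dependent part not captured by the intrinsic pKa)
    work = run_nemd_switch(protonated)
    if random.random() < min(1.0, math.exp(-beta * work)):
        return not protonated             # switch accepted
    return protonated                     # switch rejected
```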
Project description: The use of Monte Carlo (MC) p-values when testing the significance of a large number of hypotheses is now commonplace. In large-scale hypothesis testing, we will typically encounter at least some p-values near the threshold of significance, which require a larger number of MC replicates than p-values that are far from the threshold. As a result, some incorrect conclusions can be reached due to MC error alone; for hypotheses near the threshold, even a very large number (e.g., 10^6) of MC replicates may not be enough to guarantee that the conclusions reached using MC p-values are correct. Gandy and Hahn (GH) [6-8] have developed the only method that directly addresses this problem. They defined the Monte Carlo error rate (MCER) as the probability that any decisions on accepting or rejecting a hypothesis based on MC p-values differ from decisions based on ideal p-values; their method then makes decisions by controlling the MCER. Unfortunately, the GH method is frequently very conservative, often making no rejections at all and leaving a large number of hypotheses "undecided". In this article, we propose MERIT, a method for large-scale MC hypothesis testing that also controls the MCER but is more statistically efficient than the GH method. Through extensive simulation studies, we demonstrate that MERIT controls the MCER while making more decisions that agree with the ideal p-values than GH does. We also illustrate our method with an analysis of gene expression data from a prostate cancer study.
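To illustrate the core problem: a standard MC p-value is computed as (1 + number of exceedances) / (1 + number of replicates), and when the ideal p-value sits at the significance threshold, the accept/reject decision remains unstable however many replicates are used. A minimal Python sketch of the add-one estimator only (neither the GH method nor MERIT is implemented here):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_pvalue(t_obs, sample_null, n_reps):
    """Standard MC p-value: (1 + #exceedances) / (1 + n_reps)."""
    t_null = sample_null(n_reps)
    return (1 + np.sum(t_null >= t_obs)) / (1 + n_reps)

# Null: chi-square with 1 df; the observed statistic sits at alpha = 0.05
sample_null = lambda n: rng.chisquare(1, size=n)
t_obs = 3.84  # the ideal p-value is almost exactly 0.05

# With the ideal p-value at the threshold, the estimate straddles 0.05
# even for large n_reps, so the decision can flip on MC error alone
for n_reps in (10**3, 10**4, 10**5):
    print(n_reps, mc_pvalue(t_obs, sample_null, n_reps))
```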
Project description: Background: In the Point-Centred Quarter Method (PCQM), the mean distance to the first nearest plant in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications, the estimator equations of simple PCQM (PCQM1) and the higher-order ones (PCQM2 and PCQM3, which use the distances to the second and third nearest plants, respectively) show discrepancies. This study reviews the PCQM estimators in order to find the most accurate equation form. We tested the accuracy of the different PCQM equations using Monte Carlo simulations in simulated plant populations (with 'random', 'aggregated' and 'regular' spatial patterns) and in empirical ones. Principal findings: PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with the previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in the accuracy of density estimation, i.e. the higher-order PCQM versions provide higher accuracy. However, the corrected PCQM versions show no significant differences among them, as tested in various spatial patterns, except in plant assemblages with strong repulsion (plant competition). If N is the number of sample points and R is the distance to the nearest plant, the corrected estimator of PCQM1 is 4(4N - 1)/(π ∑ R²), not 12N/(π ∑ R²); that of PCQM2 is 4(8N - 1)/(π ∑ R²), not 28N/(π ∑ R²); and that of PCQM3 is 4(12N - 1)/(π ∑ R²), not 44N/(π ∑ R²), as published. Significance: If the spatial pattern of a plant association is random, PCQM1 with the corrected estimator and over 50 sample points is sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimates for all types of plant assemblages, including those shaped by repulsion. Since, in practice, the spatial pattern of a plant association is unknown before a vegetation survey starts, the use of PCQM3 along with the corrected estimator is recommended for field applications. However, for sparse plant populations, where the use of PCQM3 may pose practical limitations, PCQM2 or PCQM1 can be applied. When applying PCQM in the field, care should be taken to summarize the distance data as 'the inverse summation of squared distances', not 'the summation of inverse squared distances', as erroneously published.
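The corrected estimators above translate directly into code. A minimal Python transcription of the formulas in the text (our own, not the authors' code; `distances` pools the distances from all quadrants of all sample points):

```python
import math

def pcqm_density(distances, n_points, order=1):
    """Corrected PCQM density estimate.

    distances: distances R from every quadrant of every sample point to the
               order-th nearest plant (4 * n_points values in total)
    n_points:  number of sample points N
    order:     1, 2 or 3 for PCQM1, PCQM2 or PCQM3

    PCQM1: 4(4N - 1) / (pi * sum(R^2))
    PCQM2: 4(8N - 1) / (pi * sum(R^2))
    PCQM3: 4(12N - 1) / (pi * sum(R^2))
    Note the 'inverse summation of squared distances': divide by sum(R^2),
    never sum the inverse squared distances.
    """
    sum_r2 = sum(r * r for r in distances)
    return 4 * (4 * order * n_points - 1) / (math.pi * sum_r2)
```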
Project description: We present a computational model of the interaction between hydrophobic cations, such as the antimicrobial peptide Magainin2, and membranes that include anionic lipids. Each of the peptide's amino acids was represented as two interaction sites: one corresponding to the backbone alpha-carbon and the other to the side chain. The membrane was represented as a hydrophobic profile, and its anionic nature was represented by a surface of smeared charges. The Coulombic interactions between the peptide and the membrane were thus calculated using the Gouy-Chapman theory, which describes the electrostatic potential in the aqueous phase near the membrane. Peptide conformations and locations near the membrane, and changes in the membrane width, were sampled at random using the Metropolis criterion, taking into account the underlying energetics. Simulations of the interactions of heptalysine and the hydrophobic-cationic peptide Magainin2 with acidic membranes were used to calibrate the model. The calibrated model reproduced structural data and the membrane-association free energies measured for other basic and hydrophobic-cationic peptides as well. Interestingly, amphipathic peptides such as Magainin2 were found to adopt two main membrane-associated states. In the first, the peptide resided mostly outside the polar headgroup region. In the second, which was energetically more favorable, the peptide assumed an amphipathic-helix conformation, with its hydrophobic face immersed in the hydrocarbon region of the membrane and its charged residues in contact with the surface of smeared charges. This dual behavior provides a molecular interpretation of the available experimental data.
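For reference, the Gouy-Chapman treatment invoked above links the smeared surface-charge density to the surface potential and gives the decay of the potential into the aqueous phase. The standard textbook forms are as follows (a 1:1 electrolyte of bulk concentration c0 is assumed; the paper's exact parameterization may differ):

```latex
% Grahame equation: surface charge density vs. surface potential \psi_0
\sigma = \sqrt{8\,\varepsilon_r \varepsilon_0 k_B T\, c_0}\;
         \sinh\!\left(\frac{z e \psi_0}{2 k_B T}\right)

% Decay of the potential with distance x from the charged surface,
% with \kappa the inverse Debye length
\psi(x) = \frac{2 k_B T}{z e}\,
          \ln\!\frac{1 + \Gamma e^{-\kappa x}}{1 - \Gamma e^{-\kappa x}},
\qquad
\Gamma = \tanh\!\left(\frac{z e \psi_0}{4 k_B T}\right)
```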
Project description: Multiple-trait introgression is the process by which multiple desirable traits are transferred from a donor to a recipient cultivar through backcrossing and selfing. The goal of this procedure is to recover all the attributes of the recipient cultivar, with the addition of the specified desirable traits. A crucial step in this process is the selection of parents to form new crosses. In this study, we propose a new selection approach that estimates the genetic distribution of the progeny of backcrosses after multiple generations using information on recombination events. Our objective is to select the most promising individuals for further backcrossing or selfing. To demonstrate the effectiveness of the proposed method, a case study was conducted using maize data, in which our method is compared with state-of-the-art approaches. Simulation results suggest that the proposed method, look-ahead Monte Carlo, achieves a higher probability of success than existing approaches. Our proposed selection method can assist breeders in designing trait introgression projects efficiently.
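As an illustration of the underlying Monte Carlo machinery, progeny genotype distributions can be estimated by repeatedly sampling gametes with recombination between adjacent loci. The Python sketch below uses a no-interference (Haldane-style) crossover model with invented map and population parameters; it illustrates the simulation idea only, not the look-ahead selection criterion itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_gamete(hap_a, hap_b, rec_freqs):
    """Sample one gamete from a heterozygous parent.

    hap_a, hap_b: the parent's two haplotypes (0/1 arrays over loci)
    rec_freqs:    recombination frequency between adjacent loci
                  (len(hap_a) - 1 values), no crossover interference
    """
    n = len(hap_a)
    current = rng.integers(2)             # which haplotype we start copying
    gamete = np.empty(n, dtype=int)
    gamete[0] = (hap_a, hap_b)[current][0]
    for i in range(1, n):
        if rng.random() < rec_freqs[i - 1]:
            current = 1 - current         # crossover between locus i-1 and i
        gamete[i] = (hap_a, hap_b)[current][i]
    return gamete

# Backcross: the (donor x recipient) F1 is crossed back to the recipient,
# so the F1's gamete determines which donor alleles a progeny retains.
n_loci = 20
donor = np.ones(n_loci, dtype=int)        # carries the desirable alleles
recipient = np.zeros(n_loci, dtype=int)
rec_freqs = np.full(n_loci - 1, 0.1)

# Monte Carlo estimate of the progeny genotype distribution for this cross
progeny = [simulate_gamete(donor, recipient, rec_freqs) for _ in range(10000)]
# e.g. probability a progeny retains the donor allele at locus 5
print(np.mean([g[5] for g in progeny]))   # ~0.5 for a single backcross
```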
Project description: BACKGROUND: Monte Carlo simulations of light propagation in fully segmented three-dimensional MRI-based anatomical models of the human head have been reported in many articles; to our knowledge, however, there is no patient-oriented simulation for individualized calibration of NIRS measurements. We therefore offer an approach to brain modeling, based on segmentation of in vivo three-dimensional MRI T1 images, to investigate individualized calibration for NIRS measurement with Monte Carlo simulation. METHODS: In this study, an individualized brain was modeled from an in vivo 3D MRI image as a five-layer structure. The behavior of photon migration in this individualized brain model was studied with a three-dimensional time-resolved Monte Carlo algorithm. During the Monte Carlo iterations, all photon paths were traced for various source-detector separations to characterize the brain structure and provide information for the individualized design of a NIRS system. RESULTS: Our results indicate that the patient-oriented simulation can identify the optimal source-detector separation, which was within 3.3 cm for the individualized design in this case. Significant distortions were observed around the folding of the cerebral cortex. The spatial sensitivity profile penetrated deeper into the brain in the case of expanded CSF. This finding suggests that the optical method may provide not only functional signals from brain activation but also structural information on brain atrophy accompanied by an expanded CSF layer. The proposed modeling method also supports multi-wavelength NIRS simulation to approach practical NIRS measurement. CONCLUSIONS: The three-dimensional time-resolved brain modeling method approximates the realistic human brain and provides useful information for NIRS system design and calibration in individualized cases with prior MRI data.
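At its core, such a simulation traces weighted photon packets on a random walk whose step lengths follow Beer-Lambert statistics. The Python sketch below is a deliberately reduced version: a single homogeneous layer, steady state rather than time-resolved, and isotropic scattering instead of the anisotropic (Henyey-Greenstein) phase functions used in realistic head models; the optical coefficients are order-of-magnitude placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

def photon_random_walk(mu_a, mu_s, max_steps=10000):
    """Trace one photon packet through a semi-infinite homogeneous medium.

    mu_a, mu_s: absorption and scattering coefficients (1/mm)
    Returns the packet's positions until it escapes (z < 0) or is terminated.
    """
    mu_t = mu_a + mu_s
    pos = np.zeros(3)                        # photon enters at the origin
    direction = np.array([0.0, 0.0, 1.0])    # pointing into the tissue (+z)
    weight = 1.0
    path = [pos.copy()]
    for _ in range(max_steps):
        step = -np.log(rng.random()) / mu_t  # free path ~ Beer-Lambert
        pos = pos + step * direction
        if pos[2] < 0.0:                     # escaped back through the surface
            return path
        weight *= mu_s / mu_t                # partial absorption of the packet
        if weight < 1e-4:                    # simple cutoff (a production code
            return path                      # would use Russian roulette)
        # isotropic re-direction of the packet
        cos_t = 2.0 * rng.random() - 1.0
        phi = 2.0 * np.pi * rng.random()
        sin_t = np.sqrt(1.0 - cos_t**2)
        direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
        path.append(pos.copy())
    return path

# NIR brain-tissue-like optical properties (order of magnitude only)
path = photon_random_walk(mu_a=0.02, mu_s=10.0)
print(len(path), "scattering events traced")
```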
Project description: Blood pressure measurement is one of the most routinely performed medical tests globally. Blood pressure is an important metric since it provides information that can be used to diagnose several vascular diseases. Conventional measurement systems use cuff-based devices, which may be uncomfortable and sometimes burdensome to the subjects. Therefore, in this study, we propose a cuffless blood pressure estimation model based on Monte Carlo simulation (MCS). We built a heterogeneous finger model for the MCS at wavelengths of 905 nm and 940 nm. After recording the photon intensities from the MCS over a certain range of blood pressure values, actual photoplethysmography (PPG) signals were used to estimate blood pressure. We used both a publicly available and a self-made dataset to evaluate the performance of the proposed model. For the publicly available dataset with the transmission-type MCS, the mean absolute errors are 3.32 ± 6.03 mmHg for systolic blood pressure (SBP), 2.02 ± 2.64 mmHg for diastolic blood pressure (DBP), and 1.76 ± 2.8 mmHg for mean arterial pressure (MAP). The self-made dataset was used for both transmission- and reflection-type MCSs; its mean absolute errors are 2.54 ± 4.24 mmHg for SBP, 1.49 ± 2.82 mmHg for DBP, and 1.51 ± 2.41 mmHg for MAP in the transmission-type case, and 3.35 ± 5.06 mmHg for SBP, 2.07 ± 2.83 mmHg for DBP, and 2.12 ± 2.83 mmHg for MAP in the reflection-type case. The SBP and DBP estimates satisfy the requirements of the Association for the Advancement of Medical Instrumentation (AAMI) standard and fall within Grade A of the British Hypertension Society (BHS) standard. These results show that the proposed model is effective for estimating blood pressure from fingertip PPG signals.
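For context, both validation criteria cited above are simple functions of the error distribution: AAMI requires a mean error within ±5 mmHg with a standard deviation of at most 8 mmHg, and BHS grades are based on the cumulative percentages of absolute errors within 5, 10, and 15 mmHg (Grade A requiring at least 60/85/95%). A short Python checker on synthetic errors:

```python
import numpy as np

def aami_pass(errors):
    """AAMI: mean error within +/-5 mmHg and SD of the errors <= 8 mmHg."""
    return abs(np.mean(errors)) <= 5.0 and np.std(errors) <= 8.0

def bhs_grade(errors):
    """BHS grade from cumulative percentages within 5/10/15 mmHg."""
    abs_err = np.abs(errors)
    pct = [np.mean(abs_err <= t) * 100 for t in (5, 10, 15)]
    for grade, bands in (("A", (60, 85, 95)), ("B", (50, 75, 90)),
                         ("C", (40, 65, 85))):
        if all(p >= b for p, b in zip(pct, bands)):
            return grade
    return "D"

# errors = estimated - reference blood pressure (synthetic, DBP-like spread)
errors = np.random.default_rng(3).normal(1.5, 2.8, size=500)
print(aami_pass(errors), bhs_grade(errors))
```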
Project description: Background: In the biological sciences, the TCID50 (median tissue culture infective dose) assay is often used to determine the infectious strength of a virus. Methods: When the so-called Spearman-Kaerber calculation is used, the ratio between the pfu (the number of plaque-forming units, the effective number of virus particles) and the TCID50 theoretically approaches a simple function of Euler's constant. Further, the standard deviation of the logarithm of the TCID50 approaches a simple function of the dilution factor and the number of wells used at each dilution in the assay. However, these theoretical calculations assume that the dilutions of the assay are independent, which in practice is not completely correct. The assay was therefore simulated using Monte Carlo techniques. Results: Our simulation studies show that the theoretical results hold true for practical implementations of the assay. Furthermore, the simulations show that the distribution of (the log of) the TCID50, although discrete in nature, is closely related to the normal distribution. Conclusion: The pfu is proportional to the TCID50 titre with a factor of about 0.56 when the Spearman-Kaerber calculation method is used. The normal distribution can be used for statistical inference, and ANOVA on (the log of) the TCID50 values is meaningful with group sizes of 5 and above.
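In outline, the simulation can be reproduced in a few lines: wells become infected according to Poisson statistics, and the Spearman-Kaerber estimator is applied to the proportions of positive wells at each dilution. The Python sketch below is our own minimal reconstruction under the usual independence assumptions (the dilution series, well counts, and pfu level are invented); the factor of about 0.56 quoted in the conclusion corresponds to exp(-gamma), with gamma Euler's constant, approximately 0.561.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_tcid50(pfu_per_well0, dil_factor=10.0, n_dils=8, n_wells=8):
    """One simulated assay: returns log10(TCID50) per well volume.

    pfu_per_well0: expected pfu in a well at the first (undiluted) step
    Spearman-Kaerber: m = x0 - d * (sum(p) - 0.5), with x0 the log10
    relative concentration of the first step (0 here), d the log10
    dilution step, and p the proportion of positive wells per dilution.
    """
    d = np.log10(dil_factor)
    lam = pfu_per_well0 / dil_factor ** np.arange(n_dils)
    # a well is positive if it receives at least one infectious unit
    positives = rng.binomial(n_wells, 1.0 - np.exp(-lam)) / n_wells
    m = 0.0 - d * (positives.sum() - 0.5)  # log10 dilution of the 50% endpoint
    return -m                               # log10 TCID50 per well volume

# Ratio pfu / TCID50 averaged over many simulated assays
pfu = 1e4
log_titres = [simulate_tcid50(pfu) for _ in range(2000)]
ratio = pfu / 10.0 ** np.mean(log_titres)
print(ratio)  # close to exp(-Euler gamma) ~ 0.56, as the theory predicts
```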
Project description: Generalized structured component analysis (GSCA) is a theoretically well-founded approach to component-based structural equation modeling (SEM). This approach utilizes the bootstrap method to estimate the confidence intervals of its parameter estimates without recourse to distributional assumptions, such as multivariate normality. It currently provides only bootstrap percentile confidence intervals. Recently, the potential usefulness of bias-corrected and accelerated bootstrap (BCa) confidence intervals (CIs) over the percentile method has attracted attention for another component-based SEM approach, partial least squares path modeling. Thus, in this study, we implemented the BCa CI method in GSCA and conducted a rigorous simulation to evaluate the performance of three bootstrap CI methods, namely the percentile, BCa, and Student's t methods, in terms of coverage and balance. We found that the percentile method produced CIs closer to the desired level of coverage than the other methods, while the BCa method was less prone to imbalance than the other two methods. Study findings and implications are discussed, as well as limitations and directions for future research.
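Although GSCA's bootstrap machinery lives in its own software, the percentile and BCa constructions compared here are generic, and SciPy exposes both directly. A sketch on synthetic skewed data (illustrative only, not the GSCA implementation; the Student's t interval is built by hand from the bootstrap standard error):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = rng.lognormal(mean=0.0, sigma=0.75, size=200)  # skewed sample

# Percentile and BCa intervals for the mean via scipy.stats.bootstrap
for method in ("percentile", "BCa"):
    res = stats.bootstrap((data,), np.mean, confidence_level=0.95,
                          n_resamples=9999, method=method, random_state=rng)
    print(method, res.confidence_interval)

# A Student's-t style interval from the bootstrap SE, for comparison
boot_means = [np.mean(rng.choice(data, size=data.size)) for _ in range(9999)]
se = np.std(boot_means, ddof=1)
t_crit = stats.t.ppf(0.975, df=data.size - 1)
print("t", (np.mean(data) - t_crit * se, np.mean(data) + t_crit * se))
```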