Project description: Noetic comes from the Greek word noēsis, meaning inner wisdom or direct knowing. Noetic experiences often transcend the perception of our five senses and are reported worldwide, yet no instrument exists to evaluate noetic characteristics both within and between individuals. We developed the Noetic Signature Inventory (NSI) through an iterative qualitative and statistical process as a tool for the subjective assessment of noetic characteristics. Study 1 developed and evaluated a 175-item NSI using 521 self-selected research participants, resulting in a 46-item NSI with an 11-factor model solution. Study 2 examined the 11-factor solution, construct validity, and test–retest reliability, resulting in a 44-item NSI with a 12-factor model solution. Study 3 confirmed the final 44-item NSI in a diverse population. The 12 factors were: (1) Inner Knowing, (2) Embodied Sensations, (3) Visualizing to Access or Affect, (4) Inner Knowing Through Touch, (5) Healing, (6) Knowing the Future, (7) Physical Sensations from Other People, (8) Knowing Yourself, (9) Knowing Other's Minds, (10) Apparent Communication with Non-physical Beings, (11) Knowing Through Dreams, and (12) Inner Voice. The NSI demonstrated internal consistency, convergent and divergent content validity, and test–retest reliability. The NSI can be used in future studies to evaluate intra- and inter-individual variation in noetic experiences.
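As a rough illustration of the psychometric workflow this abstract describes, the sketch below fits a 12-factor model to simulated 44-item responses and computes Cronbach's alpha for internal consistency. It assumes a respondents-by-items NumPy array; sklearn's FactorAnalysis stands in generically for the study's factor-extraction procedure, and all data and names are hypothetical.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(500, 44)).astype(float)  # placeholder Likert data

# Fit a 12-factor model, mirroring the final NSI solution.
fa = FactorAnalysis(n_components=12, random_state=0).fit(responses)
loadings = fa.components_.T          # (44 items x 12 factors)
print(loadings.shape, cronbach_alpha(responses))
```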
Project description: Although principal component analysis is frequently used in multivariate analysis, it has disadvantages when applied to experimental or diagnostic data. First, the identified principal components have poor generality; since the size and directions of the components depend on the particular data set, the components are valid only within that set. Second, the method is sensitive to experimental noise and to bias between sample groups, since it cannot reflect the design of experiments; rather, it assumes equal weight and independence for all the samples in the matrix. Third, the resulting components are often difficult to interpret. To address these issues, several options were introduced to the methodology. The resulting components were scaled to unify their size unit. Also, the principal axes were identified using training data sets and shared among experiments. These training data reflect the design of experiments, and their preparation allows noise to be reduced and group bias to be removed. The effects of these options were observed in microarray experiments and showed an improvement in the separation of groups and robustness to noise. Additionally, unknown samples were appropriately classified using pre-arranged axes, and the principal axes well reflected the characteristics of the groups in the experiments.
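A minimal sketch of two of these options, assuming hypothetical sample-by-feature arrays: principal axes are identified on a designed training set and then shared with a later experiment, and scores are divided by the training standard deviations so that components have a unified size unit. sklearn's PCA is used as a generic implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
train = rng.normal(size=(60, 200))       # designed training set (samples x features)
new_batch = rng.normal(size=(12, 200))   # a later experiment, same features

# Identify principal axes on the training data only; these axes are then
# shared across experiments instead of being refit for each data set.
pca = PCA(n_components=5).fit(train)

# Project both data sets onto the shared axes and scale each score by the
# standard deviation observed in training, unifying the size unit of components.
scale = np.sqrt(pca.explained_variance_)
train_scores = pca.transform(train) / scale
new_scores = pca.transform(new_batch) / scale  # unknown samples placed on pre-arranged axes
```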
Project description: We propose localized functional principal component analysis (LFPCA), which seeks orthogonal basis functions with localized support regions that explain most of the variability of a random process. LFPCA is formulated as a convex optimization problem through a novel Deflated Fantope Localization method and is implemented through an efficient algorithm that obtains the global optimum. We prove that the proposed LFPCA converges to the original FPCA when the tuning parameters are chosen appropriately. Simulations show that LFPCA with tuning parameters chosen by cross-validation can almost perfectly recover the true eigenfunctions and significantly improve estimation accuracy when the eigenfunctions are truly supported on some subdomains. In the scenario where the original eigenfunctions are not localized, LFPCA also serves as a useful tool for finding orthogonal basis functions that balance interpretability against the ability to explain the variability of the data. An analysis of country-level mortality data reveals interesting features that cannot be found by standard FPCA methods.
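The Deflated Fantope Localization method itself is not reproduced here; as a rough stand-in for the localization idea, the sketch below applies sklearn's l1-penalized SparsePCA to simulated curves whose variability is truly supported on two subdomains, and checks that the recovered components are near zero outside those regions. All data are simulated and the penalty choice is arbitrary.

```python
import numpy as np
from sklearn.decomposition import SparsePCA

# Simulate curves whose variability is truly supported on two subdomains.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 100)
bump1 = np.where((t > 0.1) & (t < 0.3), np.sin(np.pi * (t - 0.1) / 0.2), 0.0)
bump2 = np.where((t > 0.6) & (t < 0.9), np.sin(np.pi * (t - 0.6) / 0.3), 0.0)
scores = rng.normal(size=(300, 2)) * np.array([3.0, 1.5])
X = scores @ np.vstack([bump1, bump2]) + 0.1 * rng.normal(size=(300, 100))

# l1-penalized components: a larger alpha yields more localized support.
spca = SparsePCA(n_components=2, alpha=2.0, random_state=0).fit(X)
support = (np.abs(spca.components_) > 1e-8).sum(axis=1)
print("nonzero grid points per component:", support)
```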
Project description: We introduce models for the analysis of functional data observed at multiple time points. The dynamic behavior of functional data is decomposed into a time-dependent population average, baseline (or static) subject-specific variability, longitudinal (or dynamic) subject-specific variability, subject-visit-specific variability, and measurement error. The model can be viewed as the functional analog of the classical longitudinal mixed effects model, with random effects replaced by random processes. The methods have wide applicability and are computationally feasible for moderate and large data sets; computational feasibility is assured by using principal component bases for the functional processes. The methodology is motivated by and applied to a diffusion tensor imaging (DTI) study designed to analyze differences and changes in brain connectivity in healthy volunteers and multiple sclerosis (MS) patients. An R implementation is provided.
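The five-part decomposition can be made concrete with a small simulation: the sketch below generates curves as population average + static subject process + dynamic subject process scaled by visit time + visit-specific process + error, with each random process built from a principal component basis as the model suggests. All shapes, variances, and names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 80)                 # functional domain
n_subj, n_visit = 40, 3
visit_times = np.array([0.0, 1.0, 2.0])   # longitudinal time of each visit

phi1 = np.sqrt(2) * np.sin(2 * np.pi * t)  # principal component bases
phi2 = np.sqrt(2) * np.cos(2 * np.pi * t)

Y = np.empty((n_subj, n_visit, t.size))
for i in range(n_subj):
    static = rng.normal(0, 1.5) * phi1             # baseline subject-specific process
    dynamic = rng.normal(0, 0.5) * phi2            # longitudinal subject-specific process
    for j, Tj in enumerate(visit_times):
        mu = t * (1 + 0.2 * Tj)                    # time-dependent population average
        visit = rng.normal(0, 0.3) * phi1          # subject-visit-specific deviation
        eps = rng.normal(0, 0.1, size=t.size)      # measurement error
        Y[i, j] = mu + static + dynamic * Tj + visit + eps
```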
Project description: The Sleep Heart Health Study (SHHS) is a comprehensive landmark study of sleep and its impacts on health outcomes. A primary metric of the SHHS is the in-home polysomnogram, which includes two electroencephalographic (EEG) channels for each subject at each of two visits. The volume and importance of these data present enormous challenges for analysis. To address these challenges, we introduce multilevel functional principal component analysis (MFPCA), a novel statistical methodology designed to extract core intra- and inter-subject geometric components of multilevel functional data. Though motivated by the SHHS, the proposed methodology is generally applicable, with potential relevance to many modern scientific studies of hierarchical or longitudinal functional outcomes. Notably, using MFPCA, we identify and quantify associations between EEG activity during sleep and adverse cardiovascular outcomes.
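A minimal sketch of the core MFPCA computation, under the usual multilevel decomposition into between- and within-subject covariance operators (following Di et al. 2009): with demeaned subject-by-visit curves, the between-subject covariance can be estimated from products across distinct visits of the same subject, and the within-subject covariance is the remainder of the total. The data and grid here are simulated placeholders, not SHHS EEG.

```python
import numpy as np

def mfpca_covariances(Y):
    """Method-of-moments between/within covariances (in the style of Di et al. 2009).
    Y: array (n_subjects, n_visits, n_grid), assumed demeaned."""
    n, J, G = Y.shape
    K_total = np.einsum('ijs,ijt->st', Y, Y) / (n * J)
    # Between-subject covariance from pairs of distinct visits of the same subject.
    sums = Y.sum(axis=1)                                          # (n, G)
    cross = np.einsum('is,it->st', sums, sums) - np.einsum('ijs,ijt->st', Y, Y)
    K_between = cross / (n * J * (J - 1))
    K_within = K_total - K_between
    return K_between, K_within

rng = np.random.default_rng(4)
Y = rng.normal(size=(50, 2, 30))            # e.g., 2 visits, 30-point summary grid
K_b, K_w = mfpca_covariances(Y)
evals_b, efuncs_b = np.linalg.eigh(K_b)     # between-subject (level 1) components
evals_w, efuncs_w = np.linalg.eigh(K_w)     # within-subject (level 2) components
```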
Project description: Motivated by modern observational studies, we introduce a class of functional models that expand nested and crossed designs. These models account for the natural inheritance of the correlation structures from sampling designs in studies where the fundamental unit is a function or image. Inference is based on functional quadratics and their relationship with the underlying covariance structure of the latent processes. A computationally fast and scalable estimation procedure is developed for high-dimensional data. Methods are used in applications including high-frequency accelerometer data for daily activity, pitch linguistic data for phonetic analysis, and EEG data for studying electrical brain activity during sleep.
Project description: Purpose: Peak amplitude and peak latency in the pattern reversal visual evoked potential (prVEP) vary with maturation. We considered that principal component analysis (PCA) may be used to describe age-related variation over the entire prVEP time course and provide a means of modeling and removing variation due to developmental age. Methods: PrVEP was recorded from 155 healthy subjects ages 11 to 19 years at two time points. We created a model of the prVEP by identifying principal components (PCs) that explained >95% of the variance in a "training" dataset of 40 subjects. We examined the ability of the PCs to explain variance in an age- and sex-matched "validation" dataset (n = 40) and calculated the intrasubject reliability of the PC coefficients between the two time points. We explored the effect of subject age and sex upon the PC coefficients. Results: Seven PCs accounted for 96.0% of the variability of the training dataset and 90.5% of the variability in the validation dataset, with good within-subject reliability across time points (R > 0.7 for all PCs). The PCA model revealed narrowing and amplitude reduction of the P100 peak with maturation, and a broader and smaller P100 peak in male subjects compared to female subjects. Conclusions: PCA is a generalizable, reliable, and unbiased method of analyzing prVEP. The PCA model revealed changes across maturation and biological sex not fully described by standard peak analysis. Translational relevance: We describe a novel application of PCA to characterize developmental changes of prVEP in youths that can be used to compare healthy and pathologic pediatric cohorts.
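The training/validation/reliability workflow can be sketched in a few lines: PCA is fit to a hypothetical training matrix of waveforms, the number of components is chosen to exceed 95% explained variance, validation recordings from two time points are projected onto the same axes, and per-PC intrasubject reliability is computed as the correlation of coefficients across subjects. Array shapes and data are placeholders, not actual prVEP recordings.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
train = rng.normal(size=(40, 500))        # 40 training subjects x 500 time samples
valid_t1 = rng.normal(size=(40, 500))     # validation subjects, time point 1
valid_t2 = valid_t1 + 0.3 * rng.normal(size=(40, 500))  # same subjects, time point 2

# Keep the smallest number of PCs explaining >95% of training variance.
pca = PCA(n_components=0.95).fit(train)
print(pca.n_components_, "PCs explain", pca.explained_variance_ratio_.sum())

# PC coefficients for each validation recording at both time points.
c1, c2 = pca.transform(valid_t1), pca.transform(valid_t2)

# Intrasubject reliability: correlation across subjects, computed per PC.
reliability = [np.corrcoef(c1[:, k], c2[:, k])[0, 1] for k in range(pca.n_components_)]
```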
Project description: We consider the analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and the data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address the inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; and 3) predict the underlying curves. In simulations, the proposed method discovers dominant modes of variation and reconstructs underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.
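One of these steps is easy to illustrate in isolation: predicting scores for a curve with only a few observations by conditional expectation (a BLUP), given an already-estimated mean, eigenfunctions, eigenvalues, and noise variance. The sketch below shows that step only; estimating those model quantities from sparse multilevel data (the harder part addressed by the paper) is not shown, and all quantities here are hypothetical.

```python
import numpy as np

def predict_scores(y_obs, idx, mu, phi, lam, sigma2):
    """BLUP of PC scores for one sparsely observed curve.
    y_obs: observed values; idx: grid indices where the curve was observed;
    mu: mean on the full grid; phi: (n_grid, K) eigenfunctions; lam: (K,) eigenvalues."""
    Phi = phi[idx, :]                         # eigenfunctions at observed points
    Sigma = Phi @ np.diag(lam) @ Phi.T + sigma2 * np.eye(len(idx))
    return np.diag(lam) @ Phi.T @ np.linalg.solve(Sigma, y_obs - mu[idx])

def reconstruct(xi, mu, phi):
    return mu + phi @ xi                      # predicted underlying curve on the full grid

# Hypothetical pre-estimated model quantities on a 50-point grid, K = 2.
grid = np.linspace(0, 1, 50)
mu = np.sin(2 * np.pi * grid)
phi = np.column_stack([np.sqrt(2) * np.sin(2 * np.pi * grid),
                       np.sqrt(2) * np.cos(2 * np.pi * grid)])
lam, sigma2 = np.array([1.0, 0.25]), 0.05

idx = np.array([3, 17, 31, 44])               # only four observations on this curve
y_obs = mu[idx] + phi[idx] @ np.array([0.8, -0.4]) \
        + 0.1 * np.random.default_rng(6).normal(size=4)
xi_hat = predict_scores(y_obs, idx, mu, phi, lam, sigma2)
curve_hat = reconstruct(xi_hat, mu, phi)
```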
Project description: We introduce fast multilevel functional principal component analysis (fast MFPCA), which scales up to high-dimensional functional data measured at multiple visits. The new approach is orders of magnitude faster than, and achieves estimation accuracy comparable to, the original MFPCA (Di et al., 2009). The methods are motivated by the National Health and Nutrition Examination Survey (NHANES), which contains minute-level physical activity information for more than 10,000 participants over multiple days, with 1,440 observations per day. While MFPCA takes more than five days to analyze these data, fast MFPCA takes less than five minutes. A theoretical study of the proposed method is also provided. The associated function mfpca.face() is available in the R package refund.
Project description: Background: Principal component analysis is used to summarize matrix data, such as those found in the transcriptome, proteome, or metabolome and in medical examinations, into fewer dimensions by fitting the matrix to orthogonal axes. Although this methodology is frequently used in multivariate analyses, it has disadvantages when applied to experimental data. First, the identified principal components have poor generality; since the size and directions of the components depend on the particular data set, the components are valid only within that data set. Second, the method is sensitive to experimental noise and to bias between sample groups. It cannot reflect the experimental design that is planned to manage the noise and bias; rather, it assumes equal weight and independence for all the samples in the matrix. Third, the resulting components are often difficult to interpret. To address these issues, several options were introduced to the methodology. First, the principal axes were identified using training data sets and shared across experiments. These training data reflect the design of experiments, and their preparation allows noise to be reduced and group bias to be removed. Second, the center of the rotation was determined in accordance with the experimental design. Third, the resulting components were scaled to unify their size unit. Results: The effects of these options were observed in microarray experiments and showed an improvement in the separation of groups and robustness to noise. The range of scaled scores was unaffected by the number of items. Additionally, unknown samples were appropriately classified using pre-arranged axes. Furthermore, these axes well reflected the characteristics of the groups in the experiments. As was observed, the scaling of the components and the sharing of axes enabled comparisons of components across experiments. The use of training data reduced the effects of noise and bias in the data, facilitating the physical interpretation of the principal axes. Conclusions: Together, the introduced options result in improved generality and objectivity of the analytical results. The methodology has thus become more like a set of multiple regression analyses that find independent models specifying each of the axes.
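The earlier sketch covered shared axes and scaled scores; the remaining option, choosing the center of rotation in accordance with the experimental design, can be illustrated by centering at a control-group mean rather than the grand mean before extracting axes. Group labels and data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
control = rng.normal(size=(20, 100))            # control group (samples x genes)
treated = rng.normal(loc=0.4, size=(20, 100))   # treated group, shifted expression

# Center at the control mean, per the experimental design, instead of the grand
# mean of all samples; the origin of the rotation is then fixed by the design.
center = control.mean(axis=0)
X = np.vstack([control, treated]) - center

# Principal axes by SVD of the design-centered matrix; scores are divided by the
# per-axis standard deviation so their size unit is unified.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:2].T / (s[:2] / np.sqrt(X.shape[0] - 1))
```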