ABSTRACT: We propose an ear recognition system based on 2D ear images that comprises three stages: ear enrollment, feature extraction, and ear recognition. Ear enrollment includes ear detection and ear normalization. The ear detection approach, based on an improved AdaBoost algorithm, locates the ear under complex backgrounds in two steps: offline cascaded-classifier training and online ear detection. An Active Shape Model is then applied to segment the ear and normalize all ear images to the same size. Owing to its strengths in local spatial feature extraction and orientation selectivity, Gabor-filter-based ear feature extraction is presented in this paper. Kernel Fisher Discriminant Analysis (KFDA) is then applied to reduce the dimension of the high-dimensional Gabor features. Finally, a distance-based classifier is applied for ear recognition. Experimental results on two datasets (USTB and UND) and the performance of the ear authentication system show the feasibility and effectiveness of the proposed approach.
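As a rough illustration of the Gabor stage described above (not the authors' implementation; the kernel size, wavelengths, orientation count, and downsampling factor are all assumptions of this sketch), the following builds a small Gabor filter bank and concatenates downsampled magnitude responses into one feature vector, which would then go to KFDA for dimension reduction:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(ksize, sigma, theta, lam, gamma=0.5):
    """Real part of a 2D Gabor kernel (one common parameterization)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)  # Gaussian x cosine carrier

def gabor_features(img, wavelengths=(4, 8), n_orient=4, ksize=9):
    """Filter the image at several scales and orientations, then
    downsample the magnitude responses into one long feature vector."""
    feats = []
    for lam in wavelengths:
        for k in range(n_orient):
            kern = gabor_kernel(ksize, sigma=0.56 * lam,
                                theta=k * np.pi / n_orient, lam=lam)
            windows = sliding_window_view(img, kern.shape)   # valid-mode filtering
            resp = np.abs((windows * kern).sum(axis=(-2, -1)))
            feats.append(resp[::4, ::4].ravel())             # curb dimensionality
    return np.concatenate(feats)

ear = np.random.rand(32, 32)          # stand-in for a normalized ear image
print(gabor_features(ear).shape)      # (288,) = 2 scales x 4 orientations x 36 cells
```

Even after downsampling, the concatenated response is high-dimensional, which is why a kernel discriminant step such as KFDA is applied before the distance-based classifier.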
Project description: The three-dimensional shape of the ear has proven to be a stable candidate for biometric authentication because of its desirable properties of universality, uniqueness, and permanence. In this paper, a special laser scanner designed for online three-dimensional ear acquisition is described. Based on the dataset collected by our scanner, two novel feature classes are defined from a three-dimensional ear image: a global feature class (empty centers and angles) and a local feature class (points, lines, and areas). These features are extracted and combined in an optimal way for three-dimensional ear recognition. On a large dataset of 2,000 samples, the experimental results illustrate the effectiveness of fusing global and local features, yielding an equal error rate of 2.2%.
Project description:
Objective: This study aims to employ physiological model simulation to systematically analyze the frequency-domain components of PPG signals and extract their key features. The efficacy of these frequency-domain features in distinguishing emotional states is also investigated.
Methods: A dual windkessel model was employed to analyze PPG signal frequency components and extract distinctive features. Experimental data collection encompassed both physiological (PPG) and psychological measurements, with subsequent analysis of distribution patterns and statistical testing (U-tests) to examine feature-emotion relationships. The study implemented support vector machine (SVM) classification to evaluate feature effectiveness, complemented by comparative analysis using pulse rate variability (PRV) features, morphological features, and the DEAP dataset.
Results: The results demonstrate significant differentiation in PPG frequency-domain feature responses to arousal and valence variations, achieving classification accuracies of 87.5% and 81.4%, respectively. Validation on the DEAP dataset yielded consistent patterns, with accuracies of 73.5% (arousal) and 71.5% (valence). Feature fusion incorporating the proposed frequency-domain features enhanced classification performance, surpassing 90% accuracy.
Conclusion: This study uses physiological modeling to analyze PPG signal frequency components and extract key features. We evaluate their effectiveness in emotion recognition and reveal relationships among physiological parameters, frequency features, and emotional states.
Significance: These findings advance understanding of emotion-recognition mechanisms and provide a foundation for future research.
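The U-test screening step mentioned under Methods can be sketched as follows, on synthetic data (the sample size, effect size, and alpha level are illustrative assumptions, not values from the study): each candidate frequency-domain feature is tested for a distribution difference between two emotion groups, and only the discriminative features are passed on to the SVM.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
n = 60
# Synthetic "frequency-domain features" for low- vs high-arousal trials:
# feature 0 shifts with arousal, feature 1 is pure noise (assumed setup).
low  = np.column_stack([rng.normal(0.0, 1.0, n), rng.normal(0.0, 1.0, n)])
high = np.column_stack([rng.normal(1.5, 1.0, n), rng.normal(0.0, 1.0, n)])

def screen_features(a, b, alpha=0.01):
    """Per-feature Mann-Whitney U-test; keep indices with p < alpha."""
    keep = []
    for j in range(a.shape[1]):
        _, p = mannwhitneyu(a[:, j], b[:, j], alternative="two-sided")
        if p < alpha:
            keep.append(j)
    return keep

discriminative = screen_features(low, high)
print(0 in discriminative)   # the arousal-shifted feature survives screening
```

The U-test is a reasonable choice here because it makes no normality assumption about the feature distributions.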
Project description: Gait has recently gathered extensive interest for its irreplaceable role in applications. Although various methods have been proposed for gait recognition, most attain excellent performance only when the probe and gallery gaits are recorded under similar conditions. Once external factors (e.g., clothing variations) influence people's gaits and change human appearance, performance degrades significantly. Hence, in this article, a robust hybrid part-based spatio-temporal feature learning method is proposed for gait recognition to handle this cloth-changing problem. First, human bodies are segmented into affected and non- or less-affected parts based on anatomical studies. Then, a well-designed network formulates the required hybrid features from the non- or less-affected body parts. This network contains three sub-networks that generate features independently; each sub-network emphasizes a different aspect of gait, so an effective hybrid gait feature can be created through their concatenation. Because temporal information can serve as a complement that enhances recognition performance, one sub-network is dedicated to establishing the temporal relationship between consecutive short-range frames. Also, since local features are more discriminative than global features in gait recognition, another sub-network is dedicated to generating features of locally refined differences. The effectiveness of the proposed method has been evaluated on the CASIA Gait Dataset B and the OU-ISIR Treadmill Gait Dataset B. These experiments illustrate that, compared with other gait recognition methods, the proposed method achieves prominent results on the cloth-changing gait recognition problem.
Project description: To determine whether scISOr-seq isoforms are translated into proteins, we extracted protein samples from the organ of Corti and identified them using mass spectrometry (MS)-based proteomics.
Project description: Autoimmune inner ear disease (AIED) is characterized by bilateral, fluctuating sensorineural hearing loss with periods of hearing decline triggered by unknown stimuli. Here we examined whether environmental exposure to mold in AIED patients is sufficient to generate a pro-inflammatory response that may, in part, explain periods of acute exacerbation of disease. We hypothesized that molds may stimulate an aberrant immune response in these patients, as several Aspergillus and Penicillium species share homology with the LCCL domain of the inner ear protein cochlin. We found higher levels of anti-mold IgG in the plasma of AIED patients at a dilution of 1:256 (p = 0.032), and of anti-cochlin IgG at 1:256 (p = 0.0094) and at 1:512 (p = 0.024), as compared with controls. Exposure of peripheral blood mononuclear cells (PBMCs) of AIED patients to mold resulted in up-regulation of IL-1β mRNA expression, enhanced IL-1β and IL-6 secretion, and generation of IL-17-expressing cells in mold-sensitive AIED patients, suggesting that mold acts as a PAMP in a subset of these patients. Naïve B cells secreted IgM when stimulated with conditioned supernatant from AIED patients' monocytes treated with mold extract. In conclusion, the present studies indicate that fungal exposure can trigger autoimmunity in a subset of susceptible AIED patients.
Project description: Antibody recognition of antigens is a critical element of adaptive immunity. One key class of antibody-antigen complexes comprises antibodies targeting linear epitopes of proteins, which in some cases are conserved elements of viruses and pathogens relevant to vaccine design and immunotherapy. Here we report a detailed analysis of the structural and interface features of this class of complexes, based on a set of nearly 200 nonredundant high-resolution antibody-peptide complex structures assembled from the Protein Data Bank. We found that antibody-bound peptides adopt a broad range of conformations, often displaying limited secondary structure, and that the same peptide sequence bound by different antibodies can in many cases exhibit varying conformations. Propensities of contacts with antibody loops and the extent of antibody binding conformational changes were found to be broadly similar to those for antibodies in complex with larger protein antigens. However, antibody-peptide interfaces showed lower buried surface areas and fewer hydrogen bonds than antibody-protein antigen complexes, while the calculated binding energy per buried interface area was found to be higher on average for antibody-peptide interfaces, likely due in part to a greater proportion of buried hydrophobic residues and higher shape complementarity. This dataset and these observations can be of use for future studies of this class of interactions, including predictive computational modeling efforts and the design of antibodies or epitope-based vaccine immunogens.
Project description: Feature extraction is essential for classifying different motor imagery (MI) tasks in a brain-computer interface. To improve classification accuracy, we propose a novel feature extraction method in which the connectivity increment rate (CIR) of the brain function network (BFN) is extracted. First, the BFN is constructed on the basis of the threshold matrix of the Pearson correlation coefficients of the mu rhythm among the channels. In addition, a weighted BFN is constructed and expressed by the sum of the existing edge weights to characterize the degree of cerebral cortex activation in different movement patterns. Then, on the basis of the topological structures of seven mental tasks, three regional networks centered on the C3, C4, and Cz channels are constructed, consistent with the correspondence between limb movement patterns and the cerebral cortex in neurophysiology. Furthermore, the CIR of each regional functional network is calculated to form three-dimensional vectors. Finally, we use a support vector machine to learn a classifier for multiclass MI tasks. Experimental results show a significant improvement and demonstrate the success of the extracted CIR feature in MI classification. Specifically, the average classification performance reaches 88.67%, higher than that of competing methods, indicating that the extracted CIR is effective for MI classification.
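One way to read the construction above, as a sketch rather than the paper's exact procedure (the correlation threshold, channel count, and the use of an edge-count increment are our assumptions): Pearson correlations among channels are thresholded into a binary network, and the CIR is the relative change in connectivity between a baseline and a task condition.

```python
import numpy as np

def adjacency(signals, thr=0.5):
    """Threshold the absolute Pearson correlation matrix of the
    channel signals into a binary network (diagonal removed)."""
    c = np.abs(np.corrcoef(signals))
    a = (c >= thr).astype(int)
    np.fill_diagonal(a, 0)
    return a

def connectivity(adj):
    """Number of edges in the undirected network."""
    return adj.sum() // 2

def cir(rest, task, thr=0.5):
    """Connectivity increment rate: relative edge-count change from
    rest to a movement-imagery condition (our reading of CIR)."""
    e0 = connectivity(adjacency(rest, thr))
    e1 = connectivity(adjacency(task, thr))
    return (e1 - e0) / max(e0, 1)

rng = np.random.default_rng(1)
rest = rng.normal(size=(8, 500))   # 8 channels of independent noise
common = rng.normal(size=500)
task = rest + 2.0 * common         # shared drive couples the channels
print(cir(rest, task) > 0)         # task network gains edges -> positive CIR
```

In the paper's setting such increments would be computed per regional network (around C3, C4, and Cz), yielding the three-dimensional vectors fed to the SVM.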
Project description: Environmental transcriptomics (metatranscriptomics) for a specific lineage of eukaryotic microbes (e.g., Dinoflagellata) would be instrumental for unraveling the genetic mechanisms by which these microbes respond to the natural environment, but it has not been exploited because of technical difficulties. Using the recently discovered dinoflagellate mRNA-specific spliced leader as a selective primer, we constructed cDNA libraries (e-cDNAs) from one marine and two freshwater plankton assemblages. Small-scale sequencing of the e-cDNAs revealed functionally diverse transcriptomes proven to be of dinoflagellate origin. A set of dinoflagellate common genes and transcripts of dominant dinoflagellate species were identified. Further analyses of the dataset prompted us to delve into the existing, largely unannotated dinoflagellate EST datasets (DinoEST). Consequently, all four nucleosome core histones, two histone modification proteins, and a nucleosome assembly protein were detected, clearly indicating the presence of nucleosome-like machinery long thought not to exist in dinoflagellates. The isolation of rhodopsin from taxonomically and ecotypically diverse dinoflagellates and its structural similarity and phylogenetic affinity to xanthorhodopsin suggest a common genetic potential in dinoflagellates to use solar energy nonphotosynthetically. Furthermore, we found 55 cytoplasmic ribosomal proteins (RPs) from the e-cDNAs and 24 more from DinoEST, showing that the dinoflagellate phylum possesses all 79 eukaryotic RPs. Our results suggest that a sophisticated eukaryotic molecular machine operates in dinoflagellates that likely encodes many more unsuspected physiological capabilities and, meanwhile, demonstrate that unique spliced leaders are useful for profiling lineage-specific microbial transcriptomes in situ.
Project description:
Background: Micro-expression is a kind of expression produced spontaneously and unconsciously when a person receives a stimulus. It has low intensity and short duration, and it cannot be controlled or disguised; micro-expressions therefore objectively reflect people's real emotional states. Automatic recognition of micro-expressions can thus help machines better understand users' emotions and promote human-computer interaction. Moreover, micro-expression recognition has a wide range of applications in fields such as security systems and psychological treatment. Thanks to the development of artificial intelligence, most micro-expression recognition algorithms are now based on deep learning. The features extracted by a deep learning model from micro-expression video sequences mainly contain facial motion information and identity information. However, in micro-expression recognition tasks the motions of facial muscles are subtle, so recognition is easily interfered with by identity information.
Methods: To solve this problem, a micro-expression recognition algorithm that decouples facial motion features from identity features is proposed in this paper. A Micro-Expression Motion Information Features Extraction Network (MENet) and an Identity Information Features Extraction Network (IDNet) are designed. By adding a Diverse Attention Operation (DAO) module and constructing a divergence loss function in MENet, facial motion features can be effectively extracted. Global attention operations are used in IDNet to extract identity features. A Mutual Information Neural Estimator (MINE) is utilized to decouple facial motion features and identity features, which helps the model obtain more discriminative micro-expression features.
Results: Experiments on the SDU, MMEW, SAMM, and CASME II datasets achieved competitive results, demonstrating the superiority of the proposed algorithm.
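The MINE component rests on the Donsker-Varadhan lower bound on mutual information, which MINE maximizes over a neural critic; minimizing that estimate between MENet and IDNet outputs pushes motion and identity features toward independence. To keep this sketch self-contained we evaluate the bound with a fixed hand-picked critic instead of a trained network, which is an assumption of the illustration:

```python
import numpy as np

def dv_bound(x, y, critic):
    """Donsker-Varadhan bound:  E_joint[T(x,y)] - log E_marginals[exp(T)].
    MINE trains a network T to maximize this; here T is a fixed function."""
    joint = critic(x, y).mean()
    y_shuffled = np.random.default_rng(0).permutation(y)   # break the pairing
    marginals = np.log(np.exp(critic(x, y_shuffled)).mean())
    return joint - marginals

rng = np.random.default_rng(2)
x = rng.normal(size=5000)
y = 0.7 * x + np.sqrt(1 - 0.49) * rng.normal(size=5000)    # dependent pair

# A clearly positive bound certifies dependence between the two variables;
# for well-decoupled (independent) features it would hover near zero.
print(dv_bound(x, y, critic=lambda a, b: 0.5 * a * b) > 0)
```

Shuffling one variable across the batch is the standard trick for sampling the product of marginals without needing a second dataset.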
Project description: Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. To extend the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli likely to be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required; however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.