Project description:Facial expression classification requires large amounts of data to reflect the diversity of conditions in the real world. Public databases support research tasks by providing researchers with an appropriate working framework. However, these databases often do not focus on artistic creation. We developed an innovative facial expression dataset that can help both artists and researchers in the field of affective computing. The dataset can be managed interactively through an intuitive, easy-to-use software application. It is composed of 640 facial images from 20 virtual characters, each performing 32 facial expressions. The avatars represent 10 men and 10 women, aged between 20 and 80 and of different ethnicities. Expressions are grouped by the six universal expressions, following Gary Faigin's classification.
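A minimal sketch of how the 20 × 32 = 640 images could be indexed programmatically is shown below, assuming a hypothetical flat directory with one PNG per (character, expression) pair named "<character>_<expression>.png"; the layout and naming scheme are illustrative assumptions, not the dataset's actual structure.

```python
# Hypothetical indexing of the dataset: 20 avatars x 32 expressions = 640 images.
# DATASET_DIR and the "<character>_<expression>.png" naming scheme are assumptions.
from pathlib import Path

DATASET_DIR = Path("dataset")          # assumed root folder containing the images
N_CHARACTERS, N_EXPRESSIONS = 20, 32   # 20 x 32 = 640 expected images

index = {}
for image_path in sorted(DATASET_DIR.glob("*.png")):
    character, expression = image_path.stem.split("_", 1)
    index.setdefault(character, {})[expression] = image_path

found = sum(len(expressions) for expressions in index.values())
print(f"Indexed {found} of {N_CHARACTERS * N_EXPRESSIONS} expected images")
```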
Project description:This article presents C3I-SynFace: a large-scale synthetic human face dataset with corresponding ground truth annotations of head pose and face depth, generated using the iClone 7 Character Creator "Realistic Human 100" toolkit with variations in ethnicity, gender, race, age, and clothing. The data is generated from 15 female and 15 male synthetic 3D human models extracted from the iClone software in FBX format. Five facial expressions (neutral, angry, sad, happy, and scared) are added to the face models to introduce further variation. An open-source data generation pipeline in Python is proposed to import these models into the 3D computer graphics tool Blender and render the facial images along with the ground truth annotations of head pose and face depth in raw format. The dataset contains more than 100k ground truth samples with their annotations. With the help of virtual human models, the proposed framework can generate extensive synthetic facial datasets (e.g., head pose or face depth datasets) with a high degree of control over facial and environmental variations such as pose, illumination, and background. Such large datasets can be used for the improved and targeted training of deep neural networks.
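As an illustration of the kind of Blender-side step such a pipeline performs, the sketch below imports an FBX face model and renders an RGB image together with a raw depth (Z) pass via Blender's Python API (bpy). The file paths, resolution, and compositor node setup are assumptions for illustration; this is not the authors' released pipeline code.

```python
# Hypothetical sketch: import an FBX face model into Blender and render an RGB
# image plus a raw depth pass. MODEL_PATH and OUTPUT_DIR are assumed values.
import bpy

MODEL_PATH = "/path/to/face_model.fbx"   # assumed location of an exported iClone model
OUTPUT_DIR = "/path/to/output"           # assumed output directory

# Import the FBX character exported from iClone
bpy.ops.import_scene.fbx(filepath=MODEL_PATH)

scene = bpy.context.scene
scene.render.engine = "CYCLES"
scene.render.resolution_x = 640
scene.render.resolution_y = 480

# Enable the Z pass so depth can be written out alongside the RGB image
scene.view_layers[0].use_pass_z = True

# Route the depth pass to an OpenEXR file via the compositor
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()
render_layers = tree.nodes.new("CompositorNodeRLayers")
depth_out = tree.nodes.new("CompositorNodeOutputFile")
depth_out.base_path = OUTPUT_DIR
depth_out.format.file_format = "OPEN_EXR"
tree.links.new(render_layers.outputs["Depth"], depth_out.inputs[0])

# Render the RGB image
scene.render.filepath = f"{OUTPUT_DIR}/face_0001.png"
bpy.ops.render.render(write_still=True)
```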
Project description:The quality of datasets is crucial in computer graphics and machine learning research and development. This paper presents the Render Lighting Dataset, featuring 63,648 rendered images of Blender's primitive shapes under various lighting conditions and render engines. The images were created using Blender 4.0's Cycles and Eevee render engines, with careful attention to detail in texture mapping and UV unwrapping. The dataset covers six lighting conditions: Area Light, Spotlight, Point Light, Tri-Light, HDRI (Sunlight), and HDRI (Overcast), each adjusted using different options in Blender's Color Management panel. With thirteen unique materials, ranging from Coastal Sand to Glossy Plastic, the dataset provides visual diversity for researchers to explore material properties under different lighting conditions and render engines. The dataset serves as a valuable resource for researchers looking to enhance 3D rendering engines: its diverse set of rendered images under varied lighting conditions and material properties allows researchers to benchmark and evaluate the performance of different rendering engines, develop new rendering algorithms and techniques, optimize rendering parameters, and understand rendering challenges. By enabling more realistic and efficient rendering, advancing research in lighting simulation, and facilitating the development of AI-driven rendering techniques, this dataset has the potential to shape the future of computer graphics and rendering technology.
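A minimal sketch of how one image pair in such a dataset might be produced with Blender's Python API is shown below: a primitive shape lit by a single area light, rendered once with Cycles and once with Eevee. The object placement, light energy, colour-management choice, and output paths are illustrative assumptions rather than the dataset's actual generation script.

```python
# Hypothetical sketch: render one Blender primitive under an area light with
# both Cycles and Eevee. Scene layout and output paths are assumptions.
import bpy

# Start from an empty scene
bpy.ops.wm.read_factory_settings(use_empty=True)
scene = bpy.context.scene

# Add a primitive shape and an area light
bpy.ops.mesh.primitive_uv_sphere_add(location=(0.0, 0.0, 0.0))
bpy.ops.object.light_add(type="AREA", location=(2.0, -2.0, 3.0))
bpy.context.object.data.energy = 500.0

# Add a camera pointing roughly at the sphere and make it the active camera
bpy.ops.object.camera_add(location=(0.0, -6.0, 2.0), rotation=(1.3, 0.0, 0.0))
scene.camera = bpy.context.object

# One of Blender's Color Management options
scene.view_settings.view_transform = "Filmic"

# Render the same scene with both engines
for engine in ("CYCLES", "BLENDER_EEVEE"):
    scene.render.engine = engine
    scene.render.filepath = f"/tmp/sphere_area_{engine.lower()}.png"
    bpy.ops.render.render(write_still=True)
```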
Project description:The lighting industry currently accounts for a significant proportion of all energy demand. Luminescent white lighting is often impure, inefficient, and expensive, and it detrimentally emits as a point source, meaning the light is emitted from a focused point. A luminescent light diffuser offers the potential to create a spatially broad lighting fixture. We developed a luminescent light diffuser consisting of three commercially available luminescent dye species (rhodamine 6G, fluorescein, 7-diethylamino-4-methylcoumarin) dispersed within a polymer matrix (polyvinyl alcohol) or a commercial paint and coated onto a planar waveguide. A light-emitting diode (LED, 385 nm) is directed into the waveguide and excites the luminescent species coating the panel, creating a device that emits spatially broad, pure white light. As the output depends on escape-cone emission from the waveguide, the device's emission was found to depend strongly on the quality and composition of the coating film. We present two systems: a small 40 mm × 40 mm prototype, made using a standard water-soluble polymer (polyvinyl alcohol), to study the underlying operational principles, and a 100 mm × 100 mm device with optimized efficiency fabricated with a clear commercial paint. By doping the polymer matrix with scattering silica microparticles we achieved a maximum photon outcoupling efficiency of 78%, whilst maintaining colour purity at a device size more than 300 times larger than the input LED. This work shows that it is possible to construct an inexpensive, spatially broad lighting source whilst maintaining colour purity.
Project description:Recent studies have indicated that facial electromyogram (fEMG)-based facial-expression recognition (FER) systems are promising alternatives to conventional camera-based FER systems for virtual reality (VR) environments because they are economical, do not depend on the ambient lighting, and can be readily incorporated into existing VR headsets. In our previous study, we applied a Riemannian manifold-based feature extraction approach to fEMG signals recorded around the eyes and demonstrated that 11 facial expressions could be classified with a high accuracy of 85.01% using only a single training session. However, the performance of the conventional fEMG-based FER system was not high enough to be applied in practical scenarios. In this study, we developed a new method for improving FER performance by employing linear discriminant analysis (LDA) adaptation with labeled datasets from other users. Our results indicated that the mean classification accuracy could be increased to 89.40% by using the LDA adaptation method (p < .001, Wilcoxon signed-rank test). Additionally, we demonstrated the potential of a user-independent FER system that could classify 11 facial expressions with an accuracy of 82.02% without any training sessions. To the best of our knowledge, this is the first study in which the LDA adaptation approach has been employed in a cross-subject manner. The proposed LDA adaptation approach is expected to serve as an important method for increasing the usability of fEMG-based FER systems for social VR applications.
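The sketch below outlines the general recipe described above, assuming the pyriemann and scikit-learn libraries: Riemannian tangent-space features computed from multichannel fEMG epochs, an LDA classifier, and a simple adaptation step that augments the target user's training set with labelled epochs from other users. The array shapes and the pooling-based adaptation are illustrative assumptions, not the authors' exact algorithm.

```python
# Hypothetical sketch: Riemannian tangent-space features + LDA, with a simple
# cross-user adaptation that pools other users' labelled epochs. All shapes,
# channel counts, and the pooling strategy are illustrative assumptions.
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Simulated epochs: (n_trials, n_channels, n_samples), 11 expression classes
X_target = rng.standard_normal((110, 8, 256))   # target user's single session
y_target = np.repeat(np.arange(11), 10)
X_others = rng.standard_normal((440, 8, 256))   # labelled data from other users
y_others = np.repeat(np.arange(11), 40)

clf = make_pipeline(
    Covariances(estimator="oas"),   # spatial covariance matrix per epoch
    TangentSpace(),                 # project SPD matrices into the tangent space
    LinearDiscriminantAnalysis(),   # LDA classifier over tangent-space features
)

# Baseline: train only on the target user's single session
clf.fit(X_target, y_target)

# Adapted: augment the training set with other users' labelled epochs
X_adapt = np.concatenate([X_target, X_others])
y_adapt = np.concatenate([y_target, y_others])
clf.fit(X_adapt, y_adapt)
```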
Project description:Laughter and smiling are significant facial expressions used in human-to-human communication. We present a computational model for generating the facial expressions associated with laughter and smiling, in order to facilitate the synthesis of such expressions in virtual characters. In addition, a new method to reproduce these types of laughter is proposed and validated using databases of generic and specific facial smile expressions. In particular, a proprietary database of laugh and smile expressions is also presented; this database lists the different types of laughs classified and generated in this work. The generated expressions are validated through a user study with 71 subjects, which concluded that the virtual character expressions built using the presented model are perceptually acceptable in quality and facial expression fidelity. Finally, for generalization purposes, an additional analysis shows that the results are independent of the virtual character's appearance.
Project description:Person identification at airports requires the comparison of a passport photograph with its bearer. In psychology, this process is typically studied with static pairs of face photographs that require identity-match (same person shown) versus mismatch (two different people) decisions, but this approach provides a limited proxy for studying how environment and social interaction factors affect this task. In this study, we explore the feasibility of virtual reality (VR) as a solution to this problem, by examining the identity matching of avatars in a VR airport. We show that facial photographs of real people can be rendered into VR avatars in a manner that preserves image and identity information (Experiments 1 to 3). We then show that identity matching of avatar pairs reflects similar cognitive processes to the matching of face photographs (Experiments 4 and 5). This pattern holds when avatar matching is assessed in a VR airport (Experiments 6 and 7). These findings demonstrate the feasibility of VR as a new method for investigating face matching in complex environments.
Project description:50,000 cells were injected orthotopically into the inguinal fat pad of a NOD-scid-gamma (NSG) immunocompromised mouse. The injected cells were 80% unlabelled 4T1 cells (parental population) and 20% ZsGreen-labelled 4T1-T cells (a clone isolated in Wagenblast et al., Nature, 2015). The tumour was allowed to develop for 20 days and was then collected at necropsy. Disaggregated cells were processed through the 10x Genomics Single Cell 3' gene expression pipeline. This data is intended as an example dataset for a novel virtual reality viewer for single-cell data described in Bressan et al., Nat. Cancer, 2021 (submitted).
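For readers who want to work with the resulting Cell Ranger output before exporting it to a single-cell viewer, a minimal loading sketch using scanpy is given below. The directory path and the "ZsGreen" feature name are assumptions about how the labelled clone might be detected; they are not details taken from the published pipeline.

```python
# Hypothetical sketch: load a 10x Genomics filtered feature-barcode matrix and
# flag cells expressing the ZsGreen reporter. Path and feature name are assumed.
import scanpy as sc

adata = sc.read_10x_mtx(
    "filtered_feature_bc_matrix/",   # assumed Cell Ranger output directory
    var_names="gene_symbols",
)

# Flag cells expressing the ZsGreen reporter (assumes the transgene was added
# to the reference genome and therefore appears as a feature in the matrix)
if "ZsGreen" in adata.var_names:
    adata.obs["zsgreen_positive"] = (adata[:, "ZsGreen"].X > 0).toarray().ravel()

print(adata)
```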
Project description:In this article we introduce a human face image dataset. Images were taken under close-to-real-world conditions using several cameras, often mobile phone cameras. The dataset contains 224 subjects imaged under four different conditions (a nearly clean-shaven countenance, a nearly clean-shaven countenance with sunglasses, an unshaven or stubble-faced countenance, an unshaven or stubble-faced countenance with sunglasses) in up to two recording sessions. The presence of partially covered face images in this dataset makes it suitable for assessing the robustness and efficiency of facial image processing algorithms. In this work we present the dataset and explain the recording method.
Project description:Virtual reality platforms producing interactive and highly realistic characters are increasingly used as a research tool in social and affective neuroscience to better capture both the dynamics of emotion communication and the unintentional and automatic nature of emotional processes. While idle motion (i.e., non-communicative movements) is commonly used to create behavioural realism, its use to enhance the perception of emotion expressed by a virtual character remains critically understudied. This study examined the influence of naturalistic idle motion (i.e., based on human motion capture) on two aspects of the empathic response towards pain expressed by a virtual character: the perception of the other's pain and the affective reaction to it. In two experiments, 32 and 34 healthy young adults were presented with video clips of a virtual character displaying a facial expression of pain while its body was either static (still condition) or animated with natural postural oscillations (idle condition). The participants in Experiment 1 rated the facial pain expression of the virtual human as more intense, and those in Experiment 2 reported being more touched by its pain expression, in the idle condition compared to the still condition, indicating a greater empathic response towards the virtual human's pain in the presence of natural postural oscillations. These findings are discussed in relation to models of empathy and biological motion processing. Future investigations will help determine to what extent such naturalistic idle motion could be a key ingredient in enhancing the anthropomorphism of a virtual human and making its emotions appear more genuine.