Project description: Body composition is a key component of health in both individuals and populations, and excess adiposity is associated with an increased risk of developing chronic diseases. Body mass index (BMI) and other clinical or commercially available tools for quantifying body fat (BF), such as DXA, MRI, CT, and photonic scanners (3DPS), are often inaccurate, cost-prohibitive, or cumbersome to use. The aim of the current study was to evaluate the performance of a novel automated computer vision method, visual body composition (VBC), which uses two-dimensional photographs captured with a conventional smartphone camera to estimate percentage total body fat (%BF). The VBC algorithm is based on a state-of-the-art convolutional neural network (CNN). The hypothesis was that VBC yields better accuracy than other consumer-grade fat measurement devices. A total of 134 healthy adults, diverse in age (21-76 years), sex (61.2% women), race (60.4% White; 23.9% Black), and body mass index (BMI, 18.5-51.6 kg/m2), were evaluated at two clinical sites (N = 64 at MGH, N = 70 at PBRC). Each participant had %BF measured with VBC and with three consumer and two professional bioimpedance analysis (BIA) systems. The PBRC participants also underwent air displacement plethysmography (ADP). %BF measured by dual-energy x-ray absorptiometry (DXA) was set as the reference against which all other %BF measurements were compared. To test this hypothesis, we ran multiple pairwise Wilcoxon signed-rank tests, comparing each competing measurement tool (VBC, BIA, …) against the same ground truth (DXA). Relative to DXA, VBC had the lowest mean absolute error and standard deviation (2.16 ± 1.54%) of all the evaluated methods (p < 0.05 for all comparisons). %BF measured by VBC also had good concordance with DXA (Lin's concordance correlation coefficient, CCC: all 0.96; women 0.93; men 0.94), whereas BMI had very poor concordance (CCC: all 0.45; women 0.40; men 0.74).
Bland-Altman analysis of VBC revealed the tightest limits of agreement (LOA) and no significant bias relative to DXA (bias -0.42%, R2 = 0.03; p = 0.062; LOA -5.5% to +4.7%), whereas all other evaluated methods showed significant (p < 0.01) bias and wider limits of agreement. Bias in the Bland-Altman analyses is defined as the deviation from the y = 0 axis of the regression line fitted to the data in the plot. In this first validation study of a novel, accessible, and easy-to-use system, VBC body fat estimates were accurate and without significant bias compared with the DXA reference, and VBC performance exceeded that of all the other BIA and ADP methods evaluated. The wide availability of smartphones suggests that the VBC method for evaluating %BF could play an important role in quantifying adiposity levels in a wide range of settings. Trial registration: ClinicalTrials.gov Identifier: NCT04854421.
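The two agreement statistics used above can be computed directly from paired %BF measurements. The following is a minimal sketch (with illustrative arrays, not study data) of the Bland-Altman bias with 95% limits of agreement and Lin's concordance correlation coefficient:

```python
import numpy as np

def bland_altman(method, reference):
    """Mean bias and 95% limits of agreement between paired measurements."""
    diff = np.asarray(method, float) - np.asarray(reference, float)
    bias = diff.mean()                    # systematic difference (bias)
    sd = diff.std(ddof=1)                 # SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient (agreement with y = x)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = ((x - x.mean()) * (y - y.mean())).mean()   # population covariance
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```

A method identical to the reference yields zero bias and a CCC of 1; both statistics degrade with either random scatter or a systematic offset, which is why the study reports them together.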
Project description: Instrumented motion analysis constitutes a promising development in the assessment of motor function in clinical populations affected by movement disorders. To foster implementation and facilitate interpretation of its outcomes, we aimed to establish normative data from healthy subjects for a markerless RGB-Depth camera-based motion analysis system and to illustrate their use. We recorded 133 healthy adults (56% female) aged 20 to 60 years with an RGB-Depth camera-based motion analysis system. Forty-three spatiotemporal parameters were extracted from six short, standardized motor tasks: three gait tasks, stepping in place, standing up and sitting down, and a postural control task. Associations with the confounding factors height, weight, age, and sex were modelled using a predictive linear regression approach, and a z-score normalization approach was provided to improve usability of the data. We report descriptive statistics for each spatiotemporal parameter (mean, standard deviation, coefficient of variation, quartiles). Robust confounding associations emerged only for step length and step width in comfortable-speed gait. Use of the normative data was exemplified with recordings from one randomly selected individual with multiple sclerosis. In summary, we provide normative data for an RGB-Depth camera-based motion analysis system covering broad aspects of motor capacity.
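The regression-based normalization described above can be sketched as follows: fit a linear normative model on the confounders, then express an individual's parameter as a z-score relative to the predicted norm. This is a hypothetical minimal implementation; the study's exact confounder coding and model selection may differ:

```python
import numpy as np

def fit_norms(X, y):
    """Fit a linear normative model y ~ confounders (height, weight, age, sex).
    Returns regression coefficients and the residual SD."""
    X1 = np.column_stack([np.ones(len(X)), X])      # prepend intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)   # least-squares fit
    resid = y - X1 @ beta
    sd = resid.std(ddof=X1.shape[1])                # residual SD (df-corrected)
    return beta, sd

def z_score(x_new, y_new, beta, sd):
    """Individual's deviation from the predicted norm, in SD units."""
    predicted = beta[0] + np.dot(beta[1:], x_new)
    return (y_new - predicted) / sd
```

A patient's z-score then reads directly as "how many normative SDs from the value predicted for a healthy person of the same height, weight, age, and sex".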
Project description: Movement health refers to the body's ability to perform movements required in activities of daily living, such as lifting, reaching, and bending. The benefits of improved movement health have long been recognized and are wide-ranging, from improving athletic performance to easing the performance of simple tasks, but only recently has this concept been put into practice by clinicians and studied quantitatively by researchers. With digital health and movement monitoring becoming more ubiquitous in society, smartphone applications represent a promising avenue for quantifying, monitoring, and improving an individual's movement health. In this paper, we validate Halo Movement, a movement health assessment that uses the front-facing camera of a smartphone and applies computer vision and machine learning algorithms to quantify movement health and its sub-criteria of mobility, stability, and posture through a sequence of five exercises/activities. On a diverse cohort of 150 participants of various ages, body types, and ability levels, we find moderate to strong, statistically significant correlations between the Halo Movement overall score, metrics from sensor-based 3D motion capture, and scores from a sequence of 13 standardized functional movement tests. Further, the smartphone assessment differentiates healthy individuals from professional movement athletes (e.g., dancers, cheerleaders) and from movement-impaired participants with higher resolution than existing functional movement screening tools, and thus may be more appropriate than existing tests for quantifying functional movement in able-bodied individuals. These results support using the Halo Movement overall score as a valid assessment of movement health.
Project description: With the enforcement of social distancing during the pandemic, a need arose to conduct postural assessments through remote care. This study therefore aimed to assess the intra- and inter-rater reproducibility of the Remote Static Posture Assessment (ARPE) protocol's Postural Checklist. The study involved 51 participants, with the postural assessment conducted by two researchers. For the intra-rater reproducibility assessment, one rater administered the ARPE protocol twice, with an interval of 7 days between assessments (test-retest). A second, independent rater assessed inter-rater reproducibility. Kappa statistics (k) and percentage agreement (%C) were used, with a significance level of 0.05. The intra-rater analysis indicated high reliability: k values ranged from 0.921 to 1.0, with %C ranging from 94% to 100% for all items on the ARPE protocol's Postural Checklist. Inter-rater reproducibility indicated reliability ranging from slight to good; k values exceeded 0.4 for the entire checklist except four items: waists in the frontal photograph (k = 0.353), scapulae in the rear photograph (k = 0.310), popliteal line of the knees in the rear photograph (k = 0.270), and foot posture in the rear photograph (k = 0.271). Nonetheless, %C surpassed 50% for all items but the scapulae (%C = 47%). The ARPE protocol's Postural Checklist is reproducible and can be administered by the same or different raters for static posture assessment. However, when used by distinct raters, the items waists (frontal photograph), scapulae, popliteal line of the knees, and feet (rear photograph) should not be considered.
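The agreement statistics used here, Cohen's kappa and raw percentage agreement, can be sketched for two raters' categorical judgments as follows (an illustrative unweighted implementation; the study's exact computation is not specified):

```python
import numpy as np

def kappa_and_agreement(rater1, rater2):
    """Cohen's kappa (chance-corrected) and raw percentage agreement."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    categories = np.union1d(r1, r2)
    p_obs = np.mean(r1 == r2)                               # observed agreement
    p_chance = sum(np.mean(r1 == c) * np.mean(r2 == c)      # expected by chance
                   for c in categories)
    kappa = (p_obs - p_chance) / (1 - p_chance)
    return kappa, 100 * p_obs
```

Because kappa discounts the agreement expected by chance, an item can show acceptable %C (e.g., above 50%) yet a low k, as seen for the rear-photograph items above.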
Project description: Kinematic analysis systems for scientific investigation or professional use are commonly unaffordable and complex to operate. The purpose of this study was to verify the concurrent validity of a cycling-specific 3D camera against a gold-standard general-purpose 3D camera system. Overall, 11 healthy amateur male triathletes were filmed riding their bicycles simultaneously with Vicon 3D cameras and with the Retul 3D camera for bike-fitting analysis. All 18 kinematic measurements given by the bike-fitting system were compared with the same data given by the Vicon cameras using Pearson correlation (r), intraclass correlation coefficients (ICC), standard error of measurement (SEM), and Bland-Altman (BA) analysis; 95% confidence intervals are given. A very high correlation between cameras was found for six of the 18 measurements; all others presented a high correlation between cameras (between 0.7 and 0.9). Six variables showed an SEM of less than one degree between systems, and only two showed an SEM higher than two degrees. Four measures indicated a bias tendency according to the BA analysis. The cycling-specific LED-emitting 3D camera system tested thus showed a high or very high degree of correlation with the gold-standard 3D camera system used in laboratory motion capture. In total, 14 measurements from this equipment could be used in sports medicine clinical practice and even by researchers in cycling studies.
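The SEM values reported above relate the spread of a measurement to its reliability; under the usual definition, SEM = SD × √(1 − ICC). A minimal sketch (illustrative numbers, not the study's data):

```python
import numpy as np

def sem_from_icc(measurements, icc):
    """Standard error of measurement: SEM = SD * sqrt(1 - ICC)."""
    sd = np.std(measurements, ddof=1)     # between-subject SD
    return sd * np.sqrt(1 - icc)
```

For example, a between-subject SD of 1 degree with an ICC of 0.75 gives an SEM of 0.5 degrees; as the ICC approaches 1, the SEM shrinks toward zero.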
Project description: Using consumer depth cameras at close range yields a higher surface resolution of the object, but it also introduces more severe noise. This noise tends to cover large areas at or near the edges of the true surface, which is an obstacle for real-time applications that cannot rely on point-cloud post-processing. To fill this gap, we analyzed the noise regions by position and shape and propose a composite filtering system for consumer depth cameras used at close range. The system consists of three main modules that eliminate different types of noise areas. Taking the human-hand depth image as an example, the proposed filtering system can eliminate most of the noise areas. None of the algorithms in the system is based on window smoothing, and all are accelerated on the GPU. Extensive comparison experiments with Kinect v2 and SR300 show that the system achieves good results with extremely high real-time performance, and it can serve as a pre-processing step for real-time human-computer interaction, real-time 3D reconstruction, and further filtering.
Project description: Background: The prevalence of stroke is high in both males and females, and it rises with age. Stroke often leads to sensory and motor issues, such as hemiparesis affecting one side of the body. Poststroke patients require torso stabilization exercises, but maintaining proper posture can be challenging due to their condition. Objective: Our goal was to develop the Postural SmartVest, an affordable wearable technology that leverages a smartphone's built-in accelerometer to monitor sagittal- and frontal-plane changes while providing visual, tactile, and auditory feedback to guide patients in achieving their best-at-the-time posture during rehabilitation. Methods: To design the Postural SmartVest, we conducted brainstorming sessions, interviewed therapists, gathered requirements, and developed a first prototype. We used this initial prototype in a feasibility study with individuals without hemiparesis (n=40, average age 28.4 years), who wore it during 1-hour seated sessions. Their feedback led to a second prototype, which we used in a pilot study with a poststroke patient. After adjustments and a kinematic assessment using the Vicon Gait Plug-in system, the third version became the Postural SmartVest. We assessed the Postural SmartVest in a within-subject experiment with poststroke patients (n=40, average age 57.1 years) and therapists (n=20, average age 31.3 years) during rehabilitation sessions. Participants engaged in daily activities, including walking and upper-limb exercises, without and with app feedback. Results: The Postural SmartVest comprises a modified off-the-shelf lightweight athletic compression tank top with a transparent pocket designed to securely hold a smartphone running a customizable Android app. The app continuously monitors sagittal- and frontal-plane changes using the built-in accelerometer, providing multisensory feedback through audio, vibration, and color changes.
Patients gave high ratings for weight, comfort, dimensions, effectiveness, ease of use, stability, durability, and ease of adjustment. Therapists noted a positive impact on rehabilitation sessions and expressed their willingness to recommend it. A 2-tailed t-test showed a significant difference (P<.001) between the number of best-at-the-time posture positions patients could maintain in the two stages, without feedback (mean 13.1, SD 7.12) and with feedback (mean 4.2, SD 3.97), demonstrating the effectiveness of the solution in improving posture awareness. Conclusions: The Postural SmartVest aids therapists during poststroke rehabilitation sessions and assists patients in improving their posture during these sessions.
Project description: Respiration rate (RR) and respiration patterns (RP) are considered early indicators of physiological conditions and cardiorespiratory diseases. In this study, we addressed the contactless estimation of RR and classification of RP for one or two persons in a confined space under realistic conditions. We used three impulse radio ultrawideband (IR-UWB) radars and a 3D depth camera (Kinect) to avoid any blind spot in the room and to ensure that at least one of the radars covers the monitored subjects. This article proposes a subject localization and radar selection algorithm using the Kinect camera, allowing measurement of the respiration of multiple people at random locations. Several experiments were conducted to verify the proposed algorithms. The mean absolute error (MAE) between the estimated and reference RR is 0.61±0.53 breaths/min for one-subject estimation and 0.68±0.24 breaths/min for two-subject estimation. A respiratory pattern classification algorithm combining a feature-based random forest classifier and a pattern discrimination algorithm was developed to classify respiration patterns including eupnea, Cheyne-Stokes respiration, Kussmaul respiration, and apnea; an overall classification accuracy of 90% was achieved on a test dataset. Finally, a real-time system showing RR and RP classification on a graphical user interface (GUI) was implemented for monitoring two subjects.
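The MAE figures quoted above are the mean (± SD) of the absolute errors between estimated and reference rates; a minimal sketch with made-up rates:

```python
import numpy as np

def mae_with_sd(estimated, reference):
    """Mean absolute error and its SD between estimated and reference RR."""
    errors = np.abs(np.asarray(estimated, float) - np.asarray(reference, float))
    return errors.mean(), errors.std(ddof=1)
```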
Project description: Biometrics are popular in authentication systems, and gestures, as carriers of behavioral characteristics, have the advantages of being difficult to imitate and rich in information. This research aims to use the three-dimensional (3D) depth information of gesture movement to perform authentication with less user effort. We propose an approach based on depth cameras that satisfies three requirements: it can authenticate from a single, customized gesture; it achieves high accuracy without an excessive number of training gestures; and it continues learning the gesture while the system is in use. To satisfy these requirements, respectively, we use a sparse autoencoder to memorize the single gesture, we employ data augmentation to compensate for insufficient data, and we use incremental learning to let the system memorize the gesture incrementally over time. Experiments on different gestures in different user situations demonstrate the accuracy of the one-class classification (OCC) and the effectiveness and reliability of the approach. Gesture authentication based on 3D depth cameras can thus be achieved with reduced user effort.
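One common way to turn an autoencoder into a one-class classifier is to threshold the reconstruction error against the statistics of the enrolled gesture's training errors. The study's exact decision rule is not given; the following is a hypothetical sketch of that idea:

```python
import numpy as np

def occ_accept(train_errors, new_error, k=3.0):
    """Accept a sample if its autoencoder reconstruction error lies within
    k standard deviations of the enrolled gesture's training errors."""
    mu = np.mean(train_errors)
    sd = np.std(train_errors, ddof=1)
    return new_error <= mu + k * sd
```

An impostor gesture, reconstructed poorly by an autoencoder trained only on the genuine gesture, yields a large error and is rejected; incremental learning would periodically refresh the training-error statistics with newly accepted samples.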
Project description: To evaluate postures in ergonomics applications, studies have proposed low-cost, markerless, and portable depth camera-based motion tracking systems (DCMTSs) as a potential alternative to conventional marker-based motion tracking systems (MMTSs). However, a simple but systematic method for examining the estimation errors of various DCMTSs is lacking. This paper proposes a benchmarking method for assessing the accuracy of depth cameras for full-body landmark location estimation. A novel alignment board was fabricated to align the coordinate systems of the DCMTSs and MMTSs, and the data from an MMTS were used as the reference for quantifying the error of a DCMTS in identifying target locations in 3-D space. To demonstrate the proposed method, full-body landmark location tracking errors were evaluated for a static upright posture using two different DCMTSs: for each landmark, we compared each DCMTS (a Kinect system and a RealSense system) with an MMTS by calculating the Euclidean distances between corresponding landmarks. The evaluation trials were performed twice, and the agreement between the tracking errors of the two trials was assessed using the intraclass correlation coefficient (ICC). The results indicate that the proposed method can effectively assess the tracking performance of DCMTSs. The average errors (standard deviation) for the Kinect system and the RealSense system were 2.80 (1.03) cm and 5.14 (1.49) cm, respectively, with the highest average errors observed in the depth direction for both DCMTSs. The proposed method achieved high reliability, with ICCs of 0.97 for the Kinect system and 0.92 for the RealSense system.
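Once the alignment board has placed both systems in a shared coordinate frame, the per-landmark error reduces to the Euclidean distance between the DCMTS and MMTS estimates of the same landmark. A minimal sketch with illustrative coordinates:

```python
import numpy as np

def landmark_errors(dcmts_xyz, mmts_xyz):
    """Euclidean distance per landmark between DCMTS and MMTS 3-D estimates
    (both arrays shaped (n_landmarks, 3) in a shared coordinate frame)."""
    diff = np.asarray(dcmts_xyz, float) - np.asarray(mmts_xyz, float)
    distances = np.linalg.norm(diff, axis=1)      # per-landmark error
    return distances.mean(), distances.std(ddof=1)
```

Averaging these distances over all landmarks gives the summary figures reported above (e.g., 2.80 cm for the Kinect system).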