Extended-Kalman-filter-based dynamic mode decomposition for simultaneous system identification and denoising.
ABSTRACT: A new dynamic mode decomposition (DMD) method is introduced for simultaneous system identification and denoising in conjunction with the adoption of an extended Kalman filter algorithm. The present paper explains the extended-Kalman-filter-based DMD (EKFDMD) algorithm, which is an online algorithm for datasets with a small number of degrees of freedom (DoF). It also illustrates that EKFDMD requires significant numerical resources for many-degree-of-freedom (many-DoF) problems, and that combination with truncated proper orthogonal decomposition (trPOD) allows the EKFDMD algorithm to be applied to many-DoF problems, though it prevents the algorithm from being fully online. Numerical experiments on a noisy dataset with a small number of DoFs illustrate that EKFDMD estimates eigenvalues better than or as well as existing algorithms, while also denoising the original dataset online. In particular, EKFDMD performs better than existing algorithms when system noise is present. EKFDMD with trPOD, which unfortunately is not fully online, can be successfully applied to many-DoF problems, including a fluid-flow example, and the results reveal its superior system identification and denoising performance.
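The core idea behind EKFDMD, estimating the state and the entries of the linear operator jointly by augmenting the filter state, can be sketched for a toy 2-DoF system. This is a minimal illustration with hypothetical noise settings, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# True 2-DoF linear system: a slow rotation (eigenvalues on the unit circle).
theta = 0.1
A_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])

# Simulate noisy observations y_k of the state x_k.
n_steps = 400
x = np.array([1.0, 0.0])
ys = []
for _ in range(n_steps):
    x = A_true @ x
    ys.append(x + 0.01 * rng.standard_normal(2))

# Augmented EKF state z = [x1, x2, a11, a12, a21, a22]:
# the system-matrix entries are estimated jointly with the state.
z = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0])        # start from A = identity
P = np.eye(6)
Q = np.diag([1e-4, 1e-4, 1e-6, 1e-6, 1e-6, 1e-6])   # process noise (assumed)
R = 1e-4 * np.eye(2)                                # measurement noise (assumed)
H = np.zeros((2, 6)); H[0, 0] = H[1, 1] = 1.0       # we observe x only

for y in ys:
    x_est, a = z[:2], z[2:].reshape(2, 2)
    # Prediction: x <- A x; the entries of A follow a random walk.
    z_pred = np.concatenate([a @ x_est, z[2:]])
    F = np.eye(6)
    F[:2, :2] = a
    F[0, 2:4] = x_est      # d(a11*x1 + a12*x2)/d(a11, a12)
    F[1, 4:6] = x_est      # d(a21*x1 + a22*x2)/d(a21, a22)
    P = F @ P @ F.T + Q
    # Update with the noisy observation (denoised state = z[:2]).
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z = z_pred + K @ (y - H @ z_pred)
    P = (np.eye(6) - K @ H) @ P

A_est = z[2:].reshape(2, 2)
```

Because the filter carries both the state and the operator, each step simultaneously denoises the measurement and refines the identified eigenvalues, which is the online property the abstract emphasizes.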
Project description: The low-distortion processing of well-testing geological parameters is a key way to provide decision-making support for oil and gas field development. However, classical processing methods face many problems, such as the stochastic nature of the data, the randomness of initial parameters, poor denoising ability, and the lack of data compression and prediction mechanisms. These problems result in poor real-time predictability of oil operation status and difficulty in interpreting played-back data offline. Given these issues, we propose a wavelet-based Kalman smoothing method for processing uncertain oil well-testing data. First, we use correlation and reconstruction error as analysis indicators and determine the optimal combination of decomposition scale and vanishing moments suitable for wavelet analysis of oil data. Second, we build a ground pressure-measuring platform and use a pressure gauge equipped with the optimal combination of parameters to perform downhole online wavelet decomposition, filtering, Kalman prediction, and data storage. After the stored data are played back, the optimal Kalman parameters obtained by particle swarm optimization are used to smooth each sample. The experiments compare the signal-to-noise ratio and the root mean square error before and after applying different classical processing models, and a robustness analysis is added. The proposed method, on the one hand, decorrelates and compresses the data, providing technical support for real-time uploading of downhole data; on the other hand, it performs minimum-variance unbiased estimation of the data, filters out interference and noise, reduces the reconstruction error, and yields high-resolution, strongly robust data.
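The playback smoothing stage can be illustrated with a scalar random-walk Kalman filter followed by a Rauch-Tung-Striebel (RTS) smoothing pass. This is a minimal sketch on synthetic data, with the noise variances hand-picked rather than tuned by particle swarm optimization as in the project:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "played back" pressure record: a slow drift plus sensor noise.
t = np.linspace(0.0, 1.0, 500)
clean = 10.0 + 2.0 * np.sin(2.0 * np.pi * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

def kalman_rts_smooth(y, q, r):
    """Random-walk Kalman filter followed by an RTS smoothing pass.
    q: process-noise variance, r: measurement-noise variance (the project
    tunes such parameters with PSO; here they are simply hand-picked)."""
    n = y.size
    xf = np.empty(n); pf = np.empty(n)        # filtered mean / variance
    xp = np.empty(n); pp = np.empty(n)        # predicted mean / variance
    x, p = y[0], r
    for k in range(n):
        xp[k], pp[k] = x, p + q               # predict (random walk)
        kgain = pp[k] / (pp[k] + r)           # Kalman gain
        x = xp[k] + kgain * (y[k] - xp[k])    # update with sample k
        p = (1.0 - kgain) * pp[k]
        xf[k], pf[k] = x, p
    xs = xf.copy()
    for k in range(n - 2, -1, -1):            # backward RTS pass
        c = pf[k] / pp[k + 1]
        xs[k] = xf[k] + c * (xs[k + 1] - xp[k + 1])
    return xs

smoothed = kalman_rts_smooth(noisy, q=1e-3, r=0.3 ** 2)
rmse_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
rmse_smooth = np.sqrt(np.mean((smoothed - clean) ** 2))
```

The backward pass is what distinguishes offline smoothing of played-back data from the purely causal downhole filtering: it uses future samples to remove the filter's lag.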
Project description: In this research, we focus on the use of Unmanned Aerial Vehicles (UAVs) for the delivery of payloads and navigation towards safe landing zones, specifically on modeling the flight dynamics of lightweight vehicles denoted Precision Aerial Delivery Systems (PADSs). While a wide range of nonlinear models has been developed and tested in high-end applications considering various degrees of freedom (DOF), linear models suitable for low-cost applications have not been explored thoroughly. In this study, we propose and compare two linear models: a linearized version of a 6-DOF model specifically developed for micro-lightweight systems, and an alternative model based on a double integrator. Both linear models are implemented with a sensor fusion algorithm using a Kalman filter to estimate the position and attitude of PADSs, and their performance is compared to a nonlinear 6-DOF model. Simulation results demonstrate that both models, when incorporated into a Kalman filter estimation scheme, can determine the flight dynamics of PADSs during smooth flights. While the double-integrator model is shown to operate adequately under the proposed estimation scheme for small acceleration changes, the linearized model proves capable of reproducing the nonlinear model characteristics even during moderately steep turns.
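A minimal sketch of the double-integrator model inside a Kalman filter, for a single translational axis with synthetic data (all dynamics and noise values below are hypothetical, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.05

# Double-integrator model for one translational axis of the vehicle:
# state [position, velocity], driven by an (unmeasured) acceleration.
F = np.array([[1.0, dt], [0.0, 1.0]])
G = np.array([0.5 * dt ** 2, dt])

# Simulate a smooth descent with gentle acceleration changes.
n = 400
acc = 0.2 * np.sin(0.02 * np.arange(n))
truth = np.zeros((n, 2))
x = np.array([0.0, 1.0])
for k in range(n):
    x = F @ x + G * acc[k]
    truth[k] = x
meas = truth[:, 0] + 0.5 * rng.standard_normal(n)   # noisy position fixes

# Kalman filter: the unknown acceleration enters only as process noise,
# here budgeted with an assumed bound of 0.5 on its magnitude.
q = (G[:, None] @ G[None, :]) * 0.5 ** 2
xh = np.array([0.0, 0.0])
P = np.eye(2) * 10.0
R = 0.5 ** 2
H = np.array([[1.0, 0.0]])
est = np.zeros((n, 2))
for k in range(n):
    xh = F @ xh                       # predict
    P = F @ P @ F.T + q
    S = H @ P @ H.T + R               # innovation variance
    K = (P @ H.T / S).ravel()         # Kalman gain
    xh = xh + K * (meas[k] - xh[0])   # update with position fix
    P = (np.eye(2) - np.outer(K, H)) @ P
    est[k] = xh

pos_rmse = np.sqrt(np.mean((est[:, 0] - truth[:, 0]) ** 2))
```

This captures the abstract's point: as long as acceleration changes stay small, the double-integrator model tracks well even though it ignores the vehicle's actual flight dynamics.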
Project description: Online denoising is motivated by real-time applications in industrial processes, where the data must be usable soon after they are collected. Since the noise in practical processes is usually colored, it poses quite a challenge for denoising techniques. In this paper, a novel online denoising method is proposed for processing practical measurement data with colored noise, where the characteristics of the colored noise are captured in the dynamic model via an adaptive parameter. The proposed method consists of two parts within a closed loop: the first estimates the system state based on a second-order adaptive statistics model, and the other updates the adaptive parameter in the model using the Yule-Walker algorithm. Specifically, the state estimation is implemented recursively via the Kalman filter, which achieves the online objective. Experimental data from a reinforced concrete structure test were used to verify the effectiveness of the proposed method. Results show that the proposed method not only handles signals with colored noise but also achieves a tradeoff between efficiency and accuracy.
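The Yule-Walker step, estimating the coefficients of a second-order autoregressive model of the colored noise from sample autocovariances, can be sketched as follows (synthetic data; not the paper's full closed-loop implementation):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate colored noise as an AR(2) process:
# x[t] = a1*x[t-1] + a2*x[t-2] + e[t]
a1_true, a2_true = 0.6, 0.2
n = 50000
x = np.zeros(n)
e = rng.standard_normal(n)
for t in range(2, n):
    x[t] = a1_true * x[t - 1] + a2_true * x[t - 2] + e[t]

def yule_walker_ar2(x):
    """Estimate AR(2) coefficients from sample autocovariances by
    solving the Yule-Walker normal equations:
      [[r0, r1], [r1, r0]] @ [a1, a2] = [r1, r2]."""
    x = x - x.mean()
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(3)])
    Rmat = np.array([[r[0], r[1]],
                     [r[1], r[0]]])
    return np.linalg.solve(Rmat, r[1:])

a1_est, a2_est = yule_walker_ar2(x)
```

In the paper's closed loop, an estimate like this would be refreshed online and fed back into the Kalman filter's noise model, so the filter adapts as the color of the noise changes.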
Project description: Modern scientific research produces datasets of increasing size and complexity that require dedicated numerical methods to be processed. In many cases, the analysis of spectroscopic data involves the denoising of raw data before any further processing. Current efficient denoising algorithms require the singular value decomposition of a matrix whose size scales as the square of the data length, preventing their use on very large datasets. Taking advantage of recent progress on random projection and probabilistic algorithms, we developed a simple and efficient method for the denoising of very large datasets. Based on the QR decomposition of a matrix randomly sampled from the data, this approach allows a gain of nearly three orders of magnitude in processing time compared with classical singular value decomposition denoising. This procedure, called urQRd (uncoiled random QR denoising), strongly reduces the computer memory footprint and allows the denoising algorithm to be applied to virtually unlimited data sizes. The efficiency of these numerical tools is demonstrated on experimental data from high-resolution broadband Fourier transform ion cyclotron resonance mass spectrometry, which has applications in proteomics and metabolomics. We show that robust denoising is achieved in 2D spectra whose interpretation is severely impaired by scintillation noise. These denoising procedures can be adapted to many other data analysis domains where the data size and/or the processing time are crucial.
Project description: Objective: To date, many brain-machine interface (BMI) studies have developed decoding algorithms for neuroprostheses that provide users with precise control of upper-arm reaches with some limited grasping capabilities. However, comparatively few have focused on quantifying the performance of precise finger control. Here we expand upon this work by investigating online control of individual finger groups. Approach: We have developed a novel training manipulandum for non-human primate (NHP) studies to isolate the movements of two specific finger groups: the index finger and the middle-ring-pinkie (MRP) fingers. We use this device in combination with the ReFIT (Recalibrated Feedback Intention-Trained) Kalman filter to decode the position of each finger group during a single-degree-of-freedom task in two rhesus macaques with Utah arrays in motor cortex. The ReFIT Kalman filter uses a two-stage training approach that improves online control of upper-arm tasks with substantial reductions in orbiting time, making it a logical first choice for precise finger control. Results: Both animals were able to reliably acquire fingertip targets with both index and MRP fingers, which they did in blocks of finger-group-specific trials. Decoding from motor signals online, the ReFIT Kalman filter reliably outperformed the standard Kalman filter, as measured by bit rate, across all tested finger groups and movements, by 31.0% and 35.2%, respectively. These decoders remained robust when the manipulandum was removed during online control. While index finger movements and middle-ring-pinkie finger movements could be differentiated from each other with 81.7% accuracy across both subjects, the linear Kalman filter was not sufficient for decoding both finger groups together due to significant unwanted movement in the stationary finger, potentially due to co-contraction.
Significance: To our knowledge, this is the first systematic and biomimetic separation of digits for continuous online decoding in a NHP as well as the first demonstration of the ReFIT Kalman filter improving the performance of precise finger decoding. These results suggest that novel nonlinear approaches, apparently not necessary for center out reaches or gross hand motions, may be necessary to achieve independent and precise control of individual fingers.
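The standard Kalman-filter decoder that ReFIT builds on can be sketched on synthetic data: fit the kinematic-transition and neural-observation matrices by least squares, then run the filter online. ReFIT adds a second training pass with intention-recalibrated kinematics, which is omitted here; all signals and dimensions below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic training data: a 2-D finger state (position, velocity) and
# neural features that are a noisy linear function of that state.
n_train, n_test, n_units = 2000, 500, 20
A_true = np.array([[1.0, 0.03], [0.0, 0.97]])
C_true = rng.standard_normal((n_units, 2))

def simulate(n):
    X = np.zeros((n, 2))
    x = np.zeros(2)
    for k in range(n):
        x = A_true @ x + np.array([0.0, 0.1]) * rng.standard_normal()
        X[k] = x
    Y = X @ C_true.T + 0.5 * rng.standard_normal((n, n_units))
    return X, Y

Xtr, Ytr = simulate(n_train)
Xte, Yte = simulate(n_test)

# Fit the decoder's matrices by least squares, as in the standard
# Kalman-filter decoder used in BMI work.
A = np.linalg.lstsq(Xtr[:-1], Xtr[1:], rcond=None)[0].T   # state transition
W = np.cov((Xtr[1:] - Xtr[:-1] @ A.T).T)                  # state noise
C = np.linalg.lstsq(Xtr, Ytr, rcond=None)[0].T            # observation model
Q = np.cov((Ytr - Xtr @ C.T).T)                           # observation noise

# Decode the held-out neural data causally, one step at a time.
x = np.zeros(2); P = np.eye(2)
dec = np.zeros((n_test, 2))
for k in range(n_test):
    x = A @ x
    P = A @ P @ A.T + W
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Q)
    x = x + K @ (Yte[k] - C @ x)
    P = (np.eye(2) - K @ C) @ P
    dec[k] = x

corr = np.corrcoef(dec[:, 0], Xte[:, 0])[0, 1]
```

The ReFIT second stage would re-fit A, C, W, and Q after rotating the training velocities toward the target during an online block, which is where the reported bit-rate gains come from.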
Project description: Simultaneous and proportional myocontrol of dexterous hand prostheses is to a large extent still an open problem. With the advent of commercially and clinically available multi-fingered hand prostheses, there are now more independent degrees of freedom (DOFs) in prostheses than can be effectively controlled using surface electromyography (sEMG), the current standard human-machine interface for hand amputees. In particular, it is uncertain whether several DOFs can be controlled simultaneously and proportionally by exclusively calibrating the intended activation of single DOFs. The problem is currently solved by training on all required combinations. However, as the number of available DOFs grows, this approach becomes prohibitively time-consuming and poses a high cognitive burden on the subject. In this paper we present a novel approach to overcome this problem. Multi-DOF activations are artificially modelled from single-DOF ones using a simple linear combination of sEMG signals, which are then added to the training set. This procedure, which we named LET (Linearly Enhanced Training), provides an augmented data set to any machine-learning-based intent detection system. In two experiments involving intact subjects, one offline and one online, we trained a standard machine learning approach using the full data set containing single- and multi-DOF activations, as well as using the LET-augmented data set, in order to evaluate the performance of the LET procedure. The results indicate that the machine trained on the LET-augmented data set obtains worse results in the offline experiment than the one trained on the full data set. However, the online implementation enables the user to perform multi-DOF tasks with almost the same precision as single-DOF tasks, without the need to explicitly train multi-DOF activations. Moreover, the parameters involved in the system are statistically uniform across subjects.
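The LET idea, synthesizing multi-DOF training samples as linear combinations of recorded single-DOF sEMG activations, can be sketched with a toy linear generative model. All signals here are synthetic, and the ridge regressor is a stand-in for the machine-learning intent detector used in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical single-DOF calibration data: sEMG feature vectors recorded
# while activating DOF 1 alone and DOF 2 alone, with their intent labels.
n, d = 200, 8
map1 = rng.standard_normal(d)    # latent EMG pattern of DOF 1
map2 = rng.standard_normal(d)    # latent EMG pattern of DOF 2
act1 = rng.uniform(0, 1, n)      # proportional activation levels
act2 = rng.uniform(0, 1, n)
emg1 = np.outer(act1, map1) + 0.05 * rng.standard_normal((n, d))
emg2 = np.outer(act2, map2) + 0.05 * rng.standard_normal((n, d))

# LET-style augmentation: model multi-DOF activations as linear
# combinations of the single-DOF sEMG signals and add them to training.
emg_combo = emg1 + emg2
y_combo = np.column_stack([act1, act2])

X = np.vstack([emg1, emg2, emg_combo])
y = np.vstack([np.column_stack([act1, np.zeros(n)]),
               np.column_stack([np.zeros(n), act2]),
               y_combo])

# Train a ridge regressor on the augmented set.
lam = 1e-3
Wmat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Evaluate on genuinely simultaneous activations never recorded.
a1t, a2t = rng.uniform(0, 1, 100), rng.uniform(0, 1, 100)
emg_test = np.outer(a1t, map1) + np.outer(a2t, map2) \
    + 0.05 * rng.standard_normal((100, d))
pred = emg_test @ Wmat
rmse = np.sqrt(np.mean((pred - np.column_stack([a1t, a2t])) ** 2))
```

The augmentation costs nothing at calibration time, which is exactly the point of LET: only single-DOF activations are ever recorded from the subject.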
Project description: A ride control system (RCS) design based on a linear quadratic regulator (LQR) and a genetic algorithm (GA) is presented to reduce the heave, roll, and pitch motion (three-degrees-of-freedom (3 DOF) motion) of wave-piercing catamarans (WPC) in beam waves. A detailed 3 DOF ride control model, which captures the coupling and decoupling relationships between longitudinal and transverse motion, is proposed for the WPC vessel, and the complex hydrodynamic coefficients and disturbances induced by beam waves are analyzed. Moreover, two stern flaps operating in an alternate-flapping mode are designed for the system. In the controller design, an LQR tuned by the GA is adopted to reduce the 3 DOF motion of the ship. Relying on the robust search mechanism and global optimality of the GA, weighting parameters can be obtained to calculate the desired gain. Finally, the motion reduction and motion sickness incidence (MSI) results demonstrate the feasibility and effectiveness of the proposed controller, which also improves the comfort of passengers and crew.
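The LQR part of the design can be sketched by iterating the discrete-time Riccati equation to obtain the feedback gain. The plant below is a hypothetical 2-state surrogate for one coupled motion channel, and the weights that the GA would search over are simply fixed:

```python
import numpy as np

# Toy 2-state surrogate (e.g. heave displacement and its rate); the real
# WPC model has 3 DOF and wave-induced hydrodynamic coupling.
A = np.array([[1.0, 0.1], [-0.05, 0.95]])
B = np.array([[0.0], [0.1]])    # effect of flap deflection (assumed)

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati
    equation P = Q + A'PA - A'PB (R + B'PB)^-1 B'PA."""
    P = Q.copy()
    for _ in range(iters):
        BtPB = R + B.T @ P @ B
        K = np.linalg.solve(BtPB, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# In the paper these weights are found by the GA; here they are fixed.
Q = np.diag([10.0, 1.0])    # penalize displacement most
R = np.array([[1.0]])       # flap-deflection effort
K = dlqr(A, B, Q, R)
eigs = np.linalg.eigvals(A - B @ K)   # closed-loop poles
```

The GA's role in the paper is precisely the step skipped here: searching the Q and R weighting parameters so that the resulting gain minimizes motion and MSI rather than a hand-picked cost.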
Project description: The ultimate goal of machine learning-based myoelectric control is simultaneous and independent control of multiple degrees of freedom (DOFs), including wrist and digit artificial joints. For prosthetic finger control, regression-based methods are typically used to reconstruct position/velocity trajectories from surface electromyogram (EMG) signals. Unfortunately, such methods have thus far met with limited success. In this work, we propose action decoding, a paradigm-shifting approach for independent, multi-digit movement intent prediction based on multi-output, multi-class classification. At each moment in time, our algorithm decodes movement intent for each available DOF into one of three classes: open, close, or stall (i.e., no movement). Despite using a classifier as the decoder, arbitrary hand postures are possible with our approach. We analyse a public dataset previously recorded and published by us, comprising measurements from 10 able-bodied and two transradial amputee participants. We demonstrate the feasibility of using our proposed action decoding paradigm to predict movement action for all five digits as well as rotation of the thumb. We perform a systematic offline analysis by investigating the effect of various algorithmic parameters on decoding performance, such as feature selection and the choice of classification algorithm and multi-output strategy. The outcomes of the offline analysis presented in this study will be used to inform the real-time implementation of our algorithm. In the future, we will further evaluate its efficacy with real-time control experiments involving upper-limb amputees.
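The action-decoding paradigm, one independent 3-class (open/close/stall) decision per DOF at each time step, can be sketched with a simple nearest-centroid classifier on synthetic features (a stand-in for the classification algorithms and multi-output strategies compared in the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

OPEN, CLOSE, STALL = 0, 1, 2
n_dofs, n_feat = 3, 12

# Synthetic EMG-like features: each (DOF, action) pair shifts a shared
# feature vector along its own random direction; STALL adds nothing.
dirs = rng.standard_normal((n_dofs, 2, n_feat))   # directions for open/close

def make_data(n):
    labels = rng.integers(0, 3, size=(n, n_dofs))  # one class per DOF
    X = 0.3 * rng.standard_normal((n, n_feat))
    for d in range(n_dofs):
        for a in (OPEN, CLOSE):
            X[labels[:, d] == a] += dirs[d, a]
    return X, labels

Xtr, ytr = make_data(3000)
Xte, yte = make_data(500)

# Multi-output, multi-class decoding: an independent 3-class
# nearest-centroid classifier per DOF.
centroids = np.stack([
    np.stack([Xtr[ytr[:, d] == c].mean(axis=0) for c in range(3)])
    for d in range(n_dofs)
])                                     # shape (n_dofs, 3, n_feat)

dists = np.linalg.norm(Xte[:, None, None, :] - centroids[None], axis=-1)
pred = dists.argmin(axis=-1)           # shape (n_samples, n_dofs)
acc = (pred == yte).mean()
```

Because every DOF gets its own three-way decision at every time step, arbitrary combinations of digit movements, and hence arbitrary postures, can be expressed even though each decoder is only a classifier.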
Project description: BACKGROUND: Reducing the effects of sequencing errors and PCR artifacts has emerged as an essential component of amplicon-based metagenomic studies. Denoising algorithms have been designed that can reduce error rates in mock community data, but they change the sequence data in a manner that can be inconsistent with the process of removing errors in studies of real communities. In addition, they are limited by the size of the dataset and the sequencing technology used. RESULTS: FlowClus uses a systematic approach to filter and denoise reads efficiently. When denoising real datasets, FlowClus provides feedback about the process that can be used as the basis for adjusting the parameters of the algorithm to suit the particular dataset. When used to analyze a mock community dataset, FlowClus produced a lower error rate compared to other denoising algorithms, while retaining significantly more sequence information. Among its other attributes, FlowClus can analyze longer reads generated by all stages of 454 sequencing technology, as well as by Ion Torrent. It has processed a large dataset of 2.2 million GS-FLX Titanium reads in twelve hours; using its more efficient (but less precise) trie analysis option, this time was further reduced to seven minutes. CONCLUSIONS: Many of the amplicon-based metagenomics datasets generated over the last several years have been processed through a denoising pipeline that likely caused deleterious effects on the raw data. By using FlowClus, one can avoid such negative outcomes while maintaining control over the filtering and denoising processes. Because of its efficiency, FlowClus can be used to re-analyze multiple large datasets together, thereby leading to more standardized conclusions. FlowClus is freely available on GitHub (jsh58/FlowClus); it is written in C and supported on Linux.
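The abundance-based denoising principle common to such tools can be sketched as a greedy re-assignment of rare reads to a much more abundant neighbor within Hamming distance 1. This is a generic illustration only; FlowClus itself operates on 454/Ion Torrent flow values rather than on base calls, and its trie option accelerates the clustering:

```python
from collections import Counter

def hamming1(a, b):
    """True if two equal-length strings differ in at most one position."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) <= 1

def denoise_reads(reads, min_fold=10):
    """Greedy abundance-based denoising: a read is re-assigned to a far
    more abundant read within Hamming distance 1, on the assumption that
    rare near-identical reads are sequencing or PCR errors."""
    counts = Counter(reads)
    centers = sorted(counts, key=counts.get, reverse=True)
    mapping = {}
    for seq in centers:
        for c in centers:
            if counts[c] >= min_fold * counts[seq] and hamming1(seq, c):
                mapping[seq] = c       # absorb the rare variant
                break
        else:
            mapping[seq] = seq         # abundant enough to stand alone
    return [mapping[r] for r in reads]

# Two true sequences plus a rare single-base error of the first one.
reads = ["ACGTACGT"] * 100 + ["ACGTACGA"] * 3 + ["TTGGTTGG"] * 50
cleaned = denoise_reads(reads)
```

The critique in the abstract is aimed at exactly the knobs visible here: the distance criterion and the abundance threshold determine how aggressively reads are altered, which is why FlowClus reports feedback about the process instead of applying fixed defaults blindly.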
Project description: Early marker-based metagenomic studies were performed without properly accounting for the effects of noise (sequencing errors, PCR single-base errors, and PCR chimeras). Denoising algorithms have since been developed, but they were validated using data derived from mock communities, in which the true sequences were known. Since the algorithms were designed to be used in real community studies, it is important to evaluate their results in such cases. With this goal in mind, we processed a real 16S rRNA metagenomic dataset through five denoising pipelines. By reconstituting the sequence reads at each stage of the pipelines, we determined how the reads were being altered. In one denoising pipeline, AmpliconNoise, we found that the algorithm designed to remove pyrosequencing errors changed the reads in a manner inconsistent with the known spectrum of these errors, until one of the parameters was increased substantially from its default value. Additionally, because the longest read was picked as the representative for each cluster, sequences were added to the 3' ends of shorter reads that were often dissimilar from what had been removed by the truncations of the previous filtering step. In QIIME, the denoising algorithm caused a much larger number of changes to the reads unless the parameters were changed from their defaults. The denoising pipeline in mothur avoided some of these negative side effects because of its strict default filtering criteria, but these criteria also greatly limited the sequence information produced at the end of the pipeline. We recommend that those using these denoising pipelines be cognizant of these issues and examine how their reads are being transformed by the denoising process as a component of their analysis.