Sensor Localization from Distance and Orientation Constraints.
ABSTRACT: The sensor localization problem can be formalized using distance and orientation constraints, typically in 3D. Local methods can be used to refine an initial location estimate, but in many cases such an estimate is not available and a method able to determine all the feasible solutions from scratch is necessary. Unfortunately, existing methods able to find all the solutions in distance space cannot take orientations into account, or they can only deal with one- or two-dimensional problems and their extension to 3D is troublesome. This paper presents a method that addresses these issues. The proposed approach iteratively projects the problem to decrease its dimension, then reduces the ranges of the variable distances, and back-projects the result to the original dimension to obtain a tighter approximation of the feasible sensor locations. This paper extends previous work by introducing accurate range-reduction procedures that effectively integrate the orientation constraints. The mutual localization of a fleet of robots carrying sensors and the position analysis of a sensor moved by a parallel manipulator are used to validate the approach.
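The project-reduce-backproject idea can be illustrated with a minimal interval-pruning sketch: distance bounds to known anchor positions are propagated axis by axis to shrink an axis-aligned box of candidate sensor locations. This is a hedged simplification under assumed data (only the outer distance bounds are propagated, and orientation constraints are omitted); it is not the paper's actual algorithm.

```python
import math

def interval_sq(lo, hi, a):
    """Range of (x - a)^2 over x in [lo, hi]."""
    ends = [(lo - a) ** 2, (hi - a) ** 2]
    mn = 0.0 if lo <= a <= hi else min(ends)
    return mn, max(ends)

def prune_box(box, anchors, dist_bounds, iters=30):
    """Tighten an axis-aligned box of candidate sensor positions.

    box: three (lo, hi) intervals; anchors: known 3D points;
    dist_bounds: (dmin, dmax) per anchor.  Only the outer (dmax) bound
    is propagated here; using the inner bound would split the box.
    """
    box = [list(iv) for iv in box]
    for _ in range(iters):
        for anchor, (dmin, dmax) in zip(anchors, dist_bounds):
            for k in range(3):
                # smallest squared-distance contribution of the other axes
                s_min = sum(interval_sq(box[j][0], box[j][1], anchor[j])[0]
                            for j in range(3) if j != k)
                room = dmax ** 2 - s_min
                if room < 0:
                    raise ValueError("box infeasible for the given bounds")
                r = math.sqrt(room)
                # intersect with the slab |x_k - anchor_k| <= r
                box[k][0] = max(box[k][0], anchor[k] - r)
                box[k][1] = min(box[k][1], anchor[k] + r)
    return [tuple(iv) for iv in box]
```

Because the pruning only discards points violating an outer bound, the true sensor location always stays inside the returned box.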
Project description: Research in autonomous underwater manipulation has so far demonstrated simple applications such as picking an object from the sea floor, turning a valve, or plugging and unplugging a connector. These are fairly simple tasks compared with those already demonstrated by the mobile robotics community, which include, among others, safe arm motion within areas populated with a priori unknown obstacles and the recognition and localization of objects based on their 3D models in order to grasp them. Kinect-like 3D sensors have contributed significantly to the advance of mobile manipulation by providing 3D sensing capabilities in real time at low cost. Unfortunately, the underwater robotics community lacks a 3D sensor with similar capabilities to provide rich 3D information of the workspace. In this paper, we present a new underwater 3D laser scanner and demonstrate its capabilities for underwater manipulation. In order to use this sensor in conjunction with manipulators, a calibration method to find the relative position between the manipulator and the 3D laser scanner is presented. Then, two different advanced underwater manipulation tasks beyond the state of the art are demonstrated using two different manipulation systems. First, an eight Degrees of Freedom (DoF) fixed-base manipulator system is used to demonstrate arm motion within a workspace populated with a priori unknown fixed obstacles. Next, an eight DoF free-floating Underwater Vehicle-Manipulator System (UVMS) is used to autonomously grasp an object from the bottom of a water tank.
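A core building block of scanner-to-manipulator calibration of this kind is recovering the rigid transform between two sets of corresponding 3D points, e.g., fiducials seen by the scanner and the same points expressed in the manipulator frame. The sketch below shows the classic Kabsch/Horn least-squares solution; it is an illustrative component under assumed correspondences, not the paper's full calibration procedure.

```python
import numpy as np

def rigid_transform(P, Q):
    """Best-fit rotation R and translation t such that Q ~ R @ P + t
    (Kabsch/Horn, least squares over corresponding 3D points)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection in the SVD solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

In practice the correspondences would come from a calibration target observed by both the laser scanner and the manipulator's end effector.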
Project description: Effective feedback control requires information about all state variables of the system. However, in the translational flexible-link manipulator (TFM) system, it is unrealistic to measure the vibration signals and their time derivatives at every point of the TFM, since this would require infinitely many sensors. Taking into account the rigid-flexible coupling between the global motion of the rigid base and the elastic vibration of the flexible link, a two-time-scale virtual sensor, comprising a speed observer and a vibration observer, is designed to estimate the vibration signals and their time derivatives for the TFM. The speed observer and the vibration observer are designed separately for the slow and fast subsystems, which are decomposed from the dynamic model of the TFM by singular perturbation. Additionally, based on linear-quadratic differential games, the observer gains of the two-time-scale virtual sensor are optimized to minimize the estimation error while keeping the observers stable. Finally, numerical calculations and experiments verify the efficiency of the designed two-time-scale virtual sensor.
Project description: Autonomous grasping with an aerial manipulator for aerial transportation and manipulation remains a challenging problem because of the complex kinematics/dynamics and motion constraints of the coupled rotors-manipulator system. This paper develops a novel aerial manipulation system comprising a lightweight manipulator, an X8 coaxial octocopter, and an onboard visual tracking system. To implement autonomous grasping control, we develop a novel and efficient approach that includes trajectory planning, visual trajectory tracking, and kinematic compensation. Trajectory planning for aerial grasping is formulated as a multi-objective optimization problem in which motion constraints and collision avoidance are taken into account, and a genetic algorithm is applied to obtain the optimal solution. A kinematic-compensation-based visual trajectory tracking scheme is introduced to address the coupling between the manipulator and the octocopter, with the advantage of avoiding complex dynamic parameter calibration. Finally, several experiments are performed to verify the effectiveness of the proposed approach.
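The planning step can be sketched as a genetic algorithm minimizing a scalarized cost with a penalty for constraint violation. The toy objective below (grasp target, control-effort proxy, and obstacle disc) and all parameters are hypothetical stand-ins for the paper's multi-objective formulation.

```python
import random

def ga_minimize(cost, dim, lo, hi, pop=60, gens=120, seed=0):
    """Minimal real-coded genetic algorithm: tournament selection,
    blend crossover, clamped Gaussian mutation."""
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(P, key=cost)
    for _ in range(gens):
        Q = []
        while len(Q) < pop:
            a = min(rng.sample(P, 3), key=cost)   # tournament selection
            b = min(rng.sample(P, 3), key=cost)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            if rng.random() < 0.2:                # Gaussian mutation, kept in bounds
                i = rng.randrange(dim)
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.3)))
            Q.append(child)
        P = Q
        cand = min(P, key=cost)
        if cost(cand) < cost(best):
            best = cand
    return best

def grasp_cost(p):
    """Toy scalarized objective: reach (2, 0) with low effort while
    staying outside an obstacle disc at (1.8, 0) of radius 0.4."""
    reach = (p[0] - 2.0) ** 2 + p[1] ** 2
    effort = 0.1 * (p[0] ** 2 + p[1] ** 2)
    d2 = (p[0] - 1.8) ** 2 + p[1] ** 2
    return reach + effort + 1e3 * max(0.0, 0.4 ** 2 - d2)

best = ga_minimize(grasp_cost, dim=2, lo=-3.0, hi=3.0)
```

The large penalty weight drives the population toward feasible, collision-free solutions, mirroring how constraints are folded into the fitness in the abstract's formulation.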
Project description: Aircraft pose estimation is a necessary technology in aerospace applications, and accurate pose parameters are the foundation for many aerospace tasks. In this paper, we propose a novel pose estimation method for straight-wing aircraft that does not rely on 3D models or other datasets; two widely separated cameras are used to acquire the pose information. Because of the large baseline and long-distance imaging, feature point matching is difficult and inaccurate in this configuration. In our method, line features are extracted to describe the structure of straight-wing aircraft in images, and pose estimation is performed based on the common geometric constraints of straight-wing aircraft. The spatial and length consistency of the line features is used to exclude irrelevant line segments belonging to the background or other parts of the aircraft, and density-based parallel line clustering is utilized to extract the aircraft's main structure. After identifying the orientation of the fuselage and wings in the images, plane intersection is used to estimate the 3D localization and attitude of the aircraft. Experimental results show that our method estimates the aircraft pose accurately and robustly.
Project description: This paper presents a human-robot interface system that incorporates a particle filter (PF) and adaptive multispace transformation (AMT) to track the pose of the human hand for controlling a robot manipulator. The system employs a 3D camera (Kinect) to determine the orientation and translation of the human hand, using the Camshift algorithm to track the hand. A PF is used to estimate the translation of the human hand. Although the PF estimates the translation, the translation error grows quickly when the sensors fail to detect the hand motion, so a methodology to correct the translation error is required. Moreover, owing to perceptual and motor limitations, it is difficult for a human operator to carry out high-precision operations. This paper therefore proposes an AMT method to help the operator improve the accuracy and reliability of determining the pose of the robot. The human-robot interface system was experimentally tested in a lab environment, and the results indicate that such a system can successfully control a robot manipulator.
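The translation-tracking step can be sketched as a bootstrap particle filter with a random-walk motion model. The 1D version below, with hypothetical noise parameters, is an illustrative stand-in for the 3D hand-translation tracker described above.

```python
import math
import random

def particle_filter(measurements, n=500, motion_std=0.2, meas_std=0.5, seed=1):
    """Bootstrap particle filter for a scalar translation under a
    random-walk motion model; returns the per-step posterior means."""
    rng = random.Random(seed)
    parts = [measurements[0] + rng.gauss(0.0, meas_std) for _ in range(n)]
    estimates = []
    for z in measurements:
        # predict: diffuse particles with the motion model
        parts = [p + rng.gauss(0.0, motion_std) for p in parts]
        # update: weight particles by the Gaussian measurement likelihood
        w = [math.exp(-0.5 * ((z - p) / meas_std) ** 2) for p in parts]
        total = sum(w)
        w = [wi / total for wi in w]
        estimates.append(sum(p * wi for p, wi in zip(parts, w)))
        # systematic resampling to avoid weight degeneracy
        step = 1.0 / n
        u = rng.random() * step
        i, c, new_parts = 0, w[0], []
        for _ in range(n):
            while u > c and i < n - 1:
                i += 1
                c += w[i]
            new_parts.append(parts[i])
            u += step
        parts = new_parts
    return estimates
```

Extending this to the 3D case means carrying a three-component state per particle; the predict/weight/resample cycle is unchanged.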
Project description: By incorporating physical constraints in joint space, a different-level simultaneous minimization scheme, which takes both robot kinematics and robot dynamics into account, is presented and investigated for fault-tolerant motion planning of a redundant manipulator. The scheme is reformulated as a quadratic program (QP) with equality and bound constraints, which is then solved by a discrete-time recurrent neural network. Simulations based on a six-link planar redundant manipulator substantiate the efficacy and accuracy of the presented acceleration fault-tolerant scheme, the resultant QP, and the corresponding discrete-time recurrent neural network.
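A QP with equality and bound constraints of the kind described can be solved by a discrete-time projected primal-dual iteration, which is the flavor of dynamics that recurrent-neural-network QP solvers implement. The sketch below is a minimal generic version with assumed step size, not the specific network used in the paper.

```python
def qp_neural(Q, c, A, b, lo, hi, eta=0.05, iters=20000):
    """Discrete-time projected primal-dual iteration for
        minimize 0.5 x'Qx + c'x   s.t.   Ax = b,  lo <= x <= hi.
    The box projection plays the role of the network's saturation."""
    n, m = len(c), len(b)
    x, lam = [0.0] * n, [0.0] * m
    for _ in range(iters):
        # gradient of the Lagrangian with respect to x
        g = [sum(Q[i][j] * x[j] for j in range(n)) + c[i]
             + sum(A[k][i] * lam[k] for k in range(m)) for i in range(n)]
        # projected state update (projection enforces the bound constraints)
        x_new = [min(hi[i], max(lo[i], x[i] - eta * g[i])) for i in range(n)]
        # dual update driven by the equality-constraint residual
        lam = [lam[k] + eta * (sum(A[k][j] * x[j] for j in range(n)) - b[k])
               for k in range(m)]
        x = x_new
    return x
```

For strictly convex Q and a small enough step size, the iteration settles at the KKT point of the QP.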
Project description: Studying the human visual system with high temporal resolution is a significant challenge due to the limitations of the available noninvasive measurement tools. MEG and EEG provide the millisecond temporal resolution necessary for answering questions about intracortical communication involved in visual processing, but source estimation is ill-posed and unreliable when multiple, simultaneously active areas are located close together. To address this problem, we have developed a retinotopy-constrained source estimation method to calculate the time courses of activation in multiple visual areas. Source estimation was disambiguated by: (1) fixing MEG/EEG generator locations and orientations based on fMRI retinotopy and surface tessellations constructed from high-resolution MRI images; and (2) solving for many visual field locations simultaneously in MEG/EEG responses, assuming source current amplitudes to be constant or varying smoothly across the visual field. Because of these constraints on the solutions, the estimated source waveforms become less sensitive to sensor noise or random errors in the specification of the retinotopic dipole models. We demonstrate the feasibility of this method and discuss future applications such as studying the timing of attentional modulation in individual visual areas.
Project description: In this article, we present a novel stochastic algorithm called <i>simultaneous sensor calibration and deformation estimation</i> (SCADE) to address the problem of modeling the deformation behavior of a generic continuum manipulator (CM) in free and obstructed environments. In SCADE, using a novel mathematical formulation, we introduce an <i>a priori</i> model-independent filtering algorithm to fuse the continuous but inaccurate measurements of an embedded sensor (e.g., magnetic or piezoelectric sensors) with the intermittent but accurate data of an external imaging system (e.g., optical trackers or cameras). The main motivation of this article is the crucial need for an accurate shape/position estimate of a CM used in a surgical intervention. In these robotic procedures, the CM is typically equipped with an embedded sensing unit (ESU) while an external imaging modality (e.g., an ultrasound or fluoroscopy machine) is also available at the surgical site. The results of two different sets of prior experiments in free and obstructed environments were used to evaluate the efficacy of the SCADE algorithm. The experiments were performed with a CM specifically designed for orthopaedic interventions, equipped with an inaccurate Fiber Bragg Grating (FBG) ESU and an overhead camera. The results demonstrated the successful performance of the SCADE algorithm in simultaneously estimating the unknown deformation behavior of the unmodeled CM and the time-varying drift of the poorly calibrated FBG sensing unit. Moreover, the SCADE algorithm substantially outperformed FBG-based estimation of the CM's tip position.
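The continuous/intermittent fusion idea can be illustrated with a toy 1D corrector: a continuously drifting embedded-sensor stream is re-anchored whenever an accurate external measurement arrives, by updating a bias estimate. This is a deliberately simplified stand-in with hypothetical signals, not SCADE's filter.

```python
def drift_corrected(embedded, external, gain=0.5):
    """Fuse a continuous but drifting embedded-sensor stream with
    intermittent accurate external measurements (None when absent)."""
    bias = 0.0
    fused = []
    for z, ext in zip(embedded, external):
        if ext is not None:
            # external fix available: re-estimate the embedded-sensor bias
            bias += gain * ((z - ext) - bias)
        fused.append(z - bias)
    return fused
```

Between external fixes the output inherits only the drift accumulated since the last fix, rather than the full drift since the start.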
Project description: The use of a robotic arm manipulator as a platform for coincident radiation mapping and laser profiling of radioactive sources on a flat surface is investigated in this work. A combined scanning head, integrating a micro-gamma spectrometer and a Time of Flight (ToF) sensor, was moved in a raster scan pattern across the surface, autonomously undertaken by the robot arm over a 600 × 260 mm survey area. A series of radioactive sources of different emission intensities were scanned in different configurations to test the accuracy and sensitivity of the system. We demonstrate that in each test configuration the system was able to generate a centimeter-accurate 3D model complete with an overlaid radiation map detailing the emitted radiation intensity and the corrected surface dose rate.
Project description: Over the last decade, smart sensors have grown in complexity and can now handle multiple measurement sources. This work establishes a methodology for obtaining better estimates of physical values by processing raw measurements within a sensor using multi-physical models and Kalman filters for data fusion. With production cost and power consumption as driving constraints, the methodology focuses on algorithmic complexity while meeting real-time constraints and improving both precision and reliability despite the limitations of low-power processors. Consequently, the processing time available for other tasks is maximized. The well-known problem of estimating a 2D orientation using an inertial measurement unit with automatic gyroscope bias compensation is used to illustrate the proposed methodology on a low-power STM32L053 microcontroller. This application shows promising results, with a processing time of 1.18 ms at 32 MHz and a 3.8% CPU usage for computation at a 26 Hz measurement and estimation rate.
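The orientation/bias estimation problem mentioned above is commonly handled with a two-state Kalman filter over [tilt angle, gyroscope bias], fed by a bias-corrupted gyro rate and an accelerometer-derived angle. The sketch below uses illustrative noise parameters and a simulated sensor, not the paper's tuning or embedded implementation.

```python
import math

class TiltKF:
    """Two-state Kalman filter: x = [tilt angle, gyro bias]."""
    def __init__(self, q_angle=1e-3, q_bias=1e-5, r_meas=1e-2):
        self.x = [0.0, 0.0]                      # angle, bias
        self.P = [[1.0, 0.0], [0.0, 1.0]]
        self.q_angle, self.q_bias, self.r = q_angle, q_bias, r_meas

    def step(self, gyro_rate, accel_angle, dt):
        a, b = self.x
        # predict: integrate the bias-corrected gyro rate
        a += dt * (gyro_rate - b)
        P = self.P
        P00 = P[0][0] + dt * (dt * P[1][1] - P[0][1] - P[1][0]) + self.q_angle
        P01 = P[0][1] - dt * P[1][1]
        P10 = P[1][0] - dt * P[1][1]
        P11 = P[1][1] + self.q_bias
        # update with the accelerometer-derived angle
        S = P00 + self.r
        K0, K1 = P00 / S, P10 / S
        y = accel_angle - a
        a += K0 * y
        b += K1 * y
        self.x = [a, b]
        self.P = [[P00 - K0 * P00, P01 - K0 * P01],
                  [P10 - K1 * P00, P11 - K1 * P01]]
        return a, b

# Demo on simulated data: slow sinusoidal tilt, constant 0.05 rad/s gyro bias
kf = TiltKF()
dt, true_bias = 0.01, 0.05
for k in range(2000):
    t = k * dt
    gyro = 0.25 * math.cos(0.5 * t) + true_bias   # true rate plus bias
    accel_angle = 0.5 * math.sin(0.5 * (t + dt))  # tilt seen by accelerometer
    angle, bias = kf.step(gyro, accel_angle, dt)
```

The bias state converges to the gyroscope's drift, which is the automatic bias compensation the abstract refers to.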