Reshaping Estimated Human Intention by Robot Moves in Human-Robot Interactions

The methodology and experiments presented here concern the reshaping of human intention through robot movements during Human-Robot Interaction (HRI). Although the estimation of human intentions is a well-established research area in the literature, reshaping intentions through interaction is a new branch of the human-robot interaction field that is beginning to gain significance. We analyze how previously estimated human intentions change based on the human's cooperation with mobile robots in a real human-robot environment. Our approach uses Observable Operator Models (OOMs) and Hidden Markov Models (HMMs) at two levels: the low level tracks individuals and detects their initial intentions, while the high level guides the mobile robots into moves that aim to change the intentions of individuals in the environment. At the low level, the postures and locations of the humans are monitored by applying image processing methods. The high level uses an algorithm that includes learning models to estimate the initial human intention and a decision-making system to reshape the previously estimated intention. The novelty of this work comes not only from the originality of the intention-reshaping concept through robot moves, but also from introducing OOMs into human-robot interaction applications in the literature. The two-level system developed is tested on videos taken from a human-robot environment. The results obtained using the proposed approach are discussed in terms of performance, based on the “degree” of reshaping of the detected intentions.
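To make the low-level estimation step concrete, the following is a minimal sketch of intention estimation with an HMM forward pass. The intention states, posture observations, and all probabilities below are illustrative assumptions, not values from the thesis; the actual system also uses OOMs and image-based tracking, which are not shown here.

```python
# Minimal HMM forward-pass sketch for low-level intention estimation.
# States, observations, and probabilities are hypothetical placeholders.

def forward(obs, states, start_p, trans_p, emit_p):
    """Return the posterior over intention states after the last observation."""
    # Initialize with the first observation
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    # Recursively fold in each subsequent observation
    for o in obs[1:]:
        alpha = {s: emit_p[s][o] * sum(alpha[p] * trans_p[p][s] for p in states)
                 for s in states}
    total = sum(alpha.values())
    return {s: a / total for s, a in alpha.items()}

# Two hypothetical intentions inferred from observed postures/locations.
states = ["approach_robot", "avoid_robot"]
start_p = {"approach_robot": 0.5, "avoid_robot": 0.5}
trans_p = {"approach_robot": {"approach_robot": 0.8, "avoid_robot": 0.2},
           "avoid_robot":    {"approach_robot": 0.3, "avoid_robot": 0.7}}
emit_p = {"approach_robot": {"face_robot": 0.7, "face_away": 0.3},
          "avoid_robot":    {"face_robot": 0.2, "face_away": 0.8}}

posterior = forward(["face_robot", "face_robot", "face_away"],
                    states, start_p, trans_p, emit_p)
```

Re-running this estimate after each robot move is what would let the high level check whether the detected intention has shifted toward the desired one.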


Fluid Inspired Human Body Pose and Hand Gesture Imitation

Imitation learning is one of the forms of social learning that enables human or robot agents to acquire new skills. The knowledge acquired for imitation can basically be represented as an action mapping based on “organ matching”, which determines the correspondence between imitator and imitatee when the imitator and the demonstrator share the same embodiment. In this work, we aim at imitation between two systems with totally different dynamics, where any such correspondence is missing. Towards this aim, we adopt a case where the imitator is a fluidic system whose dynamics differ totally from those of the imitatee: a human performing different body poses and hand gestures. Our work proposes formation control of fluid particles, where the formation results from the imitation of the observed human body poses and hand gestures. The fluidic formation control layer is responsible for assigning the correct fluid parameters to the swarm formation layer according to the body poses and hand gestures adopted by the human performer. The movement of the fluid particles is modeled using Smoothed Particle Hydrodynamics (SPH), a particle-based Lagrangian method for the simulation of fluid flows. The region-based controller first extracts the human body parts and hand regions, i.e., the regions to which the imitatee draws attention, and fits appropriate ellipses to delimit the boundaries of those regions. The ellipse parameters, such as the centers of the ellipses, their eccentricity, and the lengths of the major and minor axes, are used by the fluidic layer to generate the human body poses and hand gestures.
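As an illustration of the SPH machinery mentioned above, the snippet below evaluates the standard 3-D poly6 smoothing kernel and the resulting per-particle density estimate. The particle positions, mass, and support radius are made-up values for demonstration; the thesis's full formation controller is not reproduced here.

```python
import math

def poly6(r, h):
    # Standard 3-D poly6 smoothing kernel; zero outside the support radius h
    if r > h:
        return 0.0
    return 315.0 / (64.0 * math.pi * h ** 9) * (h * h - r * r) ** 3

def density(i, positions, mass, h):
    # SPH density at particle i: sum of kernel-weighted neighbor masses
    xi, yi, zi = positions[i]
    rho = 0.0
    for xj, yj, zj in positions:
        r = math.sqrt((xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2)
        rho += mass * poly6(r, h)
    return rho

# Illustrative three-particle configuration
positions = [(0.0, 0.0, 0.0), (0.2, 0.0, 0.0), (0.0, 0.2, 0.0)]
rho0 = density(0, positions, mass=1.0, h=0.5)
```

In a full SPH simulation, these densities feed pressure and viscosity force terms that drive the particles; here the sketch only shows the kernel-weighted summation that underlies the method.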




Preshaping is an important issue in determining the suitable orientation and landing momentum transfer when impacting an object, so that the grasping task is optimally initiated without energy loss, as a continuum from preshaped impact to grasp. The contact forces and moments generated when the fingers land on the object give the object its initial rotation and translation tendencies through the momentum transferred upon impact. For that reason, the preshape impacting the object creates a pattern of motion tendencies that should be suitable for properly initiating the grasping task. Our objective is to use fluidics to generate this momentum transfer phenomenon within the continuum of a single model (namely fluid dynamics): from the preshaping of a multi-fingered hand, to the approach to an object, to the initial momentum distribution over the object surfaces that prepares the object for motion, to the actual landing of the fingers and the initialization of the task from the momentum transferred through the contacts to the grasped object.
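The rotation and translation tendencies described above can be illustrated with a basic rigid-body calculation: the net linear impulse and the net angular impulse about the center of mass produced by a set of finger contacts. The contact points and impulse vectors below are hypothetical; this is standard mechanics, not the thesis's fluidic model.

```python
# Illustrative 2-D computation of the linear and angular impulse imparted
# to an object by finger contacts at landing. Contact data is made up.

def net_impulse(contacts, com):
    """contacts: list of ((px, py), (jx, jy)) contact point/impulse pairs.
    com: (x, y) center of mass. Returns (linear impulse, angular impulse)."""
    Jx = Jy = L = 0.0
    for (px, py), (jx, jy) in contacts:
        Jx += jx
        Jy += jy
        rx, ry = px - com[0], py - com[1]
        L += rx * jy - ry * jx  # 2-D cross product r x J
    return (Jx, Jy), L

# Two symmetric finger contacts pushing upward on either side of the center
contacts = [((1.0, 0.0), (0.0, 1.0)), ((-1.0, 0.0), (0.0, 1.0))]
(Jx, Jy), L = net_impulse(contacts, com=(0.0, 0.0))
```

Symmetric contacts like these cancel the angular impulse, giving a pure translation tendency; asymmetric landings would instead leave a net angular impulse, i.e. an initial rotation tendency.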

Active Localization of An Agile Target Using Audio-Visually Capable Mobile Agents


In mobile robotics, localization is one of the most essential functions for operating a robot autonomously in an unknown environment. The reliability of localization directly affects the performance of autonomous robot operation, since the robot can carry out its tasks more precisely when it is provided with reliable localization information. Localization can be performed by implementing odometry techniques, but due to mechanical uncertainties, the data provided by odometry may not be sufficiently reliable. At this point, the idea of correcting the location data with respect to the environment comes into play. In this thesis, a real-life localization problem will be proposed, along with theoretical and practical experiments in an attempt to solve this particular problem. Mobile agents with very limited visual capabilities will be expected to track and localize a target that is considerably agile with respect to the mobile agents. The agents will also be equipped with instruments that grant them a relatively more capable auditory ability. By fusing this audio-visual data, the agents will be expected to discover the movement behavior of the target being tracked.
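As a small sketch of the two ingredients above, the code below implements a differential-drive odometry update and a simple inverse-variance fusion of an odometry estimate with an external (e.g. audio-derived) measurement. The wheel geometry, noise variances, and fusion rule are illustrative assumptions, not the thesis's actual method.

```python
import math

def odometry_update(x, y, theta, d_left, d_right, wheel_base):
    # Dead-reckoning pose update for a differential-drive robot
    d = (d_left + d_right) / 2.0            # distance traveled by the center
    dtheta = (d_right - d_left) / wheel_base  # change in heading
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta

def fuse(estimate, measurement, var_est, var_meas):
    # Inverse-variance weighting: the less-noisy source gets more weight
    w = var_meas / (var_est + var_meas)
    return w * estimate + (1.0 - w) * measurement

# Straight-line motion: both wheels advance 1.0 m
x, y, th = odometry_update(0.0, 0.0, 0.0, 1.0, 1.0, wheel_base=0.5)
# Fuse an odometry x-estimate with an equally noisy external fix
fused = fuse(0.0, 10.0, var_est=1.0, var_meas=1.0)
```

With equal variances the fused estimate lands halfway between the two sources; as odometry drift grows, its variance grows and the external audio-visual fix dominates, which is the compensation idea the abstract describes.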

Reshaping a Currently Recognized Human Intention Into a Desired New One by Intelligent Autonomous Moves of Service Robots


In this study, the aim is to detect a human's intention and reshape it into a desired new one. The reshaping is done by autonomous robot moves; the robots will decide how to move or act for this purpose. The decision-making algorithm will start with random state generation. The generated states will be movement sequences or interactions of the robots with certain objects, intended to attract the human's attention or to direct the intention. After the first state generation, the generation of new states will continue, using an elastic network, until the desired intention is reached. The intention reshaping is monitored by detecting the current intention of the human after each robot move.
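The generate-score-repeat loop described above can be sketched as follows. The move vocabulary, the plan length, and the scoring function (a stand-in for the elastic-network step and the intention re-detection) are all hypothetical placeholders.

```python
import random

# Schematic decision loop for intention reshaping: generate candidate robot
# move sequences at random, score how close the resulting detected intention
# is to the desired one, and keep the best plan. The score function here is
# a made-up proxy for re-detecting the human's intention after each move.

def reshape(desired, score, plan_len=5, n_iters=50, seed=0):
    rng = random.Random(seed)
    moves = ["approach", "retreat", "circle", "push_object"]
    best, best_score = None, float("-inf")
    for _ in range(n_iters):
        plan = [rng.choice(moves) for _ in range(plan_len)]  # random state generation
        s = score(plan, desired)
        if s > best_score:
            best, best_score = plan, s
        if best_score >= 1.0:  # detected intention matches the desired one
            break
    return best, best_score

# Toy score: fraction of moves that push toward the desired intention
toy_score = lambda plan, desired: plan.count(desired) / len(plan)
plan, s = reshape("approach", toy_score)
```

In the actual system, the score would come from re-running intention detection on the human after the robots execute the candidate moves, so each iteration is an interaction rather than a pure simulation step.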