The Cognitive Robotics Research group undertakes research into human-robot interaction, autonomous vehicles and bio-inspired robotics. A primary focus of this research is enabling more natural interactions between humans and robots. This theme is driven largely by the changing needs of society: as we become more reliant on robots, we need more intuitive ways of interacting with them.
Accurate robotic replication of human head gestures
The aim of this project is to create a robot that closely resembles a human in both form and behaviour, and that is capable of engaging in convincing communicative interactions with humans.
This is to be achieved by combining the skills of software, hardware and electronics engineers with the observational and creative skills of the artist and the evaluative skills of the psychologist.
A significant feature of this project is that it aims to achieve the levels of sensitivity and subtlety in robotic movements that are necessary for creating believable human-like gestures.
This will be combined with an investigation of the role that aesthetics plays in influencing our emotional engagement when observing the robot.
Bootstrapping Biomimetic Control
This work seeks to provide a class of novel, highly efficient biomimetic learning and exploration schemes to realize control on platforms that elude detailed modelling and simulation.
These include musculo-skeletal arms and heads, autonomous underwater vehicles, quadrupeds, hopping machines, an elephant trunk, and other robots available at the partners' labs.
This is achieved by drawing inspiration from observations of how infants learn motor coordination even before they are able to fully control their limbs.
The key principle is goal directed exploration for direct learning of inverse models.
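As a minimal sketch of this principle (the toy "plant" and the linear inverse model below are invented for illustration, not the project's actual platforms): the learner repeatedly picks a goal in task space, acts using its current inverse estimate plus exploratory noise, observes where the command actually leads, and regresses that observed outcome back onto the executed command — learning the inverse model directly, without ever fitting a forward model.

```python
import math
import random

# Hypothetical forward model: the "plant" we cannot model analytically.
# Maps a motor command q in [-1, 1] to an observed task-space position x.
def forward(q):
    return math.sin(1.2 * q)

# Inverse model: a simple linear map x -> q, refined online.
class InverseModel:
    def __init__(self):
        self.a, self.b = 1.0, 0.0  # initial guess q ~ a*x + b

    def predict(self, x):
        return max(-1.0, min(1.0, self.a * x + self.b))

    def update(self, x, q, lr=0.1):
        # Gradient step on the squared error between the command the
        # model would have issued for outcome x and the command executed.
        err = self.predict(x) - q
        self.a -= lr * err * x
        self.b -= lr * err

inv = InverseModel()
random.seed(0)

# Goal-directed exploration: pick a goal, act with the current inverse
# estimate plus exploratory noise, and learn from the observed outcome.
for _ in range(2000):
    goal = random.uniform(-0.9, 0.9)
    q = inv.predict(goal) + random.gauss(0.0, 0.05)  # exploratory noise
    q = max(-1.0, min(1.0, q))
    x = forward(q)            # observe where the command actually leads
    inv.update(x, q)          # associate outcome x with command q

# After learning, commanding toward a goal should land near it.
for goal in (-0.5, 0.0, 0.5):
    reached = forward(inv.predict(goal))
    print(f"goal={goal:+.2f}  reached={reached:+.2f}")
```

Note that the model is trained on (outcome, command) pairs generated by its own goal-directed actions, so exploration concentrates where the goals lie rather than uniformly over the command space.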
Intelligent Transport Systems
This project combines several research areas, including wheel-torque control, computer vision and gesture recognition, to develop an autonomous vehicle that can interpret and react to visual and audio commands as well as to events in its surroundings.
The use of remote vehicles (e.g. quadcopters) working in tandem with the autonomous vehicle is being developed as a method of mapping terrain, enabling the autonomous vehicle to make quicker and better decisions about the best path to take.
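A simplified illustration of how aerially mapped terrain could feed the vehicle's path decisions (the grid layout below is invented for illustration, and a real system would use a richer planner than this): represent the mapped terrain as an occupancy grid and search it for the shortest traversable route.

```python
from collections import deque

# Hypothetical occupancy grid produced by an aerial mapping pass:
# 0 = traversable terrain, 1 = obstacle.
grid = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def shortest_path(grid, start, goal):
    """Breadth-first search over 4-connected free cells."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            # Reconstruct the route by walking back through parents.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None  # goal unreachable with the current map

path = shortest_path(grid, (0, 0), (4, 4))
print(path)
```

As the quadcopter refines the grid, the planner can simply be re-run on the updated map, which is what allows the ground vehicle to revise its route before encountering an obstacle itself.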
Multi Sensor Fusion for Simultaneous Localization and Mapping on Autonomous Vehicles
Many different sensors are nowadays available on autonomous vehicles, yet the full potential of techniques that integrate information from these different sensors remains untapped. Such integration could improve the ability of autonomous vehicles to avoid accidents and, more generally, increase their safety.
Navigation techniques for static environments based on the fusion of multiple sensors are well established, but it is not clear how such methods cope with more realistic, dynamic environments that may include people, other moving vehicles, or changing environmental features. This problem is known as Simultaneous Localization And Mapping with Moving Objects Tracking (SLAMMOT).
Although several approaches have been proposed in the literature, so far none appears able to exploit the availability of multiple heterogeneous sensors. The aim of this project is therefore to investigate solutions that combine the (potentially conflicting) information from an array of heterogeneous sensors in order to accurately localize the vehicle and estimate the (evolving) configuration of the surrounding dynamic environment.
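As a much-simplified, hypothetical sketch of the fusion idea (real SLAMMOT systems involve far richer state, map estimation and data association): a one-dimensional Kalman filter fusing position measurements from two sensors with different noise levels, e.g. a precise lidar and a noisier radar, weighting each by its measurement variance.

```python
import random

random.seed(1)

# One-dimensional constant-velocity vehicle; two hypothetical sensors
# measure its position with different noise levels.
dt = 0.1
true_pos, true_vel = 0.0, 1.0

# State estimate [pos, vel] and its 2x2 covariance.
x = [0.0, 0.0]
P = [[1.0, 0.0], [0.0, 1.0]]
q = 1e-3                                  # process noise intensity
R = {"lidar": 0.05**2, "radar": 0.3**2}   # per-sensor measurement variance

def predict():
    """Propagate state and covariance through the motion model."""
    global x, P
    x = [x[0] + dt * x[1], x[1]]
    # P = F P F^T + Q for F = [[1, dt], [0, 1]]
    p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q
    P = [[p00, p01], [p10, p11]]

def update(z, r):
    """Fuse one position measurement z with variance r (H = [1, 0])."""
    global x, P
    s = P[0][0] + r                       # innovation variance
    k0, k1 = P[0][0] / s, P[1][0] / s     # Kalman gain
    y = z - x[0]                          # innovation
    x = [x[0] + k0 * y, x[1] + k1 * y]
    P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
         [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]

for step in range(200):
    true_pos += dt * true_vel
    predict()
    # Each cycle fuses whichever sensors reported; here both always do.
    for name, r in R.items():
        z = true_pos + random.gauss(0.0, r ** 0.5)
        update(z, r)

print(f"true={true_pos:.2f}  fused estimate={x[0]:.2f}")
```

The gain computation shows why heterogeneous sensors help: a measurement with small variance r pulls the estimate strongly, while a noisy one contributes only a small correction, so adding a second, noisier sensor can only tighten the estimate, never degrade it.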