Relationships: Robots and Humans. Hallucinating Humans for Learning Object Affordances

Robot Learning Lab, Computer Science Department, Cornell University.

We bear in mind that the object being worked on is going to be ridden in, sat upon, looked at, talked into, activated, operated, or in some other way used by people individually or en masse. – Dreyfuss (1955).

In fact, humans are the primary reason our environments exist at all. In this work, we show that modeling human-object relationships (i.e., object affordances) gives us a more compact way of modeling contextual relationships than modeling object-object relationships.

One key aspect of our work is that humans may not even be seen by our algorithm! Given a large dataset containing only objects, our method treats human poses as a latent variable and uses infinite topic models to learn the relationships. We apply this to the robotic task of arranging a cluttered house.
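As a rough illustration of the idea (not the paper's actual infinite topic model), here is a minimal Python sketch: sample hypothetical human poses in a scene and score candidate object placements by how well they afford use from one of those latent poses. The pose sampler, the affordance score, and all names below are hypothetical stand-ins.

```python
import random

# Toy sketch of "hallucinated humans": score object placements against
# sampled latent human poses instead of against other objects.

def sample_human_poses(scene_bounds, n=50):
    """Sample latent human positions uniformly within the scene."""
    (x0, y0), (x1, y1) = scene_bounds
    return [(random.uniform(x0, x1), random.uniform(y0, y1)) for _ in range(n)]

def affordance_score(obj_xy, pose_xy, preferred_dist=0.5):
    """Toy affordance: objects prefer a usable distance from a human."""
    dx, dy = obj_xy[0] - pose_xy[0], obj_xy[1] - pose_xy[1]
    dist = (dx * dx + dy * dy) ** 0.5
    return -abs(dist - preferred_dist)  # highest at the preferred distance

def best_placement(candidates, scene_bounds):
    """Pick the candidate placement best explained by some latent pose."""
    poses = sample_human_poses(scene_bounds)
    return max(candidates, key=lambda c: max(affordance_score(c, p) for p in poses))

print(best_placement([(0.2, 0.2), (2.0, 1.0)], ((0, 0), (3, 2))))
```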

Publications

  1. Infinite Latent Conditional Random Fields for Modeling Environments through Humans, Yun Jiang, Ashutosh Saxena. In Robotics: Science and Systems (RSS), 2013. [PDF] [Supplementary material]
  2. Hallucinated Humans as the Hidden Context for Labeling 3D Scenes, Yun Jiang, Hema S. Koppula, Ashutosh Saxena. In Computer Vision and Pattern Recognition (CVPR), 2013 (oral). [PDF]
  3. Learning Object Arrangements in 3D Scenes using Human Context, Yun Jiang, Marcus Lim, Ashutosh Saxena. In International Conference on Machine Learning (ICML), June 2012. [PDF]
  4. Hallucinating Humans for Learning Robotic Placement of Objects, Yun Jiang, Ashutosh Saxena. In International Symposium on Experimental Robotics (ISER), May 2012. [PDF]

Data and Code

Download data and code and find more details here.

Related Project

Placing and Arranging Objects (robot manipulation).

People

Yun Jiang yunjiang at cs.cornell.edu

Marcus Lim

Ashutosh Saxena asaxena at cs.cornell.edu

Personal Robotics team

Sponsors

Microsoft Faculty Fellowship, 2012.

Relationships: Robots and Humans. iCub

Robots can develop basic language skills through interaction with a human, according to new results from University of Hertfordshire researchers.

Learning basic language

Dr Caroline Lyon, Professor Chrystopher Nehaniv and Dr Joe Saunders have carried out experiments as part of the iTalk project with the childlike iCub humanoid robot to show how language learning emerges.

Initially the robot can only babble and perceives speech as a string of sounds, not divided up into words. After engaging in a few minutes of “conversation” with humans, in which the participants were instructed to speak to the robot as if it were a small child, the robot adapted its output to the most frequently heard syllables to produce some word forms such as the names of simple shapes and colours.
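A minimal sketch of that frequency mechanism (not the iTalk implementation): count the word forms the robot hears and bias its babbling toward the most frequent ones. The toy transcript and function names below are illustrative assumptions.

```python
from collections import Counter
import random

# Illustrative frequency-driven word-form learning: the "robot" counts
# the forms it hears and babbles them in proportion to their frequency.

heard = "red square red circle blue square red triangle".split()

counts = Counter(heard)                 # word-form frequencies from "conversation"
forms, weights = zip(*counts.items())

def babble(n=5):
    """Emit n word forms, weighted by how often each was heard."""
    return random.choices(forms, weights=weights, k=n)

print(babble())   # e.g. ['red', 'square', 'red', 'red', 'circle']
```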

Methodology

Dr Caroline Lyon said: ‘It is known that infants are sensitive to the frequency of sounds in speech, and these experiments show how this sensitivity can be modelled and contribute to the learning of word forms by a robot.’

The iTalk project teaches the robot to speak using methods similar to those used to teach children; this human-robot interaction is a key part of the learning process. Although the iCub robot is learning to produce word forms, it does not know their meaning, and learning meanings is another part of the iTalk project's research.

Find out more

These scientific and technological advances could have a significant impact on the future generation of interactive robotic systems.

Read the research paper ‘Interactive language learning by robots: the transition from babbling to word forms’.

Relationships: Robots and Humans. ROILA

ROILA, the Robot Interaction Language, is a spoken language for robots. It is constructed to be easy for humans to learn, but also easy for robots to understand. ROILA is optimized for the robots' automatic speech recognition and understanding.

The number of robots in our society is increasing rapidly. Service robots that interact with everyday people already outnumber industrial robots. The easiest way to communicate with these service robots, such as Roomba or Nao, would be natural speech. But current speech recognition technology has not yet reached a level at which it would be easy to use. Often robots misunderstand words or are not able to make sense of them. Some researchers argue that speech recognition will never reach the level of humans.

Palm Inc. faced a similar problem with handwriting recognition for their handheld computers. They invented Graffiti, an artificial alphabet that was easy to learn and easy for the computer to recognize. ROILA takes a similar approach by offering an artificial language that is easy for humans to learn and easy for robots to understand. An artificial language, as defined by the Oxford Encyclopedia, is a language deliberately invented or constructed, especially as a means of communication in computing or information technology.

We reviewed the most successful artificial and natural languages along the dimensions of morphology and phonology (see the overview in the form of a large table) and composed a language that is extremely easy to learn. The simple grammar has no irregularities, and the words are composed of phonemes that are shared by the majority of natural languages; this set of major phonemes was generated from the overview of natural languages. Moreover, we designed a genetic algorithm that generates ROILA's words so that they are easy to pronounce. The same algorithm ensures that the words in the dictionary sound as different from each other as possible, which helps the speech recognizer accurately understand the human speaker.
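To make the idea concrete, here is a toy sketch of such word generation (not the project's actual algorithm): build consonant-vowel words from a widely shared phoneme set, then evolve a vocabulary whose words are maximally distinct. The phoneme set, distance measure, and GA parameters are all illustrative assumptions.

```python
import random

# Toy ROILA-style word generation: evolve a small lexicon of CV words
# toward maximal pairwise distinctness, so a recognizer confuses them less.

CONSONANTS = list("pbtkmnlsjfw")   # illustrative cross-language consonants
VOWELS = list("aeiou")

def random_word(syllables=2):
    return "".join(random.choice(CONSONANTS) + random.choice(VOWELS)
                   for _ in range(syllables))

def distance(a, b):
    """Crude distinctness: number of differing character positions."""
    return sum(x != y for x, y in zip(a, b))

def fitness(vocab):
    """Total pairwise distance; higher means a less confusable lexicon."""
    return sum(distance(a, b) for i, a in enumerate(vocab) for b in vocab[i + 1:])

def evolve(size=10, generations=200):
    vocab = [random_word() for _ in range(size)]
    for _ in range(generations):
        mutant = vocab[:]
        mutant[random.randrange(size)] = random_word()  # mutate one word
        if fitness(mutant) > fitness(vocab):            # keep improvements
            vocab = mutant
    return vocab

print(evolve())   # e.g. ['bamu', 'kilo', 'wesa', ...]
```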

Most previously developed artificial languages have not been able to attract many human speakers, with the exception of Esperanto. However, with the rise of robots a new community is forming on our planet, and there is no reason why robots should not have their own language. Soon there will be millions of robots to which you can talk in the ROILA language. In summary, we aim to design a Robot Interaction Language that addresses the problems associated with speech interaction using natural languages. Our language is constructed on the basis of two important goals: first, it should be learnable by the user, and second, it should be optimized for efficient recognition by a robot.

ROILA is free to use for everybody and we offer all the technical tools and manuals to make your robot understand and speak ROILA. At the same time we offer courses for humans to learn the ROILA language.

Automated Installer and Java Library, March 17, 2013. Development versions of the Automated Installer and Java Library are available on GitHub. Please report any issues to Josh at 17019428@student.uws.edu.au. The automated installer and Java library are designed to make it easier to work with ROILA. They are still a work in progress, so some features will not work fully (especially in the library). GitHub will be updated with improved versions in the coming months, including a fix, coming shortly, for a bug in downloading the pre-compiled library. More features are planned, so keep an eye on GitHub.

We are currently developing courses in ROILA; they are available in our ROILA Academy. We are also giving an introductory ROILA course to Dutch high-school students at the Huygens College Eindhoven. The short course consists of three lessons followed by a ROILA final exam, and is part of their science curriculum. The homework curriculum for the course is uploaded in the ROILA Academy and also on an external website. The vocabulary for the course is uploaded here; you can also find a similar dictionary in the ROILA Academy. We will post videos and PowerPoint PDFs of each lesson given at the school. We have removed the parts of the videos that were only relevant to the students (such as course administration).

Lesson 1

Lesson 1 Powerpoint PDF

Homework requirement: Topics 1, 2, 3, 4

Lesson 1 – November 15, 2010

Lesson 1 – November 19, 2010

Lesson 2

Lesson 2 Powerpoint PDF

Homework requirement: Topics 5, 6, 7

Lesson 2 – November 22, 2010

http://youtu.be/RkCUcWtyk_g

Publications

We have published several articles about ROILA.

Relationships: Robots and Humans. Socially Assistive Robotics (SAR)

Convalescence, rehabilitation, and management of life-long cognitive, social, and physical disorders require ongoing behavioral therapy, consisting of physical and/or cognitive exercises that must be sustained at the appropriate frequency and correctness. In all cases, the intensity of practice and self-efficacy have been shown to be the keys to recovery and minimization of disability. However, because of the fast-growing demographic trends of many of the affected populations (e.g., autism, ADHD, stroke, TBI, etc., as discussed in Section 1.2), the health care capacity needed to provide supervision and coaching for such behavioral therapy is already lacking and in recognized steady decline.

Socially assistive robotics (SAR) is a comparatively new field of robotics that focuses on developing robots aimed at addressing precisely this growing need. SAR is developing systems capable of assisting users through social rather than physical interaction. The robot's physical embodiment is at the heart of SAR's assistive effectiveness, as it leverages the inherently human tendency to engage with lifelike (but not necessarily human-like or animal-like) social behavior. People readily ascribe intention, personality, and emotion to even the simplest robots, from LEGO toys to iRobot Roomba vacuum cleaners. SAR uses this engagement toward the development of socially interactive robots capable of monitoring, motivating, encouraging, and sustaining user activities and improving human performance.

SAR thus has the potential to enhance the quality of life for large populations of users, including the elderly, individuals with cognitive impairments, those rehabilitating from stroke and other neuromotor disabilities, and children with socio-developmental disorders such as autism. Robots, then, can help to improve the function of a wide variety of people, and can do so not just functionally but also socially, by embracing and augmenting the emotional connection between human and robot.

Human-Robot Interaction (HRI) for SAR is a growing research area at the intersection of engineering, health sciences, psychology, social science, and cognitive science. An effective socially assistive robot must understand and interact with its environment, exhibit social behavior, focus its attention and communication on the user, sustain engagement with the user, and achieve specific assistive goals.

The robot can do all of this through social rather than physical interaction, and in a way that is safe, ethical, and effective for the potentially vulnerable user. Socially assistive robots have shown promise as a therapeutic tool for children, the elderly, stroke patients, and other special-needs populations requiring personalized care.

Source: “A Roadmap for US Robotics: From Internet to Robotics” (ARUS), May 2009.

Relationships: Robots and Humans. MACH

Social phobias affect about 15 million adults in the United States, according to the National Institute of Mental Health, and surveys show that public speaking is high on the list of such phobias. For some people, these fears of social situations can be especially acute: For example, individuals with Asperger’s syndrome often have difficulty making eye contact and reacting appropriately to social cues. But with appropriate training, such difficulties can often be overcome.

New software developed at MIT can be used to help people practice their interpersonal skills until they feel more comfortable with situations such as a job interview or a first date. The software, called MACH (short for My Automated Conversation coacH), uses a computer-generated onscreen face, along with facial, speech, and behavior analysis and synthesis software, to simulate face-to-face conversations. It then provides users with feedback on their interactions.

The research was led by MIT Media Lab doctoral student M. Ehsan Hoque, who says the work could be helpful to a wide range of people. A paper documenting the software’s development and testing has been accepted for presentation at the 2013 International Joint Conference on Pervasive and Ubiquitous Computing, known as UbiComp, to be held in September.

“Interpersonal skills are the key to being successful at work and at home,” Hoque says. “How we appear and how we convey our feelings to others define us. But there isn’t much help out there to improve on that segment of interaction.”

Many people with social phobias, Hoque says, want “the possibility of having some kind of automated system so that they can practice social interactions in their own environment. … They desire to control the pace of the interaction, practice as many times as they wish, and own their data.”

The MACH software offers all those features, Hoque says. In fact, in randomized tests with 90 MIT juniors who volunteered for the research, the software showed its value.

First, the test subjects — all of whom were native speakers of English — were randomly divided into three groups. Each group participated in two simulated job interviews, a week apart, with MIT career counselors.

But between the two interviews, unbeknownst to the counselors, the students received help: One group watched videos of interview advice, while a second group had a practice session with the MACH simulated interviewer, but received no feedback other than a video of their own performance. Finally, a third group used MACH and then saw videos of themselves accompanied by an analysis of such measures as how much they smiled, how well they maintained eye contact, how well they modulated their voices, and how often they used filler words such as “like,” “basically” and “umm.”

Evaluations by another group of career counselors showed statistically significant improvement by members of the third group on measures including “appears excited about the job,” “overall performance,” and “would you recommend hiring this person?” In all of these categories, by comparison, there was no significant change for the other two groups.

The software behind these improvements was developed over two years as part of Hoque’s doctoral thesis work with help from his advisor, professor of media arts and sciences Rosalind Picard, as well as Matthieu Courgeon and Jean-Claude Martin from LIMSI-CNRS in France, Bilge Mutlu from the University of Wisconsin, and MIT undergraduate Sumit Gogia.

Designed to run on an ordinary laptop, the system uses the computer’s webcam to monitor a user’s facial expressions and movements, and its microphone to capture the subject’s speech. The MACH system then analyzes the user’s smiles, head gestures, speech volume and speed, and use of filler words, among other things. The automated interviewer — a life-size, three-dimensional simulated face — can smile and nod in response to the subject’s speech and motions, ask questions and give responses.
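A toy sketch of one such measure, filler-word counting on a transcript, may help make this concrete. It is not MACH's actual pipeline; the filler list, function names, and rate calculation are illustrative assumptions.

```python
import re
from collections import Counter

# Illustrative filler-word measure in the spirit of MACH's feedback
# (not MIT's code): count fillers in a transcript, report a per-minute rate.

FILLERS = {"like", "basically", "umm", "um", "uh"}   # assumed filler list

def filler_stats(transcript, duration_minutes):
    words = re.findall(r"[a-z']+", transcript.lower())
    hits = Counter(w for w in words if w in FILLERS)
    rate = sum(hits.values()) / duration_minutes
    return hits, rate

hits, rate = filler_stats("So, umm, I basically like working with, like, teams.", 0.1)
print(hits, f"{rate:.1f} fillers/min")
```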

“While it may seem odd to use computers to teach us how to better talk to people, such software plays an important [role] in more comprehensive programs for teaching social skills [and] may eventually play an essential step in developing key interpersonal skills,” says Jonathan Gratch, a research associate professor of computer science and psychology at the University of Southern California who was not involved in this research. “Such programs also offer important advantages over the human role-players often used to teach such skills. They can faithfully embody a specific theory of pedagogy, and thus can be more consistent than human role-players.”

One reason the automated system’s feedback is effective, Hoque believes, is precisely because it’s not human: “It’s easier to tell the brutal truth through the [software],” he says, “because it’s objective.”

While this initial implementation was focused on helping job candidates, Hoque says training with the software could be helpful in many kinds of social interactions.

After finishing his doctorate in media arts and sciences this summer, Hoque will become an assistant professor of computer science at the University of Rochester in the fall.

Relationships: Robots and Humans. CORBYS

CORBYS, Cognitive Control Framework for Robotic Systems, is an Integrated Project funded by the European Commission under the 7th Framework Program, Area: Cognitive Systems and Robotics. The project was launched on 1 February 2011 and will run for a total of 48 months.

The CORBYS objective is to design, develop, and validate an integrated cognitive robot control architecture supporting robot-human co-working with high-level cognitive capabilities:

– situation-awareness
– attention control
– goal-setting prioritisation

The CORBYS focus is on robotic systems that have a symbiotic relationship with humans. Such robotic systems have to cope with highly dynamic environments, as humans are demanding, curious, and often act unpredictably. CORBYS will design and implement a cognitive robot control architecture that allows the integration of high-level cognitive control modules, a semantically-driven self-awareness module, and a cognitive framework for anticipation of, and synergy with, human behaviour based on biologically-inspired information-theoretic principles.

These modules, supported by an advanced multi-sensor system that facilitates dynamic environment perception, will endow robotic systems with high-level cognitive capabilities such as situation-awareness and attention control. This will enable robot behaviour to adapt to the user's variable requirements, directed by cognitively adapted control parameters.
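As a minimal sketch of what such a modular control loop could look like, consider the Python outline below. The module names echo the capabilities listed above, but the interfaces, classes, and parameters are assumptions for illustration, not the project's actual design.

```python
# Illustrative pluggable cognitive control loop in the spirit of the
# CORBYS description; interfaces here are assumed, not CORBYS's design.

class Module:
    def update(self, percept, state):
        """Read sensor data, return updated control parameters."""
        raise NotImplementedError

class SituationAwareness(Module):
    def update(self, percept, state):
        # Flag when a human is close, based on a (hypothetical) distance sensor.
        state["human_near"] = percept.get("human_distance", 10.0) < 1.0
        return state

class AttentionControl(Module):
    def update(self, percept, state):
        # Adapt a low-level control parameter to the assessed situation.
        state["speed_limit"] = 0.2 if state.get("human_near") else 1.0
        return state

def control_step(percept, modules, state=None):
    """Run each cognitive module in turn to adapt the low-level controller."""
    state = state or {}
    for m in modules:
        state = m.update(percept, state)
    return state

print(control_step({"human_distance": 0.6},
                   [SituationAwareness(), AttentionControl()]))
```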

CORBYS will provide a flexible and extensible architecture to benefit a wide range of applications, ranging from robotised vehicles and autonomous systems, such as robots performing object manipulation tasks in unstructured environments, to systems where robots work in synergy with humans. The latter class of systems is a special focus of CORBYS innovation, as there are important classes of critical applications where support for humans and robots sharing their cognitive capabilities is a particularly crucial requirement. The CORBYS control architecture will be validated within two challenging demonstrators:

  1. a novel mobile robot-assisted gait rehabilitation system, and
  2. an existing autonomous robotic system.

The CORBYS demonstrator, to be developed during the project, will be a self-aware system capable of learning and reasoning, enabling it to optimally match the requirements of the user at different stages of rehabilitation across a wide range of gait disorders.

CORBYS is funded by the European Commission under the 7th Framework Program.

University of Bremen, Germany – Project coordinator; cognitive robot control architecture; BCI detection of cognitive processes
The University of Reading, United Kingdom – Situation assessment architecture; evaluation methodology, benchmarking, metrics and procedures; requirement engineering
University Rehabilitation Institute, Slovenia – Evaluation; clinical tests of subsystems and components; system integration
The University of Hertfordshire, United Kingdom – Self-organizing informational anticipatory architecture; anticipation of human behaviour and the creation of synergy with this behaviour
Vrije Universiteit Brussel, Belgium – Low-level robot control; system integration
SINTEF, Norway – System integration and functional testing; sensor network; physiological monitoring
Otto Bock Mobility Solutions, Germany – Demonstrator development; design and development of the mobile platform of the CORBYS robot-assisted gait rehabilitation system
Neurological Rehabilitation Center Friedehorst, Germany – Evaluation; end-user requirements and ethical aspects
Bit&Brain Technologies SL, Spain – Sensing systems for assessing dynamic system environments including humans; brain-computer software architecture
SCHUNK, Germany – Design and integration of actuation system; smart and safe actuators; sub-system conformance testing
Otto Bock Healthcare, Germany – Design and development of the orthotic system of the CORBYS robot-assisted gait rehabilitation system; demonstration, integration and evaluation

Grant agreement No. 270219