Enabling Human-Robot Joint Actions


Lab member Chien-Ming Huang gave a talk titled “Enabling Human-Robot Joint Actions” at the Google office in Madison. Drawing on his recently published research on multimodal behaviors (including speech, gaze, and gestures), Chien-Ming highlighted their importance in enabling successful human-robot interaction.

In his talk, Chien-Ming argued that robots are a growing presence in human environments and must coordinate their actions with those of their users. His study showed that multimodal behaviors are a key factor in this kind of joint action. The research introduced a novel, learning-based approach to modeling human behavior that mimics observed humanlike patterns, an approach that proved more effective for engaging human users.

C.-M. Huang and B. Mutlu. Learning-Based Modeling of Multimodal Behaviors for Humanlike Robots. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (HRI 2014), Bielefeld, Germany, March 2014.

Source: HCI
