Date: 2017-05-18 12:57:06
Topics: Learning, Human-computer interaction, Cognition, Machine learning, Multimodal interaction, User interfaces, Graphical model, Artificial intelligence, Robot learning, Natural language processing, Parsing, Dialogue system

Task Learning through Visual Demonstration and Situated Dialogue
Changsong Liu and Joyce Y. Chai; Nishant Shukla and Song-Chun Zhu
Department of Computer Science and Engineering; Center for Vision, Cognition, Learning and
Source URL: shukla.io | File Size: 756.24 KB
Related documents:
- The OpenInterface Framework: A Tool for Multimodal Interaction. Marcos Serrano, Andrew Ramsay
- The Virtual Crepe Factory: 6DoF Haptic Interaction with Fluids. Gabriel Cirio (INRIA Rennes), Maud Marchal (INSA/INRIA Rennes)
- Designing the Unexpected: Endlessly Fascinating Interaction for Interactive Installations
- Wearable Laser Pointer Versus Head-Mounted Display for Tele-Guidance Applications? Shahram Jalaliniya (IT University of Copenhagen, Rued Langgaards Vej 7)
- Challenges in Shared-Environment Human-Robot Collaboration. Bradley Hayes, Brian Scassellati