Explore projects
Addressee estimation is the ability to understand to whom a speaker is directing an utterance. This ability is crucial for social robots engaged in multi-party interaction, since it underpins the basic dynamics of social communication. In this project, we deploy a deep learning model on an iCub robot to estimate the placement of the addressee from the robot's first-person perspective, taking visual information about the speaker as input. Specifically, we extract two visual features from the iCub's camera stream: the speaker's body-pose vectors and face images, and feed them to our model. The model classifies the addressee's placement as 'robot', 'left', or 'right', meaning respectively that the addressee is the robot, or is to the robot's left or right.
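The two-branch design described above (a body-pose vector and a face image fused into a three-way classifier) could be sketched as follows. This is a minimal illustrative sketch: the layer sizes, fusion strategy, and input dimensions are assumptions, not the project's actual architecture.

```python
import torch
import torch.nn as nn

class AddresseeClassifier(nn.Module):
    """Hypothetical two-branch addressee classifier (sizes are assumptions)."""

    def __init__(self, pose_dim=36, n_classes=3):
        super().__init__()
        # Branch 1: flattened body-pose keypoint vector
        self.pose_net = nn.Sequential(nn.Linear(pose_dim, 64), nn.ReLU())
        # Branch 2: small CNN over a cropped face image (3x64x64 assumed)
        self.face_net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Late fusion + 3-way head: 'robot' / 'left' / 'right'
        self.head = nn.Linear(64 + 16, n_classes)

    def forward(self, pose, face):
        fused = torch.cat([self.pose_net(pose), self.face_net(face)], dim=1)
        return self.head(fused)

model = AddresseeClassifier()
logits = model(torch.randn(2, 36), torch.randn(2, 3, 64, 64))
print(logits.shape)  # one 3-class logit vector per speaker frame
```

In practice the pose vectors would come from a skeleton tracker and the face crops from a face detector running on the iCub camera stream, with the predicted class mapped to the addressee's placement.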
This repository contains the code for an HRI experiment with iCub that consists of drawing together with the robot.
Sergio Decherchi / kinase_atlas
GNU General Public License v3.0 or later
This project implements a model that provides the iCub robot with an addressee estimation skill, using a classifier based on a hybrid deep learning neural network.
Seyed Mohammadi / PointView-GCN
MIT License
The code and dataset will be available soon here.
Seyed Mohammadi / 3DSGrasp
MIT License
HRII - PUBLIC / open_vico_openpose
MIT License
HRII - PUBLIC / open_vico
GNU Affero General Public License v3.0
An open-source Gazebo toolkit for multi-vision-based skeleton tracking in human-robot collaboration.
HRII - PUBLIC / open_vico_msgs
GNU Affero General Public License v3.0