Generating complex, human-like behaviour in a humanoid robot like the iCub requires the integration of a wide range of open source components and a scalable cognitive architecture. Hence, we present the iCub-HRI library, which provides convenience wrappers for components related to perception (object recognition, agent tracking, speech recognition, touch detection), object manipulation (basic and complex motor actions) and social interaction (speech synthesis, joint attention), exposed as a C++ library with bindings for Python and Java (and thus Matlab). In addition to previously integrated components, the library allows for simple extension to new components and rapid prototyping by adapting to changes in interfaces between components. We also provide a set of modules which make use of the library, such as a high-level knowledge acquisition module and an action recognition module. The proposed architecture has been successfully employed for a complex human-robot interaction scenario involving the acquisition of language capabilities, execution of goal-oriented behaviour and expression of a verbal narrative of the robot's experience in the world. Accompanying this paper is a tutorial which allows a subset of this interaction to be reproduced. The architecture is aimed at researchers familiarising themselves with the iCub ecosystem, as well as expert users, and we expect the library to be widely used in the iCub community.
The code in this repository corresponds to a refactored and cleaned version of the original repository produced in the context of the WYSIWYD (What You Say Is What You Did) project. The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. FP7-ICT-612139 (What You Say Is What You Did project).
Visit the official project documentation.
This code has an associated article published in Frontiers in Robotics and AI. You may download the paper here. There is also an associated Zenodo repository.
The citation is as follows (bib file):
T. Fischer, J.-Y. Puigbo, D. Camilleri, P. Nguyen, C. Moulin-Frier, S. Lallee, G. Metta, T. J. Prescott, Y. Demiris, and P. F. M. J. Verschure (2018). iCub-HRI: A software framework for complex human robot interaction scenarios on the iCub humanoid robot. Frontiers in Robotics and AI. https://www.frontiersin.org/articles/10.3389/frobt.2018.00022
This code-oriented article complements a previous paper presenting the scientific concepts and results underlying this work (bib file):
Moulin-Frier*, C., Fischer*, T., Petit, M., Pointeau, G., Puigbo, J., Pattacini, U., Low, S.C., Camilleri, D., Nguyen, P., Hoffmann, M., Chang, H.J., Zambelli, M., Mealier, A., Damianou, A., Metta, G., Prescott, T.J., Demiris, Y., Dominey, P.F., and Verschure, P.F.M.J. (2017). DAC-h3: A Proactive Robot Cognitive Architecture to Acquire and Express Knowledge About the World and the Self. IEEE Transactions on Cognitive and Developmental Systems. http://doi.org/10.1109/TCDS.2017.2754143
(* The first two authors contributed equally.)
Click on the image below to watch the accompanying YouTube video.
The `icub-hri` library and documentation are distributed under the GPL-2.0. The full text of the license agreement can be found in `./LICENSE`. Please read this license carefully before using the `icub-hri` code.
`icub-hri` depends on the following projects, which need to be installed prior to building `icub-hri`:
First, follow the installation instructions for `yarp` and `icub-main`. If you want to use `iol`, `speech` or the `kinect-wrapper` (which are all optional), also install `icub-contrib-common`.
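As a rough sketch, fetching and building the two core dependencies might look as follows (assuming the standard robotology GitHub locations and default CMake options; the linked installation instructions remain authoritative):

```bash
# Sketch only: consult the official yarp and icub-main instructions
# for system dependencies and recommended CMake options.
git clone https://github.com/robotology/yarp.git
cd yarp && mkdir build && cd build
cmake .. && make && sudo make install
cd ../..
git clone https://github.com/robotology/icub-main.git
cd icub-main && mkdir build && cd build
cmake .. && make && sudo make install
```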
`OpenCV-3.0.0` or higher (`OpenCV-3.2.0` is recommended) is a required dependency to build the `iol2opc` module, which is responsible for object tracking. More specifically, we need the new tracking features delivered with `OpenCV-3.2.0`:
- Download OpenCV: `git clone https://github.com/opencv/opencv.git`.
- Checkout the correct branch: `git checkout 3.2.0`.
- Download the external modules: `git clone https://github.com/opencv/opencv_contrib.git`.
- Checkout the correct branch: `git checkout 3.2.0`.
- Configure OpenCV by filling in the cmake var `OPENCV_EXTRA_MODULES_PATH` with the path pointing to `opencv_contrib/modules`, and then toggling on the var `BUILD_opencv_tracking`.
- Compile OpenCV.
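Put together, the steps above look roughly like this (a sketch assuming an out-of-source build in `opencv/build`; adjust paths and the job count to your machine):

```bash
git clone https://github.com/opencv/opencv.git
git clone https://github.com/opencv/opencv_contrib.git
(cd opencv && git checkout 3.2.0)
(cd opencv_contrib && git checkout 3.2.0)
mkdir -p opencv/build && cd opencv/build
# Point cmake to the contrib modules and enable the tracking module
cmake -DOPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
      -DBUILD_opencv_tracking=ON ..
make -j4
```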
For the object tracking, we rely on the `iol` pipeline. Please follow the installation instructions. For `icub-hri`, you do not need the full list of `iol` dependencies; only install the following: `segmentation`, `Hierarchical Image Representation`, and `stereo-vision`. Within `Hierarchical Image Representation`, we don't need `SiftGPU`.
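If you prefer the command line, the three dependencies can be fetched as follows (a sketch assuming the usual robotology GitHub locations, with `himrep` being the repository of the Hierarchical Image Representation; each is then built with the standard CMake workflow):

```bash
git clone https://github.com/robotology/segmentation.git
git clone https://github.com/robotology/himrep.git        # Hierarchical Image Representation
git clone https://github.com/robotology/stereo-vision.git
# For each repository: mkdir build && cd build && cmake .. && make
```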
To estimate the size and pose of objects, we rely on the `superquadric-model`. Please follow the installation instructions if you want to use the `superquadric-model` (optional).
Compilation of the `iol2opc` module can be disabled using the `ICUBCLIENT_BUILD_IOL2OPC` cmake flag.
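For example, when configuring `icub-hri` (see the build instructions below), the module can be switched off like this (a minimal sketch using only the flag named above):

```bash
cd icub-hri/build
cmake -DICUBCLIENT_BUILD_IOL2OPC=OFF ..
make
```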
To detect the human skeleton, we employ the `kinect-wrapper` library. Please follow the installation instructions in the readme. You might also have to build `kinect-wrapper` with the new `OpenCV-3.x.x` library. We have enabled the possibility to build only the client part of the `kinect-wrapper` (see the updated instructions), which allows the `agentDetector` module to run on a different machine than the one with the Kinect attached and running `kinectServer`.
In order to calibrate the Kinect reference frame with that of the iCub, we need (at least) three points known in both reference frames. To do that, we employ `iol2opc` to get the position of an object in the iCub's root reference frame, and `agentDetector` to manually find the corresponding position in the Kinect's reference frame. The `referenceFrameHandler` can then be used to find the transformation matrix between the two frames.
The procedure is as follows:
1. Start `iol2opc` (with its dependencies), `agentDetector --showImages` (after `kinectServer`) and `referenceFrameHandler`, and connect all ports.
2. Place one object in front of the iCub (or, multiple objects with one of them being called "target"). Make sure this object is reliably detected by `iol2opc`.
3. Left click the target object in the depth image of the `agentDetector` window.
4. Move the object and repeat steps 3+4 at least three times.
5. Right click the depth image, which issues a "cal" and a "save" command to `referenceFrameHandler`. This saves the transformation in a file which will be loaded the next time `referenceFrameHandler` is started.
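Step 1 might look like this, with each module in its own terminal (a sketch; the module names are those above, and the port connections are typically made via `yarpmanager` or `yarp connect`):

```bash
# On the machine with the Kinect attached:
kinectServer

# On the machine used for the calibration (one terminal each):
iol2opc                      # start together with its dependencies
agentDetector --showImages
referenceFrameHandler
```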
Speech recognition and speech synthesis require a Windows machine with the Microsoft Speech SDK installed. Then, compile the `speech` repository. If you use version 5.1 of the SDK, use this patch to fix the compilation.
Push and pull actions require the high-level motor primitives generator `karmaWYSIWYD`, which allows users to control iCub push/pull actions on objects. Please follow the installation instructions in the readme.
If you want to use `karmaWYSIWYD`, you must install `iol` / `iol2opc`. Then, start `iol2opc` and its dependencies as well as `iolReachingCalibration`, and follow the instructions for `iolReachingCalibration` to calibrate the arms before issuing any commands to `karmaWYSIWYD`.
The recognition of faces with SAM requires `human-sensing-SAM`, which detects and outputs cropped faces for SAM to classify. Please follow the installation instructions in the readme.
Once all desired dependencies are installed, building `icub-hri` is straightforward:
- Download `icub-hri`: `git clone https://github.com/robotology/icub-hri.git`.
- `cd icub-hri`
- `mkdir build`
- `cd build`
- Run `ccmake ..` and fill in the cmake var `OpenCV_DIR` with the path to the `OpenCV-3.2.0` build.
- Compile `icub-hri` using `make`.
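In a terminal, the whole sequence reads as follows (a sketch which uses plain `cmake` instead of the interactive `ccmake`; replace the `OpenCV_DIR` path with your own OpenCV-3.2.0 build directory):

```bash
git clone https://github.com/robotology/icub-hri.git
cd icub-hri && mkdir build && cd build
# Non-interactive equivalent of filling in OpenCV_DIR in ccmake
cmake -DOpenCV_DIR=/path/to/opencv/build ..
make
```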
We provide a Python script to easily update all dependencies of `icub-hri` and `icub-hri` itself.
Using `icub-hri` in your project is straightforward. Simply `find_package(icubclient REQUIRED)` in your main `CMakeLists.txt`, and use the `${icubclient_INCLUDE_DIRS}` and `${icubclient_LIBRARIES}` cmake variables as you would expect. A working example project can be found here.
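A minimal `CMakeLists.txt` along these lines might look as follows (a sketch using only the package name and variables given above; `my_icubhri_app` and `main.cpp` are placeholders, and the linked example project shows the complete setup):

```cmake
cmake_minimum_required(VERSION 3.0)
project(my_icubhri_app)

# Locate the installed icub-hri library
find_package(icubclient REQUIRED)

include_directories(${icubclient_INCLUDE_DIRS})
add_executable(${PROJECT_NAME} main.cpp)
target_link_libraries(${PROJECT_NAME} ${icubclient_LIBRARIES})
```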
The `docker` folder contains the Dockerfile used to build a fully compiled Docker image for `icub-hri`, including all extras such as OpenCV-3.x, `iol`, the Kinect wrapper and `karmaWYSIWYD`. You can either download a pre-compiled image at https://hub.docker.com/r/dcamilleri13/icub-client or compile it using the provided files.
Prerequisites:
- Install docker: https://docs.docker.com/engine/installation/linux/ubuntulinux/
- Add permissions to user for convenience: http://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo
- Install nvidia-docker: https://github.com/NVIDIA/nvidia-docker
To compile and run the image:
- Run `make configure`. This will install any required libraries on the host system.
- Run `make build`. This will compile the Dockerfile into a Docker image.
- Run `make first_run`. This will run the Docker image for the first time while setting up permissions, audio and video access. At this point, the image is not yet ready to use because the environment variables set by the Dockerfile are not accessible via ssh. Thus, type `exit` into the command line to close the image, at which point a `bashrc_iCub` is created with all the required paths.
- Run `make run` to launch and attach the Docker image.
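For reference, the full sequence from the list above (a sketch of the same `make` targets):

```bash
make configure     # install required libraries on the host system
make build         # compile the Dockerfile into a Docker image
make first_run     # first run: sets up permissions, audio and video access
#   ... inside the container, type `exit` so that bashrc_iCub is generated
make run           # subsequent runs: launch and attach the image
```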
Note: This image was compiled with an Nvidia card present; it has not been tested on a CPU-only machine. Also, please note that when running the Docker image using `docker run`, the ssh service is disabled on the host machine. This is because the Docker container mirrors the host's network configuration, an ssh server needs to run inside the container to communicate with pc104 and to allow modules to be run via yarpmanager, and only a single ssh server can run on localhost.