How-To YARP + iCub: random notes
# Disable echo ^C when Ctrl+C is pressed
stty -echoctl
# Export variables
export EDITOR="nano" # Set the default editor
shopt -s autocd # Avoid using cd to change directory. Simply: ~# /etc
shopt -s cdspell # Autocorrect minor typos in cd arguments
shopt -s dirspell # Dir names spelling correction
shopt -s cmdhist # Inline multiline commands
# History handling
export HISTSIZE=10000 # Max history entries
export HISTFILESIZE=10000
export HISTIGNORE="history*:pwd:clear:df*:free*:??:jobs:bg:fg:cd ..:make*:cmake .*:ccmake .*:cd build:kill*:pkill*" # Don't save these commands
export HISTTIMEFORMAT='%F %T '
export PROMPT_COMMAND='history -a' # Sync history always, not only at session closing (PROMPT_COMMAND: execute before PS1 appears)
bind '"\e[A": history-search-backward' # Search commands from first letters
bind '"\e[B": history-search-forward' #
# mappings for Ctrl-left-arrow and Ctrl-right-arrow for word moving
bind '"\e[1;5C": forward-word'
bind '"\e[1;5D": backward-word'
bind '"\e[5C": forward-word'
bind '"\e[5D": backward-word'
bind '"\e\e[C": forward-word'
bind '"\e\e[D": backward-word'
# Custom PS1
# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
xterm-color|*-256color) color_prompt=yes;;
esac
if [ "$color_prompt" = yes ]; then
PS1='${debian_chroot:+($debian_chroot)}\[\e[32;1m\]\u\[\e[0m\]@\H:\[\e[0;34m\]\w\[\e[0m\]\[\e[0;32m\]$(__git_ps1 " (%s)")\[\e[32;1m\]$\[\e[0m\]\[\e[1m\] '
else
PS1='${debian_chroot:+($debian_chroot)}\u@\H:\w$(__git_ps1 " (%s)")$ '
fi
# Trap enter to set default color for output (after prompt is white bold)
trap 'echo -ne "\e[0m"' DEBUG
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
# Load the iCub custom bashrc
# Do not move this snippet, otherwise it doesn't get sourced from ssh sessions
ICUBRC_FILE="${HOME}/.bashrc_iCub"
if [ -f "$ICUBRC_FILE" ] ; then
source "$ICUBRC_FILE"
fi
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
# don't put duplicate lines or lines starting with space in the history.
# See bash(1) for more options
HISTCONTROL=ignoreboth
# append to the history file, don't overwrite it
shopt -s histappend
# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
HISTSIZE=1000
HISTFILESIZE=2000
# check the window size after each command and, if necessary,
# update the values of LINES and COLUMNS.
shopt -s checkwinsize
# If set, the pattern "**" used in a pathname expansion context will
# match all files and zero or more directories and subdirectories.
#shopt -s globstar
# make less more friendly for non-text input files, see lesspipe(1)
[ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"
# set variable identifying the chroot you work in (used in the prompt below)
if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
debian_chroot=$(cat /etc/debian_chroot)
fi
# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
xterm-color|*-256color) color_prompt=yes;;
esac
# uncomment for a colored prompt, if the terminal has the capability; turned
# off by default to not distract the user: the focus in a terminal window
# should be on the output of commands, not on the prompt
#force_color_prompt=yes
if [ -n "$force_color_prompt" ]; then
if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
# We have color support; assume it's compliant with Ecma-48
# (ISO/IEC-6429). (Lack of such support is extremely rare, and such
# a case would tend to support setf rather than setaf.)
color_prompt=yes
else
color_prompt=
fi
fi
if [ "$color_prompt" = yes ]; then
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
else
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
fi
unset color_prompt force_color_prompt
# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
;;
*)
;;
esac
# enable color support of ls and also add handy aliases
if [ -x /usr/bin/dircolors ]; then
test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
alias ls='ls --color=auto'
#alias dir='dir --color=auto'
#alias vdir='vdir --color=auto'
alias grep='grep --color=auto'
alias fgrep='fgrep --color=auto'
alias egrep='egrep --color=auto'
fi
# colored GCC warnings and errors
export GCC_COLORS='error=01;31:warning=01;35:note=01;36:caret=01;32:locus=01:quote=01'
# some more ls aliases
alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'
# Add an "alert" alias for long running commands. Use like so:
# sleep 10; alert
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if ! shopt -oq posix; then
if [ -f /usr/share/bash-completion/bash_completion ]; then
. /usr/share/bash-completion/bash_completion
elif [ -f /etc/bash_completion ]; then
. /etc/bash_completion
fi
fi
# Load the LOC2 custom bashrc
LOC2RC_FILE="${HOME}/.bashrc_loc2"
if [ -f "$LOC2RC_FILE" ] ; then
source "$LOC2RC_FILE"
fi
# .bashrc_iCub
# setup the iCub environment
echo "Setting up yarp and iCub env vars"
# YARP and iCub environment variables
export ROBOT_CODE=/usr/local/src/robot
export ROBOT_LOCAL=/home/icub/.local/share/yarp
export ICUBcontrib_DIR=$ROBOT_CODE/iCubContrib
export YARP_ROOT=$ROBOT_CODE/yarp
export YARP_DIR=$YARP_ROOT/build
export ICUB_ROOT=${ROBOT_CODE}/icub-main
export ICUB_DIR=${ICUB_ROOT}/build
export icub_firmware_shared_DIR=${ROBOT_CODE}/icub-firmware-shared/build
export YARP_DATA_DIRS=${YARP_DIR}/share/yarp:${ICUB_DIR}/share/iCub:${ICUBcontrib_DIR}/share/ICUBcontrib:${ICUBcontrib_DIR}/share/speech
function icubsrv_mounted()
{
ICUBSRV_MOUNTED=0
ICUBSRV_UNMOUNTED=1
ICUBSRV_NOT_REACHABLE=2
if [[ $(mount | grep nfs | grep $ROBOT_CODE | wc -l) -gt 0 ]] && [[ $(mount | grep nfs | grep $ROBOT_LOCAL | wc -l) -gt 0 ]] ; then
if [[ $(ping -c1 -W1 icubsrv 2>/dev/null) ]] ; then
return $ICUBSRV_MOUNTED
else
# Option not currently used
return $ICUBSRV_NOT_REACHABLE
fi
elif [[ $(ping -c1 -W1 icubsrv 2>/dev/null) ]] ; then
# Requires 'user' and 'nolock' mounting options
mount $ROBOT_CODE 2>/dev/null
mount $ROBOT_LOCAL 2>/dev/null
return $ICUBSRV_MOUNTED
else
return $ICUBSRV_UNMOUNTED
fi
}
icubsrv_mounted
ICUBSRV_STATUS=$?
# Get the name of the robot
if [ $ICUBSRV_STATUS -eq $ICUBSRV_MOUNTED ] ; then
[ -f ${ROBOT_CODE}/yarp_robot_name.txt ] && export YARP_ROBOT_NAME=$(head --lines=1 ${ROBOT_CODE}/yarp_robot_name.txt)
else
# Set the name of your robot here.
# Please change also the root user password
export YARP_ROBOT_NAME=
fi
echo "Using YARP_ROBOT_NAME=\"$YARP_ROBOT_NAME\""
# Set-up optimizations
export CMAKE_BUILD_TYPE=Release
# Editing the PATH causes bash_completion to hang if the folders are not reachable
if [ $ICUBSRV_STATUS -eq $ICUBSRV_MOUNTED ] ; then
export PATH=$PATH:$ICUB_DIR/bin:$YARP_DIR/bin:${ICUBcontrib_DIR}/bin
fi
# DebugStream customization
export YARP_VERBOSE_OUTPUT=0
export YARP_COLORED_OUTPUT=1
export YARP_TRACE_ENABLE=0
export YARP_FORWARD_LOG_ENABLE=0
# To enable tab completion on yarp port names
if [ $ICUBSRV_STATUS -eq $ICUBSRV_MOUNTED ] ; then
[ -f $YARP_ROOT/scripts/yarp_completion ] && source $YARP_ROOT/scripts/yarp_completion
fi
export LUA_PATH=";;;${ROBOT_CODE}/rFSM/?.lua;${ICUBcontrib_DIR}/share/ICUBcontrib/contexts/handover/lua/?.lua"
export LUA_CPATH=";;;${YARP_DIR}/lib/lua/?.so"

F/T Sensors Offset

The Green and Purple iCub (both ETH) have different F/T sensors, and to run our software they have to be calibrated (at least offset and range; the calibration matrix is not strictly needed).

A first draft of the procedure is the following:

  1. Turn on the CPU and Motors
  2. From robots-configuration, looking in the hardware folder of the robot, figure out which board is associated with the F/T sensor under test
  3. Execute ethLoader from icub-head, and set the boards containing the sensors to calibrate to Maintenance
  4. From canLoader --calib, using the IP address of the board (10.0.1.X), press Calib and then Automatic Offset Adj
  5. Put the boards back in Application using ethLoader (see the sketch after this list)
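A hedged sketch of steps 3-5 from the icub-head console (the board IP is an example):

ethLoader          # GUI: set the boards holding the F/T sensors to Maintenance
canLoader --calib  # GUI: select the board (10.0.1.X), press Calib, then Automatic Offset Adj
ethLoader          # GUI: put the boards back in Application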

Resources

YOGA Demo

Starting with the robot hanging from the ropes:

  1. Follow the instructions of Setup the robot.md
  2. Execute yarpmanager
  3. From the wbd entry launch yarplogger
  4. Turn the motors on
  5. Execute yarprobotinterface (with --config homePoseBalancing.ini if needed)
  6. Put iCub on the YogaPP home position
  7. Check on iCubGui (opened from yarpmanager remembering to attach) the forces on the robot
  8. From yarp rpc /wholebodydynamics/rpc execute calib all 300 and check that the forces on the feet are small (see the sketch after this list)
  9. Open matlab and browse to WBIToolboxControllers/controllers/torqueBalancing
  10. This folder contains the simulink model; to select the correct state machine, edit initTorqueBalancing.m. For minor configurations (e.g. reduced or extended Yoga) browse to app/robot/ROBOT_NAME/initStateMachine.m
  11. Pull the robot down
  12. If the forces are not parallel / equal in size, execute the script codyco-modules/src/script/twoFeetStandingIdleAndCalib.sh while holding the robot (it might be useful to raise the robot shown in the iCubGui: use yarp write ... /iCubGui/base:i and type 0.0 0.0 0.0 0.0 0.0 2000.0; see the sketch after this list)
  13. Build the simulink model (Ctrl+D) and then play
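Hedged sketches of the commands mentioned in steps 8 and 12 (port names as reported above):

# Step 8: zero the F/T offsets, then type "calib all 300" at the rpc prompt
yarp rpc /wholebodydynamics/rpc

# Step 12: raise the robot shown in the iCubGui, then type "0.0 0.0 0.0 0.0 0.0 2000.0"
yarp write ... /iCubGui/base:i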

Resources:

[WIP] gzserver on a remote machine

This how-to describes the steps for executing gzserver on one of our workstations, in order to lower the computational load on the client computer. Let's assume the following variables, which depend on your current setup:

  • SERVER_IP is the IP address of the server where gzserver will run
  • CLIENT_IP is the IP of your computer, where gzclient will run

Set the server up

As often happens in the guides of this gist, we're going to use some docker magic to simplify the setup and maintenance of the running system. In order to get the most from the computational power provided by the server, exploiting its GPU is required. The setup might be difficult in many cases, and depends on what GPU is shipped with the server. The server we're going to use is equipped with an Nvidia GF108GL Quadro 600. If you have ever worked with systems that require proprietary drivers, you already know how difficult it can be to get everything set up properly; just imagine adding container technology on top of it. Luckily, nvidia people recently developed nvidia-docker, and the best place to grasp the advantages of GPU containerization is their wiki. In short, you don't need to install any driver in your image: the system's drivers are properly retrieved and shared with the container during its creation, at the cost of using a wrapper to the docker command line.

Packages from default repositories

$ sudo apt install nvidia-375 nvidia-modprobe docker

External packages

Get the correct docker version

The precompiled deb package depends on docker-ce. From the official install page, get the packages with:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo apt-key fingerprint 0EBFCD88
$ sudo add-apt-repository \
       "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
       $(lsb_release -cs) \
       stable testing edge"
$ sudo apt-get update
$ sudo apt-get install docker-ce

Follow this issue, since the docker.io package now seems ready to be used (see below).
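If that turns out to be the case, a hedged alternative to the docker-ce steps above is simply:

$ sudo apt install docker.io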

Get nvidia-docker

From this page, copy the url of the latest deb release, then download and install it:

$ cd /tmp
$ wget https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
$ sudo dpkg -i nvidia-docker*.deb

Set docker up

# systemctl start docker
# docker pull diegoferigo/nvidia-gazebo

Usage

Execute gzserver

$ xhost +
$ nvidia-docker run -it --rm \
                    -e DISPLAY \
                    -e QT_X11_NO_MITSHM=1 \
                    -p 20000:11345 \
                    -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
                    diegoferigo/nvidia-gazebo bash

Then, from within the container:

# gzserver --verbose

Launch gzclient

The server will be exposed to the network at ${SERVER_IP}:20000:

$ GAZEBO_MASTER_URI=${SERVER_IP}:20000 gzclient --verbose

Status

Currently this setup doesn't work for some unknown reason. The simplest working setup is gzserver installed directly on the workstation, and gzclient running locally (even inside docker). Further tests are required, possibly followed by a bug report upstream, trying to reproduce the issues with their official docker containers.

From my tests, working on docker directly using the X session is the simplest case. Once this works, we can figure out how to properly configure everything through an SSH connection, handling the DISPLAY variable and, if needed, using xvfb.

Side note: should we start considering gzweb?

Resources

Gazebo Simulations

This document describes how to set up a Linux computer for launching a Gazebo simulation of the iCub robot.

Visit the force_control_on_icub page for a primer.

Below, a brief description of the steps is reported:

  1. Verify that yarp, codyco-superbuild, and gazebo-yarp-plugins are updated and aligned
  2. Execute yarpserver (--write if needed), or if a server is already running, configure the environment with yarp detect --write
  3. Ensure that the GAZEBO_MODEL_PATH, GAZEBO_RESOURCE_PATH, and GAZEBO_PLUGIN_PATH variables are properly configured. For an example, this Dockerfile is a good starting point
  4. Open gazebo -> Insert -> ${codycosuperbuildfolder} -> iCub (no hands). If the robot doesn't fall, all is properly configured
  5. Execute YARP_ROBOT_NAME="icubGazeboSim" yarprobotinterface --config launch-wholebodydynamics.xml

These steps, after the sources alignment, translate to the following commands:

yarpserver --write &
gazebo &
YARP_ROBOT_NAME="icubGazeboSim" yarprobotinterface --config launch-wholebodydynamics.xml

Special use cases

yarplogger support

By default a program's output is displayed in the console of the machine that runs the process. In order to collect the output produced by multiple yarp processes, yarplogger can be used. Even though this tool requires a more complex setup, it also allows catching the logs of remote machines. This is accomplished by executing the commands through yarprun, which, by the way, allows starting processes on machines belonging to the same network, and configures the environment for gathering the output messages. This can be handy also when no remote machines are involved, e.g. when processes are executed from different terminals on the same machine.

In addition to the yarpserver, a yarprun --server process is also required. The latter is responsible for managing all the processes launched through yarprun. In brief, this is the template:

yarpserver --write &
yarprun --server /yarprunserver &
yarp run --log \
         --on /yarprunserver \
         --as ${TAG} \
         --cmd "${CMD}"

${TAG} is just a label to easily identify the command (e.g. to stop it later on).

${CMD} is the command to execute, as in the following examples.

Example: yarprobotinterface

yarp run --log \
         --on /yarprunserver \
         --as yri \
         --cmd "yarprobotinterface --config launch-wholebodydynamics.xml" \
         --env YARP_ROBOT_NAME=icubGazeboSim 

Example: yarpmotorgui

yarp run --log \
         --on /yarprunserver \
         --as ymg \
         --cmd "yarpmotorgui --robot icubSim"

Alternative solution

If for some reason yarprun doesn't work or the output is still not forwarded to yarplogger, it is possible to set the YARP_FORWARD_LOG_ENABLE=1 environment variable in order to have the output of the yarp logging system displayed in yarplogger. It is worth noting that in this way only the output produced by yError(), yDebug(), etc. is shown, which means that e.g. the output of std::cout is not forwarded.
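A minimal sketch, assuming yarplogger and the monitored program run on the same machine:

export YARP_FORWARD_LOG_ENABLE=1
yarplogger &
yarprobotinterface --config launch-wholebodydynamics.xml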

Other useful commands

(Complete list here)

  • yarprun --on /yarprunserver --sigterm ${TAG}: stop the process identified by ${TAG}
  • yarp run --on /yarprunserver --ps: query the list of the active processes

Set yarp to use the simulation time clock

The simulation in gazebo is often not performed in real-time, and the simulation time is slower than the system clock (e.g. a real-time factor of 0.7). YARP supports counting time (through yarp::os::Time::now()) either by reading the system clock or by reading a port that provides the time. gazebo-yarp-plugins contains a clock plugin that gives gazebo the capability to publish its simulation time. The steps to enable this network clock are the following (a combined sketch follows the list):

  • export YARP_CLOCK=/clock
  • Launch gazebo server with the clock plugin: gazebo -slibgazebo_yarp_clock.so &
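Putting the two steps together, a minimal sketch (program and config taken from the section above):

gazebo -slibgazebo_yarp_clock.so &   # gazebo now publishes its simulation time on /clock
YARP_CLOCK=/clock YARP_ROBOT_NAME="icubGazeboSim" yarprobotinterface --config launch-wholebodydynamics.xml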

[!] If the computation is still too heavy for the CPU, it can happen that a thread that should run at a fixed rate (e.g. 100Hz) actually runs slower, despite the network clock. This is a computational bottleneck, and it can happen even if the CPU is not at 100%. One possible solution is to slow down the simulation itself, distributing the CPU load over a longer time span. Since YARP uses the network clock as described, the only consequence is that the execution of the program looks slower (from the user's point of view). To achieve this, in gazebo go to World -> Physics -> real time update rate and decrease it a lot (e.g. to 100).

[!] The default step in gazebo is 1ms. If the software running on the simulated model (and hence using the gazebo clock) needs a smaller time granularity, World -> Physics -> max step size can reduce the time step quantum.

Control the robot on gazebo through simulink

This setup is a bit tricky. There are many components in this picture: simulink, gazebo, yarp, each with its own clock. In the previous step we saw that by using YARP_CLOCK we can tell yarp to use the network clock provided by gazebo instead of the system clock. The integration of simulink, however, does affect gazebo. In fact, when simulink runs (with the WB-Toolbox stuff), it disables the internal clock of gazebo and forwards its own stepping clock to the physics engine. In the end, simulink continuously plays and pauses gazebo while providing the new measurements to the robot (the real-time factor indeed drops to 0).

Considering the yarp side, before running the simulink simulation the matlab workspace should set the proper YARP_ROBOT_NAME with setenv, in order to load the right configuration of the simulated robot (e.g. the yarpWholeBodyInterface.ini). Moreover, until version 3.0 of the WB-Toolbox is ready, the YARP_CLOCK=/clock variable should also be set in matlab. This is required by the signal filtering of the yarp-wholebodyinterface, which, despite being spawned by simulink, uses the default clock of yarp if not instructed otherwise. An alternative to this last step is reading the variables from the ControlBoard (raw variables, e.g. setting readSpeedAccFromControlBoard in the yarpWholeBodyInterface.ini file) instead of estimating them with high-level numerical derivatives. Note that the low-level measurements are generated at a higher rate (but not all the real robots have encoders at the motor side).
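A minimal sketch of the matlab-side setup (the robot name is just an example):

% Run in the matlab workspace before starting the simulink model
setenv('YARP_ROBOT_NAME', 'icubGazeboSim');
setenv('YARP_CLOCK', '/clock');   % needed until WB-Toolbox 3.0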

Matlab / Octave Bindings

This file contains the information on how to use the MEX bindings for controlling the iCub gazebo model.

The configuration of all the repositories can be extracted from the Dockerfile that generates my development setup. What you need are the CMake options passed to yarp-matlab-bindings and idyntree. In particular, the _USES_OCTAVE and _USES_MATLAB flags set the installation of the bindings, and the optional _GENERATE_MATLAB triggers the generation of the bindings. Note that the generation is not mandatory: the repositories contain pregenerated bindings, and this process is required only after an API change. See the last section of this file for more details.

Recently both yarp and idyntree gained the possibility to use a single MEX interface for both matlab and octave. SWIG, using its .i configuration file, generates the yarpMEX.mex and iDynTreeMEX.mex files, and the .m files that call the C functions contained in them. Then, for usage in matlab and octave, the MEX file is linked against the matlab or octave headers respectively.

Example: move the robot

In this very simple example we'll move the robot inside gazebo by sending signals through the Octave bindings (they work 1:1 also in Matlab). Let's start with:

$ yarpserver &
$ gazebo & # Add the icub (no hands) model
$ YARP_ROBOT_NAME="icubGazeboSim" yarprobotinterface --config launch-wholebodydynamics.xml &

Then, assuming that ${INSTALL_PREFIX} is the folder containing the bindings (with the {octave,matlab}/{+yarp,+iDynTree} folders), open octave with:

$ octave -p ${INSTALL_PREFIX}/octave -p ${INSTALL_PREFIX}/octave/+yarp -p ${INSTALL_PREFIX}/octave/+iDynTree

And type:

>> Network.init
>> p = Property
>> p.put('robot', 'icub')
>> p.put('device', 'remote_controlboard')
>> p.put('local', '/octave/left_arm_control')
>> p.put('remote', '/icubSim/left_arm')
>> dev = PolyDriver(p)

Now dev contains all the interfaces implemented by the left_arm kinematic chain, and they can be gathered with the iface = dev.view* functions. You can figure out all the configured interfaces by looking at the robots-configuration xml files, especially inside the wrappers.

For the sake of this example, the robot will move the left arm with:

pos = dev.viewIPositionControl
pos.positionMove(0, 50)
Network.fini

The shoulder joint should move.

Optional: YARP / iDynTree bindings generation

In order to generate the wrappers you need a fork of swig that supports matlab. The commands to configure and build it can be found in the Dockerfile linked above. By enabling the _GENERATE_MATLAB CMake option in yarp-matlab-bindings and idyntree, the bindings are generated and put into yarp-matlab-bindings/matlab/autogenerated/ and idyntree/bindings/matlab/autogenerated/ respectively.

References

How to setup a new iCub workstation

Remote folders

  • Add the following lines to the fstab and create the folders if necessary:
# NFS folders from the server
10.0.0.1:/exports/code       /usr/local/src/robot         nfs _netdev,auto,hard,intr 0 0
10.0.0.1:/exports/local_yarp /home/icub/.local/share/yarp nfs _netdev,auto,hard,intr 0 0
# icub-head sshfs folders
sshfs#icub@10.0.0.2:/usr/local/src/robot/         /mnt/icub-head-src   fuse defaults,allow_other 0 0
sshfs#icub@10.0.0.2:/home/icub/.local/share/yarp  /mnt/icub-head-local fuse defaults,allow_other 0 0
  • Open a terminal and check if the NFS folders from the server are mounted (sudo mount -a)
  • Check if the SSHFS folders from the head are mounted

Install missing dependencies

sudo apt install sshfs colordiff libsqlite3-dev tree liblua5.3-dev swig

Setup the cluster hostnames

The file /etc/hosts on the cluster's machines should contain something similar to the following:

127.0.0.1	localhost
127.0.1.1	iiticublap090
127.0.0.1       icub29
10.0.0.30       icub30
10.0.0.1        icub-srv
10.0.0.2        icub-head

Setup the hostname on the icub-srv

The icub-srv runs a bind DNS server that resolves hostnames on the local network. In order to resolve the new laptop's hostname, modify the following files:

# /etc/bind/db.icub.local - Forward
iiticublap091   IN      A       10.0.0.30
# cat /etc/bind/db.10.0.0
30      IN      PTR     iiticublap091.icub.local.
# Bump the Serial to flush the cache

Restart the bind9 service. To check that everything works, execute:

icub@icub-srv:~$ nslookup iiticublap091.icub.local
Server:		127.0.0.1
Address:	127.0.0.1#53

Name:	iiticublap091.icub.local
Address: 10.0.0.30

Deploy ssh keys

  • From the new workstation, create a new ssh key with ssh-keygen
  • Deploy this key to all the machines of the cluster (to allow yarprun to be executed w/o asking passwords) by executing ssh-copy-id icub@TARGETHOST (note: deploy it also on the same machine; see the sketch below)
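A hedged sketch of the deployment, using the cluster hostnames listed earlier:

$ ssh-keygen
$ ssh-copy-id icub@icub-head
$ ssh-copy-id icub@icub-srv
$ ssh-copy-id icub@localhost   # deploy also on the same machine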

If the machine shows the following error: sign_and_send_pubkey: signing failed: agent refused operation, execute ssh-add and check that the operation was successful with ssh-add -l. After these operations, commands executed with ssh icub@TARGETHOSTNAME shouldn't ask for any password (and should load the yarp stuff by reading ~/.profile).

YARP & co

  • Check if all the software builds without errors (yarp -> icub-firmware-shared -> icub-main -> ...). If there are any issues with dependencies, try a new build folder. Porting only the CMakeCache.txt could be an idea to avoid losing the current configuration (find and replace all the /build occurrences with the new build folder, and delete lines if dependencies are not met; see the sketch after this list)
  • Install robot-configuration files (files should go inside $ROBOT_CODE/iCubContrib/share/ICUBcontrib/robots/$YARP_ROBOT_NAME/)
  • Setup cluster-config.xml
  • Start a yarpserver in a machine inside the cluster (choose one) and check that all the machines can find it (yarp detect --write)
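A hedged sketch of the new-build-folder trick mentioned in the first item (yarp is just an example project):

cd $ROBOT_CODE/yarp
mv build build.old && mkdir build
cp build.old/CMakeCache.txt build/   # then edit it: replace the old /build paths with
                                     # the new folder, delete entries of unmet dependencies
cd build && cmake . && make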

Final tests:

  • Try to execute devices
  • Try Matlab/Simulink and various demos

Open problems / TODO:

  • System hangs during shutdown if the NFS folders are mounted and the server cannot be reached. Test a NetworkManager dispatcher
  • ROS / Gazebo variables
  • Matlab bindings

Under test

Bashrc

  • If the workstation has the default bash configuration, copy the tweaked ones: .bashrc_iCub .bashrc
  • Check if the variables from .bashrc_iCub are read correctly
  • Install the new .bashrc_loc2 and source it from .bashrc

Remote folders

10.0.0.1:/exports/code       /usr/local/src/robot         nfs noauto,user,exec,x-systemd.automount,timeo=14,x-systemd.requires=network.target,nolock,hard 0 0
10.0.0.1:/exports/local_yarp /home/icub/.local/share/yarp nfs noauto,user,exec,x-systemd.automount,timeo=14,x-systemd.requires=network.target,nolock,hard 0 0

How to setup iCub

This document explains the steps and good practices required to start working on iCub. It is not a substitute for the official wiki; it only aims to provide a step-by-step guide for newcomers.

Most of the iCub robots are equipped with the following machines:

Computer description                 Hostname    IP Address
Server                               icub-srv    10.0.0.1
PC 104 in the robot's head           icub-head   10.0.0.2
Computer for I/O and visualization   *           10.0.0.*

The I/O computer has no fixed hostname nor IP address. For the sake of clarity, below it will be referred to as icub-viz.

Graphically, the network that will be created has the following topology:

 10.0.0.1                                         10.0.0.2
[icub-srv] -----------> [icub-viz] <----------- [icub-head]
              (nfs)     yarpserver    (sshfs)       DUT

The aim is to run software (the Device Under Test) on the iCub.

1. Where to find and how to handle the sources

There are two different trees of sources+configuration in this setup: the first stored on icub-srv, and the second stored on icub-head. When the icub-viz computer boots, the folders from icub-srv are automatically mounted into icub-viz through nfs (for reference, look inside /etc/fstab).

Optionally, to edit files stored on icub-head with graphical tools, it is possible to mount its filesystem into icub-viz using sshfs [1]:

sudo sshfs -o allow_other icub@10.0.0.2:/usr/local/src/robot/ /home/icub/icub-head
sudo sshfs -o allow_other icub@10.0.0.2:/home/icub/.local /home/icub/icub-head-local

After these steps, the icub-viz computer will have:

  • /usr/local/src/robot: sources mounted from icub-srv
  • /home/icub/.local: configuration mounted from icub-srv
  • /home/icub/icub-head: sources mounted from icub-head
  • /home/icub/icub-head-local: configuration mounted from icub-head

From now on, icub-srv won't be used anymore.

[!] It is worth noting that for heavy computational tasks the server can be exploited by accessing it through ssh. For easy tasks, icub-viz is usually good enough.

2. Build the sources

2.1 Before starting

The sources must be built on the machine that runs them. This means that the sources that will run on icub-viz must be built from within icub-viz, and the sources that will run on icub-head must be built from within icub-head. The first setup is straightforward; for the second, accessing icub-head through ssh is required.

The following operations will align the sources to a fixed state (e.g. the last commit); then their configuration and compilation will be performed.

At this point, both icub-viz and icub-head have sources and configuration in the following locations:

  • sources: /usr/local/src/robot
  • configuration: /home/icub/.local

Variables pointing at these folders are defined in ~/.bashrc_iCub, e.g. $ROBOT_CODE, which points to the sources. Other directories are set in the same file. If you wonder where files have been put, having a look at this file could help.

The process is exactly the same for both icub-viz and icub-head, and it will be described only once. The only difference in the latter case is that beforehand you should log into the icub-head machine by typing ssh icub-head.

2.2 Update, configure and build the sources

The main repositories stored in the $ROBOT_CODE folder are:

  • yarp: the middleware
  • icub-main: libraries, tools, and software specific for the iCub
  • robots-configuration: set of xml files for configuring the iCub
  • icub-firmware-shared: headers for accessing the low-level hw for client-side usage
  • icub-firmware-builds: binaries generated from icub-firmware for (manually) updating the low-level infrastructure
  • codyco-superbuild: tools from the codyco project

A possible order of updating the sources is the following:

(yarp) --> (icub-firmware-shared) --> (icub-main) --> (robots-configuration) --> (codyco-superbuild)

Usually the manual firmware update (icub-firmware-builds) is executed by the firmware guys.

The steps are the usual ones for CMake projects. Here the correct branch of the code should be selected, and it must be aligned between icub-viz and icub-head.

cd $Project
git pull
( ... other git commands )
cd build
cmake .
make

It is worth noting that for the codyco-superbuild project, the command make update-all should be executed before make.
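For instance, from the codyco-superbuild build folder:

cd $ROBOT_CODE/codyco-superbuild/build
make update-all
make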

[!] The files shouldn't be installed. The $PATH is configured to use the build-tree files. (TODO Create, test and propose an install strategy)

[!] If you are wondering why cmake ., the reason is that the build folder should not be deleted and the existing CMake cache should be used instead. The configuration of the project was done manually, and removing the folder would mean performing it again (TODO Propose a CMake file for initial-cache)

[!] On icub-viz you can safely build with -j4. On icub-head, limit the jobs to a maximum of -j2.

3. Open the setup for executing the code

In a yarp network, the yarpserver runs on one of the computers. The utility yarprun instead runs, kills, and monitors applications on remote machines. yarpmanager is responsible for managing multiple programs on a set of machines. yarplogger displays the logs of a set of machines. Finally, yarpmotorgui moves the joints and sets their configuration.

A typical workflow has the following consoles opened:

  • T1: icub_cluster.py (in order to load the correct conf file, launch it from /home/icub)
  • T2: yarpmanager (then inside: yarplogger -> yarprobotinterface -> yarpmotorgui)
  • T3: ssh to icub-head
  • T4: console of icub-viz