The Green and the Purple iCub (both at ETH) have different F/T sensors, and to run our software they have to be calibrated (at least offset and range; the full calibration matrix is not strictly needed).
A first draft of the procedure is the following:
Turn on the CPU and Motors
From robots-configuration, looking in the hardware folder of the robot, figure out which board is associated with the F/T sensor under test
Execute ethLoader from icub-head, and set to Maintenance the boards containing the sensors to calibrate
From canLoader --calib, using the IP address of the board (10.0.1.X), press Calib and then Automatic Offset Adj
Put the board back in Application using ethLoader
Execute yarprobotinterface (with --config homePoseBalancing.ini if needed)
Put iCub in the YogaPP home position
Check the forces on the robot in iCubGui (opened from yarpmanager, remembering to attach)
From the yarp rpc port /wholebodydynamics/rpc execute calib all 300 and check that the forces on the feet are small (see the sketch after this list)
Open matlab and browse to WBIToolboxControllers/controllers/torqueBalancing
This folder contains the Simulink model; to select the correct state machine, edit initTorqueBalancing.m. For minor configurations (e.g. reduced or extended Yoga), browse to app/robot/ROBOT_NAME/initStateMachine.m
Pull the robot down
If the forces are not parallel / equal in size, execute the script codyco-modules/src/script/twoFeetStandingIdleAndCalib.sh while holding the robot (it might be useful to raise the robot in the iCubGui: use yarp write ... /iCubGui/base:i and type 0.0 0.0 0.0 0.0 0.0 2000.0)
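The rpc command from the calibration step above can also be sent non-interactively; a minimal sketch (the port name is taken from the step above and may differ depending on the wholebodydynamics configuration):

```sh
# Send the calibration command to the wholebodydynamics rpc port;
# 300 is the number of samples used to estimate the offsets
echo "calib all 300" | yarp rpc /wholebodydynamics/rpc
```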
This how-to describes the steps for executing gzserver on one of our workstations in order to reduce the computational load on the client computer. Let's assume the following variables, which may depend on your current setup:
SERVER_IP is the IP address of the server where gzserver will run
CLIENT_IP is the IP of your computer, where gzclient will run
Set the server up
As often happens in the guides of this gist, we're going to use some docker magic to simplify the steps and the maintenance of the running system. In order to get the most out of the computational power provided by the server, exploiting its GPU is required. The setup might be difficult in many cases, and it depends on what GPU is shipped with the server. The server we're going to use is equipped with an Nvidia GF108GL Quadro 600. If you have ever worked with systems that require proprietary drivers, you already know how difficult it can be to get everything set up properly; just imagine adding container technology on top of it. Luckily, nvidia people recently developed nvidia-docker, and the best place to grasp the advantages of GPU containerization is their wiki. In short, you don't need to install any driver in your image: the system's drivers are properly retrieved and shared with the container during its creation, at the cost of using a wrapper to the docker command line.
Currently this setup doesn't work for some unknown reason. The simplest setup is gzserver installed and running directly on the workstation, and gzclient running locally (even inside docker). Further tests are required, possibly followed by a bug report upstream, trying to reproduce the issues with their official docker containers.
From my tests, working in docker directly using the X session is the simplest case. Once this works, we can figure out how to properly configure everything through an SSH connection, handling the DISPLAY variable and, if needed, using xvfb.
Below, a brief description of the steps is reported:
Verify that yarp, codyco-superbuild, and gazebo-yarp-plugins are updated and aligned
Execute yarpserver (--write if needed), or if a server is already running, configure the environment with yarp detect --write
Ensure that GAZEBO_MODEL_PATH, GAZEBO_RESOURCE_PATH, and GAZEBO_PLUGIN_PATH are properly configured (see the sketch after this list). For an example,
this Dockerfile is a good starting point
Open gazebo -> Insert -> ${codycosuperbuildfolder} -> iCub (no hands). If the robot doesn't fall, all is properly configured
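As a sketch of the environment configuration mentioned in the list above (the install prefix and subfolders are assumptions and depend on where codyco-superbuild is built and installed):

```sh
# Assumed codyco-superbuild location; adapt it to your setup
export CODYCO_SUPERBUILD_ROOT=/home/icub/codyco-superbuild

# Make gazebo aware of the models, worlds, and yarp plugins
export GAZEBO_MODEL_PATH=${GAZEBO_MODEL_PATH}:${CODYCO_SUPERBUILD_ROOT}/build/install/share/gazebo/models
export GAZEBO_RESOURCE_PATH=${GAZEBO_RESOURCE_PATH}:${CODYCO_SUPERBUILD_ROOT}/build/install/share/gazebo/worlds
export GAZEBO_PLUGIN_PATH=${GAZEBO_PLUGIN_PATH}:${CODYCO_SUPERBUILD_ROOT}/build/install/lib
```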
By default a program's output is displayed in the console of the machine that runs the process. In order to collect all the output produced by multiple yarp processes, yarplogger can be used. Even though this tool requires a more complex setup, it also allows catching the logs of remote machines. This is accomplished by executing the commands through yarprun, which also allows starting processes on machines belonging to the same network and configures the environment for gathering the output messages. This can be handy even when no remote machines are involved, e.g. when processes are executed from different terminals on the same machine.
In addition to the yarpserver, a yarprun --server process is also required. The latter is responsible for managing all the processes launched through yarprun. In brief, this is the template:
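The original template is not reported here; the following is a sketch, where /yarprunserver, ${TAG}, and ${COMMAND} are placeholders matching the commands listed below:

```sh
# On the machine that should host the processes: start the yarprun server
# (--log enables forwarding of the processes' output to yarplogger)
yarprun --server /yarprunserver --log &

# From any machine of the network: launch a process on that server,
# tagging it with ${TAG} so it can be queried and stopped later
yarprun --on /yarprunserver --as ${TAG} --cmd "${COMMAND}"
```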
If for some reason yarprun doesn't work or the output is still not forwarded to yarplogger, it is possible to set the YARP_FORWARD_LOG_ENABLE=1 environment variable in order to have the output of the yarp logging system displayed in yarplogger. It is worth noting that in this way only the output produced by yError(), yDebug(), ... is shown, which means that e.g. the output of std::cout is not forwarded.
yarprun --on /yarprunserver --sigterm ${TAG}: stop the process identified by ${TAG}
yarprun --on /yarprunserver --ps: query the list of the active processes
Set yarp to use the simulation time clock
The simulation in gazebo is often not performed in real-time, and the simulation time is slower than the system clock (e.g. a real-time factor of 0.7). YARP supports measuring time (through yarp::os::Time::now()) either from the system clock or by reading a port that provides the time. gazebo-yarp-plugins contains a clock plugin that gives gazebo the capability to publish its simulation time. The steps to enable this network clock are the following:
export YARP_CLOCK=/clock
Launch gazebo server with the clock plugin: gazebo -slibgazebo_yarp_clock.so &
[!] If the computation is still too heavy for the CPU, it can happen that a thread that should run at a fixed rate (e.g. 100Hz) actually runs slower, despite the network clock. This is a computational bottleneck, and it can happen even if the CPU is not at 100%. One possible solution is to slow down the simulation itself, distributing the CPU load over a longer time. Since YARP uses the network clock as described, the only consequence is that the execution of the program looks slower (from the user's timeline). To achieve this, in gazebo decrease World -> Physics -> real time update rate considerably (e.g. to 100).
[!] The default step in gazebo is 1ms. If the software running on the simulated model (and hence using the gazebo clock) needs a finer time granularity, reduce World -> Physics -> max step size.
Control the robot on gazebo through simulink
This setup is a bit tricky. There are many components in this picture: simulink, gazebo, yarp, each of them with its own clock. In the previous step we understood that by using YARP_CLOCK we can tell yarp to use the network clock provided by gazebo instead of the system clock. The integration of simulink, however, does affect gazebo. In fact, when simulink runs (with the WB-Toolbox blocks), it disables the internal clock of gazebo and forwards its own stepping clock to the physics engine. In the end, simulink continuously plays and pauses gazebo while providing the new measurements to the robot (the real-time factor indeed goes to 0).
On the yarp side, before running the simulink simulation, the matlab workspace should set the proper YARP_ROBOT_NAME with setenv in order to load the right configuration of the simulated robot (e.g. the yarpWholeBodyInterface.ini). Moreover, while waiting for version 3.0 of the WB-Toolbox, the YARP_CLOCK=/clock variable should also be set in matlab. This is required by the signal filtering of the yarp-wholebodyinterface, which, despite being spawned by simulink, uses the default yarp clock if not instructed otherwise. An alternative to this last step is reading the variables from the ControlBoard (raw variables, e.g. setting readSpeedAccFromControlBoard in the yarpWholeBodyInterface.ini file) instead of estimating them with high-level numerical derivatives. Note that the low-level measurements are generated at a higher rate (but not all the real robots have encoders on the motor side).
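As a sketch, an equivalent alternative to calling setenv inside matlab is exporting the variables in the shell used to launch it, since the environment is inherited by the matlab process (the robot name below is just an example):

```sh
# Configuration of the simulated robot and of the yarp network clock,
# set before launching matlab so that the WB-Toolbox picks them up
export YARP_ROBOT_NAME=icubGazeboSim
export YARP_CLOCK=/clock
matlab &
```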
This file contains the information on how to use the MEX bindings for controlling the iCub gazebo model.
The configuration of all the repositories can be extracted from the Dockerfile that generates my development setup. What you need are the CMake options passed to yarp-matlab-bindings and idyntree. In particular, the _USES_OCTAVE and _USES_MATLAB flags enable the installation of the bindings, and the optional _GENERATE_MATLAB triggers the generation of the bindings. Note that the generation is not mandatory: the repositories contain pregenerated bindings, and this process is required only after an API change. See the last section of this file for more details.
Recently both yarp and idyntree gained the possibility to use a single MEX interface for both matlab and octave. SWIG, using its .i configuration file, generates the yarpMEX.mex and iDynTreeMEX.mex files, and the .m files to call the C functions contained in them. Then, for their usage in matlab and octave, the MEX file is linked respectively against matlab's or octave's headers.
Example: move the robot
In this very simple example we'll make the robot move inside gazebo, sending the signals through the Octave bindings (they work 1:1 also in Matlab). Let's start with:
$ yarpserver &
$ gazebo & # Add the icub (no hands) model
$ YARP_ROBOT_NAME="icubGazeboSim" yarprobotinterface --config launch-wholebodydynamics.xml &
Then, assuming that ${INSTALL_PREFIX} is the folder containing the bindings (with the {octave,matlab}/{+yarp,+iDynTree} folders), open octave with ${INSTALL_PREFIX}/octave on the load path and create a PolyDriver dev attached to the left_arm controlboard (the full sequence is sketched at the end of this example).
Now dev contains all the interfaces implemented by the left_arm kinematic chain, and they can be obtained through the iface = dev.view* functions. You can figure out all the configured interfaces by looking at the robots-configuration xml files, especially inside the wrappers.
For the sake of this example, the robot will move the left arm, as sketched below.
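The original snippet is not reported here; the following is a self-contained sketch (octave code passed from the shell, with port names, joint index, and target position being assumptions) that opens the left_arm controlboard, views the position interface, and moves one joint:

```sh
# Launch octave with the bindings on the load path and move joint 0 of the left arm
octave --path ${INSTALL_PREFIX}/octave --eval '
  yarp.Network.init();
  options = yarp.Property();
  options.put("device", "remote_controlboard");
  options.put("remote", "/icubSim/left_arm");   % port prefix of the gazebo model (assumption)
  options.put("local",  "/octave/left_arm");
  dev  = yarp.PolyDriver(options);
  ipos = dev.viewIPositionControl();
  ipos.positionMove(0, 30.0);                   % example joint index and target (degrees)
  pause(3);                                     % wait for the motion to complete
  dev.close();
  yarp.Network.fini();
'
```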
In order to generate the wrappers you need a fork of swig that supports matlab. The commands to configure and build it can be found in the Dockerfile linked above. By enabling the _GENERATE_MATLAB CMake option in yarp-matlab-bindings and idyntree, the bindings are generated and put into yarp-matlab-bindings/matlab/autogenerated/ and idyntree/bindings/matlab/autogenerated/, respectively.
The icub-srv has a bind DNS server that resolves hostnames on the local network. In order to resolve the new laptop's hostname, modify the following files:
# /etc/bind/db.icub.local - Forward
iiticublap091 IN A 10.0.0.30
# /etc/bind/db.10.0.0 - Reverse
30 IN PTR iiticublap030.icub.local.
# Bump the Serial to flush the cache
Restart the bind9 service. To check that everything works, execute:
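The original check command is not reported; a plausible sketch (hostname and addresses taken from the records above) is:

```sh
# Restart bind (on older systems this may be "sudo service bind9 restart")
sudo systemctl restart bind9

# Query the local DNS server directly for the new records
host iiticublap030.icub.local 10.0.0.1   # forward lookup of the new hostname
host 10.0.0.30 10.0.0.1                  # reverse lookup of its address
```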
From the new workstation, create a new ssh key with ssh-keygen
Deploy this key to all the machines of the cluster (to allow yarprun to be executed without asking for passwords) by executing ssh-copy-id icub@TARGETHOST (note: deploy it also on the same machine)
If the machine shows the following error: sign_and_send_pubkey: signing failed: agent refused operation, execute ssh-add and check that the operation was successful with ssh-add -l. After these operations, commands executed with ssh icub@TARGETHOSTNAME shouldn't ask for any password (and should load the yarp environment by reading ~/.profile).
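A minimal sketch of the whole sequence (the target hostnames are placeholders):

```sh
# Generate a key on the new workstation, then deploy it to every machine
# of the cluster, including the workstation itself
ssh-keygen
for host in icub-srv icub-head icub-viz; do
    ssh-copy-id icub@${host}
done

# If the agent refuses the key, load it and verify that it is listed
ssh-add
ssh-add -l

# This should now work without asking for a password
ssh icub@icub-head hostname
```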
YARP & co
Check that all the software builds without errors (yarp -> icub-firmware-shared -> icub-main -> ...). If there are any issues with dependencies, try a new build folder. Porting only the CMakeCache.txt could be an idea to avoid losing the current configuration (find and replace all occurrences of the old /build path with the new one, and delete lines if dependencies are not met)
Install the robot-configuration files (they should go inside $ROBOT_CODE/iCubContrib/share/ICUBcontrib/robots/$YARP_ROBOT_NAME/), e.g. as sketched below
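A sketch of the copy (the source path inside robots-configuration is an assumption and depends on how the repository is laid out):

```sh
# Copy the configuration of the current robot into the iCubContrib tree
mkdir -p $ROBOT_CODE/iCubContrib/share/ICUBcontrib/robots/$YARP_ROBOT_NAME/
cp -r $ROBOT_CODE/robots-configuration/$YARP_ROBOT_NAME/* \
      $ROBOT_CODE/iCubContrib/share/ICUBcontrib/robots/$YARP_ROBOT_NAME/
```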
This document explains the steps and good practices required to start working on iCub. It is not a substitute for the official wiki; it only aims to provide a step-by-step guide for newcomers.
Most of the iCub robots are equipped with the following machines:
| Computer description | Hostname | IP Address |
|---|---|---|
| Server | icub-srv | 10.0.0.1 |
| PC 104 in the robot's head | icub-head | 10.0.0.2 |
| Computer for I/O and visualization | * | 10.0.0.* |
The I/O computer has no fixed hostname nor IP address. For the sake of clarity, below it will be referred to as icub-viz.
Graphically, the network that will be created has the following topology:
The aim is to run software (the Device Under Test) on the iCub.
1. Where to find and how to handle the sources
There are two different trees of sources and configuration present in this setup: the first stored on icub-srv, and the second stored on icub-head. When the icub-viz computer boots, the folders from icub-srv are automatically mounted into icub-viz through ftp (for reference, look inside /etc/fstab).
Optionally, for editing files stored on icub-head with graphical tools, it is possible to mount its filesystem into icub-viz using sshfs [1]:
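The original commands are not reported; a plausible sketch (mount points chosen to match the list below) is:

```sh
# Mount the icub-head sources and configuration into the icub-viz home folder
mkdir -p ~/icub-head ~/icub-head-local
sshfs icub@icub-head:/usr/local/src/robot ~/icub-head
sshfs icub@icub-head:/home/icub/.local    ~/icub-head-local
```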
After these steps, the icub-viz computer will have:
/usr/local/src/robot: sources mounted from icub-srv
/home/icub/.local: configuration mounted from icub-srv
/home/icub/icub-head: sources mounted from icub-head
/home/icub/icub-head-local: configuration mounted from icub-head
From now on, icub-srv won't be used anymore.
[!] It is worth noting that for heavy computational tasks, the server can be exploited by accessing it through ssh. For easy tasks, icub-viz is usually good enough.
2. Build the sources
2.1 Before starting
The sources must be built on the machine that runs them. This means that the sources that will run on icub-viz must be built from within icub-viz, and the sources that will run on icub-head must be built from within icub-head. The first setup is straightforward; for the second, accessing icub-head through ssh is required.
The following operations will align the sources to a fixed state (e.g. the last commit); their configuration and compilation will then be performed.
At this point, both icub-viz and icub-head have sources and configuration in the following locations:
sources: /usr/local/src/robot
configuration: /home/icub/.local
Variables pointing at these folders are defined in ~/.bashrc-iCub, e.g. $ROBOT_CODE, which points to the sources. Other directories are set in the same file; if you wonder where files have been put, having a look at it could help.
The process is exactly the same for both icub-viz and icub-head, and it will be described just once. The only difference in the latter case is that beforehand you should log into the icub-head machine by typing ssh icub-head.
2.2 Update, configure and build the sources
The main repositories stored in the $ROBOT_CODE folder are:
yarp: the middleware
icub-main: libraries, tools, and software specific for the iCub
robots-configuration: set of xml files for configuring the iCub
icub-firmware-shared: headers for accessing the low-level hw for client-side usage
icub-firmware-builds: binaries generated from icub-firmware for (manually) updating the low-level infrastructure
codyco-superbuild: tools from the codyco project
A possible order of updating the sources is the following:
Usually the manual firmware update (icub-firmware-builds) is executed by the firmware guys.
The steps are the usual ones for CMake projects. Here the correct branch of the code should be selected, and it must be aligned between icub-viz and icub-head.
cd $Project
git pull
( ... other git commands )
cd build
cmake .
make
It is worth noting that for the codyco-superbuild project, the command make update-all should be executed before make.
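For example, a sketch (assuming the superbuild is checked out in $ROBOT_CODE/codyco-superbuild with an existing build folder):

```sh
# Update all the codyco-superbuild subprojects, then rebuild
cd $ROBOT_CODE/codyco-superbuild/build
make update-all
make
```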
[!] The files shouldn't be installed. The $PATH is configured to use the build-tree files. (TODO Create, test and propose an install strategy)
[!] If you are wondering why cmake ., the reason is that the build folder should not be deleted, and the existing CMake cache should be used instead. The configuration of the project is done manually, and removing the folder would mean it has to be performed again (TODO Propose a CMake initial-cache file)
[!] On icub-viz you can safely build with -j4. On icub-head, limit the jobs to at most -j2.
3. Open the setup for executing the code
In a yarp network, the yarpserver runs on one of the computers. The utility yarprun instead runs, kills, and monitors applications on remote machines. yarpmanager is responsible for managing multiple programs on a set of machines. yarplogger displays the logs of a set of machines. Finally, yarpmotorgui moves the joints and sets their configuration.
A typical workflow has the following consoles opened:
T1: icub_cluster.py (in order to load the correct conf file, launch it from /home/icub)