The pick-and-place simulation repo and the segmentation repo rely on camera functionality. We'd like to test rickstaa/realsense-ros-gazebo on a physical camera.
Questions to answer:
- What are the dimensions of the color and depth channels on the camera?
- Does the ROS module work with the physical camera?
- What is the behavior of the aligned depth image?
Ensure the RealSense camera is plugged in and functional.
- verify that the camera is working using the RealSense Viewer (realsense-viewer)
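Before launching the viewer, it may help to confirm the camera enumerates at all (this assumes librealsense and its command-line tools are installed; rs-enumerate-devices ships with the SDK):

lsusb | grep -i intel
rs-enumerate-devices
realsense-viewer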
Install the realsense2 ROS module and test the Gazebo module
- clone https://github.com/rickstaa/realsense-ros-gazebo into your catkin workspace under src (see the consolidated sketch after this list)
- install dependencies via
rosdep install --from-paths src --ignore-src -r -y
- run
catkin build
- launch the Gazebo example in the repo:
roslaunch realsense2_description view_d435_model_rviz_gazebo.launch
- save information from the simulated camera
- use
rostopic list
and note all of the camera topics
- use
rostopic echo $TOPIC_NAME | head -n100
to get camera intrinsics information for the color and depth sensors
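A consolidated sketch of the setup and capture steps above, assuming the workspace lives at ~/catkin_ws and the simulated camera publishes under the /camera namespace (take the real topic names from rostopic list):

cd ~/catkin_ws/src
git clone https://github.com/rickstaa/realsense-ros-gazebo.git
cd ~/catkin_ws
rosdep install --from-paths src --ignore-src -r -y
catkin build
source devel/setup.bash
roslaunch realsense2_description view_d435_model_rviz_gazebo.launch
# in a second terminal: save the topic list and the intrinsics;
# each camera_info message carries width/height and the intrinsic matrix K
rostopic list | grep camera | tee camera_topics.txt
rostopic echo /camera/color/camera_info | head -n100 | tee color_camera_info.txt
rostopic echo /camera/depth/camera_info | head -n100 | tee depth_camera_info.txt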
Run and test the physical camera via the ROS module
- read https://github.com/rickstaa/realsense-ros/tree/fa319702d34f1ca71149cddf5768bcb8e644f63b#usage-instructions
- run
roslaunch realsense2_camera rs_camera.launch
- see the instructions above and capture information about the launched topics
- configure the camera stream in rviz: http://wiki.ros.org/rviz
rosrun rviz rviz
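A minimal rviz setup, assuming the default realsense2_camera topic and frame names (adjust to whatever rostopic list actually reports):

rosrun rviz rviz
# in rviz: set Fixed Frame to camera_link, then use Add -> By topic to add
# an Image display for /camera/color/image_raw and another for
# /camera/depth/image_rect_raw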
Lastly, run the physical camera via the ROS module again, but set align_depth to true.
Find out how to set parameters via roslaunch.
Note down topics and camera intrinsics again for color and depth images.
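roslaunch accepts arguments with the arg:=value syntax; assuming rs_camera.launch exposes an align_depth argument (the upstream realsense2_camera launch file does), the invocation would be:

roslaunch realsense2_camera rs_camera.launch align_depth:=true
# if alignment is active, the aligned image should appear on a topic like
# /camera/aligned_depth_to_color/image_raw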
Camera topics:
Information from simulated camera (infra1):
Information from simulated camera (color):