author     JuneChul Roh    2021-04-15 16:53:34 -0500
committer  JuneChul Roh    2021-04-15 16:53:34 -0500
commit     847a57e009f3b805e06694004f88b2e613e88490 (patch)
tree       8b0206e6cf829eea06a0df2c3ff9112b5aa0c075
parent     8c164ac623d0eacc6856d621ff8bad07d52c7041 (diff)
REL.00.03.00.03. Compatible with: J7 PSDK-RTOS 07_03_00_07, tidl-semseg-model_2.0.0.7.tar.gz, ros-bag_2020_1109.tar.gz
 CHANGELOG.md                                 |  21
 README.md                                    |  13
 drivers/zed_capture/CMakeLists.txt           |   8
 drivers/zed_capture/README.md                |   9
 drivers/zed_capture/include/nodeletT.h       |  88
 drivers/zed_capture/package.xml              |   6
 nodes/ti_estop/README.md                     | 115
 nodes/ti_estop/launch/estop.launch           |   6
 nodes/ti_sde/README.md                       |  63
 nodes/ti_sde/launch/sde.launch               |  10
 nodes/ti_semseg_cnn/README.md                |  20
 nodes/ti_semseg_cnn/launch/semseg_cnn.launch |  12
12 files changed, 151 insertions, 220 deletions
diff --git a/CHANGELOG.md b/CHANGELOG.md
new file mode 100644
index 0000000..a9cb2f8
--- /dev/null
+++ b/CHANGELOG.md
@@ -0,0 +1,21 @@
+Change Log
+==========
+
+## 0.3.0 (2021-04-15)
+
+* Released with Processor SDK RTOS 7.3.0
+* Enhanced the stereo vision demo application: added point-cloud generation
+* Updated the semantic segmentation demo application: migrated to an open-source deep-learning runtime (TVM + Neo-AI-DLR)
+* Added a new demo application: 3D obstacle detection accelerated on the deep-learning core (C7x/MMA) and hardware accelerators (SDE, LDC, MSC)
+* USB stereo camera ROS driver node for ZED cameras
+* Stereo rectification LDC lookup-table generation tool for ZED cameras
+* Live USB stereo camera support for all three demo applications (stereo vision, semantic segmentation, and 3D obstacle detection)
+
+## 0.1.0 (2020-12-15)
+
+* Released with Processor SDK RTOS 7.1.0
+* TI OpenVX (TIOVX) with ROS development framework
+* TI Vision Apps Library deployed on the J721e target, which enables building applications directly on the target
+* Docker container environment on J721e for the TIOVX + ROS development framework
+* Demo application: stereo vision processing node accelerated on LDC and SDE
+* Demo application: CNN semantic segmentation node with TIDL running on C7x/MMA
\ No newline at end of file
diff --git a/README.md b/README.md
index 710138b..f01d36b 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
 Robotics Software Development Kit
 =================================
 
-### Introduction to TI OpenVX + ROS Development Framework
+## Introduction to TI OpenVX + ROS Development Framework
 
 <figure class="image">
     <center><img src="docker/docs/tiovx_ros_sw_stack.png" style="width:726px; height:398px;"/></center>
@@ -40,9 +40,9 @@ The J721E Processor SDK RTOS 7.3.0 also supports the following open-source deep-
 We provide two demo applications that include a deep-learning model implemented on the TVM/Neo-AI-DLR runtime library.
 
 ## Setting Up Robotics SDK Docker Container Environment on J7 Target
-<a href="https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/ros_perception/j7ros_docker_readme_00_03_00.pdf" download>Click to Download \"j7ros_docker_readme.pdf\"</a>
+<a href="https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/ros_perception/j7ros_docker_readme_00_03_00.pdf" download>Click to Download "j7ros_docker_readme.pdf"</a>
 
-<!-- For debugging (Caution: there is formatting issue): [docker/README.md](docker/README.md) -->
+For debugging: [docker/README.md](docker/README.md) (Caution: there are issues in rendering markdown files)
 
 ## Demo Applications
 
@@ -57,14 +57,17 @@ We provide two demo applications that include a deep-learning model implemented on the TVM/Neo-AI-DLR runtime library.
 
 ### [3D Obstacle Detection Accelerated on SDE and C7x/MMA](nodes/ti_estop/README.md)
 
+### [USB Stereo Camera Capture Node for ZED Cameras](drivers/zed_capture/README.md)
+## Change Log
+See [CHANGELOG.md](CHANGELOG.md)
 ## Limitations and Known Issues
 
 1. RViz visualization is displayed on a remote Ubuntu PC. Display from inside a Docker container on the J7 target is not enabled or tested.
-2. Ctrl+C termination of a ROS node or a ROS launch session can sometimes be slow. When VX_ERROR happens, it is recommended to reboot the J7 EVM.
+2. Ctrl+C termination of a ROS node or a ROS launch session can sometimes be slow.
 3. Stereo Vision Demo
     * Output disparity map may have artifacts that are common to block-based stereo algorithms, e.g., noise in the sky, texture-less areas, repeated patterns, etc.
     * While the confidence map from SDE has 8 values from 0 (least confident) to 7 (most confident), the confidence map from the multi-layer SDE refinement has only 2 values, 0 and 7. Therefore, it would not appear as fine as the SDE's confidence map.
-4. The semantic segmentation model used in `ti_semseg_cnn` and `ti_estop` nodes was trained with the Cityscapes dataset first, and re-trained with a small dataset collected from a particular stereo camera (ZED camera, HD mode) for limited scenarios with coarse annotation. Therefore, the model can show limited accuracy if a different camera model is used and/or when it is applied in different environmental scenes.
+4. The semantic segmentation model used in `ti_semseg_cnn` and `ti_estop` nodes was trained with the Cityscapes dataset first, and re-trained with a small dataset collected from a particular stereo camera (ZED camera, HD mode) for limited scenarios with coarse annotation. Therefore, the model can show limited accuracy if a different camera model is used and/or when it is applied in different environment scenes.
 
 ## Questions & Feedback
 
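An aside on the confidence-map note above: the `sde_confidence_threshold` parameter in the node READMEs further down uses these 0 (least confident) to 7 (most confident) values to invalidate disparities. A minimal sketch of that gating, with the flat buffer layout and the zero-as-invalid convention as assumptions rather than the SDK's actual code:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch only: gate a disparity map with an SDE-style confidence map
// whose values run from 0 (least confident) to 7 (most confident).
void maskLowConfidence(std::vector<std::uint16_t>& disparity,
                       const std::vector<std::uint8_t>& confidence,
                       std::uint8_t threshold /* 0..7 */)
{
    for (std::size_t i = 0; i < disparity.size(); ++i)
        if (confidence[i] < threshold)
            disparity[i] = 0;  // invalidate, as the READMEs describe
}
```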
diff --git a/drivers/zed_capture/CMakeLists.txt b/drivers/zed_capture/CMakeLists.txt
index 635fcd4..aeb5833 100644
--- a/drivers/zed_capture/CMakeLists.txt
+++ b/drivers/zed_capture/CMakeLists.txt
@@ -24,14 +24,6 @@ catkin_package(
   CATKIN_DEPENDS
 )
 
-## nodelet
-add_library(zed_capture_nodelet
-    src/zed_capture_node.cpp
-    src/usb_stereo_camera.cpp
-    src/zed_capture_nodelet.cpp
-)
-target_link_libraries(zed_capture_nodelet ${catkin_LIBRARIES} ${OpenCV_LIBS})
-
 # node
 add_executable(zed_capture
     src/zed_capture_node.cpp
diff --git a/drivers/zed_capture/README.md b/drivers/zed_capture/README.md
index 09ad5a8..26af13b 100644
--- a/drivers/zed_capture/README.md
+++ b/drivers/zed_capture/README.md
@@ -19,7 +19,6 @@ ZED stereo camera ROS node based on OpenCV VideoCapture API for publishing left
 
    Update the ZED camera SN string, `zed_sn_str`, in `<zed_capture>/launch/zed_capture.launch`
 
-
 2. Generate `camera_info` YAML files and undistortion & rectification LUT files (already done for `SN29788442` and `SN5867575`)
 
    Run the following script:
@@ -40,7 +39,7 @@ ZED stereo camera ROS node based on OpenCV VideoCapture API for publishing left
 
    ```
    cd $CATKIN_WS
-   catkin_make -j1
+   catkin_make
    ```
 
 4. Launch the ZED camera node
@@ -74,8 +73,10 @@ $ roslaunch zed_capture zed_capture.launch
 ```
 4. On the second terminal, to capture into ROS bag files, run one of the two examples below
 ```
-$ roslaunch zed_capture recordbag.launch        # collect 15 seconds of data and stop itself
-$ roslaunch zed_capture recordbag_split.launch  # save into a series of bag files, each keeping 15 seconds of data, until terminated with Ctrl+C
+# Collect 15 seconds of data and stop itself
+$ roslaunch zed_capture recordbag.launch
+# Save into a series of bag files, each keeping 15 seconds of data, until terminated with Ctrl+C
+$ roslaunch zed_capture recordbag_split.launch
 ```
 5. (Optional) To check the ROS topics, on a 3rd terminal,
 ```
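The README above describes this node as based on the OpenCV VideoCapture API. A minimal sketch of that capture pattern, assuming a side-by-side stereo frame; the device index and resolution are illustrative, not taken from the driver source:

```cpp
#include <cstdio>
#include <opencv2/opencv.hpp>

int main()
{
    // Device index and resolution are illustrative assumptions; the real
    // node reads its settings from the launch/YAML configuration.
    cv::VideoCapture cap(0);
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 2560);   // 2 x 1280: HD side-by-side
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);

    cv::Mat frame;
    for (int i = 0; i < 100 && cap.read(frame); ++i)
    {
        // A USB ZED delivers one side-by-side frame; split it into halves.
        cv::Mat left  = frame(cv::Rect(0, 0, frame.cols / 2, frame.rows));
        cv::Mat right = frame(cv::Rect(frame.cols / 2, 0, frame.cols / 2, frame.rows));
        std::printf("frame %d: left %dx%d, right %dx%d\n",
                    i, left.cols, left.rows, right.cols, right.rows);
        // ... a real node would publish these via cv_bridge/image_transport ...
    }
    return 0;
}
```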
diff --git a/drivers/zed_capture/include/nodeletT.h b/drivers/zed_capture/include/nodeletT.h
deleted file mode 100644
index 8d46565..0000000
--- a/drivers/zed_capture/include/nodeletT.h
+++ /dev/null
@@ -1,88 +0,0 @@
-/*
- *
- * Copyright (c) 2021 Texas Instruments Incorporated
- *
- * All rights reserved not granted herein.
- *
- * Limited License.
- *
- * Texas Instruments Incorporated grants a world-wide, royalty-free, non-exclusive
- * license under copyrights and patents it now or hereafter owns or controls to make,
- * have made, use, import, offer to sell and sell ("Utilize") this software subject to the
- * terms herein. With respect to the foregoing patent license, such license is granted
- * solely to the extent that any such patent is necessary to Utilize the software alone.
- * The patent license shall not apply to any combinations which include this software,
- * other than combinations with devices manufactured by or for TI ("TI Devices").
- * No hardware patent is licensed hereunder.
- *
- * Redistributions must preserve existing copyright notices and reproduce this license
- * (including the above copyright notice and the disclaimer and (if applicable) source
- * code license limitations below) in the documentation and/or other materials provided
- * with the distribution
- *
- * Redistribution and use in binary form, without modification, are permitted provided
- * that the following conditions are met:
- *
- * *  No reverse engineering, decompilation, or disassembly of this software is
- *    permitted with respect to any software provided in binary form.
- *
- * *  any redistribution and use are licensed by TI for use only with TI Devices.
- *
- * *  Nothing shall obligate TI to provide you with source code for the software
- *    licensed and provided to you in object code.
- *
- * If software source code is provided to you, modification and redistribution of the
- * source code are permitted provided that the following conditions are met:
- *
- * *  any redistribution and use of the source code, including any resulting derivative
- *    works, are licensed by TI for use only with TI Devices.
- *
- * *  any redistribution and use of any object code compiled from the source code
- *    and any resulting derivative works, are licensed by TI for use only with TI Devices.
- *
- * Neither the name of Texas Instruments Incorporated nor the names of its suppliers
- *
- * may be used to endorse or promote products derived from this software without
- * specific prior written permission.
- *
- * DISCLAIMER.
- *
- * THIS SOFTWARE IS PROVIDED BY TI AND TI'S LICENSORS "AS IS" AND ANY EXPRESS
- * OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
- * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
- * IN NO EVENT SHALL TI AND TI'S LICENSORS BE LIABLE FOR ANY DIRECT, INDIRECT,
- * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
- * BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
- * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
- * OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- */
-
-#if !defined(_NODELET_TEMPLATE_H_)
-#define _NODELET_TEMPLATE_H_
-
-#include <memory>
-#include <nodelet/nodelet.h>
-
-namespace template_nodelet_ns
-{
-template <typename T>
-class NodeletT: public nodelet::Nodelet
-{
-    public:
-        virtual void onInit()
-        {
-            m_obj =
-                std::make_unique<T>(getNodeHandle(), getPrivateNodeHandle());
-        }
-
-    private:
-        std::unique_ptr<T> m_obj;
-
-};
-
-}
-
-#endif
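For context on the deletion above: a template like `NodeletT` is typically instantiated for a concrete node class and exported with pluginlib, matching the `<nodelet plugin=...>` export removed from package.xml below. A hedged sketch, with the wrapped class name and header purely hypothetical:

```cpp
// Hypothetical usage of the removed NodeletT template; names below are
// illustrative, not from the repository.
#include <nodelet/nodelet.h>
#include <pluginlib/class_list_macros.h>

#include "nodeletT.h"
#include "usb_stereo_camera.h"  // hypothetical: defines a class with a
                                // (ros::NodeHandle, ros::NodeHandle) constructor

// Instantiate the template for a concrete node class...
class ZedCaptureNodelet
    : public template_nodelet_ns::NodeletT<UsbStereoCamera> {};

// ...and register it so the nodelet loader can discover it through the
// plugin XML that package.xml used to export.
PLUGINLIB_EXPORT_CLASS(ZedCaptureNodelet, nodelet::Nodelet)
```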
diff --git a/drivers/zed_capture/package.xml b/drivers/zed_capture/package.xml
index fb971d3..9114c44 100644
--- a/drivers/zed_capture/package.xml
+++ b/drivers/zed_capture/package.xml
@@ -14,22 +14,16 @@
   <build_depend>cv_bridge</build_depend>
   <build_depend>image_transport</build_depend>
   <build_depend>camera_info_manager</build_depend>
-  <build_depend>nodelet</build_depend>
 
   <build_export_depend>roscpp</build_export_depend>
   <build_export_depend>cv_bridge</build_export_depend>
   <build_export_depend>image_transport</build_export_depend>
   <build_export_depend>camera_info_manager</build_export_depend>
-  <build_export_depend>nodelet</build_export_depend>
 
   <exec_depend>cmake_modules</exec_depend>
   <exec_depend>roscpp</exec_depend>
   <exec_depend>cv_bridge</exec_depend>
   <exec_depend>image_transport</exec_depend>
   <exec_depend>camera_info_manager</exec_depend>
-  <exec_depend>nodelet</exec_depend>
 
-  <export>
-    <nodelet plugin="${prefix}/zed_capture_nodelet.xml" />
-  </export>
 </package>
diff --git a/nodes/ti_estop/README.md b/nodes/ti_estop/README.md
index 96f3589..14b016b 100644
--- a/nodes/ti_estop/README.md
+++ b/nodes/ti_estop/README.md
@@ -46,79 +46,82 @@ roslaunch ti_estop rviz_ogmap.launch
 ```
 
 ## Launch File Parameters
-`estop.launch` file specifies the followings:
-* YAML file that includes algorithm configuration parameters. For the descriptions of important parameters, refer to "`rosparam` Parameters" section below. For the descriptions of all parameters, please see `config/params.yaml`
-* Left input topic name to read left images from a stereo camera.
-* Right input topic name to read right images from a stereo camera.
-* Right camera parameter topic name to read width, height, distortion centers and focal length
-* Output semantic segmentation map tensor topic to publish the output tensors from a semantic segmentation network.
-* Output rectified right image topic name to publish rectified right images.
-* Output bounding box topic name to publish the 3D bounding boxes coordinates of detected obstacles.
-* Output disparity topic name to publish raw disparity maps.
-* Output occupancy grid topic name to publish ego-centric occupancy grid map.
-* Output emergency stop topic name to publish emergency stop flag when obstacles are too close to a robot. When this flag is true, a robot is forced to stop moving.
 
-
-## `rosparam` Parameters
+Parameter                | Description | Value
+-------------------------|-------------|-------------------
+rosparam file            | Algorithm configuration parameters (see "ROSPARAM Parameters" section) | config/params.yaml
+left_input_topic_name    | Left input topic name to read left images from a stereo camera | camera/left/image_raw
+right_input_topic_name   | Right input topic name to read right images from a stereo camera | camera/right/image_raw
+camera_info_topic        | Right camera_info topic name to read relevant camera parameters | camera/right/camera_info
+semseg_cnn_out_image     | Publish topic name for the semantic segmentation output image | semseg_cnn/out_image
+semseg_cnn_tensor_topic  | Publish topic name for the semantic segmentation tensor | semseg_cnn/tensor
+rectified_image_topic    | Publish topic name for the rectified right image | camera/right/image_rect_mono
+bounding_box_topic       | Publish topic name for the 3D bounding-box coordinates of detected obstacles | detection3D/3dBB
+raw_disparity_topic_name | Publish topic name for the raw disparity map | camera/disparity/raw
+ogmap_topic_name         | Publish topic name for the ego-centric occupancy grid map | detection3D/ogmap
+estop_topic_name         | Publish topic name for the binary emergency stop message, indicating whether any obstacle is in proximity to the robot | detection3D/estop
+ssmap_output_rgb         | Flag to indicate if the output semantic segmentation map is published in RGB format (YUV420 if false) | true, false
+
+## ROSPARAM Parameters
 
 ### Basic input, LDC and SDE Parameters
 
 Parameter                | Description                                                    | Value
 -------------------------|----------------------------------------------------------------|----------
 left_lut_file_path       | LDC rectification table path for left image                    | String
 right_lut_file_path      | LDC rectification table path for right image                   | String
 input_format             | Input image format, 0: U8, 1: YUV422                           | 0, 1
 sde_algo_type            | SDE algorithm type, 0: single-layer SDE, 1: multi-layer SDE    | 0, 1
 num_layers               | Number of layers in multi-layer SDE                            | 2, 3
 sde_confidence_threshold | Disparity with confidence less than this value is invalidated  | 0 ~ 7
 disparity_min            | Minimum disparity to search, 0: 0, 1: -3                       | 0, 1
 disparity_max            | Maximum disparity to search, 0: 63, 1: 127, 2: 191             | 0 ~ 2
 
 ### Camera Parameters
 
 Parameter     | Description                   | Value
 --------------|-------------------------------|----------
 camera_height | Camera mounting height        | Float32
 camera_pitch  | Camera pitch angle in radians | Float32
 
 ### Occupancy Grid Map Parameters
 
 Parameter   | Description                                                         | Value
 ------------|---------------------------------------------------------------------|----------
 grid_x_size | Horizontal width of a grid of an OG map in millimeters             | Integer
 grid_y_size | Vertical length of a grid of an OG map in millimeters              | Integer
 min_x_range | Minimum horizontal range in millimeters to be covered by an OG map | Integer
 max_x_range | Maximum horizontal range in millimeters to be covered by an OG map | Integer
 min_y_range | Minimum vertical range in millimeters to be covered by an OG map   | Integer
 max_y_range | Maximum vertical range in millimeters to be covered by an OG map   | Integer
 
 The number of grids in one row is defined by (max_x_range - min_x_range) / grid_x_size. Likewise, the number of grids in one column is defined by (max_y_range - min_y_range) / grid_y_size.
 
 ### Obstacle Detection Parameters
-
 Parameter                     | Description                                                           | Value
 ------------------------------|-----------------------------------------------------------------------|----------
 min_pixel_count_grid          | Minimum number of pixels for a grid to be occupied                    | Integer
 min_pixel_count_object        | Minimum number of pixels for connected grids to be an object          | Integer
 max_object_to_detect          | Maximum number of objects to detect in a frame                        | Integer
 num_neighbor_grid             | Number of neighboring grids to check for connected-component analysis | 8, 24
 enable_spatial_obj_merge      | Flag to enable merging of spatially close objects                     | 0, 1
 enable_temporal_obj_merge     | Flag to enable use of temporal information                            | 0, 1
 enable_temporal_obj_smoothing | Flag to enable use of a corresponding object in a previous frame to compute an object position | 0, 1
 object_distance_mode          | Method to compute distance between objects (0: distance between centers, 1: distance between corners) | 0, 1
 
 ### e-Stop Parameters
 
 Parameter          | Description                                                                                       | Value
 -------------------|---------------------------------------------------------------------------------------------------|----------
 min_estop_distance | Minimum distance of e-Stop area. Should be 0                                                      | 0
 max_estop_distance | Maximum distance of e-Stop area in millimeters                                                    | Integer
 min_estop_width    | Width of e-Stop area in millimeters at min_estop_distance                                         | Integer
 max_estop_width    | Width of e-Stop area in millimeters at max_estop_distance                                         | Integer
 min_free_frame_run | Minimum number of consecutive frames without any obstacle in e-Stop area to be determined free    | Integer
 min_obs_frame_run  | Minimum number of consecutive frames with any obstacle in e-Stop area to be determined infringed  | Integer
 
-e-Stop area forms a trapezoid defined by first four values. When obstacles are detected in e-Stop area, a robot is forced to stop.
+The e-Stop area forms a trapezoid defined by the first four parameters. When obstacles are detected in the e-Stop area, the `detection3D/estop` topic is set to `1`, so that the robot can be forced to stop.
 
 
 ## Camera Setup
@@ -127,4 +130,4 @@ e-Stop area forms a trapezoid defined by first four values. When obstacles are d
 To create LDC-format LUT for ZED camera, please refer to [zed_capture/README.md](../../drivers/zed_capture/README.md).
 
 ### Camera Mounting
-For accurate obstacle detection, it is important to mount the camera properly and to provide correct values of `camera_height` and `camera_pitch`. For example, incorrect values of camera pitch angle result in 3D object boxes being overlaid in front of or behind obstacles on images. It is recommended to install the stereo camera parallel to the ground plane or slightly tilted downward, e.g., 0° ~ 10°. In general, the camera pitch angle should be close to 0 when the camera's height is low, while the camera pitch angle can be larger to some extent when the camera is mounted higher.
+For accurate obstacle detection, it is important to mount the camera properly and to provide correct values of `camera_height` and `camera_pitch`. For example, incorrect values of camera pitch angle result in 3D object boxes being overlaid in front of or behind obstacles on images. It is recommended to install the stereo camera parallel to the ground plane or slightly tilted downward, e.g., between 0° and 10°. In general, the camera pitch angle should be close to 0 when the camera's height is low, while the camera pitch angle can be larger to some extent when the camera is mounted higher.
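The occupancy-grid arithmetic and the e-Stop trapezoid described above can be checked with a short sketch. The parameter values below are illustrative, and the linear width interpolation is an assumption about the geometry, not code from the node:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // Illustrative values only; the real numbers live in config/params.yaml.
    const int min_x_range = -10000, max_x_range = 10000, grid_x_size = 200; // mm
    const int min_y_range = 0,      max_y_range = 10000, grid_y_size = 200; // mm

    // Grid counts exactly as the README defines them.
    const int grids_per_row = (max_x_range - min_x_range) / grid_x_size; // 100
    const int grids_per_col = (max_y_range - min_y_range) / grid_y_size; // 50
    std::printf("OG map: %d x %d grids\n", grids_per_row, grids_per_col);

    // e-Stop area: a trapezoid from width min_estop_width at
    // min_estop_distance to width max_estop_width at max_estop_distance.
    const double min_estop_distance = 0.0,    max_estop_distance = 2000.0; // mm
    const double min_estop_width    = 1000.0, max_estop_width    = 1500.0; // mm

    // One plausible membership test; the node's exact geometry may differ.
    auto inEstopArea = [&](double x, double y) {  // x lateral, y forward, mm
        if (y < min_estop_distance || y > max_estop_distance) return false;
        const double w = min_estop_width + (max_estop_width - min_estop_width)
            * (y - min_estop_distance) / (max_estop_distance - min_estop_distance);
        return std::fabs(x) <= 0.5 * w;
    };
    std::printf("point (300, 1000) in e-Stop area: %d\n", inEstopArea(300, 1000));
    return 0;
}
```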
diff --git a/nodes/ti_estop/launch/estop.launch b/nodes/ti_estop/launch/estop.launch
index ada70dd..da7f9d0 100644
--- a/nodes/ti_estop/launch/estop.launch
+++ b/nodes/ti_estop/launch/estop.launch
@@ -14,10 +14,10 @@
     <!-- Right camera parameter topic name to subscribe to -->
     <param name = "camera_info_topic" value = "camera/right/camera_info"/>
 
-    <!-- Output topic name to publish to -->
+    <!-- Output topic name for semantic segmentation output image -->
     <param name = "semseg_cnn_out_image" value = "semseg_cnn/out_image"/>
 
-    <!-- Output topic name to publish to -->
+    <!-- Output topic name for semantic segmentation output tensor -->
     <param name = "semseg_cnn_tensor_topic" value = "semseg_cnn/tensor"/>
 
     <!-- Output rectified image topic name to publish to -->
@@ -35,7 +35,7 @@
     <!-- Output EStop topic name to publish to -->
     <param name = "estop_topic_name" value = "detection3D/estop"/>
 
-    <!-- Flag to indcate if the output should be published in RGB format -->
+    <!-- Flag to indicate if the output should be published in RGB format -->
     <param name = "ssmap_output_rgb" value = "true"/>
   </node>
 
diff --git a/nodes/ti_sde/README.md b/nodes/ti_sde/README.md
index 015a0c5..86118d3 100644
--- a/nodes/ti_sde/README.md
+++ b/nodes/ti_sde/README.md
@@ -50,40 +50,43 @@ roslaunch ti_sde zed_sde_pcl.launch
 roslaunch ti_sde rviz_pcl.launch
 ```
 ## Launch File Parameters
-`sde.launch` file specifies the followings:
-* YAML file that includes algorithm configuration parameters. For the descriptions of important parameters, refer to "`rosparam` Parameters" section below. For the descriptions of all parameters, please see `config/params.yaml`.
-* Left input topic name to read left images from a stereo camera.
-* Right input topic name to read right images from a stereo camera.
-* Right camera parameter topic name to read width, height, distortion centers and focal length
-* Output disparity topic name to publish raw disparity maps.
-* Output point cloud topic name to publish point cloud data.
 
-## `rosparam` Parameters
+Parameter         | Description | Value
+------------------|-------------|-------------------
+rosparam file     | Algorithm configuration parameters (see "ROSPARAM Parameters" section) | config/params.yaml
+enable_pc         | Enable point cloud; overrides the setting in `config/params.yaml` | 0, 1
+left_input_topic  | Subscribe topic name for the left camera image | camera/left/image_raw
+right_input_topic | Subscribe topic name for the right camera image | camera/right/image_raw
+camera_info_topic | Subscribe topic name for the right camera info | camera/right/camera_info
+disparity_topic   | Publish topic name for the raw disparity | camera/disparity/raw
+point_cloud_topic | Publish topic name for the point cloud | point_cloud
+
+## ROSPARAM Parameters
 
 ### Basic input, LDC and SDE Parameters
 Parameter                | Description                                                  | Value
 -------------------------|--------------------------------------------------------------|----------
 left_lut_file_path       | LDC rectification table path for left image                  | String
 right_lut_file_path      | LDC rectification table path for right image                 | String
 input_format             | Input image format, 0: U8, 1: YUV422                         | 0, 1
 sde_algo_type            | SDE algorithm type, 0: single-layer SDE, 1: multi-layer SDE  | 0, 1
 num_layers               | Number of layers in multi-layer SDE                          | 2, 3
 disparity_min            | Minimum disparity to search, 0: 0, 1: -3                     | 0, 1
 disparity_max            | Maximum disparity to search, 0: 63, 1: 127, 2: 191           | 0, 1, 2
 stereo_baseline          | Stereo camera baseline in meters                             | Float32
 
 ### Point Cloud Parameters
 Parameter                | Description                                                    | Value
 -------------------------|----------------------------------------------------------------|----------
 enable_pc                | Flag to enable/disable point cloud creation                    | 0, 1
 use_pc_config            | Flag to use the following point cloud configurations           | 0, 1
 sde_confidence_threshold | Disparity with confidence less than this value is invalidated  | Integer, [0, 7]
 point_low_x              | Min X position of a point to be rendered                       | Float32
 point_high_x             | Max X position of a point to be rendered                       | Float32
 point_low_y              | Min Y position of a point to be rendered                       | Float32
 point_high_y             | Max Y position of a point to be rendered                       | Float32
 point_low_z              | Min Z position of a point to be rendered                       | Float32
 point_high_z             | Max Z position of a point to be rendered                       | Float32
 
 ## Processing Blocks
 
@@ -163,7 +166,7 @@ The triangulation process produces point cloud from the raw disparity map, which
     <figcaption> <center>Figure 2. Triangulation process </center></figcaption>
 </figure>
 
-The color conversion block converts the format of the rectified right image to RGB, and the RGB image goes to the triangulation block. The triangulation block takes this RGB image and raw disparity map as inputs to produce the point cloud in the (X,Y,Z,R,G,B) format.
+The color conversion block converts the format of the rectified right image to RGB, and the RGB image goes to the triangulation block. The triangulation block takes this RGB image and raw disparity map as inputs to produce the point cloud in the `(X, Y, Z, R, G, B)` format.
 
 Every disparity value whose confidence is larger than or equal to `sde_confidence_threshold` is mapped to a 3D position with the corresponding color information. For a pixel at `(x, y)` on the image, let's say `d` is its disparity, `b` is the baseline, and `(dcx, dcy)` is the distortion center. Then, its 3D position `(X, Y, Z)` is computed as follows:
 ```
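The hunk above ends at the opening fence of the formula block, so the equations themselves are not shown. For reference, standard pinhole-stereo triangulation consistent with those definitions looks as follows; the focal length `f` (in pixels) is an assumed additional input, and the node's exact computation may differ:

```cpp
#include <cstdio>

struct Point3 { float X, Y, Z; };

// Sketch of the triangulation step the README describes: d is the
// disparity, b the stereo baseline, (dcx, dcy) the distortion center,
// and f (focal length in pixels) is an assumption here.
Point3 triangulate(float x, float y, float d, float b, float f,
                   float dcx, float dcy)
{
    const float Z = f * b / d;          // depth from disparity
    const float X = (x - dcx) * b / d;  // equals (x - dcx) * Z / f
    const float Y = (y - dcy) * b / d;  // equals (y - dcy) * Z / f
    return {X, Y, Z};
}

int main()
{
    // Example with ZED-HD-like numbers (illustrative only).
    const Point3 p = triangulate(800.0f, 400.0f, /*d=*/32.0f,
                                 /*b=*/0.12f, /*f=*/700.0f,
                                 /*dcx=*/640.0f, /*dcy=*/360.0f);
    std::printf("X=%.3f Y=%.3f Z=%.3f (meters)\n", p.X, p.Y, p.Z);
    return 0;
}
```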
diff --git a/nodes/ti_sde/launch/sde.launch b/nodes/ti_sde/launch/sde.launch
index e104f7b..97214d8 100644
--- a/nodes/ti_sde/launch/sde.launch
+++ b/nodes/ti_sde/launch/sde.launch
@@ -10,19 +10,19 @@
     enable_pc: $(arg enable_pc)
   </rosparam>
 
-    <!-- Left input topic name to subscribe to -->
+    <!-- Left input topic name to subscribe -->
     <param name = "left_input_topic" value = "camera/left/image_raw"/>
 
-    <!-- Right input topic name to subscribe to -->
+    <!-- Right input topic name to subscribe -->
     <param name = "right_input_topic" value = "camera/right/image_raw"/>
 
-    <!-- Right camera parameter topic name to subscribe to -->
+    <!-- Right camera parameter topic name to subscribe -->
     <param name = "camera_info_topic" value = "camera/right/camera_info"/>
 
-    <!-- Ouput raw dispairty topic to publish to -->
+    <!-- Output raw disparity topic to publish -->
     <param name = "disparity_topic" value = "camera/disparity/raw"/>
 
-    <!-- Ouput point cloud topic to publish to -->
+    <!-- Output point cloud topic to publish -->
     <param name = "point_cloud_topic" value = "point_cloud"/>
   </node>
 
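As a quick usage sketch for the topics wired up in this launch file, a minimal listener on the point-cloud output; the topic name comes from the launch file above, while the node name and queue size are arbitrary:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

// Print the dimensions of each cloud published on point_cloud_topic.
void onCloud(const sensor_msgs::PointCloud2::ConstPtr& msg)
{
    ROS_INFO("point cloud: %u x %u points", msg->width, msg->height);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "sde_pcl_listener");  // node name is arbitrary
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("point_cloud", 1, onCloud);
    ros::spin();
    return 0;
}
```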
diff --git a/nodes/ti_semseg_cnn/README.md b/nodes/ti_semseg_cnn/README.md
index 6d82b73..dc655d1 100644
--- a/nodes/ti_semseg_cnn/README.md
+++ b/nodes/ti_semseg_cnn/README.md
@@ -45,16 +45,18 @@ roslaunch ti_semseg_cnn bag_semseg_cnn.launch ratefactor:=2.0
 ```
 
 ## Launch File Parameters
-`semseg_cnn.launch` file specifies the followings:
 
-* YAML file that includes algorithm configuration parameters. For the descriptions of important parameters, refer to Parameter section below. For the description of all parameters, please see a yaml file.
-* Input topic name to read input images.
-* Output undistorted or rectified image topic name.
-* Output semantic segmentation image topic name when an color-coded semantic segmentation map is published.
-* Flag that indicates the color-coded semantic segmentation map is published in RGB format. If this flag is false, it is published in YUV420 format.
-* Output semantic segmentation tensor topic name when the output tensor is published.
-
-## `rosparam` Parameters
+Parameter               | Description | Value
+------------------------|-------------|-------------------
+rosparam file           | Algorithm configuration parameters (see "ROSPARAM Parameters" section) | config/params.yaml
+input_topic_name        | Subscribe topic name for the input camera image | camera/right/image_raw
+rectified_image_topic   | Publish topic name for the output rectified image | camera/right/image_rect_mono
+semseg_cnn_out_image    | Publish topic name for the semantic segmentation output image | semseg_cnn/out_image
+output_rgb              | Flag indicating whether the color-coded semantic segmentation map is published in RGB format (YUV420 if false) | true, false
+semseg_cnn_tensor_topic | Publish topic name for the output semantic segmentation tensor | semseg_cnn/tensor
+
+## ROSPARAM Parameters
 The table below describes the parameters in `config/params.yaml`:
 
 
diff --git a/nodes/ti_semseg_cnn/launch/semseg_cnn.launch b/nodes/ti_semseg_cnn/launch/semseg_cnn.launch
index 3020098..208c10d 100644
--- a/nodes/ti_semseg_cnn/launch/semseg_cnn.launch
+++ b/nodes/ti_semseg_cnn/launch/semseg_cnn.launch
@@ -3,13 +3,16 @@
   <!-- openVX CNN graph node -->
   <node pkg = "ti_semseg_cnn" type = "semseg_cnn" name = "semseg_cnn" output = "screen" args = "" required = "true">
 
-    <!-- Input topic name to subscribe to -->
+    <!-- Configuration file for the openVX CNN graph -->
+    <rosparam file="$(find ti_semseg_cnn)/config/params.yaml" subst_value="true" />
+
+    <!-- Input topic name to subscribe -->
     <param name = "input_topic_name" value = "camera/right/image_raw"/>
 
-    <!-- Output recitified image topic name to publish to -->
+    <!-- Output rectified image topic name to publish -->
     <param name = "rectified_image_topic" value = "camera/right/image_rect_mono"/>
 
-    <!-- Output semantic segmentation image topic name to publish to -->
+    <!-- Output semantic segmentation image topic name to publish -->
     <param name = "semseg_cnn_out_image" value = "semseg_cnn/out_image"/>
 
     <!-- Flag to indicate if the output should be published in RGB format -->
@@ -18,9 +21,6 @@
     <!-- Output semantic segmentation tensor topic name to publish to -->
     <param name = "semseg_cnn_tensor_topic" value = "semseg_cnn/tensor"/>
 
-    <!-- Configuration file for the openVX CNN graph -->
-    <rosparam file="$(find ti_semseg_cnn)/config/params.yaml" subst_value="true" />
-
   </node>
 
 </launch>
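Finally, parameters set inside the `<node>` element above land in the node's private namespace, so a consumer can read them roughly as below. The parameter names follow the launch file and the README table; the defaults here are arbitrary:

```cpp
#include <ros/ros.h>
#include <string>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "semseg_cnn_param_demo");
    ros::NodeHandle nh("~");  // private namespace, where <param> entries land

    std::string inputTopic, tensorTopic;
    bool outputRgb = false;
    nh.param<std::string>("input_topic_name", inputTopic, "camera/right/image_raw");
    nh.param<std::string>("semseg_cnn_tensor_topic", tensorTopic, "semseg_cnn/tensor");
    nh.param("output_rgb", outputRgb, true);

    ROS_INFO("in=%s tensor=%s rgb=%d",
             inputTopic.c_str(), tensorTopic.c_str(), outputRgb);
    return 0;
}
```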