author:    JuneChul Roh  2020-12-13 22:19:43 -0600
committer: JuneChul Roh  2020-12-13 22:19:43 -0600
commit:    8b0ae04f3e05fe01f2efdaea51dc59e2364c9f0b (patch)
tree:      e819e2f7fd1badd4e72caf51a4b1ff017305e962
parent:    fcdaaf63e1999343d5c3258a06a101660f8ad639 (diff)

Updated .md files for better formatting

 README.md                     | 17
 docker/README.md              | 80
 nodes/ti_sde/README.md        | 13
 nodes/ti_semseg_cnn/README.md |  5
 4 files changed, 55 insertions(+), 60 deletions(-)
diff --git a/README.md b/README.md
index ababb37..18f75ca 100644
--- a/README.md
+++ b/README.md
@@ -4,14 +4,14 @@ TI OpenVX + ROS Framework & Applications
 ### Introduction to TI OpenVX + ROS Development Framework
 
 <figure class="image">
- <center><img src="docker/docs/tiovx_ros_sw_stack.png" style="width:726px;"/></center>
+ <center><img src="docker/docs/tiovx_ros_sw_stack.png" style="width:726px; height:398px;"/></center>
 <figcaption> <center>Figure 1. TI OpenVX + ROS Framework: Software Stack </center></figcaption>
 </figure>
 
 The TI OpenVX + ROS development framework is enabled in a Docker container environment on J7 Processor SDK Linux. We provide detailed steps for setting up a Docker container environment for ROS Melodic together with the TI Vision Apps Library (see next section). The TI OpenVX + ROS development framework allows:
 
-- Optimized software implementation of computation-intensive software blocks (including deep-learning, vision, perception, and ADAS) on deep-learning core (C7x/MMA), DSP cores, hardware accelerators built-in on the Jacinto 7 processor
-- Application softwares can be complied directly on the Jacinto 7 processor in a Docker container using APIs optimized on Jacinto 7 processor along with many open-source libraries and packages including, for example. OpenCV and Point-Cloud Library (PCL).
+* Optimized software implementation of computation-intensive software blocks (including deep learning, vision, perception, and ADAS) on the deep-learning core (C7x/MMA), DSP cores, and hardware accelerators built into the Jacinto 7 processor
+* Application software can be compiled directly on the Jacinto 7 processor in a Docker container, using APIs optimized for the Jacinto 7 processor along with many open-source libraries and packages including, for example, OpenCV and the Point Cloud Library (PCL).
 
 The figure below shows a representative vision application that can be developed in the TI OpenVX + ROS framework.
 
@@ -23,17 +23,16 @@ The figure below shows a representative vision application that can be developed
 ### TI Vision Apps Library
 The TI Vision Apps Library is a set of APIs for the target deployment that are derived from the Jacinto 7 Processor SDK RTOS, which includes:
 
-- TI OpenVX kernels and infrastructure
-- TI deep learning (TIDL) applications
-- Imaging and vision applications
-- Advanced driver-assistance systems (ADAS) applications
-- Perception applications
+* TI OpenVX kernels and infrastructure
+* TI deep learning (TIDL) applications
+* Imaging and vision applications
+* Advanced driver-assistance systems (ADAS) applications
+* Perception applications
 
 The TI Vision Apps Library is included in the pre-built package of [J721E Processor SDK RTOS 7.1.0](https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/index_FDS.html).
 
 ## How to Set Up TI OpenVX + ROS Docker Container Environment on J7 Target
 See [docker/README.md](docker/README.md).
-<!-- or [docker/README.pdf](docker/README.pdf) (in case there is some formatting issue in reading `docker/README.md` on your web browser). -->
 
 ## TI OpenVX + ROS Demo Applications
 
diff --git a/docker/README.md b/docker/README.md
index 4cbc84e..4f02a35 100644
--- a/docker/README.md
+++ b/docker/README.md
@@ -19,8 +19,7 @@ This TI OpenVX + ROS development framework works with
 ### Ubuntu PC
 An Ubuntu (18.04 recommended) PC is required. For RViz visualization of input/output topics published from the J7, it is assumed that ROS (Melodic recommended) is installed on the Ubuntu PC.
 
-Once finding the IP address assigned to J7 EVM (using a serial port communications program, for example, `minicom`), connect to J7 Linux with SSH:
-
+After finding the IP address assigned to the J7 EVM (using a serial-port communication program, for example, `minicom`), connect to the J7 Linux with SSH:<br>
 ```
 ssh root@<J7_IP_address>
 ```
@@ -38,7 +37,8 @@ Figure 1 shows the hardware setup and high-level installation steps on the J7 ta
 
 <!-- ================================================================================= -->
 ## Clone Git Repository
-1. Set up the project directory and the catkin workspace:
+
+1. Set up the project directory and the catkin workspace:<br>
    ```
    WORK_DIR=$HOME/j7ros_home
    CATKIN_WS=$WORK_DIR/catkin_ws
@@ -46,19 +46,19 @@ Figure 1 shows the hardware setup and high-level installation steps on the J7 ta
    mkdir -p $CATKIN_WS/src
    cd $CATKIN_WS/src
    ```
-
-2. Clone the project GIT repository:
+2. Clone the project GIT repository:<br>
    ```
    git clone https://git.ti.com/git/processor-sdk-vision/jacinto_ros_perception.git
    ```
+
 ## Download TIDL Model & ROSBAG File
 
-1. For convenience, set up following soft-links:
+1. For convenience, set up the following soft links:<br>
    ```
    cd $WORK_DIR
    ln -s $CATKIN_WS/src/jacinto_ros_perception/docker/Makefile
    ```
-2. To download data files, run the following in `$WORK_DIR`:
+2. To download data files, run the following in `$WORK_DIR`:<br>
    ```
    make data_download
    ```
@@ -69,13 +69,12 @@ Figure 1 shows the hardware setup and high-level installation steps on the J7 ta
 
 1. Following [this link](https://docs.docker.com/get-started/#test-docker-installation),
 check that Docker and network work correctly on the J7 host Linux.
-
-2. To generate bash scripts for building and running a Docker image for the project:
+2. To generate bash scripts for building and running a Docker image for the project:<br>
    ```
    make scripts
    ```
    Make sure that two bash scripts named `docker_build.sh` and `docker_run.sh` are generated.
-3. To build the Docker image, at `$WORK_DIR` run:
+3. To build the Docker image, at `$WORK_DIR` run:<br>
    ```
    ./docker_build.sh
    ```
@@ -85,90 +84,93 @@ check that Docker and network work correctly on the J7 host Linux.
 
 <!-- ================================================================================= -->
 ## Set Up Remote PC for Visualization
-
 Open another terminal on the Ubuntu PC to set up the environment for RViz visualization.
 
-1. Clone GIT repository:
-   ```sh
+1. Clone the GIT repository:<br>
+   ```
    CATKIN_WS=$HOME/j7ros_home/catkin_ws
    mkdir -p $CATKIN_WS/src
    cd $CATKIN_WS/src
    git clone https://git.ti.com/git/processor-sdk-vision/jacinto_ros_perception.git
    ```
-2. Build ROS nodes:
+2. Build the ROS nodes:<br>
    ```
    cd $CATKIN_WS
    catkin_make
    ```
-
-3. ROS network setting: For convenience, set up a soft-link:
-   ```sh
+3. ROS network setting: For convenience, set up a soft link:<br>
+   ```
    ln -s src/jacinto_ros_perception/setup_env_pc.sh
    ```
 
-   Update the following lines in `setup_env_pc.sh`:
+   Update the following lines in `setup_env_pc.sh`:<br>
    ```
    PC_IP_ADDR=<PC_IP_address>
    J7_IP_ADDR=<J7_IP_address>
    ```
-   `<J7_IP_address>` can be found by running "`make ip_show`" on **J7 terminal**.
+   `<J7_IP_address>` can be found by running `make ip_show` on the **J7 terminal**.
 
-   To set up the PC environment, run the following:
+   To set up the PC environment, run the following:<br>
    ```
    source setup_env_pc.sh
    ```
-After launching ROS nodes on the J7, we can check the all the ROS topics by running "`rostopic list`".
+
+After launching ROS nodes on the J7, we can check all the ROS topics by running `rostopic list`.
 
 <!-- ================================================================================= -->
 ## Build Demo ROS Applications
-1. To run the docker image:
+
+1. To run the Docker image:<br>
    ```
    ./docker_run.sh
    ```
-2. To build ROS applications, inside the Docker container:
-   ```sh
+2. To build the ROS applications, inside the Docker container:<br>
+   ```
    cd $CATKIN_WS
    catkin_make
    source devel/setup.bash
    ```
 
 ## Run Stereo Vision Application
-1. **[J7]** To launch `ti_sde` node with playing back a ROSBAG file, run the following in `$WORK_DIR` on the J7 host Linux:
-   ```sh
+
+1. **[J7]** To launch the `ti_sde` node while playing back a ROSBAG file, run the following in `$WORK_DIR` on the J7 host Linux:<br>
+   ```
    ./docker_run.sh roslaunch ti_sde bag_sde.launch
    ```
-   Alternatively, you can run the following `roslaunch` command **inside** the Docker container:
-   ```sh
+   Alternatively, you can run the following `roslaunch` command **inside** the Docker container:<br>
+   ```
    roslaunch ti_sde bag_sde.launch
    ```
-2. **[Remote PC]** For visualization, on the PC:
+2. **[Remote PC]** For visualization, on the PC:<br>
    ```
    roslaunch ti_sde rviz.launch
    ```
 
 ## Run CNN Semantic Segmentation Application
-1. **[J7]** To launch `ti_semseg_cnn` node with playing back a ROSBAG file, run the following in `$WORK_DIR` on the J7 host Linux:
-   ```sh
+
+1. **[J7]** To launch the `ti_semseg_cnn` node while playing back a ROSBAG file, run the following in `$WORK_DIR` on the J7 host Linux:<br>
+   ```
    ./docker_run.sh roslaunch ti_semseg_cnn bag_semseg_cnn.launch
    ```
-   Alternatively, you can run the following `roslaunch` command **inside** the Docker container:
-   ```sh
+   Alternatively, you can run the following `roslaunch` command **inside** the Docker container:<br>
+   ```
    roslaunch ti_semseg_cnn bag_semseg_cnn.launch
    ```
-2. **[Remote PC]** For visualization, on the PC:
+2. **[Remote PC]** For visualization, on the PC:<br>
    ```
    roslaunch ti_semseg_cnn rviz.launch
    ```
 ## Run Stereo Vision and CNN Semantic Segmentation Together
-1. **[J7]** To launch `ti_sde` and `ti_semseg_cnn` tigether with playing back a ROSBAG file, run the following in `$WORK_DIR` on the J7 host Linux:
-   ```sh
+
+1. **[J7]** To launch `ti_sde` and `ti_semseg_cnn` together while playing back a ROSBAG file, run the following in `$WORK_DIR` on the J7 host Linux:<br>
+   ```
   ./docker_run.sh roslaunch ti_sde bag_sde_semseg.launch
    ```
-   Alternatively, you can run the following `roslaunch` command **inside** the Docker container:
-   ```sh
+   Alternatively, you can run the following `roslaunch` command **inside** the Docker container:<br>
+   ```
    roslaunch ti_sde bag_sde_semseg.launch
    ```
-2. **[Remote PC]** For visualization, on the PC:
+2. **[Remote PC]** For visualization, on the PC:<br>
    ```
    roslaunch ti_sde rviz_sde_semseg.launch
    ```
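As an aside on the ROS network setup this hunk edits: `setup_env_pc.sh` takes `PC_IP_ADDR` and `J7_IP_ADDR` and presumably exports the standard ROS1 network variables so the PC can reach the master running on the J7. A minimal sketch of that idea, assuming the master runs on the J7 at the default port 11311 (the actual script in the repository is authoritative; the addresses below are placeholders):

```shell
# Hypothetical sketch of what a setup_env_pc.sh-style script does.
PC_IP_ADDR=192.168.1.20   # example value; your PC's IP on the shared network
J7_IP_ADDR=192.168.1.32   # example value; from `make ip_show` on the J7

# Point the PC's ROS client at the master on the J7 ...
export ROS_MASTER_URI=http://$J7_IP_ADDR:11311
# ... and advertise the PC's own IP so return traffic (e.g. for RViz) routes back.
export ROS_IP=$PC_IP_ADDR
```

With both variables exported, `rostopic list` on the PC queries the J7's master directly.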
diff --git a/nodes/ti_sde/README.md b/nodes/ti_sde/README.md
index 91402d7..63fdbd2 100644
--- a/nodes/ti_sde/README.md
+++ b/nodes/ti_sde/README.md
@@ -46,6 +46,7 @@ roslaunch ti_sde sde.launch
 It is recommended to launch the bag_sde.launch file if a ROSBAG file needs to be played as well.
 
 `sde.launch` file specifies the following:
+
 * YAML file that includes algorithm configuration parameters. For the descriptions of important parameters, refer to the Parameters section below. For the descriptions of all parameters, please see the YAML file.
 * Left input topic name to read left images from a stereo camera.
 * Right input topic name to read right images from a stereo camera.
@@ -74,15 +75,12 @@ roslaunch ti_viz_nodes viz_disparity.launch
 
 ## LDC (Lens Distortion Correction)
 
-As shown in Figure 1, We use the LDC HWA to rectify left and right images. In order to use LDC, the rectification tables should be provided in the format that LDC support. It is a two-step process to create the rectification table in the LDC format.
-
-1. Generation of raw rectification table
+As shown in Figure 1, we use the LDC HWA to rectify the left and right images. In order to use LDC, the rectification tables should be provided in the format that LDC supports. Creating the rectification table in the LDC format is a two-step process.
 
+1. Generation of the raw rectification table<br>
    A raw look-up table has `width x height x 2` entries in it, where width and height are the horizontal and vertical sizes of an image, to specify the horizontal and vertical pixel position in a source image that every pixel in a target image maps to. It may consist of two look-up tables of `width x height` for horizontal position and vertical position, respectively. A target image (i.e. rectified image) is created by fetching, for every pixel, the pixel in a source image (i.e. unrectified image) specified by the raw look-up table. For example, the OpenCV stereo rectification function generates such a raw rectification table for given camera parameters.
-
-2. Convention of raw rectification table to LDC format
-
-   A raw rectification table is converted to the LDC format by the following pseudo code.
+2. Conversion of the raw rectification table to the LDC format<br>
+   A raw rectification table is converted to the LDC format by the following pseudo code.<br>
 
    ```
    // mapX is a raw LUT for horizontal pixel position in Q3 format. Its size is width x height
@@ -128,7 +126,6 @@ As shown in Figure 1, We use the LDC HWA to rectify left and right images. In or
    }
    ```
 
-
 ## SDE (Stereo Depth Engine)
 
 When `sde_algo_type = 0` in params.yaml, the output disparity map is simply the disparity map generated by the SDE HWA without any post processing.
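The two-step LDC table flow this hunk describes (a full-resolution raw remap table, then conversion into a fixed-point table) can be sketched in NumPy. This is an illustrative sketch only, not TI's actual LDC layout: it assumes the converted table stores subsampled `(dx, dy)` offsets in Q3 fixed point (3 fractional bits, int16), which is one plausible reading of the pseudo code; the function name and subsampling factor are hypothetical:

```python
import numpy as np

def lut_to_ldc_mesh(map_x, map_y, subsample_shift=2):
    """Illustrative conversion of full-resolution rectification LUTs
    (float source-pixel positions, as produced by e.g. OpenCV's
    initUndistortRectifyMap) into a subsampled, interleaved Q3 offset mesh.

    Assumptions (hypothetical, not the exact LDC format):
      - the mesh stores (dx, dy) offsets from the target pixel position,
      - entries are subsampled by 2**subsample_shift in each direction,
      - Q3 means 3 fractional bits, stored as int16.
    """
    h, w = map_x.shape
    step = 1 << subsample_shift
    xs = np.arange(0, w, step)
    ys = np.arange(0, h, step)
    gx, gy = np.meshgrid(xs, ys)                 # subsampled target pixel grid
    dx = map_x[::step, ::step] - gx              # horizontal offsets
    dy = map_y[::step, ::step] - gy              # vertical offsets
    q3 = lambda v: np.round(v * 8.0).astype(np.int16)  # Q3: scale by 2**3
    return np.stack([q3(dx), q3(dy)], axis=-1)   # (H/step, W/step, 2) mesh
```

An identity remap (every target pixel maps to itself) yields an all-zero offset mesh, which is a handy sanity check for either the real converter or this sketch.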
diff --git a/nodes/ti_semseg_cnn/README.md b/nodes/ti_semseg_cnn/README.md
index 28069d3..df151c2 100644
--- a/nodes/ti_semseg_cnn/README.md
+++ b/nodes/ti_semseg_cnn/README.md
@@ -47,6 +47,7 @@ roslaunch ti_semseg_cnn semseg_cnn.launch
 It is recommended to launch the bag_semseg_cnn.launch file if a ROSBAG file needs to be played as well.
 
 `semseg_cnn.launch` file specifies the following:
+
 * YAML file that includes algorithm configuration parameters. For the descriptions of important parameters, refer to the Parameters section below. For the description of all parameters, please see the YAML file.
 * Input topic name to read input images.
 * Output undistorted or rectified image topic name.
@@ -82,13 +83,9 @@ roslaunch ti_viz_nodes viz_semseg.launch
 Please refer to Figure 1 for the following descriptions of the processing blocks implemented for this application.
 
 1. When input images are distorted or unrectified, they are undistorted or rectified by the J7 LDC (Lens Distortion Correction) HWA. Pseudo code to create LDC tables for rectification is described [here](../ti_sde/README.md). Note that the LDC HWA not only removes lens distortion or rectifies, but also changes the image format: the input image to the application is in YUV422 (UYVY) format, and the YUV422 input is converted to YUV420 (NV12) by LDC.
-
 2. Input images are resized to a smaller resolution, which is specified by `dl_width` and `dl_height` in `params.yaml`, for the TIDL semantic segmentation network. The MSC (Multi-Scaler) HWA is used to resize input images.
-
 3. The pre-processing block, which runs on C6x, converts YUV420 to RGB so that the TIDL semantic segmentation network can read input images.
-
 4. The TIDL semantic segmentation network is accelerated by C7x/MMA and outputs a tensor that has class information for every pixel.
-
 5. The post-processing block, which runs on C6x, creates a color-coded semantic segmentation map image from the output tensor. It can be enabled or disabled by configuring the `enable_post_proc` parameter in `params.yaml`. Only if the post-processing block is enabled is the color-coded semantic segmentation map created and published; its format is YUV420. When `output_rgb` is true in the launch file, it is published in RGB format after conversion. If the post-processing block is disabled, the semantic segmentation output tensor from the TIDL network is published instead.
 
 ## Known Issue
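The post-processing step described in item 5 of this hunk, turning a per-pixel class tensor into a color-coded map, is essentially an argmax followed by a palette lookup. A minimal NumPy sketch of that idea (the palette, class set, and function names here are hypothetical; the actual colors and implementation live in the `ti_semseg_cnn` post-processing code, which also works in YUV420 rather than RGB):

```python
import numpy as np

# Hypothetical 5-class RGB palette for illustration only.
PALETTE = np.array([
    [128,  64, 128],   # e.g. road
    [244,  35, 232],   # e.g. sidewalk
    [ 70,  70,  70],   # e.g. building
    [  0,   0, 142],   # e.g. vehicle
    [220,  20,  60],   # e.g. person
], dtype=np.uint8)

def color_code(class_map):
    """Map an (H, W) array of per-pixel class indices to an (H, W, 3) RGB image."""
    return PALETTE[class_map]

def post_process(logits):
    """logits: (num_classes, H, W) network output -> color-coded RGB map."""
    class_map = np.argmax(logits, axis=0)   # per-pixel winning class index
    return color_code(class_map)
```

The palette lookup is a single fancy-indexing operation, which is why this step is cheap enough to offload to a small DSP core like the C6x.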