From 8b0ae04f3e05fe01f2efdaea51dc59e2364c9f0b Mon Sep 17 00:00:00 2001
From: JuneChul Roh
Date: Sun, 13 Dec 2020 22:19:43 -0600
Subject: Updated .md files for better formatting
---
README.md | 17 +++++----
docker/README.md | 80 ++++++++++++++++++++++---------------------
nodes/ti_sde/README.md | 13 +++----
nodes/ti_semseg_cnn/README.md | 5 +--
4 files changed, 55 insertions(+), 60 deletions(-)
diff --git a/README.md b/README.md
index ababb37..18f75ca 100644
--- a/README.md
+++ b/README.md
@@ -4,14 +4,14 @@ TI OpenVX + ROS Framework & Applications
### Introduction to TI OpenVX + ROS Development Framework
The TI OpenVX + ROS development framework is enabled in a Docker container environment on J7 Processor SDK Linux. We provide detailed steps for setting up a Docker container environment for ROS Melodic together with the TI Vision Apps Library (see next section). The TI OpenVX + ROS development framework allows:
-- Optimized software implementation of computation-intensive software blocks (including deep-learning, vision, perception, and ADAS) on deep-learning core (C7x/MMA), DSP cores, hardware accelerators built-in on the Jacinto 7 processor
-- Application softwares can be complied directly on the Jacinto 7 processor in a Docker container using APIs optimized on Jacinto 7 processor along with many open-source libraries and packages including, for example. OpenCV and Point-Cloud Library (PCL).
+* Optimized software implementation of computation-intensive software blocks (including deep learning, vision, perception, and ADAS) on the deep-learning core (C7x/MMA), DSP cores, and hardware accelerators built into the Jacinto 7 processor
+* Application software can be compiled directly on the Jacinto 7 processor in a Docker container, using APIs optimized for the Jacinto 7 processor along with many open-source libraries and packages including, for example, OpenCV and the Point Cloud Library (PCL).
The figure below shows a representative vision application that can be developed in the TI OpenVX + ROS framework.
@@ -23,17 +23,16 @@ Figure below is a representative vision application that can be developed in TI
### TI Vision Apps Library
The TI Vision Apps Library is a set of APIs for the target deployment that are derived from the Jacinto 7 Processor SDK RTOS, which includes:
-- TI OpenVX kernels and infrastructure
-- TI deep learning (TIDL) applications
-- Imaging and vision applications
-- Advanced driver-assistance systems (ADAS) applications
-- Perception applications
+* TI OpenVX kernels and infrastructure
+* TI deep learning (TIDL) applications
+* Imaging and vision applications
+* Advanced driver-assistance systems (ADAS) applications
+* Perception applications
The TI Vision Apps Library is included in the pre-built package of [J721E Processor SDK RTOS 7.1.0](https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/index_FDS.html).
## How to Set Up TI OpenVX + ROS Docker Container Environment on J7 Target
See [docker/README.md](docker/README.md).
-
## TI OpenVX + ROS Demo Applications
diff --git a/docker/README.md b/docker/README.md
index 4cbc84e..4f02a35 100644
--- a/docker/README.md
+++ b/docker/README.md
@@ -19,8 +19,7 @@ This TI OpenVX + ROS development framework works with
### Ubuntu PC
An Ubuntu (18.04 recommended) PC is required. For RViz visualization of input/output topics published from the J7, it is assumed that ROS (Melodic recommended) is installed on the Ubuntu PC.
-Once finding the IP address assigned to J7 EVM (using a serial port communications program, for example, `minicom`), connect to J7 Linux with SSH:
-
+Once you find the IP address assigned to the J7 EVM (using a serial port communication program, for example, `minicom`), connect to the J7 Linux with SSH:
```
ssh root@<J7_IP_address>
```
@@ -38,7 +37,8 @@ Figure 1 shows the hardware setup and high-level installation steps on the J7 ta
## Clone Git Repository
-1. Set up the project directory and the catkin workspace:
+
+1. Set up the project directory and the catkin workspace:
```
WORK_DIR=$HOME/j7ros_home
CATKIN_WS=$WORK_DIR/catkin_ws
@@ -46,19 +46,19 @@ Figure 1 shows the hardware setup and high-level installation steps on the J7 ta
mkdir -p $CATKIN_WS/src
cd $CATKIN_WS/src
```
-
-2. Clone the project GIT repository:
+2. Clone the project Git repository:
```
git clone https://git.ti.com/git/processor-sdk-vision/jacinto_ros_perception.git
```
+
## Download TIDL Model & ROSBAG File
-1. For convenience, set up following soft-links:
+1. For convenience, set up the following soft-link:
```
cd $WORK_DIR
ln -s $CATKIN_WS/src/jacinto_ros_perception/docker/Makefile
```
-2. To download data files, run the following in `$WORK_DIR`:
+2. To download data files, run the following in `$WORK_DIR`:
```
make data_download
```
@@ -69,13 +69,12 @@ Figure 1 shows the hardware setup and high-level installation steps on the J7 ta
1. Following [this link](https://docs.docker.com/get-started/#test-docker-installation),
check that Docker and the network work correctly on the J7 host Linux.
-
-2. To generate bash scripts for building and running a Docker image for the project:
+2. To generate bash scripts for building and running a Docker image for the project:
```
make scripts
```
Make sure that two bash scripts named `docker_build.sh` and `docker_run.sh` are generated.
-3. To build the Docker image, at `$WORK_DIR` run:
+3. To build the Docker image, run the following in `$WORK_DIR`:
```
./docker_build.sh
```
@@ -85,90 +84,93 @@ check that Docker and network work correctly on the J7 host Linux.
## Set Up Remote PC for Visualization
-
Open another terminal on the Ubuntu PC to set up the environment for RViz visualization.
-1. Clone GIT repository:
- ```sh
+1. Clone the Git repository:
+ ```
CATKIN_WS=$HOME/j7ros_home/catkin_ws
mkdir -p $CATKIN_WS/src
cd $CATKIN_WS/src
git clone https://git.ti.com/git/processor-sdk-vision/jacinto_ros_perception.git
```
-2. Build ROS nodes:
+2. Build ROS nodes:
```
cd $CATKIN_WS
catkin_make
```
-
-3. ROS network setting: For convenience, set up a soft-link:
- ```sh
+3. ROS network setting: For convenience, set up a soft-link:
+ ```
ln -s src/jacinto_ros_perception/setup_env_pc.sh
```
- Update the following lines in `setup_env_pc.sh`:
+ Update the following lines in `setup_env_pc.sh`:
```
PC_IP_ADDR=
J7_IP_ADDR=
```
- `` can be found by running "`make ip_show`" on **J7 terminal**.
+ `J7_IP_ADDR` can be found by running `make ip_show` on the **J7 terminal**.
- To set up the PC environment, run the following:
+ To set up the PC environment, run the following:
```
source setup_env_pc.sh
```
-After launching ROS nodes on the J7, we can check the all the ROS topics by running "`rostopic list`".
+
+After launching the ROS nodes on the J7, we can check all the ROS topics by running `rostopic list`.
## Build Demo ROS Applications
-1. To run the docker image:
+
+1. To run the Docker image:
```
./docker_run.sh
```
-2. To build ROS applications, inside the Docker container:
- ```sh
+2. To build ROS applications, inside the Docker container:
+ ```
cd $CATKIN_WS
catkin_make
source devel/setup.bash
```
## Run Stereo Vision Application
-1. **[J7]** To launch `ti_sde` node with playing back a ROSBAG file, run the following in `$WORK_DIR` on the J7 host Linux:
- ```sh
+
+1. **[J7]** To launch the `ti_sde` node while playing back a ROSBAG file, run the following in `$WORK_DIR` on the J7 host Linux:
+ ```
./docker_run.sh roslaunch ti_sde bag_sde.launch
```
- Alternatively, you can run the following `roslaunch` command **inside** the Docker container:
- ```sh
+ Alternatively, you can run the following `roslaunch` command **inside** the Docker container:
+ ```
roslaunch ti_sde bag_sde.launch
```
-2. **[Remote PC]** For visualization, on the PC:
+2. **[Remote PC]** For visualization, on the PC:
```
roslaunch ti_sde rviz.launch
```
## Run CNN Semantic Segmentation Application
-1. **[J7]** To launch `ti_semseg_cnn` node with playing back a ROSBAG file, run the following in `$WORK_DIR` on the J7 host Linux:
- ```sh
+
+1. **[J7]** To launch the `ti_semseg_cnn` node while playing back a ROSBAG file, run the following in `$WORK_DIR` on the J7 host Linux:
+ ```
./docker_run.sh roslaunch ti_semseg_cnn bag_semseg_cnn.launch
```
- Alternatively, you can run the following `roslaunch` command **inside** the Docker container:
- ```sh
+ Alternatively, you can run the following `roslaunch` command **inside** the Docker container:
+ ```
roslaunch ti_semseg_cnn bag_semseg_cnn.launch
```
-2. **[Remote PC]** For visualization, on the PC:
+2. **[Remote PC]** For visualization, on the PC:
```
roslaunch ti_semseg_cnn rviz.launch
```
## Run Stereo Vision and CNN Semantic Segmentation Together
-1. **[J7]** To launch `ti_sde` and `ti_semseg_cnn` tigether with playing back a ROSBAG file, run the following in `$WORK_DIR` on the J7 host Linux:
- ```sh
+
+1. **[J7]** To launch `ti_sde` and `ti_semseg_cnn` together while playing back a ROSBAG file, run the following in `$WORK_DIR` on the J7 host Linux:
+ ```
./docker_run.sh roslaunch ti_sde bag_sde_semseg.launch
```
- Alternatively, you can run the following `roslaunch` command **inside** the Docker container:
- ```sh
+ Alternatively, you can run the following `roslaunch` command **inside** the Docker container:
+ ```
roslaunch ti_sde bag_sde_semseg.launch
```
-2. **[Remote PC]** For visualization, on the PC:
+2. **[Remote PC]** For visualization, on the PC:
```
roslaunch ti_sde rviz_sde_semseg.launch
```
diff --git a/nodes/ti_sde/README.md b/nodes/ti_sde/README.md
index 91402d7..63fdbd2 100644
--- a/nodes/ti_sde/README.md
+++ b/nodes/ti_sde/README.md
@@ -46,6 +46,7 @@ roslaunch ti_sde sde.launch
It is recommended to launch the bag_sde.launch file instead if a ROSBAG file needs to be played as well.
The `sde.launch` file specifies the following:
+
* YAML file that includes algorithm configuration parameters. For descriptions of the important parameters, refer to the Parameter section below. For descriptions of all parameters, please see the YAML file.
* Left input topic name to read left images from a stereo camera.
* Right input topic name to read right images from a stereo camera.
@@ -74,15 +75,12 @@ roslaunch ti_viz_nodes viz_disparity.launch
## LDC (Lens Distortion Correction)
-As shown in Figure 1, We use the LDC HWA to rectify left and right images. In order to use LDC, the rectification tables should be provided in the format that LDC support. It is a two-step process to create the rectification table in the LDC format.
-
-1. Generation of raw rectification table
+As shown in Figure 1, we use the LDC HWA to rectify the left and right images. In order to use the LDC, the rectification tables should be provided in the format that the LDC supports. Creating the rectification table in the LDC format is a two-step process.
+1. Generation of raw rectification table
A raw look-up table has `width x height x 2` entries, where width and height are the horizontal and vertical sizes of an image. It specifies, for every pixel in a target image, the horizontal and vertical position of the source-image pixel that it maps to, and it may consist of two look-up tables of `width x height` entries for the horizontal and vertical positions, respectively. A target image (i.e., rectified image) is created by fetching, for every pixel, the source-image (i.e., unrectified image) pixel specified by the raw look-up table. For example, OpenCV's stereo rectification functions generate such a raw rectification table for given camera parameters (see the sketch after the pseudo code below).
-
-2. Convention of raw rectification table to LDC format
-
- A raw rectification table is converted to the LDC format by the following pseudo code.
+2. Conversion of raw rectification table to LDC format
+ A raw rectification table is converted to the LDC format by the following pseudo code.
```
// mapX is a raw LUT for horizontal pixel position in Q3 format. Its size is width x height
@@ -128,7 +126,6 @@ As shown in Figure 1, We use the LDC HWA to rectify left and right images. In or
}
```
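For reference, step 1 above can be done with OpenCV's stereo rectification functions. Below is a minimal sketch, assuming a calibrated stereo pair; the camera matrices, distortion coefficients, extrinsics, and image size are placeholder values to be replaced with your own calibration results, not values from this project.
```
// Illustrative only: generate the raw rectification LUTs with OpenCV.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>

int main() {
    // Placeholder calibration data -- replace with your stereo calibration.
    cv::Mat K1 = (cv::Mat_<double>(3, 3) << 700, 0, 640, 0, 700, 360, 0, 0, 1);
    cv::Mat D1 = cv::Mat::zeros(1, 5, CV_64F);            // left distortion coefficients
    cv::Mat K2 = K1.clone(), D2 = D1.clone();             // right camera (identical here)
    cv::Mat R  = cv::Mat::eye(3, 3, CV_64F);              // rotation between the cameras
    cv::Mat T  = (cv::Mat_<double>(3, 1) << -0.12, 0, 0); // translation (12 cm baseline)
    cv::Size size(1280, 720);                             // image width x height

    // Compute the rectifying rotations (R1, R2) and projections (P1, P2).
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(K1, D1, K2, D2, size, R, T, R1, R2, P1, P2, Q);

    // mapX and mapY are the two width x height look-up tables described above:
    // the horizontal and vertical source positions for every target pixel.
    cv::Mat mapX, mapY;
    cv::initUndistortRectifyMap(K1, D1, R1, P1, size, CV_32FC1, mapX, mapY);

    // Repeat with (K2, D2, R2, P2) for the right camera, then convert the
    // floating-point maps to the LDC format as in the pseudo code above.
    return 0;
}
```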
-
## SDE (Stereo Depth Engine)
When `sde_algo_type = 0` in params.yaml, the output disparity map is simply the disparity map generated by the SDE HWA without any post processing.
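For illustration, the corresponding entry in `params.yaml` would look like the following sketch; only the parameter name is taken from this README, and the surrounding keys in the actual file may differ:
```
sde_algo_type: 0   # 0: publish the SDE HWA disparity as-is, without post processing
```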
diff --git a/nodes/ti_semseg_cnn/README.md b/nodes/ti_semseg_cnn/README.md
index 28069d3..df151c2 100644
--- a/nodes/ti_semseg_cnn/README.md
+++ b/nodes/ti_semseg_cnn/README.md
@@ -47,6 +47,7 @@ roslaunch ti_semseg_cnn semseg_cnn.launch
It is recommended to launch the bag_semseg_cnn.launch file instead if a ROSBAG file needs to be played as well.
The `semseg_cnn.launch` file specifies the following:
+
* YAML file that includes algorithm configuration parameters. For descriptions of the important parameters, refer to the Parameter section below. For descriptions of all parameters, please see the YAML file.
* Input topic name to read input images.
* Output undistorted or rectified image topic name.
@@ -82,13 +83,9 @@ roslaunch ti_viz_nodes viz_semseg.launch
Please refer to Figure 1 for the following descriptions of the processing blocks implemented for this application.
1. When input images are distorted or unrectified, they are undistorted or rectified by the J7 LDC (Lens Distortion Correction) HWA. Pseudo code to create LDC tables for rectification is described [here](../ti_sde/README.md). Note that the LDC HWA not only removes lens distortion or rectifies, but also changes the image format: the input image to the application is in YUV422 (UYVY) format, and it is converted to YUV420 (NV12) by the LDC.
-
2. Input images are resized to a smaller resolution, which is specified by `dl_width` and `dl_height` in `params.yaml`, for the TIDL semantic segmentation network. The MSC (Multi-Scaler) HWA is used to resize input images.
-
3. The pre-processing block, which runs on C6x, converts YUV420 to RGB so that the TIDL semantic segmentation network can read the input images (a sketch of this conversion follows this list).
-
4. The TIDL semantic segmentation network is accelerated by C7x/MMA and outputs a tensor that has class information for every pixel.
-
5. The post-processing block, which runs on C6x, creates a color-coded semantic segmentation map image from the output tensor. It can be enabled or disabled by configuring the `enable_post_proc` parameter in `params.yaml`. Only when the post-processing block is enabled is the color-coded semantic segmentation map created and published, in YUV420 format. When `output_rgb` is true in the launch file, it is published in RGB format after conversion. If the post-processing is disabled, the semantic segmentation output tensor from the TIDL network is published instead.
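The sketch referenced in step 3 above follows; it assumes standard limited-range BT.601 integer coefficients and is an illustration, not the actual C6x kernel shipped with the SDK.
```
// Illustrative per-pixel NV12 (YUV420) -> RGB conversion using integer
// limited-range BT.601 coefficients. Not the actual C6x implementation.
#include <algorithm>
#include <cstdint>

static inline uint8_t clip(int x) {
    return static_cast<uint8_t>(std::min(255, std::max(0, x)));
}

// y, u, v: luma and chroma samples for one pixel
// (in NV12, each u/v pair is shared by a 2x2 block of luma samples)
static inline void yuvToRgb(uint8_t y, uint8_t u, uint8_t v,
                            uint8_t& r, uint8_t& g, uint8_t& b) {
    const int c = static_cast<int>(y) - 16;
    const int d = static_cast<int>(u) - 128;
    const int e = static_cast<int>(v) - 128;
    r = clip((298 * c           + 409 * e + 128) >> 8);
    g = clip((298 * c - 100 * d - 208 * e + 128) >> 8);
    b = clip((298 * c + 516 * d           + 128) >> 8);
}
```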
## Known Issue