The TI OpenVX + ROS development framework runs in a Docker container environment on J7 Processor SDK Linux. We provide detailed steps for setting up a Docker container environment for ROS Melodic along with the TI Vision Apps Library (see the next section). The TI OpenVX + ROS development framework enables:
- Optimized software implementation of computation-intensive software blocks (including deep learning, vision, perception, and ADAS) on the deep-learning core (C7x/MMA), DSP cores, and hardware accelerators built into the Jacinto 7 processor
- Application software to be compiled directly on the Jacinto 7 target using APIs optimized for the Jacinto 7 cores and hardware accelerators, along with many open-source libraries and packages including, for example, OpenCV and the Point Cloud Library (PCL).
The figure below shows a representative vision application developed in the TI OpenVX + ROS framework.
The TI Vision Apps Library is a set of APIs for the target deployment that are derived from the Jacinto 7 Processor SDK RTOS which includes:
- TI OpenVX kernels and infrastructure
- TI deep-learning (TIDL) applications
- Imaging and vision applications
- Advanced driver-assistance systems (ADAS) applications
- Perception applications
The TI Vision Apps Library is included in the pre-built package of J721E Processor SDK RTOS 7.3.0.
The J721E Processor SDK RTOS 7.3.0 also supports the following open-source deep-learning runtimes:
- TVM/Neo-AI-DLR
- TFLite Runtime
- ONNX Runtime
For more details on the open-source deep-learning runtimes on J7/TDA4x, please check TI Edge AI Cloud. We provide two demo applications that include a deep-learning model implemented in the TVM/Neo-AI-DLR workflow.
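As a rough sketch of the Python side of the TVM/Neo-AI-DLR workflow: the model directory path, input tensor name, and preprocessing constants below are placeholders, and the `dlr.DLRModel` constructor and `run()` usage should be verified against the neo-ai-dlr version installed on the target.

```python
"""Hypothetical sketch of invoking a TVM/Neo-AI-DLR compiled model.

Assumes the `dlr` Python package and a compiled model artifact directory;
the path, tensor name, and normalization constants are placeholders.
"""
import numpy as np


def preprocess(image, mean=(128.0, 128.0, 128.0), scale=1 / 128.0):
    """Normalize an HWC uint8 image and reorder it to a batched NCHW tensor."""
    x = (image.astype(np.float32) - np.asarray(mean, dtype=np.float32)) * scale
    return np.transpose(x, (2, 0, 1))[np.newaxis, ...]


def run_inference(model_dir, image):
    # Requires the neo-ai-dlr runtime on the target; 'input' is a placeholder
    # tensor name that must match the compiled model's input.
    from dlr import DLRModel
    model = DLRModel(model_dir, 'cpu')
    return model.run({'input': preprocess(image)})


if __name__ == "__main__":
    dummy = np.zeros((224, 224, 3), dtype=np.uint8)
    print(preprocess(dummy).shape)  # batched NCHW tensor
```

The preprocessing step is generic; the accelerated portions of the model are handled inside the compiled artifact, so the application code stays the same across C7x/MMA-offloaded and CPU-only deployments.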
For debugging, see docker/README.md (caution: git.ti.com has issues rendering Markdown files).
- RViz visualization is displayed on a remote Ubuntu PC. Display from inside a Docker container on the J7 target is neither enabled nor tested.
- Ctrl+C termination of a ROS node or a ROS launch session can sometimes be slow.
- Stereo Vision Demo
- The output disparity map may have artifacts that are common to block-based stereo algorithms, e.g., noise in the sky, textureless areas, and repeated patterns.
- While the confidence map from SDE has 8 values ranging from 0 (least confident) to 7 (most confident), the confidence map from the multi-layer SDE refinement has only 2 values, 0 and 7. Therefore, it does not appear as fine-grained as the SDE's confidence map.
- The semantic segmentation model used in ti_estopnodes was first trained with the Cityscapes dataset, then re-trained with a small dataset collected from a particular stereo camera (ZED camera, HD mode) for limited scenarios with coarse annotation. Therefore, the model can show limited accuracy if a different camera model is used and/or it is applied in different environment scenes.
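The block-matching artifacts and the coarse 2-level confidence map noted above can be illustrated with a toy sum-of-absolute-differences (SAD) matcher. This is a simplified NumPy sketch, not TI's SDE hardware algorithm: the block size, disparity range, and margin-based confidence are illustrative assumptions. On the synthetic pair below, a flat (textureless) region makes every disparity candidate equally good, so the match is arbitrary and its confidence is 0, while a ramp region yields a unique match with high confidence.

```python
"""Toy SAD block matcher: why textureless regions are ambiguous, and how a
graded 0..7 confidence map compares to a refinement-style 2-level (0/7) map.
Illustrative sketch only -- not the SDE or multi-layer refinement algorithm."""
import numpy as np


def sad_disparity(left, right, block=3, max_disp=6):
    """Brute-force SAD block matching with a simple cost-margin confidence."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    conf = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            lb = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.int64)
            costs = np.array([
                np.abs(lb - right[y - r:y + r + 1,
                                  x - d - r:x - d + r + 1].astype(np.int64)).sum()
                for d in range(max_disp)
            ])
            disp[y, x] = costs.argmin()
            # Graded 0..7 confidence from the margin between the two best
            # costs; tied costs (e.g., textureless regions) give confidence 0.
            best2 = np.partition(costs, 1)[:2]
            conf[y, x] = min(7, int(best2[1] - best2[0]))
    return disp, conf


# Synthetic pair: a flat region (ambiguous) next to an intensity ramp
# (unique match), with a true disparity of 2 pixels.
h, w, true_d = 9, 26, 2
xs = np.arange(w + true_d)
col = np.where(xs < 12, 100, (xs - 12) * 10)
full = np.tile(col, (h, 1))
left, right = full[:, :w], full[:, true_d:]

disp, conf = sad_disparity(left, right)
binary_conf = np.where(conf >= 4, 7, 0)  # refinement-style 2-level map
```

In the ramp region the matcher recovers the true disparity with confidence 7; in the flat region all candidates tie, so the reported disparity is meaningless and confidence is 0. Thresholding the graded map into `binary_conf` shows why a 2-level confidence map loses the fine gradations of the 0..7 map.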
If you have questions or feedback, please use TI E2E.