raw | patch | inline | side by side (parent: b4a9ce7)
author     Ajay Jayaraj <ajayj@ti.com>
           Wed, 5 Sep 2018 16:56:37 +0000 (11:56 -0500)
committer  Ajay Jayaraj <ajayj@ti.com>
           Thu, 6 Sep 2018 15:21:31 +0000 (10:21 -0500)
Changes:
* Overview chapter, includes a Terminology section.
* Section on different use cases in the "Using the API" chapter.
* Updated the Examples chapter to reflect new examples and AM5749
benchmarking.
* Added the two_eo_per_frame_opt example to illustrate double buffering.
(MCT-1043)
18 files changed:
index 5104df8d6ae02cc9210d08d89c85f1a51fb426c8..8527cfc6bcc00abd55ce716f626815d58ba8fae5 100644 (file)
VERSION:'{{ release|e }}',
COLLAPSE_INDEX:false,
FILE_SUFFIX:'{{ '' if no_search_suffix else file_suffix }}',
- HAS_SOURCE: {{ has_source|lower }}
+ HAS_SOURCE: {{ has_source|lower }},
+ SOURCELINK_SUFFIX: '{{ sourcelink_suffix }}'
};
</script>
{%- for scriptfile in script_files %}
diff --git a/docs/source/api.rst b/docs/source/api.rst
index 608bea09d66f1ad1587ee6fd93b630e861c99786..d6033a8700b12d04905f77c23020a212873d37c7 100644 (file)
--- a/docs/source/api.rst
+++ b/docs/source/api.rst
.. doxygenclass:: tidl::Configuration
:members:
+Configuration file
+==================
+
+TIDL API allows the user to create a Configuration object by reading from a file or by initializing it directly. Configuration settings supported by ``Configuration::ReadFromFile``:
+
+ * numFrames
+ * inWidth
+ * inHeight
+ * inNumChannels
+ * preProcType
+ * layerIndex2LayerGroupId
+
+ * inData
+ * outData
+
+ * netBinFile
+ * paramsBinFile
+
+ * enableTrace
+
+An example configuration file:
+
+.. literalinclude:: ../../examples/layer_output/j11_v2_trace.txt
+ :language: bash
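
The file pulled in by the ``literalinclude`` above is not reproduced in this diff. For orientation, a minimal configuration sketch using the settings listed earlier might look like the following (all paths and values are illustrative, not taken from the actual ``j11_v2_trace.txt``):

```text
# Hypothetical TIDL configuration sketch -- illustrative values only
numFrames     = 1
preProcType   = 0
inData        = ../test/testvecs/input/preproc_0_224x224.y
outData       = stats_tool_out.bin
netBinFile    = ../test/testvecs/config/tidl_models/tidl_net_j11_v2.bin
paramsBinFile = ../test/testvecs/config/tidl_models/tidl_param_j11_v2.bin
inWidth       = 224
inHeight      = 224
inNumChannels = 3
enableTrace   = true
```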
+
+
+.. _layer-group-override:
+
+Overriding layer group assignment
+=================================
+The `TIDL device translation tool`_ assigns layer group ids to layers during the translation process. TIDL API 1.1 and higher allows the user to override this assignment by specifying explicit mappings. There are two ways for the user to provide an updated mapping:
+
+1. Specify a mapping in the configuration file to indicate that layers 12, 13 and 14 are assigned to layer group 2:
+
+.. code-block:: c++
+
+ layerIndex2LayerGroupId = { {12, 2}, {13, 2}, {14, 2} }
+
+
+2. Provide the layer index to layer group id mapping in the code:
+
+.. code-block:: c++
+
+ Configuration c;
+ c.ReadFromFile("test.cfg");
+ c.layerIndex2LayerGroupId = { {12, 2}, {13, 2}, {14, 2} };
+
+
+.. role:: cpp(code)
+ :language: c++
+
+
.. _api-ref-executor:
Executor
.. refer https://breathe.readthedocs.io/en/latest/directives.html
+
+.. _TIDL device translation tool: http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/Foundational_Components_TIDL.html#import-process
index 2ebbdac6402b4b6228d5e9362bccfac915a6be7f..2c37404d7adfa2848bfe333b605b5244fbade759 100644 (file)
--- a/docs/source/example.rst
+++ b/docs/source/example.rst
Examples
********
-+---------------------+----------------------------------------------------------------+
-| Example | Description |
-+---------------------+----------------------------------------------------------------+
-| one_eo_per_frame | Simple example to illustrate processing a single |
-| | frame with one :term:`EO` using the j11_v2 network. |
-| | Per-frame processing time for this network is farily similar |
-| | across EVE and C66x DSP. The enables frame processing to be |
-| | parallelized by distributing frames across all available EVE |
-| | and C66x cores. |
-+---------------------+----------------------------------------------------------------+
-| two_eo_per_frame | Simple example to illustrate processing a single |
-| | frame with two :term:`EOs<EO>` using the j11_v2 network. |
-+---------------------+----------------------------------------------------------------+
-| imagenet | Classification |
-+---------------------+----------------------------------------------------------------+
-| segmentation | Pixel level segmentation |
-+---------------------+----------------------------------------------------------------+
-| ssd_multibox | Object detection |
-+---------------------+----------------------------------------------------------------+
-| tidl_classification | Classification |
-+---------------------+----------------------------------------------------------------+
-| layer_output | Illustrates using TIDL APIs to access output buffers |
-| | of intermediate :term:`Layer`s in the network. |
-+---------------------+----------------------------------------------------------------+
-| test | Unit test. Tests supported networks on C66x and EVE |
-+---------------------+----------------------------------------------------------------+
-
-The examples included in the tidl-api package demonstrate three categories of
-deep learning networks: classification, segmentation and object detection.
-``imagenet`` and ``segmentation`` can run on AM57x processors with either EVE or C66x cores.
-``ssd_multibox`` requires AM57x processors with both EVE and C66x. The performance
-numbers that we present here were obtained on an AM5729 EVM, which
-includes 2 Arm Cortex-A15 cores running at 1.5GHz, 2 EVE cores at 650MHz, and
-2 DSP cores at 750MHz.
-
-For each example, we report device processing time, host processing time,
-and TIDL API overhead. **Device processing time** is measured on the device,
-from the moment processing starts for a frame till processing finishes.
-**Host processing time** is measured on the host, from the moment
-``ProcessFrameStartAsync()`` is called till ``ProcessFrameWait()`` returns
-in user application. It includes the TIDL API overhead, the OpenCL runtime
-overhead, and the time to copy between user input/output data and
-the padded TIDL internal buffers.
+.. list-table:: TIDL API Examples
+ :header-rows: 1
+ :widths: 12 43 20 25
+
+ * - Example
+ - Description
+ - Compute cores
+ - Input image
+ * - one_eo_per_frame
+ - Processes a single frame with one :term:`EO` using the j11_v2 network. Throughput is increased by distributing frame processing across EOs. Refer :ref:`use-case-1`.
+ - EVE or C66x
+ - Pre-processed image read from file.
+ * - two_eo_per_frame
+ - Processes a single frame with an :term:`EOP` using the j11_v2 network to reduce per-frame processing latency. Also increases throughput by distributing frame processing across EOPs. The EOP consists of two EOs. Refer :ref:`use-case-2`.
+ - EVE and C66x (network is split across both EVE and C66x)
+ - Pre-processed image read from file.
+ * - two_eo_per_frame_opt
+ - Builds on ``two_eo_per_frame``. Adds double buffering to improve performance. Refer :ref:`use-case-3`.
+ - EVE and C66x (network is split across both EVE and C66x)
+ - Pre-processed image read from file.
+
+ * - imagenet
+ - Classification example
+ - EVE or C66x
+ - OpenCV used to read input image from file or capture from camera.
+ * - segmentation
+ - Pixel level segmentation example
+ - EVE or C66x
+ - OpenCV used to read input image from file or capture from camera.
+ * - ssd_multibox
+ - Object detection
+ - EVE and C66x (network is split across both EVE and C66x)
+ - OpenCV used to read input image from file or capture from camera.
+ * - classification
+ - Classification example, called from the Matrix GUI.
+ -
+ - OpenCV used to read input image from file or capture from camera.
+ * - layer_output
+ - Illustrates using TIDL APIs to access output buffers of intermediate :term:`layers<Layer>` in the network.
+ - EVE or C66x
+ - Pre-processed image read from file.
+ * - test
+ - This example is used to test pre-converted networks included in the TIDL API package (``test/testvecs/config/tidl_models``). When run without any arguments, the program ``test_tidl`` will run all available networks on the C66x DSPs and EVEs available on the SoC. Use the ``-c`` option to specify a single network. Run ``test_tidl -h`` for details.
+ - C66x and EVEs (if available)
+ - Pre-processed image read from file.
+
+The included examples demonstrate three categories of deep learning networks: classification, segmentation and object detection. ``imagenet`` and ``segmentation`` can run on AM57x processors with either EVE or C66x cores. ``ssd_multibox`` requires AM57x processors with both EVE and C66x. The examples are available at ``/usr/share/ti/tidl/examples`` on the EVM file system and in the linux devkit.
+
+The performance numbers were obtained using:
+
+* `AM574x IDK EVM`_ with the Sitara `AM5749`_ Processor - 2 Arm Cortex-A15 cores running at 1.0GHz, 2 EVE cores at 650MHz, and 2 C66x cores at 750MHz.
+* `Processor SDK Linux`_ v5.1 with TIDL API v1.1
+
+For each example, device processing time, host processing time,
+and TIDL API overhead are reported.
+
+* **Device processing time** is measured on the device, from the moment processing starts for a frame till processing finishes.
+* **Host processing time** is measured on the host, from the moment ``ProcessFrameStartAsync()`` is called till ``ProcessFrameWait()`` returns in the user application. It includes the TIDL API overhead, the OpenCL runtime overhead, and the time to copy user input data into padded TIDL internal buffers. ``Host processing time = Device processing time + TIDL API overhead``.
+
Imagenet
--------
The imagenet example takes an image as input and outputs 1000 probabilities.
Each probability corresponds to one object in the 1000 objects that the
-network is pre-trained with. Our example outputs top 5 predictions
-as the most likely objects that the input image can be.
+network is pre-trained with. The example outputs the top 5 predictions for a given input image.
The following figure and tables show an input image, top 5 predicted
-objects as output, and the processing time on either EVE or DSP.
+objects as output, and the processing time on either EVE or C66x.
.. image:: ../../examples/test/testvecs/input/objects/cat-pet-animal-domestic-104827.jpeg
:width: 600
-.. table::
-
- ==== ==============
- Rank Object Classes
- ==== ==============
- 1 tabby
- 2 Egyptian_cat
- 3 tiger_cat
- 4 lynx
- 5 Persian_cat
- ==== ==============
-
-.. table::
-
- ====================== ==================== ============
- Device Processing Time Host Processing Time API Overhead
- ====================== ==================== ============
- EVE: 103.5 ms 104.8 ms 1.21 %
- **OR**
- DSP: 117.4 ms 118.4 ms 0.827 %
- ====================== ==================== ============
-
-The particular network that we ran in this category, jacintonet11v2,
-has 14 layers. Input to the network is RGB image of 224x224.
-User can specify whether to run the network on EVE or DSP
-for acceleration. We can see that EVE time is slightly faster than DSP time.
-We can also see that the overall overhead is less than 1.3%.
+
+==== ==============
+Rank Object Classes
+==== ==============
+1    tabby
+2    Egyptian_cat
+3    tiger_cat
+4    lynx
+5    Persian_cat
+==== ==============
+
+======= ====================== ==================== ============
+Device  Device Processing Time Host Processing Time API Overhead
+======= ====================== ==================== ============
+EVE     106.5 ms               107.9 ms             1.37 %
+C66x    117.9 ms               118.7 ms             0.93 %
+======= ====================== ==================== ============
+
+The :term:`network<Network>` used in the example is jacintonet11v2. It has
+14 layers. Input to the network is a 224x224 RGB image. Users can specify whether to run the network on EVE or C66x.
.. note::
The predictions reported here are based on the output of the softmax
.. image:: images/pexels-photo-972355-seg.jpg
:width: 600
-The network we ran in this category is jsegnet21v2, which has 26 layers.
+The :term:`network<Network>` used in the example is jsegnet21v2. It has
+26 layers. Users can specify whether to run the network on EVE or C66x.
Input to the network is an RGB image of size 1024x512. The output is 1024x512
values, each value indicating which pre-trained category the current pixel
belongs to. The example will take the network output, create an overlay,
and blend the overlay onto the original input image to create an output image.
From the reported time in the following table, we can see that this network
-runs significantly faster on EVE than on DSP. The API overhead is less than
-1.1%.
-
-.. table::
+runs significantly faster on EVE than on C66x.
- ====================== ==================== ============
- Device Processing Time Host Processing Time API Overhead
- ====================== ==================== ============
- EVE: 248.7 ms 251.3 ms 1.02 %
- **OR**
- DSP: 813.2 ms 815.5 ms 0.281 %
- ====================== ==================== ============
+======= ====================== ==================== ============
+Device  Device Processing Time Host Processing Time API Overhead
+======= ====================== ==================== ============
+EVE     251.8 ms               254.2 ms             0.96 %
+C66x    812.7 ms               815.0 ms             0.27 %
+======= ====================== ==================== ============
.. _ssd-example:
which pre-trained category the object inside the box belongs to.
The example will take the network output, draw boxes accordingly,
and create an output image.
-The network can be run entirely on either EVE or DSP. But the best
+The network can be run entirely on either EVE or C66x. However, the best
performance comes with running the first 30 layers as a group on EVE
-and the next 13 layers as another group on DSP.
-Note the **AND** in the following table for the reported time.
-The overall API overhead is about 1.61%.
-Our end-to-end example shows how easy it is to assign a layers group id
-to an *Executor* and how easy it is to construct an *ExecutionObjectPipeline*
-to connect the output of one *Executor*'s *ExecutionObject*
+and the next 13 layers as another group on C66x.
+The end-to-end example shows how easy it is to assign a :term:`Layer Group` id
+to an :term:`Executor` and how easy it is to construct an :term:`ExecutionObjectPipeline` to connect the output of one *Executor*'s :term:`ExecutionObject`
to the input of another *Executor*'s *ExecutionObject*.
-.. table::
-
- ====================== ==================== ============
- Device Processing Time Host Processing Time API Overhead
- ====================== ==================== ============
- EVE: 148.0 ms 150.1 ms 1.33 %
- **AND**
- DSP: 22.27 ms 23.06 ms 3.44 %
- **TOTAL**
- EVE+DSP: 170.3 ms 173.1 ms 1.61 %
- ====================== ==================== ============
-
-Test
-----
-This example is used to test pre-converted networks included in the TIDL API package (``test/testvecs/config/tidl_models``). When run without any arguments, the program ``test_tidl`` will run all available networks on the C66x DSPs and EVEs available on the SoC. Use the ``-c`` option to specify a single network. Run ``test_tidl -h`` for details.
+======== ====================== ==================== ============
+Device   Device Processing Time Host Processing Time API Overhead
+======== ====================== ==================== ============
+EVE+C66x 169.5 ms               172.0 ms             1.68 %
+======== ====================== ==================== ============
Running Examples
----------------
The examples are located in ``/usr/share/ti/tidl/examples`` on
-the EVM file system. Each example needs to be run its own directory.
+the EVM file system. **Each example needs to be run in its own directory** due to relative paths to configuration files.
Running an example with ``-h`` will show a help message with the option set.
-The following code section shows how to run the examples, and
-the test program that tests all supported TIDL network configs.
+The following listing illustrates how to run the examples.
.. code-block:: shell
- root@am57xx-evm:~# cd /usr/share/ti/tidl-api/examples/imagenet/
- root@am57xx-evm:/usr/share/ti/tidl-api/examples/imagenet# make -j4
- root@am57xx-evm:/usr/share/ti/tidl-api/examples/imagenet# ./imagenet -t d
+ root@am57xx-evm:~/tidl-api/examples/imagenet# ./imagenet
Input: ../test/testvecs/input/objects/cat-pet-animal-domestic-104827.jpeg
- frame[0]: Time on device: 117.9ms, host: 119.3ms API overhead: 1.17 %
- 1: tabby, prob = 0.996
- 2: Egyptian_cat, prob = 0.977
- 3: tiger_cat, prob = 0.973
- 4: lynx, prob = 0.941
- 5: Persian_cat, prob = 0.922
+ frame[ 0]: Time on EVE0: 106.50 ms, host: 107.96 ms API overhead: 1.35 %
+ 1: tabby
+ 2: Egyptian_cat
+ 3: tiger_cat
+ 4: lynx
+ 5: Persian_cat
+ Loop total time (including read/write/opencv/print/etc): 202.6ms
imagenet PASSED
- root@am57xx-evm:/usr/share/ti/tidl-api/examples/imagenet# cd ../segmentation/; make -j4
- root@am57xx-evm:/usr/share/ti/tidl-api/examples/segmentation# ./segmentation -i ../test/testvecs/input/roads/pexels-photo-972355.jpeg
- Input: ../test/testvecs/input/roads/pexels-photo-972355.jpeg
- frame[0]: Time on device: 296.5ms, host: 303.2ms API overhead: 2.21 %
+ root@am57xx-evm:~/tidl-api/examples/segmentation# ./segmentation
+ Input: ../test/testvecs/input/000100_1024x512_bgr.y
+ frame[ 0]: Time on EVE0: 251.74 ms, host: 258.02 ms API overhead: 2.43 %
+ Saving frame 0 to: frame_0.png
Saving frame 0 overlayed with segmentation to: overlay_0.png
+ frame[ 1]: Time on EVE0: 251.76 ms, host: 255.79 ms API overhead: 1.58 %
+ Saving frame 1 to: frame_1.png
+ Saving frame 1 overlayed with segmentation to: overlay_1.png
+ ...
+ frame[ 8]: Time on EVE0: 251.75 ms, host: 254.21 ms API overhead: 0.97 %
+ Saving frame 8 to: frame_8.png
+ Saving frame 8 overlayed with segmentation to: overlay_8.png
+ Loop total time (including read/write/opencv/print/etc): 4809ms
segmentation PASSED
- root@am57xx-evm:/usr/share/ti/tidl-api/examples/segmentation# cd ../ssd_multibox/; make -j4
- root@am57xx-evm:/usr/share/ti/tidl-api/examples/ssd_multibox# ./ssd_multibox -i ../test/testvecs/input/roads/pexels-photo-378570.jpeg
- Input: ../test/testvecs/input/roads/pexels-photo-378570.jpeg
- frame[0]: Time on EVE: 175.2ms, host: 179ms API overhead: 2.1 %
- frame[0]: Time on DSP: 21.06ms, host: 22.43ms API overhead: 6.08 %
+ root@am57xx-evm:~/tidl-api/examples/ssd_multibox# ./ssd_multibox
+ Input: ../test/testvecs/input/preproc_0_768x320.y
+ frame[ 0]: Time on EVE0+DSP0: 169.44 ms, host: 173.56 ms API overhead: 2.37 %
+ Saving frame 0 to: frame_0.png
Saving frame 0 with SSD multiboxes to: multibox_0.png
- Loop total time (including read/write/print/etc): 423.8ms
+ Loop total time (including read/write/opencv/print/etc): 320.2ms
ssd_multibox PASSED
- root@am57xx-evm:/usr/share/ti/tidl-api/examples/ssd_multibox# cd ../test; make -j4
- root@am57xx-evm:/usr/share/ti/tidl-api/examples/test# ./test_tidl
- API Version: 01.00.00.d91e442
- Running dense_1x1 on 2 devices, type EVE
- frame[0]: Time on device: 134.3ms, host: 135.6ms API overhead: 0.994 %
- dense_1x1 : PASSED
- Running j11_bn on 2 devices, type EVE
- frame[0]: Time on device: 176.2ms, host: 177.7ms API overhead: 0.835 %
- j11_bn : PASSED
- Running j11_cifar on 2 devices, type EVE
- frame[0]: Time on device: 53.86ms, host: 54.88ms API overhead: 1.85 %
- j11_cifar : PASSED
- Running j11_controlLayers on 2 devices, type EVE
- frame[0]: Time on device: 122.9ms, host: 123.9ms API overhead: 0.821 %
- j11_controlLayers : PASSED
- Running j11_prelu on 2 devices, type EVE
- frame[0]: Time on device: 300.8ms, host: 302.1ms API overhead: 0.437 %
- j11_prelu : PASSED
- Running j11_v2 on 2 devices, type EVE
- frame[0]: Time on device: 124.1ms, host: 125.6ms API overhead: 1.18 %
- j11_v2 : PASSED
- Running jseg21 on 2 devices, type EVE
- frame[0]: Time on device: 367ms, host: 374ms API overhead: 1.88 %
- jseg21 : PASSED
- Running jseg21_tiscapes on 2 devices, type EVE
- frame[0]: Time on device: 302.2ms, host: 308.5ms API overhead: 2.02 %
- frame[1]: Time on device: 301.9ms, host: 312.5ms API overhead: 3.38 %
- frame[2]: Time on device: 302.7ms, host: 305.9ms API overhead: 1.04 %
- frame[3]: Time on device: 301.9ms, host: 305ms API overhead: 1.01 %
- frame[4]: Time on device: 302.7ms, host: 305.9ms API overhead: 1.05 %
- frame[5]: Time on device: 301.9ms, host: 305.5ms API overhead: 1.17 %
- frame[6]: Time on device: 302.7ms, host: 305.9ms API overhead: 1.06 %
- frame[7]: Time on device: 301.9ms, host: 305ms API overhead: 1.02 %
- frame[8]: Time on device: 297ms, host: 300.3ms API overhead: 1.09 %
- Comparing frame: 0
- jseg21_tiscapes : PASSED
- Running smallRoi on 2 devices, type EVE
- frame[0]: Time on device: 2.548ms, host: 3.637ms API overhead: 29.9 %
- smallRoi : PASSED
- Running squeeze1_1 on 2 devices, type EVE
- frame[0]: Time on device: 292.9ms, host: 294.6ms API overhead: 0.552 %
- squeeze1_1 : PASSED
-
- Multiple Executor...
- Running network tidl_config_j11_v2.txt on EVEs: 1 in thread 0
- Running network tidl_config_j11_cifar.txt on EVEs: 0 in thread 1
- Multiple executors: PASSED
- Running j11_bn on 2 devices, type DSP
- frame[0]: Time on device: 170.5ms, host: 171.5ms API overhead: 0.568 %
- j11_bn : PASSED
- Running j11_controlLayers on 2 devices, type DSP
- frame[0]: Time on device: 416.4ms, host: 417.1ms API overhead: 0.176 %
- j11_controlLayers : PASSED
- Running j11_v2 on 2 devices, type DSP
- frame[0]: Time on device: 118ms, host: 119.2ms API overhead: 1.01 %
- j11_v2 : PASSED
- Running jseg21 on 2 devices, type DSP
- frame[0]: Time on device: 1123ms, host: 1128ms API overhead: 0.443 %
- jseg21 : PASSED
- Running jseg21_tiscapes on 2 devices, type DSP
- frame[0]: Time on device: 812.3ms, host: 817.3ms API overhead: 0.614 %
- frame[1]: Time on device: 812.6ms, host: 818.6ms API overhead: 0.738 %
- frame[2]: Time on device: 812.3ms, host: 815.1ms API overhead: 0.343 %
- frame[3]: Time on device: 812.7ms, host: 815.2ms API overhead: 0.312 %
- frame[4]: Time on device: 812.3ms, host: 815.1ms API overhead: 0.353 %
- frame[5]: Time on device: 812.6ms, host: 815.1ms API overhead: 0.302 %
- frame[6]: Time on device: 812.2ms, host: 815.1ms API overhead: 0.357 %
- frame[7]: Time on device: 812.6ms, host: 815.2ms API overhead: 0.315 %
- frame[8]: Time on device: 812ms, host: 815ms API overhead: 0.367 %
- Comparing frame: 0
- jseg21_tiscapes : PASSED
- Running smallRoi on 2 devices, type DSP
- frame[0]: Time on device: 14.21ms, host: 14.94ms API overhead: 4.89 %
- smallRoi : PASSED
- Running squeeze1_1 on 2 devices, type DSP
- frame[0]: Time on device: 960ms, host: 961.1ms API overhead: 0.116 %
- squeeze1_1 : PASSED
- tidl PASSED
Image input
^^^^^^^^^^^
/usr/share/ti/tidl/examples/ssd_multibox
Removing stale PID file /var/run/matrix-gui-2.0.pid.
Starting Matrix GUI application.
+
+
+.. _AM574x IDK EVM: http://www.ti.com/tool/tmdsidk574
+.. _AM5749: http://www.ti.com/product/AM5749/
+.. _Processor SDK Linux: http://software-dl.ti.com/processor-sdk-linux/esd/AM57X/latest/index_FDS.html
diff --git a/docs/source/images/tidl-frame-across-eos-opt.png b/docs/source/images/tidl-frame-across-eos-opt.png
new file mode 100755 (executable)
index 0000000..3d5bac1
Binary files /dev/null and b/docs/source/images/tidl-frame-across-eos-opt.png differ
diff --git a/docs/source/images/tidl-frame-across-eos.png b/docs/source/images/tidl-frame-across-eos.png
new file mode 100755 (executable)
index 0000000..9c6a098
Binary files /dev/null and b/docs/source/images/tidl-frame-across-eos.png differ
diff --git a/docs/source/images/tidl-one-eo-per-frame.png b/docs/source/images/tidl-one-eo-per-frame.png
new file mode 100755 (executable)
index 0000000..ff61781
Binary files /dev/null and b/docs/source/images/tidl-one-eo-per-frame.png differ
diff --git a/docs/source/intro.rst b/docs/source/intro.rst
index 112157768b70cc27f98d1ac54743d76b4c5b41a3..224aca26595f8c353c9818a882ceb56c2e1a790b 100644 (file)
--- a/docs/source/intro.rst
+++ b/docs/source/intro.rst
Introduction
************
-TI Deep Learning (TIDL) API brings deep learning to the edge by enabling applications to leverage TI's proprietary, highly optimized CNN/DNN implementation on the EVE and C66x DSP compute engines. TIDL will initially target Vision/2D use cases on AM57x Sitara Processors.
+TI Deep Learning (TIDL) API brings deep learning to the edge by enabling applications to leverage TI's proprietary, highly optimized CNN/DNN implementation on the EVE and C66x DSP compute engines. TIDL will initially target Vision/2D use cases on AM57x Sitara |(TM)| Processors.
-The TIDL API leverages TI's `OpenCL`_ product to offload deep learning applications to both EVE(s) and DSP(s). The TIDL API significantly improves the out-of-box deep learning experience for users and enables them to focus on their overall use case. They do not have to spend time on the mechanics of ARM ↔ DSP/EVE communication or implementing optimized network layers on EVE(s) and/or DSP(s). The API allows customers to easily integrate frameworks such as OpenCV and rapidly prototype deep learning applications.
+The TIDL API leverages TI's `OpenCL`_ |(TM)| product to offload deep learning applications to both EVE(s) and DSP(s). The TIDL API significantly improves the out-of-box deep learning experience for users and enables them to focus on their overall use case. They do not have to spend time on the mechanics of Arm |(R)| ↔ DSP/EVE communication or implementing optimized network layers on EVE(s) and/or DSP(s). The API allows customers to easily integrate frameworks such as OpenCV and rapidly prototype deep learning applications.
.. note::
**Ease of use**
* Easily integrate TIDL APIs into other frameworks such as `OpenCV`_
-* Provides a common host abstraction for user applications across multiple compute engines (EVEs and C66x DSPs)
+* Provides simple host abstractions for user applications to run a network across multiple compute cores (EVEs and C66x DSPs). Refer :ref:`use-case-1` and :ref:`use-case-2` for details.
**Low overhead**
.. _Processor SDK Linux Software Developer's Guide (TIDL chapter): http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/Foundational_Components_TIDL.html
.. _OpenCV: http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/Foundational_Components.html#opencv
.. _OpenCL: http://software-dl.ti.com/mctools/esd/docs/opencl/index.html
+
+.. |(TM)| unicode:: U+2122
+ :ltrim:
+
+.. |(R)| unicode:: U+00AE
+ :ltrim:
index ab9e20fa4d775123552d444743127349a014f18f..ae57c59b2607feb950b02067a8c26280ed63932d 100644 (file)
--- a/docs/source/overview.rst
+++ b/docs/source/overview.rst
Software Architecture
+++++++++++++++++++++
-:numref:`TIDL API Software Architecture` shows the TIDL API software architecture and how it fits into the software ecosystem on AM57x. The TIDL API leverages OpenCL APIs to:
+:numref:`TIDL API Software Architecture` shows the TIDL API software architecture and how it fits into the software ecosystem on AM57x. The TIDL API leverages OpenCL APIs to deploy translated network models. It provides the following services:
* Make the application's input data available in memories associated with the :term:`compute core`.
-* Initialize and run the layer groups associated with the network on compute cores
+* Initialize and run the :term:`layer groups<Layer group>` associated with the network on compute cores
* Make the output data available to the application
.. _`TIDL API Software Architecture`:
TIDL API Software Architecture
+The TIDL API consists of 4 C++ classes and associated methods: ``Configuration``, ``Executor``, ``ExecutionObject``, and ``ExecutionObjectPipeline``. Refer :ref:`using-tidl-api` and :ref:`api-documentation` for details.
Terminology
+++++++++++
.. glossary::
:sorted:
- Network binary
- A binary description of the layers used in a Deep Learning model and the connections between the layers. The network is generated by the TIDL import tool and used by the TIDL API.
+ Network
+ A description of the layers used in a Deep Learning model and the connections between the layers. The network is generated by the TIDL import tool and used by the TIDL API. Refer `Processor SDK Linux Software Developer's Guide (TIDL chapter)`_ for creating TIDL network and parameter binary files from TensorFlow and Caffe. A network consists of one or more Layer Groups.
Parameter binary
A binary file with weights generated by the TIDL import tool and used by the TIDL API.
Layer
- A layer consists of mathematical operations such as filters, rectification linear unit (ReLU) operations, downsampling operations (usually called average pooling, max pooling or striding), elementwise additions, concatenations, batch normalization and fully connected matrix multiplications. Refer XXX for a list of supported layers.
+ A layer consists of mathematical operations such as filters, rectification linear unit (ReLU) operations, downsampling operations (usually called average pooling, max pooling or striding), elementwise additions, concatenations, batch normalization and fully connected matrix multiplications. Refer `Processor SDK Linux Software Developer's Guide (TIDL chapter)`_ for a list of supported layers.
- Layer group
+ Layer Group
A collection of interconnected layers. Forms a unit of execution. The Execution Object "runs" a layer group on a compute core, i.e. it performs the mathematical operations associated with the layers in the layer group on the input and generates one or more outputs.
Compute core
- A single EVE or C66x core. A layer group is executed on a compute core.
+ A single EVE or C66x DSP. An Execution Object manages execution on one compute core. Also referred to as a **device** in OpenCL. Sitara AM5749 has 4 compute cores: EVE1, EVE2, DSP1 and DSP2.
Executor
A TIDL API class. The executor is responsible for initializing Execution Objects with a Configuration. The Executor is also responsible for initializing the OpenCL runtime. Refer :ref:`api-ref-executor` for available methods.
- Execution Object
+ ExecutionObject
EO
- A TIDL API class. Manages the execution of a layer group on a compute core. There is an EO associated with each compute core. The EO leverages the OpenCL runtime to manage execution. Implementation of these classes will call into OpenCL runtime to offload network processing abstracting these details from the user. Refer :ref:`api-ref-eo` for available methods.
+ A TIDL API class. Manages the execution of a layer group on a compute core. There is an EO associated with each compute core. The TIDL API implementation leverages the OpenCL runtime to offload network processing to the compute cores. Refer :ref:`api-ref-eo` for a description of the ExecutionObject class and methods.
ExecutionObjectPipeline
- A TIDL API class. Used to pipeline execution of a single input frame across multiple Execution Objects. Refer :ref:`api-ref-eop` for available methods.
-
+ EOP
+ A TIDL API class. Used to pipeline execution of a single input frame across multiple Execution Objects. Refer :ref:`api-ref-eop` for a description of the ExecutionObjectPipeline class and methods.
Configuration
- A TIDL API class. Used to specify a configuration for the Executor, including pointers to the network and parameter binary files. Refer :ref:`api-ref-configuration` for available methods.
+ A TIDL API class. Used to specify a configuration for the Executor, including pointers to the network and parameter binary files. Refer :ref:`api-ref-configuration` for a description of the Configuration class and methods.
+
+ Frame
+ A buffer representing 2D data, typically an image.
+
+
+.. _Processor SDK Linux Software Developer's Guide (TIDL chapter): http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/Foundational_Components_TIDL.html
index 34419bad4524c9642429ce155d3f13b14d72092d..37ebc34b1fa0190a5afa3b28752a20f01bef6370 100644 (file)
Using the API
*************
-TIDL API provides 4 C++ classes: ``Configuration``, ``Executor``, ``ExecutionObject``, and ``ExecutionObjectPipeline``. These classes can be used to support multiple use cases.
+This section illustrates how to use the TIDL API to leverage deep learning in user applications. The overall flow is as follows:
-Use case 1: Each EO runs a :term:`Layer group`
+* Create a :term:`Configuration` object to specify the set of parameters required for network execution.
+* Create :term:`Executor` objects: one to manage execution on the EVEs, the other on the C66x DSPs.
+* Use the :term:`Execution Objects<EO>` (EO) created by the Executor to process :term:`frames<Frame>`. There are two approaches to processing frames using Execution Objects:
-Deploying a TIDL network
-++++++++++++++++++++++++
+ #. Each EO processes a single frame.
+   #. Split processing of a single frame across multiple EOs.
-This section illustrates how easy it is to use TIDL APIs to leverage deep learning application in user applications. In this example, a configuration object is created from reading a TIDL network config file. An executor object is created with two EVE devices. It uses the configuration object to setup and initialize TIDL network on EVEs. Each of the two execution objects dispatches TIDL processing to a different EVE core. Because the OpenCL kernel execution is asynchronous, we can pipeline the frames across two EVEs. When one frame is being processed by a EVE, the next frame can be processed by another EVE.
+Refer Section :ref:`api-documentation` for API documentation.
+Use Cases
++++++++++
-``ReadFrameInput`` and ``WriteFrameOutput`` functions are used to read an input frame and write the result of processing. For example, with OpenCV, ``ReadFrameInput`` is implemented using OpenCV APIs to capture a frame. To execute the same network on DSPs, the only change to :numref:`simple-example` is to replace ``DeviceType::EVE`` with ``DeviceType::DSP``.
+.. _use-case-1:
+Each EO processes a single frame
+================================
-Step 1
-======
+In this approach, the :term:`network<Network>` is set up as a single :term:`Layer Group`. An :term:`EO` runs the entire layer group on a single frame. To increase throughput, frame processing can be pipelined across available EOs. For example, on AM5749, frames can be processed by 4 EOs: one each on EVE1, EVE2, DSP1, and DSP2.
-Determine if there are any TIDL capable devices on the AM57x SoC:
-.. code-block:: c++
+.. figure:: images/tidl-one-eo-per-frame.png
+ :align: center
+ :scale: 80
- uint32_t num_eve = Executor::GetNumDevices(DeviceType::EVE);
- uint32_t num_dsp = Executor::GetNumDevices(DeviceType::DSP);
+ Processing a frame with one EO. Not to scale. Fn: Frame n, LG: Layer Group.
-.. note::
- By default, the OpenCL runtime is configured with sufficient global memory
- (via CMEM) to offload TIDL networks to 2 OpenCL devices. On devices where
- ``Executor::GetNumDevices`` returns 4 (E.g. AM5729 with 4 EVE OpenCL
- devices) the amount of memory available to the runtime must be increased.
- Refer :ref:`opencl-global-memory` for details
+#. Determine if there are any TIDL capable :term:`compute cores<Compute core>` on the AM57x Processor:
-Step 2
-======
-Create a Configuration object by reading it from a file or by initializing it directly. The example below parses a configuration file and initializes the Configuration object. See ``examples/test/testvecs/config/infer`` for examples of configuration files.
+ .. literalinclude:: ../../examples/one_eo_per_frame/main.cpp
+ :language: c++
+ :lines: 64-65
+ :linenos:
-.. code::
+#. Create a Configuration object by reading it from a file or by initializing it directly. The example below parses a configuration file and initializes the Configuration object. See ``examples/test/testvecs/config/infer`` for examples of configuration files.
- Configuration configuration;
- bool status = configuration.ReadFromFile(config_file);
+ .. literalinclude:: ../../examples/one_eo_per_frame/main.cpp
+ :language: c++
+ :lines: 92-94
+ :linenos:
-.. note::
- Refer `Processor SDK Linux Software Developer's Guide (TIDL chapter)`_ for creating TIDL network and parameter binary files from TensorFlow and Caffe.
+#. Create Executor on C66x and EVE. In this example, all available C66x and EVE cores are used (lines 1-2 and :ref:`CreateExecutor`).
+#. Create a vector of available ExecutionObjects from both Executors (lines 7-8 and :ref:`CollectEOs`).
+#. Allocate input and output buffers for each ExecutionObject (:ref:`AllocateMemory`).
+#. Run the network on each input frame. The frames are processed with available execution objects in a pipelined manner. The additional num_eos iterations are required to flush the pipeline (lines 15-26).
-Step 3
-======
-Create an Executor with the appropriate device type, set of devices and a configuration. In the snippet below, an Executor is created on 2 EVEs.
+ * Wait for the EO to finish processing. If the EO is not processing a frame (the first iteration on each EO), the call to ``ProcessFrameWait`` returns false. ``ReportTime`` is used to report host and device execution times.
+ * Read a frame and start running the network. ``ProcessFrameStartAsync`` is asynchronous and returns before processing is complete. ``ReadFrame`` is application specific and used to read an input frame for processing. For example, with OpenCV, ``ReadFrame`` is implemented using OpenCV APIs to capture a frame from the camera.
-.. code-block:: c++
+ .. literalinclude:: ../../examples/one_eo_per_frame/main.cpp
+ :language: c++
+ :lines: 108-127,129,133-139
+ :linenos:
- DeviceIds ids = {DeviceId::ID0, DeviceId::ID1};
- Executor executor(DeviceType::EVE, ids, configuration);
+ .. literalinclude:: ../../examples/one_eo_per_frame/main.cpp
+ :language: c++
+ :lines: 154-163
+ :linenos:
+ :caption: CreateExecutor
+ :name: CreateExecutor
-Step 4
-======
-Get the set of available ExecutionObjects and allocate input and output buffers for each ExecutionObject.
+ .. literalinclude:: ../../examples/one_eo_per_frame/main.cpp
+ :language: c++
+ :lines: 166-172
+ :linenos:
+ :caption: CollectEOs
+ :name: CollectEOs
-.. code-block:: c++
+ .. literalinclude:: ../../examples/common/utils.cpp
+ :language: c++
+ :lines: 197-212
+ :linenos:
+ :caption: AllocateMemory
+ :name: AllocateMemory
- const ExecutionObjects& execution_objects = executor.GetExecutionObjects();
- int num_eos = execution_objects.size();
+The complete example is available at ``/usr/share/ti/tidl/examples/one_eo_per_frame/main.cpp``.
- // Allocate input and output buffers for each execution object
- std::vector<void *> buffers;
- for (auto &eo : execution_objects)
- {
- ArgInfo in = { ArgInfo(malloc(frame_sz), frame_sz)};
- ArgInfo out = { ArgInfo(malloc(frame_sz), frame_sz)};
- eo->SetInputOutputBuffer(in, out);
+.. _use-case-2:
- buffers.push_back(in.ptr());
- buffers.push_back(out.ptr());
- }
+Frame split across EOs
+======================
+This approach is typically used to reduce the latency of processing a single frame. Certain network layers, such as SoftMax and Pooling, run faster on the C66x DSP than on EVE; running these layers on the C66x can lower per-frame latency.
-Step 5
-======
-Run the network on each input frame. The frames are processed with available execution objects in a pipelined manner with additional num_eos iterations to flush the pipeline (epilogue).
+Time to process a single 224x224x3 frame on AM574x IDK EVM (Arm @ 1GHz, C66x @ 0.75GHz, EVE @ 0.65GHz) with JacintoNet11 (tidl_net_imagenet_jacintonet11v2.bin), TIDL API v1.1:
-.. code-block:: c++
+====== ======= ===================
+EVE C66x EVE + C66x
+====== ======= ===================
+~112ms ~120ms ~64ms :sup:`1`
+====== ======= ===================
+
+:sup:`1` BatchNorm and Convolution layers are placed in one :term:`Layer Group` and run on EVE. Pooling, InnerProduct and SoftMax layers are placed in a second :term:`Layer Group` and run on C66x. The EVE layer group takes ~57.5ms; the C66x layer group takes ~6.5ms.
+
+.. _frame-across-eos:
+.. figure:: images/tidl-frame-across-eos.png
+ :align: center
+ :scale: 80
+
+ Processing a frame across EOs. Not to scale. Fn: Frame n, LG: Layer Group.
+
+The network consists of 2 :term:`Layer Groups<Layer Group>`. :term:`Execution Objects<EO>` are organized into :term:`Execution Object Pipelines<EOP>` (EOP). Each :term:`EOP` processes a frame. The API manages inter-EO synchronization.
+
+#. Determine if there are any TIDL capable :term:`compute cores<Compute core>` on the AM57x Processor:
+
+ .. literalinclude:: ../../examples/one_eo_per_frame/main.cpp
+ :language: c++
+ :lines: 64-65
+ :linenos:
+
+#. Create a Configuration object by reading it from a file or by initializing it directly. The example below parses a configuration file and initializes the Configuration object. See ``examples/test/testvecs/config/infer`` for examples of configuration files.
+
+ .. literalinclude:: ../../examples/one_eo_per_frame/main.cpp
+ :language: c++
+ :lines: 92-94
+ :linenos:
+
+#. Update the default layer group index assignment. Pooling (layer 12), InnerProduct (layer 13) and SoftMax (layer 14) are added to a second layer group. Refer :ref:`layer-group-override` for details.
+
+ .. literalinclude:: ../../examples/two_eo_per_frame/main.cpp
+ :language: c++
+ :lines: 101-102
+ :linenos:
+
+#. Create :term:`Executors<Executor>` on C66x and EVE. The EVE Executor runs layer group 1, the C66x executor runs layer group 2.
+
+#. Create two :term:`Execution Object Pipelines<EOP>`. Each EOP contains one EVE and one C66x :term:`Execution Object<EO>`.
+#. Allocate input and output buffers for each ExecutionObject in the EOP (:ref:`AllocateMemory2`).
+#. Run the network on each input frame. The frames are processed with available EOPs in a pipelined manner. For ease of use, EOP and EO present the same interface to the user.
+
+ * Wait for the EOP to finish processing. If the EOP is not processing a frame (the first iteration on each EOP), the call to ``ProcessFrameWait`` returns false. ``ReportTime`` is used to report host and device execution times.
+ * Read a frame and start running the network. ``ProcessFrameStartAsync`` is asynchronous and returns before processing is complete. ``ReadFrame`` is application specific and used to read an input frame for processing. For example, with OpenCV, ``ReadFrame`` is implemented using OpenCV APIs to capture a frame from the camera.
+
+
+ .. literalinclude:: ../../examples/two_eo_per_frame/main.cpp
+ :language: c++
+ :lines: 110-138,140,147-153
+ :linenos:
+
+ .. literalinclude:: ../../examples/common/utils.cpp
+ :language: c++
+ :lines: 225-240
+ :linenos:
+ :caption: AllocateMemory
+ :name: AllocateMemory2
+
+
+The complete example is available at ``/usr/share/ti/tidl/examples/two_eo_per_frame/main.cpp``. Another example of using the EOP is :ref:`ssd-example`.
- for (int frame_idx = 0; frame_idx < configuration.numFrames + num_eos; frame_idx++)
- {
- ExecutionObject* eo = execution_objects[frame_idx % num_eos].get();
+.. _use-case-3:
- // Wait for previous frame on the same eo to finish processing
- if (eo->ProcessFrameWait())
- WriteFrame(*eo, output_data_file);
+Frame split across EOs with double buffering
+============================================
+The timeline shown in :numref:`frame-across-eos` indicates that EO-EVE1 waits for processing on EO-DSP1 to complete before it starts processing its next frame. It is possible to optimize the example further by overlapping the processing of F :sub:`n-2` on EO-DSP1 and F :sub:`n` on EO-EVE1. This is illustrated in :numref:`frame-across-eos-opt`.
- // Read a frame and start processing it with current eo
- if (ReadFrame(*eo, frame_idx, configuration, input_data_file))
- eo->ProcessFrameStartAsync();
- }
+.. _frame-across-eos-opt:
+.. figure:: images/tidl-frame-across-eos-opt.png
+ :align: center
+ :scale: 80
-Section :ref:`using-tidl-api` contains details on using the APIs. The APIs themselves are documented in section :ref:`api-documentation`.
+ Optimizing using double buffered EOPs. Not to scale. Fn: Frame n, LG: Layer Group.
-Sometimes it is beneficial to partition a network and run different parts on different cores because some types of layers could run faster on EVEs while other types could run faster on DSPs. TIDL APIs provide the flexibility to run partitioned network across EVEs and DSPs. Refer the :ref:`ssd-example` example for details.
+EOP1 and EOP2 use the same :term:`EOs<EO>`: EO-EVE1 and EO-DSP1. Each :term:`EOP` has its own input and output buffers. This enables EOP2 to read an input frame while EOP1 is processing its input frame. This in turn enables EOP2 to start processing on EO-EVE1 as soon as EOP1 completes processing on EO-EVE1.
+The only change in the code compared to :ref:`use-case-2` is to create an additional set of EOPs for double buffering:
-For a complete example of using the API, refer any of the examples available at ``/usr/share/ti/tidl/examples`` on the EVM file system.
+.. literalinclude:: ../../examples/two_eo_per_frame_opt/main.cpp
+ :language: c++
+ :lines: 117-129
+ :linenos:
+ :caption: Setting up EOPs for double buffering
+ :name: test-code
+
+.. note::
+ EOP1 in :numref:`frame-across-eos-opt` corresponds to ``EOPs[0]`` in :numref:`test-code`; EOP2 to ``EOPs[1]``; EOP3 to ``EOPs[2]``; EOP4 to ``EOPs[3]``.
+
+The complete example is available at ``/usr/share/ti/tidl/examples/two_eo_per_frame_opt/main.cpp``.
Sizing device side heaps
++++++++++++++++++++++++
The memory for parameter and network heaps is itself allocated from OpenCL global memory (CMEM). Refer :ref:`opencl-global-memory` for details.
-Configuration file
-++++++++++++++++++
-TIDL API allows the user to create a Configuration object by reading from a file or by initializing it directly. Configuration settings supported by ``Configuration::ReadFromFile``:
-
- * numFrames
- * inWidth
- * inHeight
- * inNumChannels
- * preProcType
- * layerIndex2LayerGroupId
-
- * inData
- * outData
-
- * netBinFile
- * paramsBinFile
-
- * enableTrace
-
-An example configuration file:
-
-.. literalinclude:: ../../examples/layer_output/j11_v2_trace.txt
- :language: bash
-
-.. note::
-
- Refer :ref:`api-documentation` for the complete set of parameters in the ``Configuration`` class and their description.
-
-
-Overriding layer group assignment
-+++++++++++++++++++++++++++++++++
-The `TIDL device translation tool`_ assigns layer group ids to layers during the translation process. TIDL API 1.1 and higher allows the user to override this assignment by specifying explicit mappings. There are two ways for the user to provide an updated mapping:
-
-1. Specify a mapping in the configuration file to indicate that layers 12, 13 and 14 are assigned to layer group 2:
-
-.. code-block:: c++
-
- layerIndex2LayerGroupId = { {12, 2}, {13, 2}, {14, 2} }
-
-
-2. User can also provide the layer index to group mapping in the code:
-
-.. code-block:: c++
-
- Configuration c;
- c.ReadFromFile("test.cfg");
- c.layerIndex2LayerGroupId = { {12, 2}, {13, 2}, {14, 2} };
-
-
-.. role:: cpp(code)
- :language: c++
-
Accessing outputs of network layers
+++++++++++++++++++++++++++++++++++
@@ -236,4 +267,3 @@ See ``examples/layer_output/main.cpp, ProcessTrace()`` for examples of using the
.. _Processor SDK Linux Software Developer's Guide: http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/index.html
.. _Processor SDK Linux Software Developer's Guide (TIDL chapter): http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/Foundational_Components_TIDL.html
-.. _TIDL device translation tool: http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/Foundational_Components_TIDL.html#import-process
diff --git a/examples/Makefile b/examples/Makefile
index cf3bbc3dc68ae8306a41bf5244d8e352bc408f81..52b3ff6508b233093e8ff3b83a20f9b473351146 100644 (file)
--- a/examples/Makefile
+++ b/examples/Makefile
MFS = $(wildcard */Makefile)
DIRS = $(patsubst %/Makefile,%,$(MFS))
+# classification cannot be run from command line without attached display
+RUN_DIRS := $(filter-out classification, $(DIRS))
+
define make_in_dirs
@for dir in $(1); do \
echo "=============== " $$dir " =================" ; \
.PHONY: run
run:
- $(call make_in_dirs, $(DIRS), run)
+ $(call make_in_dirs, $(RUN_DIRS), run)
.PHONY: clean
clean:
index bc0ab3e88bf95f959859ca6b7aa3fcdc6a26fe1b..43810a9773809e475188ee93ed2d1fcc6c2c7c72 100644 (file)
using std::ostream;
using std::vector;
+
+// Create an Executor with the specified type and number of EOs
+Executor* CreateExecutor(DeviceType dt, int num, const Configuration& c,
+ int layer_group_id)
+{
+ if (num == 0) return nullptr;
+
+ DeviceIds ids;
+ for (int i = 0; i < num; i++)
+ ids.insert(static_cast<DeviceId>(i));
+
+ return new Executor(dt, ids, c, layer_group_id);
+}
static bool read_frame_helper(char* ptr, size_t size, istream& input_file);
bool ReadFrame(ExecutionObject* eo,
index deef59a4a7d2fbea95ef84281985fe9e5d6919ca..57c570d31aede1d20d0e5e62081753d552304d13 100644 (file)
--- a/examples/common/utils.h
+++ b/examples/common/utils.h
using tidl::ExecutionObject;
using tidl::ExecutionObjectPipeline;
using tidl::Configuration;
+using tidl::DeviceType;
+
+Executor* CreateExecutor(DeviceType dt, int num, const Configuration& c,
+ int layer_group_id);
bool ReadFrame(ExecutionObject* eo,
int frame_idx,
index b84dcf616a1b85456d9cdba779de57a42a9c3837..f11e050f5d197641c37f706f3da9e01de368b1af 100644 (file)
{
if (frame_idx >= opts.num_frames)
return false;
+
eo.SetFrameIndex(frame_idx);
char* frame_buffer = eo.GetInputBufferPtr();
index 482e9f72fa37d4d2f70f445334be69de3e86d53f..4bcc707984bae927bf2c521e45c2138a89061872 100644 (file)
const Configuration& c, const cmdline_opts_t& opts,
VideoCapture &cap)
{
- if (frame_idx >= opts.num_frames)
+ if ((uint32_t)frame_idx >= opts.num_frames)
return false;
+
eop.SetFrameIndex(frame_idx);
char* frame_buffer = eop.GetInputBufferPtr();
index 2291d97f9ab543aed19d147b7464b2742cd937fc..5cc618feec4515aeb02783e06ff6c3e1afa88e88 100644 (file)
bool Run(int num_eve,int num_dsp, const char* ref_output);
-Executor* CreateExecutor(DeviceType dt, int num, const Configuration& c,
- int layer_group_id);
-
-
int main(int argc, char *argv[])
{
// Catch ctrl-c to ensure a clean exit
c.PARAM_HEAP_SIZE = (3 << 20); // 3MB
c.NETWORK_HEAP_SIZE = (20 << 20); // 20MB
+ // Run this example for 16 input frames
c.numFrames = 16;
- // Assign layers 12, 13 and 14 to layer group 2
- c.layerIndex2LayerGroupId = { {12, 2}, {13, 2}, {14, 2} };
+ // Assign layers 12, 13 and 14 to the DSP layer group
+ const int EVE_LG = 1;
+ const int DSP_LG = 2;
+ c.layerIndex2LayerGroupId = { {12, DSP_LG}, {13, DSP_LG}, {14, DSP_LG} };
// Open input file for reading
std::ifstream input(c.inData, std::ios::binary);
try
{
// Create Executors - use all the DSP and EVE cores available
- // Layer group 1 will be executed on EVE, 2 on DSP
- unique_ptr<Executor> eve(CreateExecutor(DeviceType::EVE,num_eve,c,1));
- unique_ptr<Executor> dsp(CreateExecutor(DeviceType::DSP,num_dsp,c,2));
+ // Specify layer group id for each Executor
+ unique_ptr<Executor> eve(CreateExecutor(DeviceType::EVE,
+ num_eve, c, EVE_LG));
+ unique_ptr<Executor> dsp(CreateExecutor(DeviceType::DSP,
+ num_dsp, c, DSP_LG));
// Create pipelines. Each pipeline has 1 EVE and 1 DSP. If there are
// more EVEs than DSPs, the DSPs are shared across multiple
{
EOP* eop = EOPs[frame_idx % num_eops];
- // Wait for previous frame on the same eo to finish processing
+ // Wait for previous frame on the same EOP to finish processing
if (eop->ProcessFrameWait())
{
ReportTime(eop);
return status;
}
-// Create an Executor with the specified type and number of EOs
-Executor* CreateExecutor(DeviceType dt, int num, const Configuration& c,
- int layer_group_id)
-{
- if (num == 0) return nullptr;
-
- DeviceIds ids;
- for (int i = 0; i < num; i++)
- ids.insert(static_cast<DeviceId>(i));
-
- return new Executor(dt, ids, c, layer_group_id);
-}
diff --git a/examples/two_eo_per_frame_opt/Makefile b/examples/two_eo_per_frame_opt/Makefile
--- /dev/null
@@ -0,0 +1,37 @@
+# Copyright (c) 2018 Texas Instruments Incorporated - http://www.ti.com/
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+# * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+# * Redistributions in binary form must reproduce the above copyright
+# notice, this list of conditions and the following disclaimer in the
+# documentation and/or other materials provided with the distribution.
+# * Neither the name of Texas Instruments Incorporated nor the
+# names of its contributors may be used to endorse or promote products
+# derived from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+EXE = two_eo_per_frame_opt
+
+include ../make.common
+
+CXXFLAGS += -I../common
+
+SOURCES = main.cpp ../common/utils.cpp
+
+$(EXE): $(TIDL_API_LIB) $(HEADERS) $(SOURCES)
+ $(CXX) $(CXXFLAGS) $(SOURCES) $(TIDL_API_LIB) $(LDFLAGS) $(LIBS) -o $@
+
diff --git a/examples/two_eo_per_frame_opt/main.cpp b/examples/two_eo_per_frame_opt/main.cpp
--- /dev/null
@@ -0,0 +1,170 @@
+/******************************************************************************
+ * Copyright (c) 2017-2018 Texas Instruments Incorporated - http://www.ti.com/
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * * Neither the name of Texas Instruments Incorporated nor the
+ * names of its contributors may be used to endorse or promote products
+ * derived from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ *****************************************************************************/
+
+//
+// This example illustrates using multiple EOs to process a single frame
+// For details, refer http://downloads.ti.com/mctools/esd/docs/tidl-api/
+//
+#include <signal.h>
+#include <iostream>
+#include <fstream>
+#include <cassert>
+#include <string>
+
+#include "executor.h"
+#include "execution_object.h"
+#include "execution_object_pipeline.h"
+#include "configuration.h"
+#include "utils.h"
+
+using namespace tidl;
+using std::string;
+using std::unique_ptr;
+using std::vector;
+
+using EOP = tidl::ExecutionObjectPipeline;
+
+bool Run(int num_eve,int num_dsp, const char* ref_output);
+
+int main(int argc, char *argv[])
+{
+ // Catch ctrl-c to ensure a clean exit
+ signal(SIGABRT, exit);
+ signal(SIGTERM, exit);
+
+ // This example requires both EVE and C66x
+ uint32_t num_eve = Executor::GetNumDevices(DeviceType::EVE);
+ uint32_t num_dsp = Executor::GetNumDevices(DeviceType::DSP);
+ if (num_eve == 0 || num_dsp == 0)
+ {
+ std::cout << "TI DL not supported on this SoC." << std::endl;
+ return EXIT_SUCCESS;
+ }
+
+ string ref_file ="../test/testvecs/reference/j11_v2_ref.bin";
+ unique_ptr<const char> reference_output(ReadReferenceOutput(ref_file));
+
+ bool status = Run(num_eve, num_dsp, reference_output.get());
+
+ if (!status)
+ {
+ std::cout << "FAILED" << std::endl;
+ return EXIT_FAILURE;
+ }
+
+ std::cout << "PASSED" << std::endl;
+ return EXIT_SUCCESS;
+}
+
+bool Run(int num_eve, int num_dsp, const char* ref_output)
+{
+ string config_file ="../test/testvecs/config/infer/tidl_config_j11_v2.txt";
+
+ Configuration c;
+ if (!c.ReadFromFile(config_file))
+ return false;
+
+ // Heap sizes for this network determined using Configuration::showHeapStats
+ c.PARAM_HEAP_SIZE = (3 << 20); // 3MB
+ c.NETWORK_HEAP_SIZE = (20 << 20); // 20MB
+
+ // Run this example for 16 input frames
+ c.numFrames = 16;
+
+ // Assign layers 12, 13 and 14 to the DSP layer group
+ const int EVE_LG = 1;
+ const int DSP_LG = 2;
+ c.layerIndex2LayerGroupId = { {12, DSP_LG}, {13, DSP_LG}, {14, DSP_LG} };
+
+ // Open input file for reading
+ std::ifstream input(c.inData, std::ios::binary);
+
+ bool status = true;
+ try
+ {
+ // Create Executors - use all the DSP and EVE cores available
+ // Specify layer group id for each Executor
+ unique_ptr<Executor> eve(CreateExecutor(DeviceType::EVE,
+ num_eve, c, EVE_LG));
+ unique_ptr<Executor> dsp(CreateExecutor(DeviceType::DSP,
+ num_dsp, c, DSP_LG));
+
+ // On AM5749, create a total of 4 pipelines (EOPs):
+ // EOPs[0] : { EVE1, DSP1 }
+ // EOPs[1] : { EVE1, DSP1 } for double buffering
+ // EOPs[2] : { EVE2, DSP2 }
+ // EOPs[3] : { EVE2, DSP2 } for double buffering
+
+ const uint32_t pipeline_depth = 2; // 2 EOs in EOP => depth 2
+ std::vector<EOP *> EOPs;
+ uint32_t num_pipe = std::max(num_eve, num_dsp);
+ for (uint32_t i = 0; i < num_pipe; i++)
+ for (uint32_t j = 0; j < pipeline_depth; j++)
+ EOPs.push_back(new EOP( { (*eve)[i % num_eve],
+ (*dsp)[i % num_dsp] } ));
+
+ AllocateMemory(EOPs);
+
+ // Process frames with EOs in a pipelined manner
+ // additional num_eos iterations to flush the pipeline (epilogue)
+ int num_eops = EOPs.size();
+ for (int frame_idx = 0; frame_idx < c.numFrames + num_eops; frame_idx++)
+ {
+ EOP* eop = EOPs[frame_idx % num_eops];
+
+ // Wait for previous frame on the same EOP to finish processing
+ if (eop->ProcessFrameWait())
+ {
+ ReportTime(eop);
+
+ // The reference output is valid only for the first frame
+ // processed on each EOP
+ if (frame_idx < num_eops && !CheckFrame(eop, ref_output))
+ status = false;
+ }
+
+ // Read a frame and start processing it with current eo
+ if (ReadFrame(eop, frame_idx, c, input))
+ eop->ProcessFrameStartAsync();
+ }
+
+ FreeMemory(EOPs);
+
+ }
+ catch (tidl::Exception &e)
+ {
+ std::cerr << e.what() << std::endl;
+ status = false;
+ }
+
+ input.close();
+
+ return status;
+}
+
+
diff --git a/readme.md b/readme.md
index 092f09b7e4670107c000555617f96af32999a0e4..f6bde0322b6bcbe00d97b5a71e24068ee785c2c2 100644 (file)
--- a/readme.md
+++ b/readme.md
TI Deep Learning (TIDL) API
---------------------------
-TIDL API brings Deep Learning to the edge and enables Linux applications to leverage TI’s proprietary CNN/DNN implementation on EVEs and C66x DSPs in AM57x SoCs. It requires OpenCL v1.1.15.1 or newer. Refer the User's Guide for details: http://software-dl.ti.com/mctools/esd/docs/tidl-api/index.html
+TIDL API brings Deep Learning to the edge and enables Linux applications to leverage TI’s proprietary CNN/DNN implementation on EVEs and C66x DSPs in AM57x SoCs. Refer to the TIDL API User's Guide for details: http://software-dl.ti.com/mctools/esd/docs/tidl-api/index.html