.. _examples:

********
Examples
********

.. list-table:: TIDL API Examples
   :header-rows: 1
   :widths: 12 43 20 25

   * - Example
     - Description
     - Compute cores
     - Input image
   * - one_eo_per_frame
     - Processes a single frame with one :term:`EO` using the j11_v2 network. Throughput is increased by distributing frame processing across EOs. Refer to :ref:`use-case-1`.
     - EVE or C66x
     - Pre-processed image read from file.
   * - two_eo_per_frame
     - Processes a single frame with an :term:`EOP` using the j11_v2 network to reduce per-frame processing latency. Also increases throughput by distributing frame processing across EOPs. The EOP consists of two EOs. Refer to :ref:`use-case-2`.
     - EVE and C66x (network is split across both EVE and C66x)
     - Pre-processed image read from file.
   * - two_eo_per_frame_opt
     - Builds on ``two_eo_per_frame``. Adds double buffering to improve performance. Refer to :ref:`use-case-3`.
     - EVE and C66x (network is split across both EVE and C66x)
     - Pre-processed image read from file.
   * - imagenet
     - Classification example
     - EVE or C66x
     - OpenCV used to read input image from file or capture from camera.
   * - segmentation
     - Pixel-level segmentation example
     - EVE or C66x
     - OpenCV used to read input image from file or capture from camera.
   * - ssd_multibox
     - Object detection example
     - EVE and C66x (network is split across both EVE and C66x)
     - OpenCV used to read input image from file or capture from camera.
   * - mnist
     - Handwritten digit recognition (MNIST). This example illustrates the low TIDL API overhead (~1.8%) for small networks with low compute requirements (<5ms).
     - EVE
     - Pre-processed white-on-black images read from file, with or without MNIST database file headers.
   * - classification
     - Classification example, called from the Matrix GUI.
     - EVE or C66x
     - OpenCV used to read input image from file or capture from camera.
   * - mcbench
     - Used to benchmark supported networks. Refer to ``mcbench/scripts`` for command line options.
     - EVE or C66x
     - Pre-processed image read from file.
   * - layer_output
     - Illustrates using TIDL APIs to access output buffers of intermediate :term:`layers<Layer>` in the network.
     - EVE or C66x
     - Pre-processed image read from file.
   * - test
     - Used to test the pre-converted networks included in the TIDL API package (``test/testvecs/config/tidl_models``). When run without any arguments, the program ``test_tidl`` will run all available networks on the C66x DSPs and EVEs available on the SoC. Use the ``-c`` option to specify a single network. Run ``test_tidl -h`` for details.
     - C66x and EVEs (if available)
     - Pre-processed image read from file.

The included examples demonstrate three categories of deep learning networks: classification, segmentation and object detection. ``imagenet`` and ``segmentation`` can run on AM57x processors with either EVE or C66x cores. ``ssd_multibox`` requires AM57x processors with both EVE and C66x cores. The examples are available at ``/usr/share/ti/tidl/examples`` on the EVM file system and in the Linux devkit.


The performance numbers were obtained using:

* `AM574x IDK EVM`_ with the Sitara `AM5749`_ Processor - 2 Arm Cortex-A15 cores running at 1.0GHz, 2 EVE cores at 650MHz, and 2 C66x cores at 750MHz.
* `Processor SDK Linux`_ v5.1 with TIDL API v1.1


For each example, the device processing time, host processing time,
and TIDL API overhead are reported.

* **Device processing time** is measured on the device, from the moment processing starts for a frame until processing finishes.
* **Host processing time** is measured on the host, from the moment ``ProcessFrameStartAsync()`` is called until ``ProcessFrameWait()`` returns in the user application. It includes the TIDL API overhead, the OpenCL runtime overhead, and the time to copy user input data into padded TIDL internal buffers. ``Host processing time = Device processing time + TIDL API overhead``.

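The relationship above can be checked with a few lines of arithmetic. The helper below is an illustrative sketch, not part of the TIDL API: it derives the absolute overhead as the difference between host and device time and expresses it as a percentage of the device time (the normalization base is an assumption here; the reported tables may compute the percentage slightly differently).

```python
def api_overhead(device_ms: float, host_ms: float) -> tuple[float, float]:
    """Return (overhead_ms, overhead_pct) given per-frame timings.

    Host processing time = Device processing time + TIDL API overhead,
    so the absolute overhead is the difference; the percentage base
    (device time) is an assumption for illustration.
    """
    overhead_ms = host_ms - device_ms
    return overhead_ms, 100.0 * overhead_ms / device_ms

# Example with the imagenet EVE timings from the table below:
ms, pct = api_overhead(106.5, 107.9)
print(f"overhead: {ms:.1f} ms ({pct:.2f} %)")
```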

Imagenet
--------

The imagenet example takes an image as input and outputs 1000 probabilities.
Each probability corresponds to one of the 1000 object classes that the
network was pre-trained on. The example outputs the top 5 predictions for a given input image.

The following figure and tables show an input image, the top 5 predicted
objects as output, and the processing time on either EVE or C66x.

.. image:: ../../examples/test/testvecs/input/objects/cat-pet-animal-domestic-104827.jpeg
   :width: 600


==== ==============
Rank Object Classes
==== ==============
1    tabby
2    Egyptian_cat
3    tiger_cat
4    lynx
5    Persian_cat
==== ==============

======= ====================== ==================== ============
Device  Device Processing Time Host Processing Time API Overhead
======= ====================== ==================== ============
EVE     106.5 ms               107.9 ms             1.37 %
C66x    117.9 ms               118.7 ms             0.93 %
======= ====================== ==================== ============


The :term:`network<Network>` used in the example is jacintonet11v2. It has
14 layers. The input to the network is a 224x224 RGB image. Users can specify whether to run the network on EVE or C66x.

The example code sets ``buffer_factor`` to 2 to create duplicated
ExecutionObjectPipelines with identical ExecutionObjects in order to
perform double buffering, so that host pre/post-processing can be overlapped
with device processing (see comments in the code for details).
The following table shows the overall loop time over 10 frames
with single buffering and double buffering,
``./imagenet -f 10 -d <num> -e <num>``.


.. list-table:: Loop overall time over 10 frames
   :header-rows: 1

   * - Device(s)
     - Single Buffering (buffer_factor=1)
     - Double Buffering (buffer_factor=2)
   * - 1 EVE
     - 1744 ms
     - 1167 ms
   * - 2 EVEs
     - 966 ms
     - 795 ms
   * - 1 C66x
     - 1879 ms
     - 1281 ms
   * - 2 C66xs
     - 1021 ms
     - 814 ms

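The benefit of double buffering can be approximated with a simple pipeline model. The sketch below is purely illustrative (the per-frame costs are hypothetical constants, not measurements): with a single buffer, each frame's host work and device work serialize, while with two buffers the host work on one frame overlaps the device work on another, so each additional frame costs only the slower of the two stages.

```python
def loop_time_ms(frames: int, host_ms: float, device_ms: float,
                 buffers: int) -> float:
    """Rough pipeline model of single vs. double buffering.

    buffers=1: host and device work serialize for every frame.
    buffers=2: after the first frame fills the pipeline, each extra
    frame costs max(host_ms, device_ms).
    """
    if buffers == 1:
        return frames * (host_ms + device_ms)
    return host_ms + device_ms + (frames - 1) * max(host_ms, device_ms)

# Hypothetical per-frame costs (ms), chosen only for illustration:
single = loop_time_ms(10, host_ms=60, device_ms=110, buffers=1)
double = loop_time_ms(10, host_ms=60, device_ms=110, buffers=2)
print(single, double)  # 1700.0 vs. 1160.0 -- a similar ratio to the table
```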

.. note::
   The predictions reported here are based on the output of the softmax
   layer in the network, which is not normalized to real probabilities.


Segmentation
------------

The segmentation example takes an image as input and performs pixel-level
classification according to pre-trained categories. The following figures
show a street scene as input and the scene overlaid with pixel-level
classifications as output: road in green, pedestrians in red, vehicles
in blue and background in gray.

.. image:: ../../examples/test/testvecs/input/roads/pexels-photo-972355.jpeg
   :width: 600

.. image:: images/pexels-photo-972355-seg.jpg
   :width: 600


The :term:`network<Network>` used in the example is jsegnet21v2. It has
26 layers. Users can specify whether to run the network on EVE or C66x.
The input to the network is a 1024x512 RGB image. The output is 1024x512
values, each indicating which pre-trained category the corresponding pixel
belongs to. The example takes the network output, creates an overlay,
and blends the overlay onto the original input image to create an output image.
From the times reported in the following table, we can see that this network
runs significantly faster on EVE than on C66x.

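The overlay step described above can be sketched in a few lines. This is an illustrative reconstruction, not the example's actual post-processing code: it assumes the network emits one class index per pixel and alpha-blends a per-class color (a hypothetical palette) onto the input frame.

```python
import numpy as np

# Hypothetical palette: class index -> BGR color, matching the figure legend
PALETTE = np.array([[128, 128, 128],   # 0: background, gray
                    [0, 255, 0],       # 1: road, green
                    [0, 0, 255],       # 2: pedestrian, red
                    [255, 0, 0]],      # 3: vehicle, blue
                   dtype=np.uint8)

def blend_overlay(frame: np.ndarray, class_map: np.ndarray,
                  alpha: float = 0.4) -> np.ndarray:
    """Blend per-pixel class colors onto an HxWx3 uint8 frame."""
    overlay = PALETTE[class_map]        # HxW indices -> HxWx3 color lookup
    out = (1 - alpha) * frame + alpha * overlay
    return out.round().astype(np.uint8)

# Tiny 2x2 demo: all-white frame, classes [[road, vehicle], [bg, person]]
frame = np.full((2, 2, 3), 255, dtype=np.uint8)
classes = np.array([[1, 3], [0, 2]], dtype=np.uint8)
blended = blend_overlay(frame, classes)
```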

======= ====================== ==================== ============
Device  Device Processing Time Host Processing Time API Overhead
======= ====================== ==================== ============
EVE     251.8 ms               254.2 ms             0.96 %
C66x    812.7 ms               815.0 ms             0.27 %
======= ====================== ==================== ============


The example code sets ``buffer_factor`` to 2 to create duplicated
ExecutionObjectPipelines with identical ExecutionObjects in order to
perform double buffering, so that host pre/post-processing can be overlapped
with device processing (see comments in the code for details).
The following table shows the overall loop time over 10 frames
with single buffering and double buffering,
``./segmentation -f 10 -d <num> -e <num>``.


.. list-table:: Loop overall time over 10 frames
   :header-rows: 1

   * - Device(s)
     - Single Buffering (buffer_factor=1)
     - Double Buffering (buffer_factor=2)
   * - 1 EVE
     - 5233 ms
     - 3017 ms
   * - 2 EVEs
     - 3032 ms
     - 3015 ms
   * - 1 C66x
     - 10890 ms
     - 8416 ms
   * - 2 C66xs
     - 5742 ms
     - 4638 ms


.. _ssd-example:

SSD
---

SSD is the abbreviation for Single Shot MultiBox Detector.
The ssd_multibox example takes an image as input and detects multiple
objects with bounding boxes according to pre-trained categories.
The following figures show another street scene as input and the scene
with recognized objects boxed as output: pedestrians in red,
vehicles in blue and road signs in yellow.

.. image:: ../../examples/test/testvecs/input/roads/pexels-photo-378570.jpeg
   :width: 600

.. image:: images/pexels-photo-378570-ssd.jpg
   :width: 600


The network used in this example is jdenet_ssd, which has 43 layers.
The input to the network is a 768x320 RGB image. The output is a list of
boxes (up to 20), each with its coordinates and the pre-trained category
of the object it contains.
The example takes the network output, draws boxes accordingly,
and creates an output image.
The network can be run entirely on either EVE or C66x. However, the best
performance comes from running the first 30 layers as a group on EVE
and the remaining 13 layers as another group on C66x.
Our end-to-end example shows how easy it is to assign a :term:`Layer Group` id
to an :term:`Executor` and to construct an :term:`ExecutionObjectPipeline` that connects the output of one *Executor*'s :term:`ExecutionObject`
to the input of another *Executor*'s *ExecutionObject*.

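A typical post-processing pass over such an output can be sketched as follows. This is an illustrative reconstruction rather than the example's actual code: it assumes each detection record carries a class label, a confidence score, and box corners normalized to [0, 1], which are scaled to the 768x320 frame and filtered by a score threshold.

```python
from dataclasses import dataclass

FRAME_W, FRAME_H = 768, 320  # SSD input size used by the example

@dataclass
class Detection:
    label: int      # pre-trained category index
    score: float    # confidence in [0, 1]
    xmin: float     # normalized box corners in [0, 1]
    ymin: float
    xmax: float
    ymax: float

def to_pixel_boxes(detections, threshold=0.5):
    """Scale normalized boxes to pixel coordinates, dropping weak ones."""
    boxes = []
    for d in detections:
        if d.score < threshold:
            continue
        boxes.append((d.label,
                      int(d.xmin * FRAME_W), int(d.ymin * FRAME_H),
                      int(d.xmax * FRAME_W), int(d.ymax * FRAME_H)))
    return boxes

# Two hypothetical detections: a confident one and one below threshold
dets = [Detection(2, 0.9, 0.25, 0.50, 0.50, 1.00),
        Detection(5, 0.3, 0.00, 0.00, 0.10, 0.10)]
print(to_pixel_boxes(dets))  # [(2, 192, 160, 384, 320)]
```

In the real example the surviving boxes would then be drawn onto the frame (e.g. with OpenCV) to produce the output image shown above.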

======== ====================== ==================== ============
Device   Device Processing Time Host Processing Time API Overhead
======== ====================== ==================== ============
EVE+C66x 169.5 ms               172.0 ms             1.68 %
======== ====================== ==================== ============


The example code sets ``pipeline_depth`` to 2 to create duplicated
ExecutionObjectPipelines with identical ExecutionObjects to
perform pipelined execution at the ExecutionObject level.
As a side effect, it also overlaps host pre/post-processing
with device processing (see comments in the code for details).
The following table shows the overall loop time over 10 frames
with pipelining at the ExecutionObjectPipeline level
versus the ExecutionObject level,
``./ssd_multibox -f 10 -d <num> -e <num>``.


.. list-table:: Loop overall time over 10 frames
   :header-rows: 1

   * - Device(s)
     - pipeline_depth=1
     - pipeline_depth=2
   * - 1 EVE + 1 C66x
     - 2900 ms
     - 1735 ms
   * - 2 EVEs + 2 C66xs
     - 1630 ms
     - 1408 ms


.. _mnist-example:

MNIST
-----

The MNIST example takes a pre-processed 28x28 white-on-black frame from
a file as input and predicts the hand-written digit in the frame.
For instance, the example predicts 0 for the following frame.

.. code-block:: none

   root@am57xx-evm:~/tidl/examples/mnist# hexdump -v -e '28/1 "%2x" "\n"' -n 784 ../test/testvecs/input/digits10_images_28x28.y
    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0 0 0 0 0 3 314 8 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0 0 0319bdfeec1671b 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0 01ed5ffd2a4e4ec89 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0 1bcffee2a 031e6e225 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 05ff7ffbf 2 0 078ffa1 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0b2f2f34e 0 0 015e0d8 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0148deab2 0 0 0 0 0bdec 2 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 084f845 0 0 0 0 0a4f222 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0c4d3 5 0 0 0 0 096f21c 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 052f695 0 0 0 0 0 0a7ed 8 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 09af329 0 0 0 0 0 0d1cf 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 2d4c8 0 0 0 0 0 01ae9a2 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 038fa9a 0 0 0 0 0 062ff76 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 07afe5d 0 0 0 0 0 0a9e215 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0bdec1d 0 0 0 0 017e7aa 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 1e7d6 0 0 0 0 0 096f85a 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 01df2bf 0 0 0 0 015e1ca 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 061fc95 0 0 0 0 084f767 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 06eff8b 0 0 0 033e8ca 4 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 060fc9e 0 0 0 092d63e 0 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 01bf1da 6 0 019b656 0 0 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0c3fb8e a613e7b 5 0 0 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 049f1fcf5f696 9 0 0 0 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 04ca0b872 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0


The file can contain multiple frames. If an optional label file is also
given, the example compares the predicted result against the pre-determined
label for accuracy. The input files may or may not have `MNIST dataset
file headers <http://yann.lecun.com/exdb/mnist/>`_. If headers are used,
the input filenames must end with ``idx3-ubyte`` or ``idx1-ubyte``.

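The idx headers referenced above have a simple big-endian layout: a 32-bit magic number whose last byte encodes the tensor rank (0x803 for idx3 image files, 0x801 for idx1 label files), followed by one 32-bit size per dimension. A small header parser, shown here only as an illustrative sketch (it is not part of the example code), looks like this:

```python
import struct

def parse_idx_header(data: bytes):
    """Parse an MNIST idx header; return (kind, dims)."""
    magic, = struct.unpack_from(">I", data, 0)
    rank = magic & 0xFF                      # last magic byte = number of dims
    dims = struct.unpack_from(f">{rank}I", data, 4)
    kind = {0x803: "images", 0x801: "labels"}.get(magic, "unknown")
    return kind, dims

# A synthetic idx3-ubyte header: 10 images of 28x28 pixels
header = struct.pack(">IIII", 0x803, 10, 28, 28)
print(parse_idx_header(header))  # ('images', (10, 28, 28))
```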

The MNIST example also illustrates the low overhead of the TIDL API for small
networks with low compute requirements (<5ms). The network runs in about 3ms
on EVE for a single frame. As shown in the following table, when running
over 1000 frames, the overhead is about 1.8%.


.. list-table:: Loop overall time over 1000 frames
   :header-rows: 1

   * - Device(s)
     - Device Processing Time
     - Host Processing Time
     - API Overhead
   * - 1 EVE
     - 3091 ms
     - 3146 ms
     - 1.78%


Running Examples
----------------

The examples are located in ``/usr/share/ti/tidl/examples`` on
the EVM file system. **Each example needs to be run in its own directory** due to relative paths to configuration files.
Running an example with ``-h`` shows a help message with the available options.
The following listing illustrates how to build and run the examples.

.. code-block:: shell

   root@am57xx-evm:~/tidl-api/examples/imagenet# ./imagenet
   Input: ../test/testvecs/input/objects/cat-pet-animal-domestic-104827.jpeg
   frame[ 0]: Time on EVE0: 106.50 ms, host: 107.96 ms API overhead: 1.35 %
   1: tabby
   2: Egyptian_cat
   3: tiger_cat
   4: lynx
   5: Persian_cat
   Loop total time (including read/write/opencv/print/etc): 202.6ms
   imagenet PASSED

   root@am57xx-evm:~/tidl-api/examples/segmentation# ./segmentation
   Input: ../test/testvecs/input/000100_1024x512_bgr.y
   frame[ 0]: Time on EVE0: 251.74 ms, host: 258.02 ms API overhead: 2.43 %
   Saving frame 0 to: frame_0.png
   Saving frame 0 overlayed with segmentation to: overlay_0.png
   frame[ 1]: Time on EVE0: 251.76 ms, host: 255.79 ms API overhead: 1.58 %
   Saving frame 1 to: frame_1.png
   Saving frame 1 overlayed with segmentation to: overlay_1.png
   ...
   frame[ 8]: Time on EVE0: 251.75 ms, host: 254.21 ms API overhead: 0.97 %
   Saving frame 8 to: frame_8.png
   Saving frame 8 overlayed with segmentation to: overlay_8.png
   Loop total time (including read/write/opencv/print/etc): 4809ms
   segmentation PASSED

   root@am57xx-evm:~/tidl-api/examples/ssd_multibox# ./ssd_multibox
   Input: ../test/testvecs/input/preproc_0_768x320.y
   frame[ 0]: Time on EVE0+DSP0: 169.44 ms, host: 173.56 ms API overhead: 2.37 %
   Saving frame 0 to: frame_0.png
   Saving frame 0 with SSD multiboxes to: multibox_0.png
   Loop total time (including read/write/opencv/print/etc): 320.2ms
   ssd_multibox PASSED

   root@am57xx-evm:~/tidl/examples/mnist# ./mnist
   Input images: ../test/testvecs/input/digits10_images_28x28.y
   Input labels: ../test/testvecs/input/digits10_labels_10x1.y
   0
   1
   2
   3
   4
   5
   6
   7
   8
   9
   Device total time: 31.02ms
   Loop total time (including read/write/print/etc): 32.49ms
   Accuracy: 100%
   mnist PASSED


Image input
^^^^^^^^^^^

The image input option, ``-i <image>``, takes an image file as input.
You can supply an image file in any format that OpenCV can read, since
OpenCV is used for image pre/post-processing. When the ``-f <number>`` option
is used, the same image is processed repeatedly.


Camera (live video) input
^^^^^^^^^^^^^^^^^^^^^^^^^

The input option, ``-i camera<number>``, enables live frame input
from a camera. ``<number>`` is the video input port number
of your camera in Linux. Use the following command to check the video
input ports. The number defaults to ``1`` for the TMDSCM572X camera module
used on AM57x EVMs. You can use ``-f <number>`` to specify the number
of frames you want to process.

.. code-block:: shell

   root@am57xx-evm:~# v4l2-ctl --list-devices
   omapwb-cap (platform:omapwb-cap):
           /dev/video11

   omapwb-m2m (platform:omapwb-m2m):
           /dev/video10

   vip (platform:vip):
           /dev/video1

   vpe (platform:vpe):
           /dev/video0


Pre-recorded video (mp4/mov/avi) input
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The input option, ``-i <name>.{mp4,mov,avi}``, enables frame input from a
pre-recorded video file in mp4, mov or avi format. If you have a video in
a different OpenCV-supported format/suffix, you can simply create a softlink
with one of the mp4, mov or avi suffixes and feed it into the example.
Again, use ``-f <number>`` to specify the number of frames you want to process.


Displaying video output
^^^^^^^^^^^^^^^^^^^^^^^

When using video input, live or pre-recorded, the example displays
the output in a window using OpenCV. If you have an LCD screen attached
to the EVM, you will need to stop ``matrix-gui`` first in order to
see the example display window, as shown in the following listing.

.. code-block:: shell

   root@am57xx-evm:/usr/share/ti/tidl/examples/ssd_multibox# /etc/init.d/matrix-gui-2.0 stop
   Stopping Matrix GUI application.
   root@am57xx-evm:/usr/share/ti/tidl/examples/ssd_multibox# ./ssd_multibox -i camera -f 100
   Input: camera
   init done
   Using Wayland-EGL
   wlpvr: PVR Services Initialised
   Using the 'xdg-shell-v5' shell integration
   ... ...
   root@am57xx-evm:/usr/share/ti/tidl/examples/ssd_multibox# /etc/init.d/matrix-gui-2.0 start
   /usr/share/ti/tidl/examples/ssd_multibox
   Removing stale PID file /var/run/matrix-gui-2.0.pid.
   Starting Matrix GUI application.


.. _AM574x IDK EVM: http://www.ti.com/tool/tmdsidk574
.. _AM5749: http://www.ti.com/product/AM5749/
.. _Processor SDK Linux: http://software-dl.ti.com/processor-sdk-linux/esd/AM57X/latest/index_FDS.html