.. _using-tidl-api:

******************
Using the TIDL API
******************

Deploying a TIDL network
++++++++++++++++++++++++

This example illustrates using the TIDL API to offload deep learning network processing from a Linux application to the C66x DSPs or EVEs on AM57x devices. The API consists of three classes: ``Configuration``, ``Executor`` and ``ExecutionObject``.

Step 1
======

Determine if there are any TIDL capable devices on the AM57x SoC:

.. code-block:: c++

    uint32_t num_eve = Executor::GetNumDevices(DeviceType::EVE);
    uint32_t num_dsp = Executor::GetNumDevices(DeviceType::DSP);

.. note::
    By default, the OpenCL runtime is configured with sufficient global memory
    (via CMEM) to offload TIDL networks to 2 OpenCL devices. On devices where
    ``Executor::GetNumDevices`` returns 4 (e.g. AM5729 with 4 EVE OpenCL
    devices), the amount of memory available to the runtime must be increased.
    Refer to :ref:`opencl-global-memory` for details.
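
Applications typically use these counts to pick a device type for the offload. The snippet below is a minimal sketch (not part of the original example) that prefers EVEs and falls back to the DSPs:

.. code-block:: c++

    // Sketch: prefer EVEs, fall back to DSPs, and bail out when no
    // TIDL capable device is available.
    DeviceType device_type = DeviceType::EVE;
    if (num_eve == 0 && num_dsp == 0)
        return -1;                      // no TIDL capable devices
    else if (num_eve == 0)
        device_type = DeviceType::DSP;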

Step 2
======

Create a ``Configuration`` object by reading it from a file or by initializing it directly. The example below parses a configuration file and initializes the ``Configuration`` object. See ``examples/test/testvecs/config/infer`` for examples of configuration files.

.. code-block:: c++

    Configuration configuration;
    bool status = configuration.ReadFromFile(config_file);

.. note::
    Refer to the `Processor SDK Linux Software Developer's Guide (TIDL chapter)`_ for details on creating TIDL network and parameter binary files from TensorFlow and Caffe models.
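
If no configuration file is available, the ``Configuration`` object can instead be populated directly in code. The sketch below uses the parameter names listed under *Configuration file* later in this section; the sizes and file paths are purely illustrative:

.. code-block:: c++

    Configuration configuration;
    configuration.numFrames     = 1;
    configuration.inWidth       = 224;              // illustrative input width
    configuration.inHeight      = 224;              // illustrative input height
    configuration.inNumChannels = 3;
    configuration.netBinFile    = "tidl_net.bin";   // illustrative path
    configuration.paramsBinFile = "tidl_param.bin"; // illustrative path
    configuration.inData        = "input.bin";      // illustrative path
    configuration.outData       = "output.bin";     // illustrative path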

Step 3
======

Create an ``Executor`` with the appropriate device type, set of devices, and a configuration. In the snippet below, an ``Executor`` is created on 2 EVEs.

.. code-block:: c++

    DeviceIds ids = {DeviceId::ID0, DeviceId::ID1};
    Executor executor(DeviceType::EVE, ids, configuration);
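
Instead of hard-coding the device ids, the set can be built from the count returned in Step 1. A sketch, assuming ``DeviceIds`` is a set-like container of contiguous ``DeviceId`` values:

.. code-block:: c++

    // Limit to 2 devices to stay within the default OpenCL global memory
    // configuration (see the note in Step 1).
    uint32_t num_devices = (num_eve < 2) ? num_eve : 2;

    DeviceIds ids;
    for (uint32_t i = 0; i < num_devices; i++)
        ids.insert(static_cast<DeviceId>(i));

    Executor executor(DeviceType::EVE, ids, configuration);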

Step 4
======

Get the set of available ``ExecutionObjects`` and allocate input and output buffers for each ``ExecutionObject``.

.. code-block:: c++

    const ExecutionObjects& execution_objects = executor.GetExecutionObjects();
    int num_eos = execution_objects.size();

    // Allocate input and output buffers for each execution object
    std::vector<void *> buffers;
    for (auto &eo : execution_objects)
    {
        ArgInfo in  = { ArgInfo(malloc(frame_sz), frame_sz)};
        ArgInfo out = { ArgInfo(malloc(frame_sz), frame_sz)};
        eo->SetInputOutputBuffer(in, out);

        buffers.push_back(in.ptr());
        buffers.push_back(out.ptr());
    }
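
``frame_sz`` is not defined in the snippet above. For the bundled examples the input data is typically one byte per pixel per channel, so a plausible sizing, derived from the configuration, is sketched below (the exact output size depends on the network):

.. code-block:: c++

    // Input frame size in bytes: width x height x number of channels,
    // using the dimensions from the configuration.
    size_t frame_sz = configuration.inWidth * configuration.inHeight *
                      configuration.inNumChannels;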

Step 5
======

Run the network on each input frame. The frames are processed by the available execution objects in a pipelined manner, with an additional ``num_eos`` iterations to flush the pipeline (epilogue).

.. code-block:: c++

    for (int frame_idx = 0; frame_idx < configuration.numFrames + num_eos; frame_idx++)
    {
        ExecutionObject* eo = execution_objects[frame_idx % num_eos].get();

        // Wait for previous frame on the same eo to finish processing
        if (eo->ProcessFrameWait())
            WriteFrame(*eo, output_data_file);

        // Read a frame and start processing it with current eo
        if (ReadFrame(*eo, frame_idx, configuration, input_data_file))
            eo->ProcessFrameStartAsync();
    }
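
Once all frames have been processed, the host buffers allocated in Step 4 can be released. A sketch (the cleanup in the bundled examples may differ):

.. code-block:: c++

    // Release the host-side input/output buffers allocated with malloc().
    for (void *b : buffers)
        free(b);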

For a complete example of using the API, refer to any of the examples available at ``/usr/share/ti/tidl/examples`` on the EVM file system.

Sizing device-side heaps
++++++++++++++++++++++++

The TIDL API allocates two heaps for device-side allocations during network setup/initialization:

+-----------+----------------------------------+-----------------------------+
| Heap Name | Configuration parameter          | Default size                |
+-----------+----------------------------------+-----------------------------+
| Parameter | Configuration::PARAM_HEAP_SIZE   | 9MB, 1 per Executor         |
+-----------+----------------------------------+-----------------------------+
| Network   | Configuration::NETWORK_HEAP_SIZE | 64MB, 1 per ExecutionObject |
+-----------+----------------------------------+-----------------------------+

Depending on the network being deployed, these defaults may be smaller or larger than required. To determine the exact heap sizes required, use the following approach:

Start with the default heap sizes. The API displays heap usage statistics when ``Configuration::showHeapStats`` is set to ``true``.

.. code-block:: c++

    Configuration configuration;
    bool status = configuration.ReadFromFile(config_file);
    configuration.showHeapStats = true;

If a heap is larger than required by the device-side allocations, the displayed statistics report ``Free`` > 0:

.. code-block:: bash

    # ./test_tidl -n 1 -t e -c testvecs/config/infer/tidl_config_j11_v2.txt
    API Version: 01.01.00.00.e4e45c8
    [eve 0] TIDL Device Trace: PARAM heap: Size 9437184, Free 6556180, Total requested 2881004
    [eve 0] TIDL Device Trace: NETWORK heap: Size 67108864, Free 47047680, Total requested 20061184

Update the application to set the heap sizes to the ``Total requested`` sizes displayed:

.. code-block:: c++

    configuration.PARAM_HEAP_SIZE   = 2881004;
    configuration.NETWORK_HEAP_SIZE = 20061184;

Rerunning with these settings shows that the heaps are now fully utilized:

.. code-block:: bash

    # ./test_tidl -n 1 -t e -c testvecs/config/infer/tidl_config_j11_v2.txt
    API Version: 01.01.00.00.e4e45c8
    [eve 0] TIDL Device Trace: PARAM heap: Size 2881004, Free 0, Total requested 2881004
    [eve 0] TIDL Device Trace: NETWORK heap: Size 20061184, Free 0, Total requested 20061184

Now the heaps are sized as required by network execution (i.e. ``Free`` is 0) and the ``configuration.showHeapStats = true`` line can be removed.

.. note::
    If the default heap sizes are smaller than required, the device will report an allocation failure and indicate the required minimum size. For example:

    .. code-block:: bash

        # ./test_tidl -n 1 -t e -c testvecs/config/infer/tidl_config_j11_v2.txt
        API Version: 01.01.00.00.0ba86d4
        [eve 0] TIDL Device Error: Allocation failure with NETWORK heap, request size 161472, avail 102512
        [eve 0] TIDL Device Error: Network heap must be >= 20061184 bytes, 19960944 not sufficient. Update Configuration::NETWORK_HEAP_SIZE
        TIDL Error: [src/execution_object.cpp, Wait, 548]: Allocation failed on device

.. note::
    The memory for the parameter and network heaps is itself allocated from OpenCL global memory (CMEM). Refer to :ref:`opencl-global-memory` for details.

Configuration file
++++++++++++++++++

The TIDL API allows the user to create a ``Configuration`` object by reading from a file or by initializing it directly. The following configuration settings are supported by ``Configuration::ReadFromFile``:

* numFrames
* inWidth
* inHeight
* inNumChannels
* preProcType
* layerIndex2LayerGroupId

* inData
* outData

* netBinFile
* paramsBinFile

* enableTrace

An example configuration file:

.. literalinclude:: ../../examples/layer_output/j11_v2_trace.txt
    :language: bash

.. note::
    Refer to :ref:`api-documentation` for the complete set of parameters in the ``Configuration`` class and their descriptions.

Overriding layer group assignment
+++++++++++++++++++++++++++++++++

The `TIDL device translation tool`_ assigns layer group ids to layers during the translation process. TIDL API 1.1 and higher allows the user to override this assignment by specifying explicit mappings. There are two ways to provide an updated mapping:

1. Specify the mapping in the configuration file. The following entry indicates that layers 12, 13 and 14 are assigned to layer group 2:

   .. code-block:: c++

      layerIndex2LayerGroupId = { {12, 2}, {13, 2}, {14, 2} }

2. Provide the layer index to layer group mapping in the application code:

   .. code-block:: c++

      Configuration c;
      c.ReadFromFile("test.cfg");
      c.layerIndex2LayerGroupId = { {12, 2}, {13, 2}, {14, 2} };

.. role:: cpp(code)
   :language: c++

Accessing outputs of network layers
+++++++++++++++++++++++++++++++++++

TIDL API v1.1 and higher provides the following APIs to access the output buffers associated with network layers:

* :cpp:`ExecutionObject::WriteLayerOutputsToFile` - Write outputs from each layer into individual files. Files are named ``<filename_prefix>_<layer_index>.bin``.
* :cpp:`ExecutionObject::GetOutputsFromAllLayers` - Get output buffers from all layers.
* :cpp:`ExecutionObject::GetOutputFromLayer` - Get a single output buffer from a layer.

See ``examples/layer_output/main.cpp, ProcessTrace()`` for examples of using these tracing APIs.
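
For instance, per-layer outputs could be dumped inside the processing loop from Step 5. The snippet below is only a sketch: it assumes layer output tracing has been enabled via the configuration, and the filename prefix is an illustrative value.

.. code-block:: c++

    // After a frame finishes, write each layer's output to
    // <filename_prefix>_<layer_index>.bin
    if (eo->ProcessFrameWait())
        eo->WriteLayerOutputsToFile("trace_");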

.. note::
    The :cpp:`ExecutionObject::GetOutputsFromAllLayers` method can be memory intensive if the network has a large number of layers. This method allocates sufficient host memory to hold all output buffers from all layers.

.. _Processor SDK Linux Software Developer's Guide: http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/index.html
.. _Processor SDK Linux Software Developer's Guide (TIDL chapter): http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/Foundational_Components_TIDL.html
.. _TIDL device translation tool: http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/Foundational_Components_TIDL.html#import-process