author    Ajay Jayaraj <ajayj@ti.com>  Thu, 9 Aug 2018 18:33:58 +0000 (13:33 -0500)
committer Ajay Jayaraj <ajayj@ti.com>  Thu, 9 Aug 2018 18:33:58 +0000 (13:33 -0500)
Provide API support for updating layer -> layer group id assignments
before executing network.
(MCT-1028)
diff --git a/docs/source/api.rst b/docs/source/api.rst
index 7aa7849a9f542b8d1152cfc2184c7500d34cc0c0..6fd383ca164aa275ea68d734b8ed294687b7e968 100644 (file)
--- a/docs/source/api.rst
+++ b/docs/source/api.rst
Path to the TIDL parameter file. Used by the API, must be specified.
+.. data:: std::map<int, int> layerIndex2LayerGroupId
+
+ Map of layer index to layer group id. Used to override the layer group assignment for layers. Any layer not specified in this map will retain its existing mapping.
+
Memory Management
+++++++++++++++++
The ``Configuration`` object specifies the sizes of two heaps. These heaps are allocated from OpenCL global memory that is shared across the host and device. Refer to section :ref:`opencl-global-memory` for steps to increase the size of the OpenCL global memory heap.
This field specifies the size of the device heap used for all allocations other than network parameters. The constructor for ``Configuration`` sets EXTMEM_HEAP_SIZE to 64MB. There is one external memory heap for each instance of ``ExecutionObject``.
+Debug
++++++
+.. data:: bool enableOutputTrace;
+
+ Enable tracing of output buffers associated with each layer.
+
+
+
API Reference
-------------
diff --git a/docs/source/conf.py b/docs/source/conf.py
index 6a1cf0a9fd848dc77687316e019c39a56dc22490..c909e771869bca3ee68089478affbf09bc8b80e5 100644 (file)
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
# built documents.
#
# The short X.Y version.
-version = '1.0'
+version = '1.1'
# The full version, including alpha/beta/rc tags.
-release = '1.0.0'
+release = '1.1.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
diff --git a/docs/source/intro.rst b/docs/source/intro.rst
index 407a39e1ea387ca4ba030c3ce86118ab07237dab..bb0b9eb2d719c65425fa257ff1dab4968a9327a1 100644 (file)
--- a/docs/source/intro.rst
+++ b/docs/source/intro.rst
TI Deep Learning (TIDL) API brings deep learning to the edge by enabling applications to leverage TI's proprietary, highly optimized CNN/DNN implementation on the EVE and C66x DSP compute engines. TIDL will initially target Vision/2D use cases on AM57x SoCs.
-This User's Guide covers the TIDL API. For information on TIDL such as the overall development flow, techniques to optimize performance of CNN/DNN on TI's SoCs,performance/benchmarking data and list of supported layers, see the TIDL section in the `Processor SDK Linux Software Developer's Guide`_.
+This User's Guide covers the TIDL API. For information on TIDL such as the overall development flow, techniques to optimize CNN/DNN performance on TI's SoCs, performance/benchmarking data, and the list of supported layers, see the TIDL section in the `Processor SDK Linux Software Developer's Guide (TIDL chapter)`_.
.. note::
TIDL API is available only on AM57x SoCs. It requires OpenCL version 1.1.15.1 or higher.
Development flow with TIDL APIs
-:numref:`TIDL Development flow` shows the overall development process. Deep learning consists to two stages: training at development stage and inference at deployment stage. Training involves designing neural network model, running training data through the network to tune the model parameters. Inference takes the pre-trained model including parameters, applies to new input and produces output. Training is computationally intensive and is done using frameworks such as Caffe/TensorFlow. Once the network is trained, the TIDL converter tool can be used to translate the network and parameters to TIDL. The `Processor SDK Linux Software Developer's Guide`_ provides details on the development flow and and the converter tool. The converter tool generates a TIDL network binary file and model or parameter file. The network file specifies the network graph. The parameter file specifies the weights.
+:numref:`TIDL Development flow` shows the overall development process. Deep learning consists of two stages: training at development time and inference at deployment time. Training involves designing a neural network model and running training data through the network to tune the model parameters. Inference takes the pre-trained model, including parameters, applies it to new input, and produces output. Training is computationally intensive and is done using frameworks such as Caffe/TensorFlow. Once the network is trained, the TIDL converter tool can be used to translate the network and parameters to TIDL. The `Processor SDK Linux Software Developer's Guide (TIDL chapter)`_ provides details on the development flow and the converter tool. The converter tool generates a TIDL network binary file and a model or parameter file. The network file specifies the network graph. The parameter file specifies the weights.
:numref:`TIDL API Software Architecture` shows the TIDL API software architecture.
Sometimes it is beneficial to partition a network and run different parts on different cores because some types of layers could run faster on EVEs while other types could run faster on DSPs. TIDL APIs provide the flexibility to run a partitioned network across EVEs and DSPs. Refer to the :ref:`ssd-example` example for details.
.. _Processor SDK Linux Software Developer's Guide: http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/index.html
+.. _Processor SDK Linux Software Developer's Guide (TIDL chapter): http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/Foundational_Components_TIDL.html
.. _OpenCV: http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/Foundational_Components.html#opencv
.. _OpenCL: http://software-dl.ti.com/mctools/esd/docs/opencl/index.html
index b2c909e03666bc4ae93877fe22b77c84a1b33125..752bd4d3f10e4fbbf7fd42ce6ad21296539f671c 100644 (file)
Using the TIDL API
******************
+Deploying a TIDL network
+++++++++++++++++++++++++
+
This example illustrates using the TIDL API to offload deep learning network processing from a Linux application to the C66x DSPs or EVEs on AM57x devices. The API consists of three classes: ``Configuration``, ``Executor`` and ``ExecutionObject``.
Step 1
uint32_t num_dsp = Executor::GetNumDevices(DeviceType::DSP);
.. note::
- By default, the OpenCL runtime is configured with sufficient global memory
+ By default, the OpenCL runtime is configured with sufficient global memory
(via CMEM) to offload TIDL networks to 2 OpenCL devices. On devices where
``Executor::GetNumDevices`` returns 4 (E.g. AM5729 with 4 EVE OpenCL
- devices) the amount of memory available to the runtime must be increased.
+ devices) the amount of memory available to the runtime must be increased.
Refer to :ref:`opencl-global-memory` for details.
Step 2
bool status = configuration.ReadFromFile(config_file);
.. note::
- Refer `Processor SDK Linux Software Developer's Guide`_ for creating TIDL network and parameter binary files from TensorFlow and Caffe.
+ Refer to the `Processor SDK Linux Software Developer's Guide (TIDL chapter)`_ for creating TIDL network and parameter binary files from TensorFlow and Caffe.
Step 3
======
For a complete example of using the API, refer to any of the examples available at ``/usr/share/ti/tidl/examples`` on the EVM file system.
+Overriding layer group assignment
++++++++++++++++++++++++++++++++++
+The `TIDL device translation tool`_ assigns layer group ids to layers during the translation process. TIDL API 1.1 and higher allows the user to override this assignment by specifying explicit mappings. There are two ways for the user to provide an updated mapping:
+
+1. Specify a mapping in the configuration file to indicate that layers 12, 13 and 14 are assigned to layer group 2:
+
+.. code-block:: c++
+
+ layerIndex2LayerGroupId = { {12, 2}, {13, 2}, {14, 2} }
+
+
+2. Specify the layer index to layer group id mapping in the code:
+
+.. code-block:: c++
+
+ Configuration c;
+ c.ReadFromFile("test.cfg");
+ c.layerIndex2LayerGroupId = { {12, 2}, {13, 2}, {14, 2} };
+
+
+.. role:: cpp(code)
+ :language: c++
+
+Accessing outputs of network layers
++++++++++++++++++++++++++++++++++++
+
+TIDL API v1.1 and higher provides the following APIs to access the output buffers associated with network layers:
+
+* :cpp:`ExecutionObject::WriteLayerOutputsToFile` - Write outputs from each layer into individual files. Files are named ``<filename_prefix>_<layer_index>.bin``.
+* :cpp:`ExecutionObject::GetOutputsFromAllLayers` - Get output buffers from all layers.
+* :cpp:`ExecutionObject::GetOutputFromLayer` - Get a single output buffer from a layer.
+
+See ``ProcessTrace()`` in ``examples/layer_output/main.cpp`` for examples of using these tracing APIs.
+
+.. note::
+ The :cpp:`ExecutionObject::GetOutputsFromAllLayers` method can be memory intensive if the network has a large number of layers. This method allocates sufficient host memory to hold all output buffers from all layers.
+
.. _Processor SDK Linux Software Developer's Guide: http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/index.html
+.. _Processor SDK Linux Software Developer's Guide (TIDL chapter): http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/Foundational_Components_TIDL.html
+.. _TIDL device translation tool: http://software-dl.ti.com/processor-sdk-linux/esd/docs/latest/linux/Foundational_Components_TIDL.html#import-process
index 4bd375470fa60abd67728cbc938247c24f5c3784..d1ebf09ed176f9c8fbe295e4c0e92b3a06999e47 100644 (file)
//! @file configuration.h
#include <string>
+#include <map>
#include <iostream>
namespace tidl {
//! Number of channels in the input frame (e.g. 3 for BGR)
int inNumChannels;
+
+ //! @private
int noZeroCoeffsPercentage;
//! Pre-processing type applied to the input frame
//! Enable tracing of output buffers associated with each layer
bool enableOutputTrace;
+ //! Map of layer index to layer group id. Used to override layer group
+ //! assignment for layers. Any layer not specified in this map will
+ //! retain its existing mapping.
+ std::map<int, int> layerIndex2LayerGroupId;
+
//! Default constructor.
Configuration();
//! Read a configuration from the specified file and validate
bool ReadFromFile(const std::string& file_name);
+
};
}
index e9df4c891a7599d48093cd70b3e4b077043b750b..5871e718bd74a1c2206ea0a0f292ec4d9f47bccb 100644 (file)
#include <boost/spirit/include/qi.hpp>
#include <boost/spirit/include/phoenix_operator.hpp>
+#include <boost/fusion/include/std_pair.hpp>
#include <string>
#include <fstream>
#include <iostream>
#include <algorithm>
#include <cctype>
+#include <utility>
+#include <map>
#include "configuration.h"
using ascii::char_;
using qi::_1;
- path %= lexeme[+(char_ - '"')];
+ // Rules for parsing layer id assignments: { {int, int}, ... }
+ id2group = '{' >> int_ >> ',' >> int_ >> '}';
+ id2groups = '{' >> id2group >> *(qi::lit(',') >> id2group) >> '}';
- // Discard '"'
+ // Rules for parsing paths. Discard '"'
+ path %= lexeme[+(char_ - '"')];
q_path = qi::omit[*char_('"')] >> path >> qi::omit[*char_('"')];
+ // Grammar for parsing configuration file
entry %=
- lit("#") >> *(char_) /* discard comments */ |
- lit("numFrames") >> '=' >> int_[ph::ref(x.numFrames) = _1] |
- lit("preProcType") >> '=' >> int_[ph::ref(x.preProcType) = _1] |
- lit("inWidth") >> '=' >> int_[ph::ref(x.inWidth) = _1] |
- lit("inHeight") >> '=' >> int_[ph::ref(x.inHeight) = _1] |
- lit("inNumChannels") >> '=' >> int_[ph::ref(x.inNumChannels) = _1] |
- lit("inData") >> "=" >> q_path[ph::ref(x.inData) = _1] |
- lit("outData") >> "=" >> q_path[ph::ref(x.outData) = _1] |
- lit("netBinFile") >> "=" >> q_path[ph::ref(x.netBinFile) = _1] |
- lit("paramsBinFile") >> "=" >> q_path[ph::ref(x.paramsBinFile) = _1] |
- lit("enableTrace") >> "=" >> bool_[ph::ref(x.enableOutputTrace) = _1]
+ lit("layerIndex2LayerGroupId") >> '=' >>
+ id2groups[ph::ref(x.layerIndex2LayerGroupId) = _1] |
+ lit("#") >> *(char_) /* discard comments */ |
+ lit("numFrames") >> '=' >> int_[ph::ref(x.numFrames) = _1] |
+ lit("preProcType") >> '=' >> int_[ph::ref(x.preProcType) = _1] |
+ lit("inWidth") >> '=' >> int_[ph::ref(x.inWidth) = _1] |
+ lit("inHeight") >> '=' >> int_[ph::ref(x.inHeight) = _1] |
+ lit("inNumChannels") >> '=' >> int_[ph::ref(x.inNumChannels) = _1] |
+ lit("inData") >> '=' >> q_path[ph::ref(x.inData) = _1] |
+ lit("outData") >> '=' >> q_path[ph::ref(x.outData) = _1] |
+ lit("netBinFile") >> '=' >> q_path[ph::ref(x.netBinFile) = _1] |
+ lit("paramsBinFile") >> '=' >> q_path[ph::ref(x.paramsBinFile) = _1] |
+ lit("enableTrace") >> '=' >> bool_[ph::ref(x.enableOutputTrace) = _1]
;
}
qi::rule<Iterator, std::string(), ascii::space_type> path;
qi::rule<Iterator, std::string(), ascii::space_type> q_path;
qi::rule<Iterator, ascii::space_type> entry;
+
+ qi::rule<Iterator, std::pair<int, int>(), ascii::space_type> id2group;
+ qi::rule<Iterator, std::map<int, int>(), ascii::space_type> id2groups;
};
bool Configuration::ReadFromFile(const std::string &file_name)
return true;
}
+
+#if 0
+--- test.cfg ---
+numFrames = 1
+preProcType = 0
+inData = ../test/testvecs/input/preproc_0_224x224.y
+outData = stats_tool_out.bin
+netBinFile = ../test/testvecs/config/tidl_models/tidl_net_imagenet_jacintonet11v2.bin
+paramsBinFile = ../test/testvecs/config/tidl_models/tidl_param_imagenet_jacintonet11v2.bin
+inWidth = 224
+inHeight = 224
+inNumChannels = 3
+
+# Enable tracing of output buffers
+enableTrace = true
+
+# Override layer group id assignments in the network
+layerIndex2LayerGroupId = { {12, 2}, {13, 2}, {14, 2} }
+----------------
+#endif
+
+#if TEST_PARSING
+int main()
+{
+ Configuration c;
+ c.ReadFromFile("test.cfg");
+
+ return 0;
+}
+#endif
index d67f683a8e86cfbeefbc65f4c23ef69f4caa9a2a..7929c42a06b1c8051021d0ecf77a57d84ca61b33 100644 (file)
net->TIDLLayers[i].layersGroupId = layers_group_id_m;
}
+ // If the user has specified an override mapping, apply it
+ else if (!configuration.layerIndex2LayerGroupId.empty())
+ {
+ for (const auto &item : configuration.layerIndex2LayerGroupId)
+ if (item.first < net->numLayers)
+ net->TIDLLayers[item.first].layersGroupId = item.second;
+ }
+
// Call a setup kernel to allocate and fill network parameters
InitializeNetworkParams(shared_createparam.get());