author     Manu Mathew    2020-06-19 08:47:16 -0500
committer  Manu Mathew    2020-06-19 09:13:11 -0500
commit     252247115ec2d4eb54dba496cfa94ffce94023ad (patch)
tree       1ba4c6feac20fc2c116693c4f2df3c1281f081cb
parent     9d0b3422cc0915028dd92b4d07377b3b8f45534f (diff)
documentation added
-rw-r--r--  LICENSE                                           234
-rw-r--r--  README.md                                          74
-rw-r--r--  configs/retinanet/retinanet_regnet_fpn_bgr.py       2
-rw-r--r--  docs/jacinto_ai_detection_model_zoo.md             46
-rw-r--r--  docs/jacinto_ai_detection_usage.md                 13
-rw-r--r--  docs/jacinto_ai_quantization_aware_training.md     68
-rwxr-xr-x  scripts/test_dist.py                               18
-rwxr-xr-x  scripts/test_main.py                                5
-rwxr-xr-x  scripts/train_dist.py                              18
-rwxr-xr-x  scripts/train_main.py                               2
10 files changed, 457 insertions, 23 deletions
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..5c8feda
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,234 @@
Texas Instruments (C) 2018-2020
All Rights Reserved

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.

* Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.

* Neither the name of the copyright holder nor the names of its
  contributors may be used to endorse or promote products derived from
  this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

==============================================================================
Original License of mmdetection (https://github.com/open-mmlab/mmdetection):

Copyright 2018-2019 Open-MMLab. All rights reserved.

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright 2018-2019 Open-MMLab.

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..5066e03
--- /dev/null
+++ b/README.md
@@ -0,0 +1,74 @@
# Jacinto-AI-Detection


This repository is an extension of the popular [mmdetection](https://github.com/open-mmlab/mmdetection) open source repository for object detection training. While mmdetection focuses on a wide variety of models, typically at high complexity, we focus on models that are optimized for speed and accuracy so that they run efficiently on embedded devices.

When we say MMDetection or mmdetection, we refer to the original repository. When we say Jacinto-AI-Detection or "this repository", we refer to this extension of mmdetection with speed/accuracy optimized models.

Kindly take time to read through the documentation of the original [mmdetection](https://github.com/open-mmlab/mmdetection) before attempting to use this repository. This repository requires mmdetection to be installed.


## License

This repository is released under the following [LICENSE](./LICENSE).


## Installation

Please refer to [mmdetection install.md](https://github.com/open-mmlab/mmdetection/blob/master/docs/install.md) for installation and dataset preparation.

After installing mmdetection, please install [PyTorch-Jacinto-AI-DevKit](https://bitbucket.itg.ti.com/projects/JACINTO-AI/repos/pytorch-jacinto-ai-devkit/browse/), as our repository uses several components from there - especially to define low-complexity models and to perform Quantization Aware Training (QAT).


## Get Started

Please see [getting_started.md](https://github.com/open-mmlab/mmdetection/blob/master/docs/getting_started.md) for the basic usage of MMDetection. Note that some of it may not apply to this repository.

Please see [usage/instructions](./docs/jacinto_ai_detection_usage.md) for training and testing with this repository.


## Benchmark and Model Zoo

Several trained models with accuracy reports are available at the [Jacinto-AI-Detection Model Zoo](docs/jacinto_ai_detection_model_zoo.md).


## Quantization

A tutorial on [Quantization Aware Training in Jacinto-AI-Detection](docs/jacinto_ai_quantization_aware_training.md) is also provided.


## Acknowledgement

This is an open source project contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback.
We hope that the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop their own new detectors.


## Citation

This package/toolbox is an extension of mmdetection (https://github.com/open-mmlab/mmdetection). If you use this package/toolbox or benchmark in your research, please cite that project as well.

```
@article{PyTorch-Jacinto-AI-Detection,
  title   = {{PyTorch-Jacinto-AI-Detection}: An Extension To Open MMLab Detection Toolbox and Benchmark},
  author  = {Jacinto AI Team, jacinto-ai-devkit@list.ti.com},
  journal = {https://github.com/TexasInstruments/jacinto-ai-devkit},
  year={2020}
}
```
```
@article{mmdetection,
  title   = {{MMDetection}: Open MMLab Detection Toolbox and Benchmark},
  author  = {Chen, Kai and Wang, Jiaqi and Pang, Jiangmiao and Cao, Yuhang and
             Xiong, Yu and Li, Xiaoxiao and Sun, Shuyang and Feng, Wansen and
             Liu, Ziwei and Xu, Jiarui and Zhang, Zheng and Cheng, Dazhi and
             Zhu, Chenchen and Cheng, Tianheng and Zhao, Qijie and Li, Buyu and
             Lu, Xin and Zhu, Rui and Wu, Yue and Dai, Jifeng and Wang, Jingdong and
             Shi, Jianping and Ouyang, Wanli and Loy, Chen Change and Lin, Dahua},
  journal = {arXiv preprint arXiv:1906.07155},
  year={2019}
}
```


## Contact
This extension of MMDetection is part of Jacinto-AI-DevKit and is maintained by the Jacinto AI team: jacinto-ai-devkit@list.ti.com
diff --git a/configs/retinanet/retinanet_regnet_fpn_bgr.py b/configs/retinanet/retinanet_regnet_fpn_bgr.py
index 92abbaf..74d81cc 100644
--- a/configs/retinanet/retinanet_regnet_fpn_bgr.py
+++ b/configs/retinanet/retinanet_regnet_fpn_bgr.py
@@ -18,7 +18,7 @@ else:
     assert False, f'Unknown dataset_type: {dataset_type}'
 
 
-input_size = (32,32) #(768,384) # (1536,768) #(1024,512) #(768,384) #(512,512)
+input_size = (768,384) # (1536,768) #(1024,512) #(768,384) #(512,512)
 decoder_width_fact = 1 # 1, 2, 3
 
 backbone_type = 'RegNet'
diff --git a/docs/jacinto_ai_detection_model_zoo.md b/docs/jacinto_ai_detection_model_zoo.md
new file mode 100644
index 0000000..afcf80c
--- /dev/null
+++ b/docs/jacinto_ai_detection_model_zoo.md
@@ -0,0 +1,46 @@
# Jacinto-AI-MMDetection Model Zoo

MMDetection has a huge Model Zoo supporting a large number of models. Many of them are high complexity models that are not suitable for embedded scenarios requiring high throughput (please refer to the mmdetection documentation for details). In this fork, we list only speed/accuracy optimized models that we have trained ourselves or recommend from other locations.

## Features

|                    | ResNet   | RegNet   | MobileNet|
|--------------------|:--------:|:--------:|:--------:|
| Faster R-CNN       |    ✗     |    ✗     |    ✗     |
| Mask R-CNN         |    ✗     |    ✗     |    ✗     |
| SSD                |    ✓     |    ✓     |    ✓     |
| RetinaNet          |    ☐     |    ☐     |    ☐     |
| ATSS               |    ✗     |    ✗     |    ✗     |
| FCOS               |    ✗     |    ✗     |    ✗     |

✓ Available, ☐ In progress or partially available, ✗ TBD

#### Pascal VOC2007 Dataset
- Train on Pascal VOC 2007+2012 TrainVal Set
- Test on Pascal VOC 2007 Test Set

|Dataset  |Model Arch   |Backbone Model |Backbone Stride|Resolution |MeanAP50(mAP%)|GigaMACS|Model Config File                       |Download |
|---------|-------------|---------------|---------------|-----------|--------------|--------|----------------------------------------|---------|
|VOC2007  |SSD with FPN |MobileNetV2    |32             |512x512    |76.1          |2.21    |configs/jacinto_ai/ssd_mobilenet_fpn.py |[link](https://bitbucket.itg.ti.com/projects/JACINTO-AI/repos/jacinto-ai-modelzoo/browse/pytorch/vision/object_detection/mmdetection/ssd/20200612-051942_ssd512_mobilenetv2_fpn) |
|VOC2007  |SSD with FPN |RegNet800MF    |32             |512x512    |79.7          |5.64    |configs/jacinto_ai/ssd_regnet_fpn.py    |[link](https://bitbucket.itg.ti.com/projects/JACINTO-AI/repos/jacinto-ai-modelzoo/browse/pytorch/vision/object_detection/mmdetection/ssd/20200611-200124_ssd512_regnet800mf_fpn_bgr) |
|VOC2007  |SSD with FPN |ResNet50       |32             |512x512    |80.5          |27.1    |configs/jacinto_ai/ssd_resnet_fpn.py    |[link](https://bitbucket.itg.ti.com/projects/JACINTO-AI/repos/jacinto-ai-modelzoo/browse/pytorch/vision/object_detection/mmdetection/ssd/20200614-234748_ssd512_resnet_fpn) |
|.
|VOC2007  |SSD          |VGG16          |32             |512x512    |              |90.39   |configs/pascal_voc/ssd512_voc0712.py    |         |

#### COCO 2017 Dataset
- Train on COCO 2017 Train Set
- Test on COCO 2017 Validation Set

|Dataset  |Model Arch        |Backbone Model |Backbone Stride|Resolution |AP[0.5:0.95]% |MeanAP50(mAP%)|GigaMACS|Model Config File                              |Download |
|---------|------------------|---------------|---------------|-----------|--------------|--------------|--------|-----------------------------------------------|---------|
|COCO2017 |RetinaNet with FPN|ResNet50       |32             |768x384*   |29.0          |45.3          |        |configs/retinanet/retinanet_r50_fpn_1x_coco.py |[link](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet) |
|COCO2017 |RetinaNet with FPN|ResNet50       |32             |1536x768*  |36.1          |54.9          |        |configs/retinanet/retinanet_r50_fpn_1x_coco.py |[link](https://github.com/open-mmlab/mmdetection/tree/master/configs/atss) |

\* Inference is run at this resolution using the trained model given in the link.

## References

[1] MMDetection: Open MMLab Detection Toolbox and Benchmark, https://arxiv.org/abs/1906.07155, Kai Chen, Jiaqi Wang, Jiangmiao Pang, Yuhang Cao, Yu Xiong, Xiaoxiao Li, Shuyang Sun, Wansen Feng, Ziwei Liu, Jiarui Xu, Zheng Zhang, Dazhi Cheng, Chenchen Zhu, Tianheng Cheng, Qijie Zhao, Buyu Li, Xin Lu, Rui Zhu, Yue Wu, Jifeng Dai, Jingdong Wang, Jianping Shi, Wanli Ouyang, Chen Change Loy, Dahua Lin

[2] SSD: Single Shot MultiBox Detector, https://arxiv.org/abs/1512.02325, Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg
\ No newline at end of file
diff --git a/docs/jacinto_ai_detection_usage.md b/docs/jacinto_ai_detection_usage.md
new file mode 100644
index 0000000..ab933e9
--- /dev/null
+++ b/docs/jacinto_ai_detection_usage.md
@@ -0,0 +1,13 @@
# Jacinto-AI-MMDetection Usage

Additional scripts are provided on top of mmdetection to ease the training and testing process.

## Training
- Several complexity optimized configurations are provided in the folder pytorch-mmdetection/configs/jacinto_ai.
- Training is done by pytorch-mmdetection/scripts/train_main.py or pytorch-mmdetection/scripts/train_dist.py (select the appropriate config file inside these scripts).
- To enable quantization during training, the quantize flag in the config file being used must be set to a truthy Python value, for example True or a non-empty string; see the config sketch after this list. If quantize is commented out, or set to False, None, etc., quantization will not be performed.
- Once training is done, testing can be done by pytorch-mmdetection/scripts/test_main.py or pytorch-mmdetection/scripts/test_dist.py (select the appropriate config file inside these scripts).

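The sketch below illustrates what the quantize flag could look like inside a config file. This is a minimal illustration, not an excerpt from an actual config in this repository: only the `quantize` name itself comes from the documentation above; the other settings are hypothetical placeholders.

```python
# Illustrative excerpt of a training config (e.g. something under configs/jacinto_ai/).
# ASSUMPTION: only the `quantize` flag is taken from the documentation; the
# other settings shown here are placeholders.

input_size = (512, 512)   # example input resolution, as in the VOC tables

# Any truthy Python value enables quantization:
quantize = True           # or a non-empty string such as 'training'

# Any of the following disables quantization:
# quantize = False
# quantize = None
# (or simply comment the line out)
```
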
## Testing
- Testing can be done by using the scripts ./scripts/test_main.py or ./scripts/test_dist.py.
- To enable quantization during testing, the quantize flag in the config file being used must likewise be set to a truthy Python value (see the sketch above). If quantize is commented out, or set to False, None, etc., quantization will not be performed.
\ No newline at end of file
diff --git a/docs/jacinto_ai_quantization_aware_training.md b/docs/jacinto_ai_quantization_aware_training.md
new file mode 100644
index 0000000..56a57e0
--- /dev/null
+++ b/docs/jacinto_ai_quantization_aware_training.md
@@ -0,0 +1,68 @@
# Jacinto-AI-MMDetection Quantization

Quantization Aware Training (QAT) is often required to achieve the best accuracy for inference in fixed point.

We have developed several tools to aid QAT; they are provided in the [PyTorch-Jacinto-AI-DevKit](https://bitbucket.itg.ti.com/projects/JACINTO-AI/repos/pytorch-jacinto-ai-devkit/browse/). Please consult the documentation on Quantization provided there to understand the internals of our implementation of QAT.

## Features

|                                                  | Float    | 16 bit   | 8 bit    | 4 bit    |
|--------------------------------------------------|:--------:|:--------:|:--------:|:--------:|
| Float32 training and test                        |    ✓     |          |          |          |
| Float16 training and test                        |          |          |          |          |
| Post Training Calibration for Quantization (PTQ) |          |    ✓     |    ✓     |    ✗     |
| Quantization Aware Training (QAT)                |          |    ✓     |    ✓     |    ✗     |
| Test/Accuracy evaluation of PTQ & QAT models     |          |    ✓     |    ✓     |    ✗     |

✓ Available, ☐ In progress or partially available, ✗ TBD

## Training

#### Floating Point Training
- Floating point training and testing can be done using the scripts provided in the [scripts](../../scripts) folder. Please consult [Usage/Instructions](./jacinto_ai_detection_usage.md) for more information.

#### Quantization Aware Training
- The main tools provided for quantization are:<br>
    - Quantization Aware Training (QAT): QuantTrainModule<br>
    - Post Training Calibration for Quantization (PTQ/Calibration): QuantCalibrateModule<br>
    - Accuracy Test with Quantization: QuantTestModule<br>

- After a model is created in pytorch-mmdetection, it is wrapped in one of the above modules, depending on whether the current phase is QAT, PTQ or Test with Quantization.

- Loading a pretrained model, or saving a trained model, needs a slight change when the model is wrapped with the above modules, because the original model sits inside the wrapper (otherwise the symbols in the pretrained checkpoint will not match). See the sketch after this list.

- QuantCalibrateModule is fast, but QuantTrainModule typically gives better accuracy. QuantTrainModule and QuantTestModule support multiple GPUs, whereas QuantCalibrateModule does not.

- Training with QuantTrainModule is just like any other training. Using QuantCalibrateModule is a bit different: it does not need backpropagation, so backpropagation is disabled during calibration.

- We have derived additional classes from these modules, called MMDetQuantTrainModules, MMDetQuantCalibrateModules and MMDetQuantTestModules, because the forward call of models in mmdetection is a bit different. For example, a forward_dummy method is used in mmdetection for tracing through the model, and the way arguments are passed to the forward call also differs.

- Training can be done by using the scripts ./scripts/train_main.py or ./scripts/train_dist.py.

- To enable quantization during training, the quantize flag in the config file being used must be set to a truthy Python value (for example True or a non-empty string). If quantize is commented out, or set to False, None, etc., quantization will not be performed.

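The following is a minimal sketch of how the wrapping, checkpoint handling and calibration described above could look. The import path `pytorch_jacinto_ai.xnn.quantize`, the `dummy_input` constructor argument and the `.module` attribute are assumptions based on the devkit naming; only the wrapper class names come from this page, and a toy backbone stands in for the real mmdetection model. Please consult the devkit's Quantization documentation for the actual API.

```python
# Sketch of QAT / calibration wrapping, under the assumptions stated above.
import torch
import torch.nn as nn
from pytorch_jacinto_ai import xnn  # hypothetical import path

backbone = nn.Sequential(               # toy stand-in for the real detector
    nn.Conv2d(3, 16, 3, stride=2, padding=1),
    nn.BatchNorm2d(16),
    nn.ReLU(),
)
dummy_input = torch.rand(1, 3, 512, 512)

# QAT: wrap the model, then train it exactly like a float model.
model = xnn.quantize.QuantTrainModule(backbone, dummy_input=dummy_input)

# The original model now lives inside the wrapper (assumed here to be
# exposed as `model.module`), so pretrained float weights must be loaded
# into the inner module - the wrapper's own state_dict keys won't match.
# checkpoint = torch.load('float_model.pth', map_location='cpu')
# model.module.load_state_dict(checkpoint['state_dict'])

# ... the usual training loop for QAT goes here ...

# Save the inner module so the checkpoint symbols match an unwrapped model.
torch.save({'state_dict': model.module.state_dict()}, 'qat_model.pth')

# PTQ/Calibration: forward a few batches; no backpropagation is needed.
calib_model = xnn.quantize.QuantCalibrateModule(backbone, dummy_input=dummy_input)
with torch.no_grad():
    for _ in range(10):                  # a handful of calibration batches
        calib_model(torch.rand(1, 3, 512, 512))
```
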
## Testing
- Testing can be done by using the scripts ./scripts/test_main.py or ./scripts/test_dist.py.

- To enable quantization during testing, the quantize flag in the config file being used must be set to a truthy Python value (for example True or a non-empty string). If quantize is commented out, or set to False, None, etc., quantization will not be performed. A sketch of quantized accuracy evaluation is given below.

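As with training, a minimal sketch of accuracy evaluation with quantization, under the same assumptions about the devkit API (hypothetical import path and `dummy_input` argument; toy model in place of the real detector):

```python
# Sketch of quantized accuracy evaluation with QuantTestModule.
import torch
import torch.nn as nn
from pytorch_jacinto_ai import xnn  # hypothetical import path

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
model = xnn.quantize.QuantTestModule(model,
                                     dummy_input=torch.rand(1, 3, 512, 512))
model.eval()

# Evaluation is a plain forward pass; quantization effects are simulated
# inside the wrapper.
with torch.no_grad():
    out = model(torch.rand(1, 3, 512, 512))
```
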
## Results

#### Pascal VOC2007 Dataset
- Train on Pascal VOC 2007+2012
- Test on Pascal VOC 2007

|Dataset  |Model Arch   |Backbone Model |Backbone Stride|Resolution |Acc Float|Acc 8bit Calib|Acc 8bit QAT|Model Config File                       |
|---------|-------------|---------------|---------------|-----------|---------|--------------|------------|----------------------------------------|
|VOC2007  |SSD with FPN |MobileNetV2    |32             |512x512    |76.1     |75.4          |75.4        |configs/jacinto_ai/ssd_mobilenet_fpn.py |
|VOC2007  |SSD with FPN |RegNet800MF    |32             |512x512    |79.7     |79.0          |79.5        |configs/jacinto_ai/ssd_regnet_fpn.py    |
|VOC2007  |SSD with FPN |ResNet50       |32             |512x512    |80.5     |77.0          |79.5        |configs/jacinto_ai/ssd_resnet_fpn.py    |
|.
|VOC2007  |SSD          |VGG16          |32             |512x512    |79.8     |              |            |configs/pascal_voc/ssd512_voc0712.py    |

- Acc Float: MeanAP50 (mAP) accuracy in percent, in this case.
- Acc 8bit Calib: Same metric with 8bit quantization using PTQ/Calibration.
- Acc 8bit QAT: Same metric with QAT.

## References
Please refer to the [pytorch-jacinto-ai-devkit](https://git.ti.com/cgit/jacinto-ai/pytorch-jacinto-ai-devkit/about/) and its [Quantization documentation](https://git.ti.com/cgit/jacinto-ai/pytorch-jacinto-ai-devkit/about/docs/Quantization.md) for further details on the internals of these Quant Modules.
\ No newline at end of file
diff --git a/scripts/test_dist.py b/scripts/test_dist.py
index cb4879a..4fc7ddf 100755
--- a/scripts/test_dist.py
+++ b/scripts/test_dist.py
@@ -11,21 +11,21 @@ from torch.distributed import launch as distributed_launch
 Usage:
 (1) Use one of the following config files.
 (2) Inside the config file, make sure that the dataset that needs to be trained on is uncommented.
-(3) Use the appropriate input resolution int the config file (input_size).
+(3) Use the appropriate input resolution in the config file (input_size).
 (4) Recommend to run the first training with voc0712 dataset as it is widely used and reasonably small.
 (5) To convert cityscapes to coco format, run the script: tools/convert_datasets/cityscapes.py
 
-config='./configs/jacinto_ai/ssd_mobilenet.py'
-config='./configs/jacinto_ai/ssd_mobilenet_fpn.py'
-config='./configs/jacinto_ai/ssd_resnet_fpn.py'
-config='./configs/jacinto_ai/ssd_regnet_fpn_bgr.py'
+config='./configs/ssd/ssd_mobilenet.py'
+config='./configs/ssd/ssd_mobilenet_fpn.py'
+config='./configs/ssd/ssd_resnet_fpn.py'
+config='./configs/ssd/ssd_regnet_fpn_bgr.py'
 
-config='./configs/jacinto_ai/retinanet_regnet_fpn_bgr.py'
-config='./configs/jacinto_ai/retinanet_resnet_fpn.py'
-config='./configs/jacinto_ai/fcos_regnet_fpn_bgr.py'
+config='./configs/retinanet/retinanet_regnet_fpn_bgr.py'
+config='./configs/retinanet/retinanet_resnet_fpn.py'
+config='./configs/retinanet/fcos_regnet_fpn_bgr.py'
 '''
 
-config='./configs/jacinto_ai/retinanet_resnet_fpn.py'
+config='./configs/retinanet/retinanet_regnet_fpn_bgr.py'
 
 ########################################################################
 # other settings
diff --git a/scripts/test_main.py b/scripts/test_main.py
index bb03ff6..71c5339 100755
--- a/scripts/test_main.py
+++ b/scripts/test_main.py
@@ -11,7 +11,7 @@ from tools import test as test_mmdet
 Usage:
 (1) Use one of the following config files.
 (2) Inside the config file, make sure that the dataset that needs to be trained on is uncommented.
-(3) Use the appropriate input resolution int the config file (input_size).
+(3) Use the appropriate input resolution in the config file (input_size).
 (4) Recommend to run the first training with voc0712 dataset as it is widely used and reasonably small.
 (5) To convert cityscapes to coco format, run the script: tools/convert_datasets/cityscapes.py
 
@@ -22,8 +22,7 @@ config='./configs/ssd/ssd_regnet_fpn_bgr.py'
 
 config='./configs/retinanet/retinanet_regnet_fpn_bgr.py'
 config='./configs/retinanet/retinanet_resnet_fpn.py'
-
-config='./configs/fcos/fcos_regnet_fpn_bgr.py'
+config='./configs/retinanet/fcos_regnet_fpn_bgr.py'
 '''
 
 config='./configs/retinanet/retinanet_regnet_fpn_bgr.py'
diff --git a/scripts/train_dist.py b/scripts/train_dist.py
index acafb8f..2f0b97d 100755
--- a/scripts/train_dist.py
+++ b/scripts/train_dist.py
@@ -11,21 +11,21 @@ from torch.distributed import launch as distributed_launch
 Usage:
 (1) Use one of the following config files.
 (2) Inside the config file, make sure that the dataset that needs to be trained on is uncommented.
-(3) Use the appropriate input resolution int the config file (input_size).
+(3) Use the appropriate input resolution in the config file (input_size).
 (4) Recommend to run the first training with voc0712 dataset as it is widely used and reasonably small.
 (5) To convert cityscapes to coco format, run the script: tools/convert_datasets/cityscapes.py
 
-config='./configs/jacinto_ai/ssd_mobilenet.py'
-config='./configs/jacinto_ai/ssd_mobilenet_fpn.py'
-config='./configs/jacinto_ai/ssd_resnet_fpn.py'
-config='./configs/jacinto_ai/ssd_regnet_fpn_bgr.py'
+config='./configs/ssd/ssd_mobilenet.py'
+config='./configs/ssd/ssd_mobilenet_fpn.py'
+config='./configs/ssd/ssd_resnet_fpn.py'
+config='./configs/ssd/ssd_regnet_fpn_bgr.py'
 
-config='./configs/jacinto_ai/retinanet_regnet_fpn_bgr.py'
-config='./configs/jacinto_ai/retinanet_resnet_fpn.py'
-config='./configs/jacinto_ai/fcos_regnet_fpn_bgr.py'
+config='./configs/retinanet/retinanet_regnet_fpn_bgr.py'
+config='./configs/retinanet/retinanet_resnet_fpn.py'
+config='./configs/retinanet/fcos_regnet_fpn_bgr.py'
 '''
 
-config='./configs/jacinto_ai/ssd_mobilenet_fpn.py'
+config='./configs/retinanet/retinanet_regnet_fpn_bgr.py'
 
 ########################################################################
 # other settings
diff --git a/scripts/train_main.py b/scripts/train_main.py
index d76efb4..409f6c5 100755
--- a/scripts/train_main.py
+++ b/scripts/train_main.py
@@ -11,7 +11,7 @@ from tools import train as train_mmdet
 Usage:
 (1) Use one of the following config files.
 (2) Inside the config file, make sure that the dataset that needs to be trained on is uncommented.
-(3) Use the appropriate input resolution int the config file (input_size).
+(3) Use the appropriate input resolution in the config file (input_size).
 (4) Recommend to run the first training with voc0712 dataset as it is widely used and reasonably small.
 (5) To convert cityscapes to coco format, run the script: tools/convert_datasets/cityscapes.py
 