author    Manu Mathew  2020-08-05 09:39:39 -0500
committer Manu Mathew  2020-08-05 09:39:39 -0500
commit    5abc987796a3d4b1139a7602d486b813a29f37c5 (patch)
tree      0a2a4877296dee9ec76cdcd8da756968f73cdae2 /docs
parent    e5a2ce7b98a6be8a3472c2bf6c75d52c2fcff7b4 (diff)
scripts and docs updated
Diffstat (limited to 'docs')
-rw-r--r--  docs/det_modelzoo.md      6
-rw-r--r--  docs/det_quantization.md  38
-rw-r--r--  docs/det_usage.md         12
3 files changed, 31 insertions, 25 deletions
diff --git a/docs/det_modelzoo.md b/docs/det_modelzoo.md
index 7855f6e3..3b5f3af9 100644
--- a/docs/det_modelzoo.md
+++ b/docs/det_modelzoo.md
@@ -35,7 +35,7 @@ Please see the reference [1] for algorithmic details of the detector.
 
 |Model Arch |Backbone Model|Resolution |Complexity (Giga MACS) |AP [0.5:0.95]%|Model Config File |Download |
 |---------- |--------------|-----------|-----------------------|--------------|---------------------------------|---------|
-|SSDLite+FPN |RegNetX800MF |512x512 |**6.03** |**29.9** |ssd-lite_regnet_fpn.py |[location](pytorch/vision/od/xmmdet/coco/ssd) |
+|SSDLite+FPN |RegNetX800MF |512x512 |**6.03** |**29.9** |ssd-lite_regnet_fpn.py | |
 |SSDLite+FPN |RegNetX1.6GF |768x768 | | |ssd-lite_regnet_fpn.py | |
 |.
 |SSD+FPN |ResNet50 |512x512 |**30.77** |**31.2** |ssd_resnet_fpn.py | |
@@ -48,10 +48,10 @@ Please see the reference [2] for algorithmic details of the detector.
 
 |Model Arch |Backbone Model|Resolution |Complexity (Giga MACS) |AP [0.5:0.95]%|Model Config File |Download |
 |---------- |--------------|-----------|-----------------------|--------------|---------------------------------|---------|
-|RetinaNetLite+FPN|RegNetX800MF |512x512 |**11.08** |**31.6** |retinanet-lite_regnet_fpn_bgr.py |[location](pytorch/vision/od/xmmdet/coco/retinanet) |
+|RetinaNetLite+FPN|RegNetX800MF |512x512 |**11.08** |**31.6** |retinanet-lite_regnet_fpn_bgr.py | |
 |RetinaNetLite+FPN|RegNetX1.6GF |768x768 | | |retinanet-lite_regnet_fpn.py | |
 |.
-|RetinaNet+FPN* |ResNet50 |512x512 |**68.88** |**29.0** | |[location](pytorch/vision/od/mmdet/coco/retinanet), [external](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet) |
+|RetinaNet+FPN* |ResNet50 |512x512 |**68.88** |**29.0** | |[external](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet) |
 |RetinaNet+FPN* |ResNet50 |768x768 |**137.75** |**34.0** | |[external](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet) |
 |RetinaNet+FPN* |ResNet50 |(1536,768) |**275.5** |**37.0** | |[external](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet) |
 <br>
diff --git a/docs/det_quantization.md b/docs/det_quantization.md
index 93ff238b..49b6a7a0 100644
--- a/docs/det_quantization.md
+++ b/docs/det_quantization.md
@@ -1,21 +1,23 @@
 # Quantization Aware Training of Object Detection Models
 
-Post Training Calibration for Quantization (PTQ) or Quantization Aware Training (QAT) are often required to achieve the best acuracy for inference in fixed point. This repository can do QAT and/or PTQ on object detection models trained here. PTQ can easily be performed on the inference engine itself, it need not be done using a training framework like this. While PTQ is fast, QAT provides the best accuracy. Due to these reasons, we shall focus on QAT in this repository.
+Post Training Calibration for Quantization (Calibration/PTQ) or Quantization Aware Training (QAT) is often required to achieve the best accuracy for inference in fixed point. This repository can do QAT and/or Calibration on object detection models trained here. PTQ can easily be performed on the inference engine itself; it need not be done using a training framework like this. While PTQ is fast, QAT provides the best accuracy. For these reasons, we focus on QAT in this repository. However, this repository also supports a mechanism to aid PTQ, which we refer to as Calibration/PTQ.
 
-Although repository does Quantization, the data is still kept as discrete floating point values. Activation range information is inserted into the model using Clip functions, wherever appropriate.
+Although this repository does QAT, the data is still kept as discrete floating point values. Activation range information is inserted into the model using Clip functions, wherever appropriate.
 
-The foundational components for Quantization are provided in [PyTorch-Jacinto-AI-DevKit](https://bitbucket.itg.ti.com/projects/JACINTO-AI/repos/pytorch-jacinto-ai-devkit/browse/). This repository uses Quantization tools from there. Please consult the [documentation on Quantization](https://git.ti.com/cgit/jacinto-ai/pytorch-jacinto-ai-devkit/about/docs/Quantization.md) to understand the internals of our implementation of QAT / PTQ.
+The foundational components for Quantization are provided in [PyTorch-Jacinto-AI-DevKit](https://bitbucket.itg.ti.com/projects/JACINTO-AI/repos/pytorch-jacinto-ai-devkit/browse/). This repository uses Quantization tools from there.
+
+Please consult the [documentation on Quantization](https://git.ti.com/cgit/jacinto-ai/pytorch-jacinto-ai-devkit/about/docs/Quantization.md) to understand the internals of our implementation of QAT and Calibration/PTQ. It provides several guidelines to help you set the right parameters to get the best accuracy with quantization.
 
 
 ## Features
 
-| | Float | 16 bit | 8bit | 4bit |
-|-------------------- |:--------:|:--------:|:--------:|:--------:|
-| Float32 training and test |✓ | | | |
-| Float16 training and test | | | | |
-| Post Training Calibration for Quantization (PTQ) | | ☐ | ☐ |✗ |
-| Quantization Aware Training (QAT) | | | ✓ | |
-| Test/Accuracy evaluation of QAT / PTQ models | | ✓ | ✓ |✗ |
+| | Float | 16 bit | 8bit |
+|-------------------- |:--------:|:--------:|:--------:|
+| Float32 training and test |✓ | | |
+| Float16 training and test | | | |
+| Post Training Calibration for Quantization (Calibration/PTQ) | | ☐ | ☐ |
+| Quantization Aware Training (QAT) | | | ✓ |
+| Test/Accuracy evaluation of QAT or Calibration/PTQ models | | ✓ | ✓ |
 
 ✓ Available, ☐ In progress or partially available, ✗ TBD
 
@@ -31,20 +33,18 @@ The foundational components for Quantization are provided in [PyTorch-Jacinto-AI
 Everything required for quantization is already done in this repository; the only thing the user needs to do is set a **quantize** flag appropriately in the config file. If the quantize flag is not set, the usual floating point training or evaluation will happen. These are the values of the quantize flag and their meanings:
 - False: Conventional floating point training (default).
 - True or 'training': Quantization Aware Training (QAT)
-- 'calibration': Post Training Calibration for Quantization (PTQ).
+- 'calibration': Post Training Calibration for Quantization (Calibration/PTQ).
 
 Accuracy Evaluation with Quantization: If the quantize flag is set in the config file when the test script is invoked, accuracy evaluation with quantization will be done.
 
 #### What is happening behind the scenes
-- PyTorch-Jacinto-AI-DevKit provides several modules to aid Quantization: QuantTrainModule for QAT, QuantCalibrateModule for PTQ and QuantTestModule for accuracy evaluation with Quantization.
-
-- QuantTrainModule and QuantTestModule supports multiple gpus, whereas QuantCalibrateModule has the additional limitation that it doesn't support multiple gpus. But since PTQ is fast, this is not a real issue.
+- PyTorch-Jacinto-AI-DevKit provides several modules to aid Quantization: QuantTrainModule for QAT, QuantCalibrateModule for Calibration/PTQ and QuantTestModule for accuracy evaluation with Quantization.
 
-- After a model is created, it is wrapped in one of the Quantization modules depending on whether the current phase is QAT, PTQ or accuracy evaluation with Quantization.
+- If the quantize flag is set in the config file being used, the model is wrapped in one of the Quantization modules depending on whether the current phase is QAT, Calibration/PTQ or accuracy evaluation with Quantization.
 
 - Loading of a pretrained model or saving of a trained model needs a slight change when wrapped with the above modules, as the original model is inside the wrapper (otherwise the symbols in the pretrained file will not match).
 
-- Training with QuantTrainModule is just like any other training. However using QuantCalibrateModule is a bit different in that it doesn't need backpropagation - so backpropagation is disabled when using PTQ.
+- Training with QuantTrainModule is just like any other training. However, using QuantCalibrateModule is a bit different in that it doesn't need backpropagation, so backpropagation is disabled when using Calibration/PTQ.
 
 All this has already been taken care of in the code; the description in this section is for information only.
 
@@ -55,10 +55,10 @@ Please see the reference [2] for algorithmic details of the detector.
 
 |Model Arch |Backbone Model|Resolution |Giga MACS |Float AP [0.5:0.95]%|8-bit QAT AP [0.5:0.95]%|Download |
 |---------- |--------------|-----------|----------|--------------------|------------------------|---------|
-|SSDLite+FPN |RegNetX800MF |512x512 |**6.03** |**29.9** |**29.4** |[link](https://bitbucket.itg.ti.com/projects/JACINTO-AI/repos/jacinto-ai-modelzoo/browse/pytorch/vision/object_detection/xmmdet/coco/ssd-lite_regnet_fpn_bgr) |
+|SSDLite+FPN |RegNetX800MF |512x512 |**6.03** |**29.9** |**29.4** | |
 |SSDLite+FPN |RegNetX1.6GF |768x768 | | | | |
 |.
-|SSD+FPN |ResNet50 |512x512 |**30.77** |**31.2** | |[link](https://bitbucket.itg.ti.com/projects/JACINTO-AI/repos/jacinto-ai-modelzoo/browse/pytorch/vision/object_detection/xmmdet/coco/ssd_resnet_fpn) |
+|SSD+FPN |ResNet50 |512x512 |**30.77** |**31.2** | | |
 
 
 ###### RetinaNet Detector
@@ -66,7 +66,7 @@ Please see the reference [3] for algorithmic details of the detector.
 
 |Model Arch |Backbone Model|Resolution |Giga MACS |Float AP [0.5:0.95]%|8-bit QAT AP [0.5:0.95]%|Download |
 |---------- |--------------|-----------|----------|--------------------|------------------------|---------|
-|RetinaNetLite+FPN|RegNetX800MF |512x512 |**6.04** | | | |
+|RetinaNetLite+FPN|RegNetX800MF |512x512 |**11.08** |**31.6** |**30.3** | |
 |RetinaNetLite+FPN|RegNetX1.6GF |768x768 | | | | |
 |.
 |RetinaNet+FPN* |ResNet50 |512x512 |**68.88** |**29.7** | |[link](https://github.com/open-mmlab/mmdetection/tree/master/configs/retinanet) |
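
The behind-the-scenes mechanism that this patch documents (quantize-flag dispatch, plus the checkpoint-key adjustment needed because the original model sits inside the wrapper) can be sketched in plain Python. This is an illustrative sketch only: the real QuantTrainModule / QuantCalibrateModule / QuantTestModule wrappers come from PyTorch-Jacinto-AI-DevKit, and the `module.` key prefix is an assumption about how the wrapper nests the network, not a confirmed detail of that library.

```python
# Illustrative sketch only: wrapper names mirror pytorch-jacinto-ai-devkit,
# but constructor details and the exact checkpoint-key prefix are assumptions.

def select_wrapper(quantize):
    """Map the config's 'quantize' flag to the wrapper used behind the scenes."""
    if quantize in (True, 'training'):
        return 'QuantTrainModule'      # QAT: fine-tuned with backpropagation
    if quantize == 'calibration':
        return 'QuantCalibrateModule'  # Calibration/PTQ: no backpropagation
    if quantize is False:
        return None                    # conventional floating point flow
    raise ValueError('unknown quantize flag: {!r}'.format(quantize))

def remap_pretrained(state_dict, prefix='module.'):
    """The original model sits inside the wrapper, so pretrained checkpoint
    keys need the wrapper prefix added before loading (and stripped on save)."""
    return {prefix + key: value for key, value in state_dict.items()}
```

For example, `select_wrapper('calibration')` picks the calibration wrapper, and `remap_pretrained({'backbone.conv1.weight': w})` yields keys like `module.backbone.conv1.weight` that match the wrapped model's symbols.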
diff --git a/docs/det_usage.md b/docs/det_usage.md
index 9a6fcb25..9a96fdd8 100644
--- a/docs/det_usage.md
+++ b/docs/det_usage.md
@@ -4,18 +4,24 @@ Additional scripts are provided on top of mmdetection to ease the training and t
 
 
 #### Training
-- Select the appropriate config file in [train_detection_main.py](../scripts/train_detection_main.py). Start the training by running [run_detection_train.sh](../run_detection_train.sh)
+- Select the appropriate config file in [detection_configs.py](../scripts/detection_configs.py)
+
+- Start the training by running [run_detection_train.sh](../run_detection_train.sh)
 
 - After doing the floating point training, it is possible to run Quantization Aware Training (QAT) starting from the trained checkpoint. For this, set quantize = True in the config file (see the line where it is set to False and change it to True) and run the training again. This will run a small number of epochs of fine tuning with QAT at a lower learning rate.
 
 
 ## Evaluation/Testing
-- Select the appropriate config file in [test_detection_main.py](../scripts/test_detection_main.py). Start evaluation by running [run_detection_test.sh](../run_detection_test.sh).
+- Make sure that the appropriate config file is selected in [detection_configs.py](../scripts/detection_configs.py)
+
+- Start evaluation by running [run_detection_test.sh](../run_detection_test.sh).
 
 - Note: If you did QAT, then the flag quantize in the config file must be set to True even at this stage.
 
 
 ## ONNX & Prototxt Export
-- Select the appropriate config file in [export_pytorch2onnx.py](../scripts/export_pytorch2onnx.py). Start export by running [run_detection_export.sh](../run_detection_export.sh).
+- Make sure that the appropriate config file is selected in [detection_configs.py](../scripts/detection_configs.py)
+
+- Start export by running [run_detection_export.sh](../run_detection_export.sh).
 
 - Note: If you did QAT, then the flag quantize in the config file must be set to True even at this stage.
\ No newline at end of file
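
The float-then-QAT workflow in det_usage.md can be sketched as a config transformation. The field names and the learning-rate/epoch values below are hypothetical illustrations, not the repository's actual settings; only the quantize-flag behavior is taken from the docs above.

```python
# Hypothetical sketch of the two-phase workflow in det_usage.md: train in
# floating point first, then re-run with quantize=True for a short QAT
# fine-tuning pass at a lower learning rate. Field names and the scale/epoch
# values here are illustrative assumptions.

def qat_config(float_cfg, lr_scale=0.1, qat_epochs=12):
    """Derive a QAT fine-tuning config from the float-training config."""
    cfg = dict(float_cfg)
    cfg['quantize'] = True                  # must stay True for test/export too
    cfg['lr'] = float_cfg['lr'] * lr_scale  # QAT fine-tunes at a lower rate
    cfg['total_epochs'] = qat_epochs        # only a small number of epochs
    return cfg

float_cfg = {'quantize': False, 'lr': 0.01, 'total_epochs': 60}
print(qat_config(float_cfg))
```

Note that the derived config keeps quantize = True, matching the notes above: the same flag must remain set when the test and export scripts are run on a QAT checkpoint.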