author    Manu Mathew  2020-07-20 00:24:57 -0500
committer Manu Mathew  2020-07-20 00:25:14 -0500
commit    d9071112fe328c3a26784b5fa3da1fe11827ab25 (patch)
tree      9d7724321db0796b3871600c5e8e253864d7f29b
parent    ab8ea2f92d7945ce386e829ee14ff52c37424bb0 (diff)
download  pytorch-mmdetection-d9071112fe328c3a26784b5fa3da1fe11827ab25.tar.gz
          pytorch-mmdetection-d9071112fe328c3a26784b5fa3da1fe11827ab25.tar.xz
          pytorch-mmdetection-d9071112fe328c3a26784b5fa3da1fe11827ab25.zip
doc update and minor script update
-rw-r--r--  README.md                                                6
-rw-r--r--  docs/det_model_zoo.md (renamed from docs/model_zoo.md)   4
-rw-r--r--  docs/det_quantization.md (renamed from docs/quantization.md)  0
-rw-r--r--  docs/det_usage.md (renamed from docs/usage.md)           0
-rw-r--r--  xmmdet/utils/save_model.py                               2
5 files changed, 6 insertions, 6 deletions
diff --git a/README.md b/README.md
index c09a21d8..7d0ba1eb 100644
--- a/README.md
+++ b/README.md
@@ -22,17 +22,17 @@ After installing mmdetection, please install [PyTorch-Jacinto-AI-DevKit](https:/
 
 Please see [Getting Started with MMDetection](https://github.com/open-mmlab/mmdetection/blob/master/docs/getting_started.md) for the basic usage of mmdetection. Note: Some of these may not apply to this repository.
 
-Please see [Usage](./docs/usage.md) for training and testing with this repository.
+Please see [Usage](./docs/det_usage.md) for training and testing with this repository.
 
 
 ## Benchmark and Model Zoo
 
-Several trained models with accuracy report is available at [Jacinto-AI-Detection Model Zoo](./docs/model_zoo.md)
+Several trained models with accuracy report is available at [Jacinto-AI-Detection Model Zoo](./docs/det_model_zoo.md)
 
 
 ## Quantization
 
-Tutorial on how to do [Quantization Aware Training](./docs/quantization.md) in Jacinto-AI-MMDetection.
+Tutorial on how to do [Quantization Aware Training](./docs/det_quantization.md) in Jacinto-AI-MMDetection.
 
 
 ## Acknowledgement
diff --git a/docs/model_zoo.md b/docs/det_model_zoo.md
index 10241a38..e803ff71 100644
--- a/docs/model_zoo.md
+++ b/docs/det_model_zoo.md
@@ -27,7 +27,7 @@ Please see the reference [1] for algorithmic details of the detector.
 |SSDLite+FPN |RegNetX800MF |512x512 | | |ssd-lite_regnet_fpn.py | |
 |SSDLite+FPN |RegNetX1.6GF |768x768 | | |ssd-lite_regnet_fpn.py | |
 |.
-|SSDLite+FPN |ResNet50 |512x512 | | |ssd_resnet_fpn.py | |
+|SSD+FPN |ResNet50 |512x512 | | |ssd_resnet_fpn.py |[link](https://bitbucket.itg.ti.com/projects/JACINTO-AI/repos/jacinto-ai-modelzoo/browse/pytorch/vision/object_detection/xmmdet/coco/ssd_resnet_fpn) |
 |.
 |SSD* |VGG16 |512x512 |**29.34** |**98.81**| |[link](https://github.com/open-mmlab/mmdetection/tree/master/configs/ssd) |
 
@@ -47,7 +47,7 @@ Please see the reference [2] for algorithmic details of the detector.
 
 - The suffix **Lite** indicates that the model uses either Depthwise Convolutions (like in MobileNet models) or grouped convolutions (like in RegNetX models). When the backbone is a MobileNet, we use Depthwise convolutions even in FPN and the detector heads. When the backbone is a RegNet model, we use Grouped convolutions with the same group size that the RegNet backbone uses. But for backbones that use regular convolutions (such as ResNet) we do not use Depthwise or Grouped convolutions.
 - A square resolution such as 512x512 indicates that the inputs are resized to that resolution without respecting the aspect ration of the image (keep_ratio=False in config files)<br>
-- A non-square resolution indicated with comma (1536,768) indicates that images are resized to fit within this maximum and minimum size - but the aspect ratio of the image is preserved (keep_ratio=True in config files). This means that each image may have a different size after it is resized and hence is not suitable for embedded inference. But the interesting thing is that such a model can also be inferred or evaluated using a square aspect ratio.<br>
+- A non-square resolution indicated with comma (1536,768) or dash (1536-768) indicates that images are resized to fit within this maximum and minimum size - but the aspect ratio of the image is preserved (keep_ratio=True in config files). This means that each image may have a different size after it is resized and hence is not suitable for embedded inference. But the interesting thing is that such a model can also be inferred or evaluated using a square aspect ratio.<br>
 - The models with a \* were not trained by us, but rather taken from mmdetection model zoo and inference is run at teh said resolution.<br>
 
 
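The keep_ratio distinction in the notes above can be sketched with the standard mmdetection `Resize` pipeline step. This is a minimal illustration, assuming the usual mmdetection config dict conventions; the `img_scale` values are illustrative, not taken from any config in this repository.

```python
# Square resolution: warp exactly to 512x512; aspect ratio is NOT preserved.
resize_square = dict(type='Resize', img_scale=(512, 512), keep_ratio=False)

# Non-square (max,min) pair: each image is resized to fit within 1536x768
# with its aspect ratio preserved, so the output shape varies per image.
resize_keep_ratio = dict(type='Resize', img_scale=(1536, 768), keep_ratio=True)
```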
diff --git a/docs/quantization.md b/docs/det_quantization.md
index 2ddf4f40..2ddf4f40 100644
--- a/docs/quantization.md
+++ b/docs/det_quantization.md
diff --git a/docs/usage.md b/docs/det_usage.md
index ac472162..ac472162 100644
--- a/docs/usage.md
+++ b/docs/det_usage.md
diff --git a/xmmdet/utils/save_model.py b/xmmdet/utils/save_model.py
index ff542324..c92f1e45 100644
--- a/xmmdet/utils/save_model.py
+++ b/xmmdet/utils/save_model.py
@@ -100,7 +100,7 @@ def _save_mmdet_proto_ssd(cfg, model, input_size, output_dir, input_names=None,
                         code_type=mmdet_meta_arch_pb2.CENTER_SIZE, keep_top_k=100,
                         confidence_threshold=0.5)
 
-    ssd = mmdet_meta_arch_pb2.TidlMaSsd(box_input=reg_output_names, class_input=cls_output_names, output='output', prior_box_param=prior_box_param,
+    ssd = mmdet_meta_arch_pb2.TidlMaCaffeSsd(box_input=reg_output_names, class_input=cls_output_names, output='output', prior_box_param=prior_box_param,
                 in_width=input_size[3], in_height=input_size[2], detection_output_param=detection_output_param)
 
     arch = mmdet_meta_arch_pb2.TIDLMetaArch(name='ssd', caffe_ssd=[ssd])
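The hunk above indexes `input_size` as NCHW (index 2 is height, index 3 is width) when filling `in_height` and `in_width`. A minimal sketch of that indexing, with hypothetical example values:

```python
# NCHW layout as implied by the diff: (batch, channels, height, width).
input_size = (1, 3, 512, 768)  # hypothetical example values

in_height = input_size[2]  # 512
in_width = input_size[3]   # 768
```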