quantization fixes, docs update, resize_with()
author Manu Mathew <a0393608@ti.com>
Thu, 23 Apr 2020 11:47:59 +0000 (17:17 +0530)
committer Manu Mathew <a0393608@ti.com>
Thu, 23 Apr 2020 11:49:49 +0000 (17:19 +0530)
15 files changed:
README.md
docs/Calibration.md
docs/Multi_Task_Learning.md
docs/Quantization.md
modules/pytorch_jacinto_ai/engine/train_classification.py
modules/pytorch_jacinto_ai/engine/train_pixel2pixel.py
modules/pytorch_jacinto_ai/xnn/layers/resize_blocks.py
modules/pytorch_jacinto_ai/xnn/quantize/quant_train_module.py
modules/pytorch_jacinto_ai/xnn/quantize/quant_train_utils.py
modules/pytorch_jacinto_ai/xnn/utils/__init__.py
run_quantization.sh
run_quantization_example.sh
scripts/infer_segmentation_main.py
scripts/train_depth_main.py
scripts/train_pixel2pixel_multitask_main.py

index bc03e38a672dc772e930c675167a93e5e6e0dd02..96713c4e6d103c5bb3b81d8ed93b4557d2bb3871 100644 (file)
--- a/README.md
+++ b/README.md
@@ -30,15 +30,16 @@ This code also includes tools for **Quantization Aware Training** that can outpu
 The following examples are currently available. Click on each of the links below to go into the full description of the example. 
 * Image Classification<br>
     * [**Image Classification**](docs/Image_Classification.md)<br>
-* Pixel2Pixel prediction<br>
+* Pixel2Pixel Prediction<br>
     * [**Semantic Segmentation**](docs/Semantic_Segmentation.md)<br>
     * [Depth Estimation](docs/Depth_Estimation.md)<br>
     * [Motion Segmentation](docs/Motion_Segmentation.md)<br>
-    * Multi Task Estimation - coming soon..<br>
+    * [**Multi Task Estimation**](docs/Multi_Task_Learning.md)<br>
 * Object Detection<br>
     * Object Detection - coming soon..<br>
     * Object Keypoint Estimation - coming soon..<br>
-* [**Quantization**](docs/Quantization.md)<br>
+* Quantization<br>
+    * [**Quantization Aware Training**](docs/Quantization.md)<br>
 
 
 Some of the common training and validation commands are provided in shell scripts (.sh files) in the root folder.
index 3aae54ecb36bdbe4766da26d089c99c13cbf8fe5..9e749243912347892ca54e3b33f2ee3b0115e5c4 100644 (file)
--- a/docs/Calibration.md
+++ b/docs/Calibration.md
@@ -82,7 +82,7 @@ python ./scripts/train_classification_main.py --phase calibration --dataset_name
 - Calibration of Cityscapes Semantic Segmentation model
 ```
 python ./scripts/train_segmentation_main.py --phase calibration --dataset_name cityscapes_segmentation --model_name deeplabv3lite_mobilenetv2_tv --data_path ./data/datasets/cityscapes/data --img_resize 384 768 --output_size 1024 2048 --gpus 0 1 
---pretrained ./data/modelzoo/semantic_segmentation/cityscapes/deeplabv3lite-mobilenetv2/cityscapes_segmentation_deeplabv3lite-mobilenetv2_2019-06-26-08-59-32.pth 
+--pretrained ./data/modelzoo/pytorch/semantic_segmentation/cityscapes/jacinto_ai/deeplabv3lite_mobilenetv2_tv_resize768x384_best.pth.tar 
 --batch_size 12 --quantize True --epochs 1 --epoch_size 100
 ```
 
index ef4e00546d35a2d385acc0dcd86e00d0a18eead9..14253a7a72e907608a5971d4ac5bd121be032c15 100644 (file)
--- a/docs/Multi_Task_Learning.md
+++ b/docs/Multi_Task_Learning.md
@@ -14,14 +14,14 @@ Two parallel encoders extract appearance and flow feature separately and fuse th
 
 ## Datasets: Cityscapes Multitask Datset
 **Inputs:** The network takes (Optical flow, Current frame) as input. 
-* **Optical flow** For optical flow input, copy the directory **leftimg8bit_flow_farneback_confidence** from this [**repository**](https://bitbucket.itg.ti.com/projects/ALGO-DEVKIT/repos/cityscapes_motion_dataset/browse) into ./data/datatsets/cityscapes/data/.
+* **Optical flow:** For optical flow input, copy the directory **leftimg8bit_flow_farneback_confidence** from this [repository](https://bitbucket.itg.ti.com/projects/ALGO-DEVKIT/repos/cityscapes_motion_dataset/browse) into ./data/datatsets/cityscapes/data/.
 * **Current frame:**: This is can be downloaded from https://www.cityscapes-dataset.com/. Download the zip file leftImg8bit_trainvaltest.zip. keep the directory leftimg8bit in ./data/datatsets/cityscapes/data/. 
 
 **Ground truth**
 Since we are training  network to infer depth, semantic and motion together, we need to have the ground truth for all these tasks for common input.  
 * **Depth:**  This is available from https://www.cityscapes-dataset.com/ . This folder named disparity must be kept in  ./data/datasets/cityscapes/data.
 * **Semantic:** This is available from https://www.cityscapes-dataset.com/ as well. Keep the gtFine directory in ./data/datasets/cityscapes/data. 
-* **Motion:** This [repository](https://bitbucket.itg.ti.com/projects/ALGO-DEVKIT/repos/cityscapes_motion_dataset/browse)contains motion annotation inside **gtFine**. Move the gtFine directory into ./data/datatsets/cityscapes/data/.
+* **Motion:** This [repository](https://bitbucket.itg.ti.com/projects/ALGO-DEVKIT/repos/cityscapes_motion_dataset/browse) contains motion annotation inside **gtFine**. Move the gtFine directory into ./data/datatsets/cityscapes/data/.
 Finally depth annotation must reside inside ./data/datasets/cityscapes/data whereas both the semantic and motion annotations must go inside ./data/datatsets/cityscapes/data/gtFine.
 
 Now, the final directory structure must look like this:
@@ -55,7 +55,7 @@ python ./scripts/train_pixel2pixel_main.py --dataset_name cityscapes_image_dof_c
 | Single Task Training                            | ----- , -----, -----|
 | Vanilla Multi Task Training                     |12.31, 82.32, 80.52|
 | Uncertainty based Multi Task Training           | ----  , ---- , ----|
-| gradient-norm based Multi Task Learning         | 12.64, 85.53, 84.75|
+| Gradient-norm based Multi Task Learning         | 12.64, 85.53, 84.75|
 
 ## References
 [1]The Cityscapes Dataset for Semantic Urban Scene Understanding, Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, Bernt Schiele, CVPR 2016, https://www.cityscapes-dataset.com/
index 7f9ef3d11271ad9024291cb0866a773999576398..6e76a48db34938c3d0827aca05e452818f834ee0 100644 (file)
--- a/docs/Quantization.md
+++ b/docs/Quantization.md
@@ -83,7 +83,7 @@ python ./scripts/train_classification_main.py --dataset_name image_folder_classi
 
 Cityscapes Semantic Segmentation:<br>
 ```
-python ./scripts/train_segmentation_main.py --dataset_name cityscapes_segmentation --model_name deeplabv3lite_mobilenetv2_tv --data_path ./data/datasets/cityscapes/data --img_resize 384 768 --output_size 1024 2048 --gpus 0 1 --pretrained ./data/modelzoo/semantic_segmentation/cityscapes/deeplabv3lite-mobilenetv2/cityscapes_segmentation_deeplabv3lite-mobilenetv2_2019-06-26-08-59-32.pth --batch_size 8 --quantize True --epochs 50 --lr 1e-5 --evaluate_start False
+python ./scripts/train_segmentation_main.py --dataset_name cityscapes_segmentation --model_name deeplabv3lite_mobilenetv2_tv --data_path ./data/datasets/cityscapes/data --img_resize 384 768 --output_size 1024 2048 --gpus 0 1 --pretrained ./data/modelzoo/pytorch/semantic_segmentation/cityscapes/jacinto_ai/deeplabv3lite_mobilenetv2_tv_resize768x384_best.pth.tar --batch_size 8 --quantize True --epochs 50 --lr 1e-5 --evaluate_start False
 ```
 
 For more examples, please see the files run_qunatization_example.sh and examples/quantization_example.py
index e5dadf39f3e017bf08d78c68c279816ae183a276..a2e14af09e58384962707be21548aa49e822f987 100644 (file)
--- a/modules/pytorch_jacinto_ai/engine/train_classification.py
+++ b/modules/pytorch_jacinto_ai/engine/train_classification.py
@@ -105,6 +105,8 @@ def get_config():
 
     args.freeze_bn = False                              # freeze the statistics of bn
     args.save_mod_files = False                         # saves modified files after last commit. Also  stores commit id.
+
+    args.opset_version = 9                              # onnx opset_version
     return args
 
 
@@ -435,7 +437,7 @@ def write_onnx_model(args, model, save_path, name='checkpoint.onnx'):
     dummy_input = create_rand_inputs(args, is_cuda)
     #
     model.eval()
-    torch.onnx.export(model, dummy_input, os.path.join(save_path,name), export_params=True, verbose=False)
+    torch.onnx.export(model, dummy_input, os.path.join(save_path,name), export_params=True, verbose=False, opset_version=args.opset_version)
 
 
 def train(args, train_loader, model, criterion, optimizer, epoch):
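
For reference, a minimal sketch of how the new args.opset_version setting reaches torch.onnx.export (the toy model, input shape and output file name below are illustrative assumptions, not code from this commit):
```
import torch

# a toy model and dummy input purely for illustration
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU()).eval()
dummy_input = torch.rand(1, 3, 224, 224)

# opset_version now comes from args.opset_version (default 9) instead of the exporter default
torch.onnx.export(model, dummy_input, 'checkpoint.onnx',
                  export_params=True, verbose=False, opset_version=9)
```
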
index e9bf7322204a02ef089a7c0769d5f9d51e079ea1..db1b59ca3d748bfb65fe435661838e64d078995e 100644 (file)
--- a/modules/pytorch_jacinto_ai/engine/train_pixel2pixel.py
+++ b/modules/pytorch_jacinto_ai/engine/train_pixel2pixel.py
@@ -171,6 +171,8 @@ def get_config():
     args.print_val_class_iou = False
     args.freeze_layers = None
 
+    args.opset_version = 9                              # onnx opset_version
+
     return args
 
 
@@ -378,7 +380,8 @@ def main(args):
 
     #################################################
     if args.generate_onnx and (any(args.phase in p for p in ('training','calibration')) or (args.run_soon == False)):
-        write_onnx_model(args, get_model_orig(model), save_path)
+        write_onnx_model(args, get_model_orig(model), save_path, export_torch_script=False)
+
     #
 
     #################################################
@@ -936,15 +939,21 @@ def add_node_names(onnx_model_name= []):
     #update model inplace
     onnx.save(onnx_model, onnx_model_name)
 
-def write_onnx_model(args, model, save_path, name='checkpoint.onnx'):
+def write_onnx_model(args, model, save_path, name='checkpoint.onnx', export_torch_script=False):
     is_cuda = next(model.parameters()).is_cuda
     input_list = create_rand_inputs(args, is_cuda=is_cuda)
     #
     model.eval()
-    torch.onnx.export(model, input_list, os.path.join(save_path, name), export_params=True, verbose=False)
+    torch.onnx.export(model, input_list, os.path.join(save_path, name), export_params=True, verbose=False, opset_version=args.opset_version)
     #torch onnx export does not update names. Do it using onnx.save
     add_node_names(onnx_model_name = os.path.join(save_path, name))
 
+    # write torch script model
+    if export_torch_script:
+        traced_script_module = torch.jit.trace(model, (input_list,))
+        pretrained_files = args.pretrained if isinstance(args.pretrained, (list, tuple)) else [args.pretrained]
+        trace_model_name = pretrained_files[0].replace('.pth.tar', '_{}_{}_traced_model.pth'.format(args.img_resize[0], args.img_resize[1]))
+        torch.jit.save(traced_script_module, trace_model_name)
 
 ###################################################################
 def write_output(args, prefix, val_epoch_size, iter, epoch, dataset, output_writer, input_images, task_outputs, task_targets, metric_names, writer_idx):
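
The new export_torch_script branch traces the model with torch.jit and saves the TorchScript module next to the pretrained checkpoint. Below is a small sketch of the same pattern with placeholder model and file names (assumptions for illustration, not the repo's helpers):
```
import torch

# placeholder model and example input, standing in for the trained network
model = torch.nn.Sequential(torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU()).eval()
example_input = torch.rand(1, 3, 384, 768)

traced = torch.jit.trace(model, example_input)     # record the graph by running the model once
torch.jit.save(traced, 'model_384x768_traced_model.pth')

# the traced module can later be reloaded without the original python class definitions
restored = torch.jit.load('model_384x768_traced_model.pth')
```
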
index acd547af01c9444baa562a1315e697fe77b0bfc7..003c179dfcb156f66b05e65dd5eadee64ce83f75 100644 (file)
--- a/modules/pytorch_jacinto_ai/xnn/layers/resize_blocks.py
+++ b/modules/pytorch_jacinto_ai/xnn/layers/resize_blocks.py
@@ -4,38 +4,44 @@ from .deconv_blocks import *
 
 
 ##############################################################################################
-# Newer Resize/Upsample mopdules. Please use these modules instead of the older ResizeTo(), UpsampleTo()
-# The older modules may be removed in a later version.
+# Newer Resize/Upsample modules that resize with a scale factor and export a simple onnx graph.
+# Please use this resize_with function or the ResizeWith/UpsampleWith modules instead of the
+# older ResizeTo, UpsampleTo. The older modules may be removed in a later version.
 ##############################################################################################
 
+def resize_with(x, size=None, scale_factor=None, mode='nearest', align_corners=None):
+    assert size is None or scale_factor is None, 'size and scale_factor must not both be specified'
+    assert size is not None or scale_factor is not None, 'at least one of size or scale_factor must be specified'
+    assert isinstance(x, torch.Tensor), 'must provide a single tensor as input'
+    try:
+        # Newer PyTorch versions support recompute_scale_factor=False, which exports a clean onnx graph
+        # Attempt it first. Works with onnx opset_version=9 & opset_version=11
+        y = torch.nn.functional.interpolate(x, size=size, scale_factor=scale_factor, mode=mode, align_corners=align_corners, recompute_scale_factor=False)
+    except:
+        if torch.onnx.is_in_onnx_export():
+            warnings.warn('To generate a simple Upsample/Resize ONNX graph, please use pytorch>=1.5 or the nightly, as explained here: https://pytorch.org/')
+        #
+        if scale_factor is not None:
+            # A workaround for older versions of PyTorch to generate a clean onnx graph with onnx opset_version=9 (may not work in onnx opset_version=11).
+            # Generate size as a tuple and pass it - as onnx export inserts scale_factor if size is a non-tensor.
+            scale_factor = (scale_factor,scale_factor) if not isinstance(scale_factor,(list,tuple)) else scale_factor
+            size = [int(round(float(shape)*scale)) for shape, scale in zip(x.shape[2:],scale_factor)]
+        #
+        y = torch.nn.functional.interpolate(x, size=size, mode=mode, align_corners=align_corners)
+    #
+    return y
+
 
-# onnx export from PyTorch is creating a complicated graph - use this workaround for now until the onnx export is fixed.
-# only way to create a simple graph with scale _factors seem to be provide size as integer to interpolate function
-# this workaround seems to be working in onnx opset_version=9, however in opset_version=11, it still produces a complicated graph.
 class ResizeWith(torch.nn.Module):
-    def __init__(self, scale_factor=None, mode='nearest'):
-        ''' Resize with scale_factor
-            This module exports an onnx graph with scale_factor
-        '''
+    def __init__(self, size=None, scale_factor=None, mode='nearest', align_corners=None):
         super().__init__()
+        self.size = size
         self.scale_factor = scale_factor
         self.mode = mode
-        assert scale_factor is not None, 'scale_factor must be specified'
+        self.align_corners = align_corners
 
     def forward(self, x):
-        assert isinstance(x, torch.Tensor), 'must provide a single tensor as input'
-        scale_factor = (self.scale_factor, self.scale_factor) if not isinstance(self.scale_factor, (list,tuple)) else self.scale_factor
-        try:
-            y = torch.nn.functional.interpolate(x, scale_factor=scale_factor, mode=self.mode, recompute_scale_factor = False)
-        except:
-            # warnings.warn('Note: If you are exporting to onnx_opset_version>=11, for a simple Upsample/Resize graph using scale_factor, please use pytorch>=1.5.' \
-            #               'Until pytorch 1.5 is available, you can install the nightly, as explained here: https://pytorch.org/blog/')
-            # The following trick seems to be the only way to export Upsample/Resize with scale_factor in onnx opset_version=9.
-            # Generate size as a tuple and pass it - as onnx export inserts scale_factor if the size is a non-tensor.
-            # This trick may not work in onnx opset_version=11, and we recommend to install pytorch 1.5 or latest nightly as explained in the warning above.
-            size = (int(x.shape[2]*scale_factor[0]), int(x.shape[3]*scale_factor[1]))
-            y = torch.nn.functional.interpolate(x, size=size, mode=self.mode)
-        #
+        y = resize_with(x, self.size, self.scale_factor, self.mode, self.align_corners)
         return y
 
 
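
A hypothetical call site for the new resize_with function and ResizeWith module (the import path and tensor shapes are assumptions for illustration):
```
import torch
from pytorch_jacinto_ai import xnn

x = torch.rand(1, 64, 24, 48)                      # NCHW feature map
# functional form, resizing by a scale factor
y1 = xnn.layers.resize_with(x, scale_factor=2, mode='bilinear', align_corners=False)
# module form, usable inside an nn.Sequential decoder
up = xnn.layers.ResizeWith(scale_factor=2, mode='bilinear', align_corners=False)
y2 = up(x)
assert y1.shape == y2.shape == (1, 64, 48, 96)
```
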
index d7fbf0386efa99d27a16ffe7916440a6a6d493bf..35144a3b8b30e1ba58a09bca5db34f52605d925a 100644 (file)
--- a/modules/pytorch_jacinto_ai/xnn/quantize/quant_train_module.py
+++ b/modules/pytorch_jacinto_ai/xnn/quantize/quant_train_module.py
@@ -42,9 +42,10 @@ class QuantTrainModule(QuantBaseModule):
         # range shrink - 0.0 indicates no shrink
         percentile_range_shrink = (layers.PAct2.PACT2_RANGE_SHRINK if histogram_range else 0.0)
         # set attributes to all modules - can control the behaviour from here
-        utils.apply_setattr(self, bitwidth_weights=bitwidth_weights, bitwidth_activations=bitwidth_activations, per_channel_q=per_channel_q, bias_calibration=bias_calibration,
-                           quantize_enable=True, quantize_weights=True, quantize_bias=True, quantize_activations=True,
-                           percentile_range_shrink=percentile_range_shrink, constrain_weights=self.constrain_weights)
+        utils.apply_setattr(self, bitwidth_weights=bitwidth_weights, bitwidth_activations=bitwidth_activations,
+                            per_channel_q=per_channel_q, bias_calibration=bias_calibration,
+                            percentile_range_shrink=percentile_range_shrink, constrain_weights=self.constrain_weights,
+                            update_range=True, quantize_enable=True, quantize_weights=True, quantize_bias=True, quantize_activations=True)
 
         # for help in debug/print
         utils.add_module_names(self)
@@ -150,6 +151,7 @@ class QuantCalibrateModule(QuantTrainModule):
         self.calibrate_weights = False
         self.calibrate_repeats = 1
         self.quantize_enable = True
+        self.update_range = True
         # BNs can be adjusted based on the input provided - however this is not really required
         self.calibrate_bn = False
         super().__init__(module, bitwidth_weights=bitwidth_weights, bitwidth_activations=bitwidth_activations, per_channel_q=per_channel_q,
@@ -197,15 +199,15 @@ class QuantCalibrateModule(QuantTrainModule):
     def forward_compute_oputput_stats(self, inputs):
         self._restore_weights_orig()
         # disable quantization for a moment
-        quantize_enable_backup_value = self.quantize_enable
-        utils.apply_setattr(self, quantize_enable=False)
+        quantize_enable_backup_value, update_range_backup_value = self.quantize_enable, self.update_range
+        utils.apply_setattr(self, quantize_enable=False, update_range=False)
 
         self.add_call_hook(self.module, self._forward_compute_oputput_stats_hook)
         outputs = self.module(inputs)
         self.remove_call_hook(self.module)
 
         # turn quantization back on - not a clean method
-        utils.apply_setattr(self, quantize_enable=quantize_enable_backup_value)
+        utils.apply_setattr(self, quantize_enable=quantize_enable_backup_value, update_range=update_range_backup_value)
         self._backup_weights_orig()
         return outputs
     #
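
The calibration fix backs up and restores update_range together with quantize_enable, so the float forward pass neither quantizes nor moves the activation ranges. A standalone sketch of that backup/restore pattern (generic code assuming the wrapper exposes quantize_enable/update_range on its sub-modules; not the repo's utils.apply_setattr):
```
def set_attr_on_all(model, **attrs):
    # set an attribute on every sub-module that already defines it
    for m in model.modules():
        for name, value in attrs.items():
            if hasattr(m, name):
                setattr(m, name, value)

def float_forward(model, inputs):
    # remember the current flags, run one un-quantized pass with frozen ranges, then restore
    backup = (model.quantize_enable, model.update_range)
    set_attr_on_all(model, quantize_enable=False, update_range=False)
    outputs = model(inputs)
    set_attr_on_all(model, quantize_enable=backup[0], update_range=backup[1])
    return outputs
```
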
index 553f2fad0117d39f3f403440a6d6cedea5f6e269..23965eb49caf3385b2a6a2d09cfab2e3d824a146 100644 (file)
--- a/modules/pytorch_jacinto_ai/xnn/quantize/quant_train_utils.py
+++ b/modules/pytorch_jacinto_ai/xnn/quantize/quant_train_utils.py
@@ -125,6 +125,7 @@ class QuantTrainPAct2(layers.PAct2):
         # so any clipping we do here is not stored int he weight params
         self.range_shrink_weights = 0.0
         self.round_dither = 0.0
+        self.update_range = True
         self.quantize_enable = True
         self.quantize_weights = True
         self.quantize_bias = True
@@ -152,7 +153,7 @@ class QuantTrainPAct2(layers.PAct2):
                         'bitwidth_weights and bitwidth_activations must not be None'
 
         # the pact range update happens here - but range clipping depends on quantize_enable
-        y = super().forward(x, update_range=True, enable=self.quantize_enable)
+        y = super().forward(x, update_range=self.update_range, enable=self.quantize_enable)
 
         if not self.quantize_enable:
             return y
@@ -178,7 +179,7 @@ class QuantTrainPAct2(layers.PAct2):
             xq = x
         #
 
-        if (self.quantize_activations):
+        if (self.quantize_enable and self.quantize_activations):
             clip_min, clip_max, scale, scale_inv = self.get_clips_scale_act()
             width_min, width_max = self.get_widths_act()
             # no need to call super().forward here as clipping with width_min/windth_max-1 after scaling has the same effect.
@@ -259,7 +260,7 @@ class QuantTrainPAct2(layers.PAct2):
 
         # quantize weight and bias
         if (conv is not None):
-            if (self.quantize_weights):
+            if (self.quantize_enable and self.quantize_weights):
                 if self.constrain_weights and first_training_iter:
                     with torch.no_grad():
                         # clamp merged weights, invert the bn and copy to conv weight
@@ -289,7 +290,7 @@ class QuantTrainPAct2(layers.PAct2):
                 merged_weight = layers.quantize_dequantize_g(merged_weight, scale2, width_min, width_max-1, self.power2, 'round_sym')
             #
 
-            if (self.quantize_bias):
+            if (self.quantize_enable and self.quantize_bias):
                 bias_width_min, bias_width_max = self.get_widths_bias()
                 bias_clip_min, bias_clip_max, bias_scale2, bias_scale_inv2 = self.get_clips_scale_bias(merged_bias)
                 # merged_bias = layers.clamp_g(layers.round_sym_g(merged_bias * bias_scale2), bias_width_min, bias_width_max-1, self.training) * bias_scale_inv2
@@ -299,10 +300,10 @@ class QuantTrainPAct2(layers.PAct2):
             # invert the bn operation and store weights/bias
             if self.training and is_store_weight_bias_iter:
                 with torch.no_grad():
-                    if self.quantize_weights:
+                    if self.quantize_enable and self.quantize_weights:
                         conv.weight.data.copy_(merged_weight.data * merged_scale_inv.view(-1, 1, 1, 1))
                     #
-                    if self.quantize_bias:
+                    if self.quantize_enable and self.quantize_bias:
                         if conv.bias is not None:
                             if bn is not None:
                                 conv_bias = (merged_bias - bn_bias) * merged_scale_inv.view(-1) + bn.running_mean
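
The hunks above gate each fake-quantization step on quantize_enable in addition to the per-tensor flags. A minimal sketch of such a gated, symmetric quantize-dequantize for weights (a generic illustration, not the repo's layers.quantize_dequantize_g):
```
import torch

def fake_quantize_weights(w, bitwidth=8, quantize_enable=True, quantize_weights=True):
    # pass-through unless both the global and the per-tensor flags are enabled
    if not (quantize_enable and quantize_weights):
        return w
    width_max = 2 ** (bitwidth - 1)                  # 128 for 8-bit signed
    clip_max = w.abs().max().clamp(min=1e-8)         # symmetric clipping range
    scale = (width_max - 1) / clip_max
    wq = torch.clamp(torch.round(w * scale), -width_max, width_max - 1) / scale
    return wq

w = torch.randn(16, 3, 3, 3)
print((fake_quantize_weights(w) - w).abs().max())    # small rounding error
```
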
index 2d49146f048c933726ecce45049452cb0df41f2d..b4bc88118acc1c483f3f7235f2cffbf672b6cf66 100644 (file)
--- a/modules/pytorch_jacinto_ai/xnn/utils/__init__.py
+++ b/modules/pytorch_jacinto_ai/xnn/utils/__init__.py
@@ -14,3 +14,5 @@ from .count_flops import forward_count_flops
 from .bn_utils import *
 try: from .tensor_utils_internal import *
 except: pass
+try: from .utils_export_internal import *
+except: pass
index 63ea151c5740ccd34443779744b50e745e90214a..ba9f80d616c026c799f50e1099ce95f0f1865897 100755 (executable)
--- a/run_quantization.sh
+++ b/run_quantization.sh
 #
 #### Image Classification - Trained Quantization - MobileNetV2(Shicai) - a TOUGH MobileNetV2 pretrained model
 #python ./scripts/train_classification_main.py --dataset_name image_folder_classification --model_name mobilenetv2_shicai_x1 --data_path ./data/datasets/image_folder_classification \
-#--pretrained ./data/modelzoo/pretrained/pytorch/others/shicai/MobileNet-Caffe/mobilenetv2_shicai_rgb.tar \
+#--pretrained ./data/modelzoo/experimental/pytorch/others/shicai/MobileNet-Caffe/mobilenetv2_shicai_rgb.tar \
 #--batch_size 64 --quantize True --epochs 25 --epoch_size 1000 --lr 1e-5 --evaluate_start False
 #
 #
 #### Semantic Segmentation - Trained Quantization for MobileNetV2+DeeplabV3Lite
 #python ./scripts/train_segmentation_main.py --dataset_name cityscapes_segmentation --model_name deeplabv3lite_mobilenetv2_tv --data_path ./data/datasets/cityscapes/data --img_resize 384 768 --output_size 1024 2048 --gpus 0 1 \
-#--pretrained ./data/modelzoo/semantic_segmentation/cityscapes/deeplabv3lite-mobilenetv2/cityscapes_segmentation_deeplabv3lite-mobilenetv2_2019-06-26-08-59-32.pth \
+#--pretrained ./data/modelzoo/pytorch/semantic_segmentation/cityscapes/jacinto_ai/deeplabv3lite_mobilenetv2_tv_resize768x384_best.pth.tar \
 #--batch_size 12 --quantize True --epochs 150 --lr 1e-5 --evaluate_start False
 
 
 
 #### Image Classification - Accuracy Estimation with Post Training Quantization - A TOUGH MobileNetV2 pretrained model
 #python ./scripts/train_classification_main.py --phase validation --dataset_name image_folder_classification --model_name mobilenetv2_shicai_x1 --data_path ./data/datasets/image_folder_classification \
-#--pretrained ./data/modelzoo/pretrained/pytorch/others/shicai/MobileNet-Caffe/mobilenetv2_shicai_rgb.tar \
+#--pretrained ./data/modelzoo/experimental/pytorch/others/shicai/MobileNet-Caffe/mobilenetv2_shicai_rgb.tar \
 #--batch_size 64 --quantize True
 
 #### Semantic Segmentation - Accuracy Estimation with Post Training Quantization
 #python ./scripts/train_segmentation_main.py --phase validation --dataset_name cityscapes_segmentation --model_name deeplabv3lite_mobilenetv2_tv --data_path ./data/datasets/cityscapes/data --img_resize 384 768 --output_size 1024 2048 --gpus 0 1 \
-#--pretrained './data/modelzoo/semantic_segmentation/cityscapes/deeplabv3lite-mobilenetv2/cityscapes_segmentation_deeplabv3lite-mobilenetv2_2019-06-26-08-59-32.pth' \
+#--pretrained './data/modelzoo/pytorch/semantic_segmentation/cityscapes/jacinto_ai/deeplabv3lite_mobilenetv2_tv_resize768x384_best.pth.tar' \
 #--batch_size 1 --quantize True
 
 
 #
 #### Image Classification - Post Training Calibration & Quantization for a TOUGH MobileNetV2 pretrained model
 #python ./scripts/train_classification_main.py --phase calibration --dataset_name image_folder_classification --model_name mobilenetv2_shicai_x1 --data_path ./data/datasets/image_folder_classification \
-#--pretrained ./data/modelzoo/pretrained/pytorch/others/shicai/MobileNet-Caffe/mobilenetv2_shicai_rgb.tar \
+#--pretrained ./data/modelzoo/experimental/pytorch/others/shicai/MobileNet-Caffe/mobilenetv2_shicai_rgb.tar \
 #--batch_size 64 --quantize True --epochs 1 --epoch_size 100
 #
 #
 #### Semantic Segmentation - Post Training Calibration &  Quantization for MobileNetV2+DeeplabV3Lite
 #python ./scripts/train_segmentation_main.py --phase calibration --dataset_name cityscapes_segmentation --model_name deeplabv3lite_mobilenetv2_tv --data_path ./data/datasets/cityscapes/data --img_resize 384 768 --output_size 1024 2048 --gpus 0 1 \
-#--pretrained ./data/modelzoo/semantic_segmentation/cityscapes/deeplabv3lite-mobilenetv2/cityscapes_segmentation_deeplabv3lite-mobilenetv2_2019-06-26-08-59-32.pth \
+#--pretrained ./data/modelzoo/pytorch/semantic_segmentation/cityscapes/jacinto_ai/deeplabv3lite_mobilenetv2_tv_resize768x384_best.pth.tar \
 #--batch_size 12 --quantize True --epochs 1 --epoch_size 100
 
 
index 1f189878c362c461cffc90eca2597852a0db3465..213e8b5ac29261985e7ad8fcb0e0bcf354293c8c 100755 (executable)
--- a/run_quantization_example.sh
+++ b/run_quantization_example.sh
@@ -23,7 +23,7 @@ declare -A model_pretrained=(
   [mobilenet_v2]=https://download.pytorch.org/models/mobilenet_v2-b0353104.pth
   [resnet50]=https://download.pytorch.org/models/resnet50-19c8e357.pth
   [shufflenet_v2_x1_0]=https://download.pytorch.org/models/shufflenetv2_x1-5666bf0f80.pth
-#  [mobilenetv2_shicai]='./data/modelzoo/pretrained/pytorch/others/shicai/MobileNet-Caffe/mobilenetv2_shicai_rgb.tar'
+#  [mobilenetv2_shicai]='./data/modelzoo/experimental/pytorch/others/shicai/MobileNet-Caffe/mobilenetv2_shicai_rgb.tar'
 )
 
 # ----------------------------------
index 71fc384d2256b1d7d0972094b765d9d7be8ac076..530a039c3939e5fbcbf1315ce16bd1dfffa6b601 100755 (executable)
--- a/scripts/infer_segmentation_main.py
+++ b/scripts/infer_segmentation_main.py
@@ -55,11 +55,11 @@ args = infer_pixel2pixel.get_config()
 #Modify arguments
 args.model_name = "deeplabv3lite_mobilenetv2_tv" #"deeplabv3lite_mobilenetv2_relu" #"deeplabv3lite_mobilenetv2_relu_x1p5" #"deeplabv3plus"
 
-args.dataset_name = 'a2d2_segmentation_measure' #'tiad_segmentation_infer'   #'cityscapes_segmentation_infer' #'tiad_segmentation'  #'cityscapes_segmentation_measure'
+args.dataset_name = 'cityscapes_segmentation_measure' #'tiad_segmentation_infer'   #'cityscapes_segmentation_infer' #'tiad_segmentation'  #'cityscapes_segmentation_measure'
 args.dataset_config.split = 'val'
 
 #args.save_path = './data/checkpoints'
-args.data_path = '/data/ssd/datasets/a2d2_v2/' #'./data/datasets/cityscapes/data'   #'/data/hdd/datasets/cityscapes_leftImg8bit_sequence_trainvaltest/' #'./data/datasets/cityscapes/data'  #'./data/tiad/data/demoVideo/sequence0021'  #'./data/tiad/data/demoVideo/sequence0025'   #'./data/tiad/data/demoVideo/sequence0001_2017'
+args.data_path = '/data/ssd/datasets/cityscapes/data/' #'./data/datasets/cityscapes/data'   #'/data/hdd/datasets/cityscapes_leftImg8bit_sequence_trainvaltest/' #'./data/datasets/cityscapes/data'  #'./data/tiad/data/demoVideo/sequence0021'  #'./data/tiad/data/demoVideo/sequence0025'   #'./data/tiad/data/demoVideo/sequence0001_2017'
 #args.pretrained = './data/modelzoo/semantic_segmentation/cityscapes/deeplabv3lite-mobilenetv2/cityscapes_segmentation_deeplabv3lite-mobilenetv2_2019-06-26-08-59-32.pth'
 #args.pretrained = './data/checkpoints/tiad_segmentation/2019-10-18_00-50-03_tiad_segmentation_deeplabv3lite_mobilenetv2_ericsun_resize768x384_traincrop768x384_float/checkpoint.pth.tar'
 
@@ -94,7 +94,7 @@ args.iter_size = 1                      #2
 args.batch_size = 32 #80                  #12 #16 #32 #64
 args.img_resize = (384, 768)         #(256,512) #(512,512) # #(1024, 2048) #(512,1024)  #(720, 1280)
 
-args.output_size = (1208, 1920)          #(1024, 2048)
+args.output_size = (1024, 2048)          #(1024, 2048)
 #args.rand_scale = (1.0, 2.0)            #(1.0,2.0) #(1.0,1.5) #(1.0,1.25)
 
 args.depth = [False]
index bf4d74ce4f325cc09d1d99eaeb497e686e07167f..97a6aed0bfd0dbc0cbb1dc1da55a263a296811ed 100755 (executable)
--- a/scripts/train_depth_main.py
+++ b/scripts/train_depth_main.py
@@ -76,7 +76,7 @@ args.split_files = (args.data_path+'/train.txt', args.data_path+'/val.txt')
 
 #args.save_path = './data/checkpoints'
 
-args.pretrained = './data/modelzoo/semantic_segmentation/cityscapes/deeplabv3lite-mobilenetv2/cityscapes_segmentation_deeplabv3lite-mobilenetv2_2019-06-26-08-59-32.pth'
+args.pretrained = './data/modelzoo/pytorch/semantic_segmentation/cityscapes/jacinto_ai/deeplabv3lite_mobilenetv2_tv_resize768x384_best.pth.tar'
                                     # 'https://download.pytorch.org/models/mobilenet_v2-b0353104.pth'
                                     # './data/modelzoo/pretrained/pytorch/imagenet_classification/ericsun99/MobileNet-V2-Pytorch/mobilenetv2_Top1_71.806_Top2_90.410.pth.tar'
                                     # 'https://download.pytorch.org/models/resnet50-19c8e357.pth'
index f9513d403becdcda77859a00a7608298d9cb5a8d..533831832b2d2218148631cad913ec4cb26039ea 100755 (executable)
--- a/scripts/train_pixel2pixel_multitask_main.py
+++ b/scripts/train_pixel2pixel_multitask_main.py
@@ -75,7 +75,7 @@ args.dataset_name =  'cityscapes_depth_semantic_five_class_motion_image_dof_conf
 
 args.data_path = './data/datasets/cityscapes_768x384/data'  #./data/pascal-voc/VOCdevkit/VOC2012
 
-args.pretrained = './data/modelzoo/semantic_segmentation/cityscapes/deeplabv3lite-mobilenetv2/cityscapes_segmentation_deeplabv3lite-mobilenetv2_2019-06-26-08-59-32.pth'
+args.pretrained = './data/modelzoo/pytorch/semantic_segmentation/cityscapes/jacinto_ai/deeplabv3lite_mobilenetv2_tv_resize768x384_best.pth.tar'
                             #'./data/checkpoints/cityscapes_depth_semantic_five_class_motion_image_dof_conf/0p9_release/2019-06-27-13-50-10_cityscapes_depth_semantic_five_class_motion_image_dof_conf_deeplabv3lite_mobilenetv2_ericsun_mi4_resize768x384_traincrop768x384/model_best.pth.tar'
                             #'./data/modelzoo/pretrained/pytorch/cityscapes_segmentation/v0.9-2018-12-07-19:38:26_cityscapes_segmentation_deeplabv3lite_mobilenetv2_relu_resize768x384_traincrop768x384_(68.9%)/model_best.pth.tar'
                             #'./data/checkpoints/store/saved/cityscapes_segmentation/v0.7-2018-10-25-13:07:38_cityscapes_segmentation_deeplabv3lite_mobilenetv2_relu_resize1024x512_traincrop512x512_(71.5%)/model_best.pth.tar'