documentation update - added accuracies for RegNetX based segmentation models.
authorManu Mathew <a0393608@ti.com>
Wed, 12 Aug 2020 04:29:49 +0000 (09:59 +0530)
committerManu Mathew <a0393608@ti.com>
Wed, 12 Aug 2020 04:32:07 +0000 (10:02 +0530)
README.md
docs/Semantic_Segmentation.md
run_segmentation.sh

index 59b6fb1daafcdd73c4718c29b1f2189953de8793..9050c67e29cba6aa437cfacc4ce1db5dee6c3d24 100644 (file)
--- a/README.md
+++ b/README.md
@@ -39,14 +39,13 @@ This code also includes tools for **Quantization Aware Training** that can outpu
 Above are some of the examples that are currently available. Click on each of the links above to go to the full description of that example.
 
 
-
 ## Additional Information
-- Some of the common training and validation commands are provided in shell scripts (.sh files) in the root folder.<br>
-- Landing Page: [https://github.com/TexasInstruments/jacinto-ai-devkit](https://github.com/TexasInstruments/jacinto-ai-devkit)<br>
-- Actual Git Repositories: [https://git.ti.com/jacinto-ai-devkit](https://git.ti.com/jacinto-ai-devkit)<br>
+- Some of the common training and validation commands are provided in shell scripts (.sh files) in the root folder; see the example below. <br>
+- Landing Page: [https://github.com/TexasInstruments/jacinto-ai-devkit](https://github.com/TexasInstruments/jacinto-ai-devkit) <br>
+- Actual Git Repositories: [https://git.ti.com/jacinto-ai](https://git.ti.com/jacinto-ai) <br>
+- Each of the repositories listed at the link above has an "about" tab with documentation and a "summary" tab with git clone/pull URLs.
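+
+For example, assuming the repository root as the working directory, a segmentation run can be launched as shown below (a minimal sketch; open run_segmentation.sh and uncomment the desired command first):
+```bash
+./run_segmentation.sh
+```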
 
 ## Acknowledgements
-
 Our source code uses parts of the following open source projects. We would like to sincerely thank their authors for making their code bases publicly available.
 
 |Module/Functionality              |Parts of the code borrowed/modified from                                             |
index c0dda137ec62df4b7baa2c881ce150506812391e..c18b462e8191883cc2f496b7cb68c11a1f975b58 100644 (file)
@@ -101,9 +101,9 @@ Inference can be done as follows (fill in the path to the pretrained model):<br>
 |Cityscapes |FPNLitePixel2Pixel with DWASPP|FD-ResNet50    |64             |1536x768   |30.91                |-         |fpnlite_pixel2pixel_aspp_resnet50_fd          |
 |Cityscapes |FPNLitePixel2Pixel with DWASPP|ResNet50       |32             |1536x768   |114.42               |-         |fpnlite_pixel2pixel_aspp_resnet50             |
 |.
-|Cityscapes |DeepLabV3Lite GroupedConvASPP |RegNet800MF [9]|32             |768x384    |**11.19**            |**70.22** |**deeplav3lite_pixel2pixel_aspp_regnetx800mf**|
-|Cityscapes |DeepLabV3Lite GroupedConvASPP |RegNet800MF [9]|32             |768x384    |**7.29*              |          |**fpnlite_pixel2pixel_aspp_regnetx800mf**     |
-|Cityscapes |DeepLabV3Lite GroupedConvASPP |RegNet800MF [9]|32             |768x384    |**6.09**             |          |**unetlite_pixel2pixel_aspp_regnetx800mf**    |
+|Cityscapes |DeepLabV3Lite GroupedConvASPP |RegNet800MF [9]|32             |768x384    |**11.19**            |**68.44** |**deeplabv3lite_pixel2pixel_aspp_regnetx800mf**|
+|Cityscapes |FPNLite GroupedConvASPP       |RegNet800MF [9]|32             |768x384    |**7.29**             |**70.22** |**fpnlite_pixel2pixel_aspp_regnetx800mf**     |
+|Cityscapes |UNetLite GroupedConvASPP      |RegNet800MF [9]|32             |768x384    |**6.09**             |**69.93** |**unetlite_pixel2pixel_aspp_regnetx800mf**    |
 
 
 For comparison, here we list a few models from the literature:
index 11616e43e03bad07b5a5198ee3f06423028668a3..e2b6970e3586c9e8af84318f590e1731a148d15c 100755 (executable)
 #--data_path ./data/datasets/cityscapes/data --img_resize 384 768 --output_size 1024 2048 --gpus 0 1 \
 #--pretrained https://download.pytorch.org/models/mobilenet_v2-b0353104.pth
 
+#### Cityscapes Semantic Segmentation - Training with MobileNetV2+FPNLite
+#python ./scripts/train_segmentation_main.py --dataset_name cityscapes_segmentation --model_name fpnlite_pixel2pixel_aspp_mobilenetv2_tv \
+#--data_path ./data/datasets/cityscapes/data --img_resize 384 768 --output_size 1024 2048 --gpus 0 1 \
+#--pretrained https://download.pytorch.org/models/mobilenet_v2-b0353104.pth
+
+#### Cityscapes Semantic Segmentation - Training with MobileNetV2+UNetLite
+#python ./scripts/train_segmentation_main.py --dataset_name cityscapes_segmentation --model_name unetlite_pixel2pixel_aspp_mobilenetv2_tv \
+#--data_path ./data/datasets/cityscapes/data --img_resize 384 768 --output_size 1024 2048 --gpus 0 1 \
+#--pretrained https://download.pytorch.org/models/mobilenet_v2-b0353104.pth
+
+#Higher Resolution
+#------------------------
 #### Cityscapes Semantic Segmentation - Training with MobileNetV2+DeeplabV3Lite, Higher Resolution
 #python ./scripts/train_segmentation_main.py --dataset_name cityscapes_segmentation --model_name deeplabv3lite_mobilenetv2_tv \
 #--data_path ./data/datasets/cityscapes/data --img_resize 768 1536 --rand_crop 512 1024 --output_size 1024 2048 --gpus 0 1 \
 
 
 
+#RegNetX based Models
+#------------------------
+#### Cityscapes Semantic Segmentation - Training with RegNetX800MF+DeeplabV3Lite
+#Note: to use BGR input, set --input_channel_reverse True; for RGB input, omit this argument or set it to False.
+#python ./scripts/train_segmentation_main.py --dataset_name cityscapes_segmentation --model_name deeplabv3lite_regnetx800mf \
+#--data_path ./data/datasets/cityscapes/data --img_resize 384 768 --output_size 1024 2048 --gpus 0 1 \
+#--pretrained https://dl.fbaipublicfiles.com/pycls/dds_baselines/160906036/RegNetX-800MF_dds_8gpu.pyth
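+
+#### Example (sketch): the same RegNetX800MF+DeeplabV3Lite training command with BGR input - only the --input_channel_reverse flag from the note above is added
+#python ./scripts/train_segmentation_main.py --dataset_name cityscapes_segmentation --model_name deeplabv3lite_regnetx800mf \
+#--data_path ./data/datasets/cityscapes/data --img_resize 384 768 --output_size 1024 2048 --gpus 0 1 --input_channel_reverse True \
+#--pretrained https://dl.fbaipublicfiles.com/pycls/dds_baselines/160906036/RegNetX-800MF_dds_8gpu.pyth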
+
+#### Cityscapes Semantic Segmentation - Training with RegNetX800MF+FPNLite
+#Note: to use BGR input, set --input_channel_reverse True; for RGB input, omit this argument or set it to False.
+#python ./scripts/train_segmentation_main.py --dataset_name cityscapes_segmentation --model_name fpnlite_pixel2pixel_aspp_regnetx800mf \
+#--data_path ./data/datasets/cityscapes/data --img_resize 384 768 --output_size 1024 2048 --gpus 0 1 \
+#--pretrained https://dl.fbaipublicfiles.com/pycls/dds_baselines/160906036/RegNetX-800MF_dds_8gpu.pyth
+
+#### Cityscapes Semantic Segmentation - Training with RegNetX800MF+UNetLite
+#Note: to use BGR input, set --input_channel_reverse True; for RGB input, omit this argument or set it to False.
+#python ./scripts/train_segmentation_main.py --dataset_name cityscapes_segmentation --model_name unetlite_pixel2pixel_aspp_regnetx800mf \
+#--data_path ./data/datasets/cityscapes/data --img_resize 384 768 --output_size 1024 2048 --gpus 0 1 \
+#--pretrained https://dl.fbaipublicfiles.com/pycls/dds_baselines/160906036/RegNetX-800MF_dds_8gpu.pyth
+
+
 #ResNet50 based Models
 #------------------------
 #### Cityscapes Semantic Segmentation - Training with ResNet50+DeeplabV3Lite
 #--pretrained "./data/modelzoo/pretrained/pytorch/imagenet_classification/jacinto_ai/resnet50-0.5_2018-07-23_12-10-23.pth"
 
 
-#RegNetX based Models
-#------------------------
-### Cityscapes Semantic Segmentation - Training with RegNetX800MF+DeeplabV3Lite
-#Note: to use BGR input, set: --input_channel_reverse True, for RGB input ommit this argument or set it to False.
-#python ./scripts/train_segmentation_main.py --dataset_name cityscapes_segmentation --model_name fpnlite_pixel2pixel_aspp_regnetx800mf \
-#--data_path ./data/datasets/cityscapes/data --img_resize 384 768 --output_size 1024 2048 --gpus 0 1 \
-#--pretrained https://dl.fbaipublicfiles.com/pycls/dds_baselines/160906036/RegNetX-800MF_dds_8gpu.pyth
-
-
 
 
 #-- VOC Segmentation