raw | patch | inline | side by side (parent: 44708ee)
author    Manu Mathew <a0393608@ti.com>  Tue, 28 Jan 2020 08:33:44 +0000 (14:03 +0530)
committer Manu Mathew <a0393608@ti.com>  Tue, 28 Jan 2020 08:33:44 +0000 (14:03 +0530)
docs/Depth_Estimation.md
scripts/train_depth_main.py
index b38d4e77a3994facf393cdece1e4f2387db5beac..e48465ee0b619ee09e95237a68a2de57d08ae8d4 100644 (file)
--- a/docs/Depth_Estimation.md
+++ b/docs/Depth_Estimation.md
Loss functions and many other parameters can be changed or configured in [scripts/train_depth_main.py](../scripts/train_depth_main.py). We have seen that a combination of SmoothL1, ErrorVariance and Overall Scale Difference produces good results.
+Since Depth Estimation is a regression task, the generated output is unconstrained. It is good to constrain it within reasonable limits so that the quantization error is kept small. For this, the output range can be set in [scripts/train_depth_main.py](../scripts/train_depth_main.py) using the parameter args.model_config.output_range. For example:<br>
+args.model_config.output_range = [(0,128)]
+
### Results
##### KITTI Depth Dataset
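The output-range constraint added above can be sketched as a simple clamp on the raw regression output. This is a minimal illustration, assuming the range is applied as an element-wise clip; `clamp_output` is a hypothetical helper and not a function from the repository.

```python
import numpy as np

def clamp_output(pred, output_range):
    # Hypothetical sketch: enforce args.model_config.output_range on a
    # predicted depth map by clipping to the configured (min, max) pair.
    lo, hi = output_range[0]
    return np.clip(pred, lo, hi)

# Raw regression outputs can fall outside the representable range;
# clamping keeps them within [0, 128] so quantization error stays bounded.
pred = np.array([-5.0, 10.0, 200.0])
clamped = clamp_output(pred, [(0, 128)])
```

The same range must be used at inference time, as the diff below notes, so that training and deployment see identically constrained outputs.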
index f0f7e786a7a1957d6ea2bb4d78768727aa6415f6..4d11dcb47335b731d8af6668a3d11234912d64d6 100755 (executable)
@@ -80,11 +80,11 @@ args.pretrained = './data/modelzoo/semantic_segmentation/cityscapes/deeplabv3lit
# './data/modelzoo/pretrained/pytorch/imagenet_classification/ericsun99/MobileNet-V2-Pytorch/mobilenetv2_Top1_71.806_Top2_90.410.pth.tar'
# 'https://download.pytorch.org/models/resnet50-19c8e357.pth'
-args.model_config.input_channels = (3,) # [3,3]
+args.model_config.input_channels = (3,) # [3,3]
args.model_config.output_type = ['depth']
args.model_config.output_channels = [1]
-args.model_config.output_range = [(0,64)] # important note: set this output_range parameter in the inference script as well
- # this is an important difference from the semantic segmentation script.
+args.model_config.output_range = [(0,128)] # important note: set this output_range parameter in the inference script as well
+                                        # this is an important difference from the semantic segmentation script.
args.losses = [['supervised_loss', 'scale_loss', 'supervised_error_var']] #[['supervised_loss', 'scale_loss']]
args.loss_mult_factors = [[0.125, 0.125, 4.0]]
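The last two lines configure three loss terms with per-term multipliers. A minimal sketch of how such multipliers could combine individual loss values into one training loss; `combined_loss` is a hypothetical illustration, not the repository's actual implementation.

```python
def combined_loss(loss_values, mult_factors):
    # Hypothetical sketch: weight each loss term by its multiplier from
    # args.loss_mult_factors and sum, mirroring the configured
    # ['supervised_loss', 'scale_loss', 'supervised_error_var'] combination.
    return sum(f * v for f, v in zip(mult_factors, loss_values))

# Example: three loss values weighted by the factors from the diff above.
total = combined_loss([0.8, 0.4, 0.1], [0.125, 0.125, 4.0])
```

The large multiplier on the third term would make the error-variance loss dominate unless its raw magnitude is correspondingly small.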