default of model_surgery_quantize is now True for QuantTestModule.
author Manu Mathew <a0393608@ti.com>
Sat, 23 May 2020 12:52:30 +0000 (18:22 +0530)
committer Manu Mathew <a0393608@ti.com>
Sat, 23 May 2020 12:58:11 +0000 (18:28 +0530)
commit fe0de31c22f4320a6f33fc2733d507af14bb3e20
tree be40141cb81f6d90860e48c2220190779374caa5
parent 8273351c0f889420acd8e08627ebb2c1f1ddb687
default of model_surgery_quantize is now True for QuantTestModule.
model_surgery_quantize must be True if the pretrained model comes from a QAT or Calibration module.
To test the accuracy of a purely float model, set this flag to False.
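
For illustration, a minimal sketch of how this flag might be used when wrapping a model for quantized accuracy testing. The dummy_input keyword, the vision.models.resnet50 factory and the wrapper's exact constructor signature are assumptions; only the model_surgery_quantize flag, its new default of True, and the QuantTestModule name come from this commit.

    import torch
    from pytorch_jacinto_ai import xnn
    from pytorch_jacinto_ai import vision

    # dummy input so the wrapper can trace the graph for model surgery
    # (hypothetical shape for an ImageNet classifier)
    dummy_input = torch.rand(1, 3, 224, 224)

    # float model definition; a QAT/Calibration checkpoint would be loaded into it
    model = vision.models.resnet50()

    # pretrained weights from QAT or Calibration: rely on the new default
    # (model_surgery_quantize=True)
    quant_test_model = xnn.quantize.QuantTestModule(model, dummy_input=dummy_input)

    # to measure the accuracy of a purely float checkpoint instead,
    # disable the surgery/quantization step explicitly
    float_test_model = xnn.quantize.QuantTestModule(
        model, dummy_input=dummy_input, model_surgery_quantize=False
    )
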
docs/Quantization.md
modules/pytorch_jacinto_ai/vision/models/classification/__init__.py
modules/pytorch_jacinto_ai/vision/models/resnet.py
modules/pytorch_jacinto_ai/xnn/quantize/quant_base_module.py
modules/pytorch_jacinto_ai/xnn/quantize/quant_calib_module.py
modules/pytorch_jacinto_ai/xnn/quantize/quant_graph_module.py
modules/pytorch_jacinto_ai/xnn/quantize/quant_test_module.py
modules/pytorch_jacinto_ai/xnn/quantize/quant_utils.py
run_quantization.sh