diff --git a/docs/Quantization.md b/docs/Quantization.md
index fa2fc98d3533f23c6971422c67247a7b4148e21b..263fba6336814a10c48b3e7925edb72db8ede49d 100644
--- a/docs/Quantization.md
+++ b/docs/Quantization.md
<p float="left"> <img src="quantization/pact2_activation.png" width="640" hspace="5"/> </p>
We use statistical range clipping in PACT2 to improve quantized accuracy (compared to simple min-max range clipping).
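To illustrate why statistical range clipping can beat min-max clipping, here is a minimal sketch. The statistic used (mean ± n·std, with the result bounded by the observed min/max) is an assumption for illustration only; the exact statistics used by PACT2 may differ.

```python
import numpy as np

def minmax_range(x):
    # Simple min-max clipping range: a single outlier inflates the range,
    # wasting quantization levels on values that rarely occur.
    return float(x.min()), float(x.max())

def statistical_range(x, n_sigma=3.0):
    # Hypothetical statistical clipping: clip to mean +/- n_sigma * std,
    # bounded by the observed min/max. (Illustrative only; not the exact
    # PACT2 statistic.)
    mu, sigma = float(x.mean()), float(x.std())
    lo = max(float(x.min()), mu - n_sigma * sigma)
    hi = min(float(x.max()), mu + n_sigma * sigma)
    return lo, hi

# Activations with one large outlier.
np.random.seed(0)
x = np.concatenate([np.random.randn(10000), [50.0]])
print(minmax_range(x)[1])       # 50.0: range dominated by the outlier
print(statistical_range(x)[1])  # much smaller: the outlier is clipped
```

With the statistical range, the bulk of the activation distribution is represented with finer quantization steps, at the cost of saturating the rare outlier.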
-## Post Training Calibration For Quantization (a.k.a. Calibration)
+## Post Training Calibration For Quantization (Calibration)
**Note: this is not our recommended method in PyTorch.**<br>
Post Training Calibration, or simply Calibration, is a method to reduce the accuracy loss caused by quantization. It is an approximate method that does not require ground truth or back-propagation, which makes it suitable for implementation in an Import/Calibration tool. We have simulated this in PyTorch, and it can be used as a fast way to improve the accuracy of quantization. If you are interested, you can take a look at the [documentation of Calibration here](Calibration.md).<br>
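The key property of calibration is that it only needs a few unlabeled batches: the model is run forward and per-layer activation ranges are estimated, with no loss or gradients involved. The following sketch shows that idea with a hypothetical `calibrate_ranges` helper and a running-average update; the actual observers and update rule in the tool may differ.

```python
import numpy as np

def calibrate_ranges(activation_batches, momentum=0.9):
    # Hypothetical calibration pass: iterate over a few forward passes and
    # keep a running estimate of each layer's activation (min, max) range.
    # No ground truth or back-propagation is required.
    ranges = {}
    for batch in activation_batches:          # batch: {layer_name: ndarray}
        for name, act in batch.items():
            lo, hi = float(act.min()), float(act.max())
            if name not in ranges:
                ranges[name] = (lo, hi)
            else:
                prev_lo, prev_hi = ranges[name]
                # Exponential moving average smooths out batch-to-batch noise.
                ranges[name] = (momentum * prev_lo + (1 - momentum) * lo,
                                momentum * prev_hi + (1 - momentum) * hi)
    return ranges

# Two simulated forward passes over one layer.
batches = [{"conv1": np.array([-1.0, 2.0])},
           {"conv1": np.array([-3.0, 1.0])}]
print(calibrate_ranges(batches))
```

The resulting ranges are then used to fix the quantization scale factors, which is why calibration is fast but only approximate compared to training-based methods.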
However, in a training framework such as PyTorch, it is possible to get better accuracy with Quantization Aware Training, and we recommend using it (see the next section).