diff --git a/docs/Calibration.md b/docs/Calibration.md
index 385c0c7bde8515ef9d6ade7da081a5ac3c2009f1..0d1098cb1b488037db0b0e04538b749e9ca31b86 100644 (file)
--- a/docs/Calibration.md
+++ b/docs/Calibration.md
--batch_size 12 --quantize True --epochs 1
```
+## Guidelines, Implementation Notes, Limitations & Recommendations
+- Please refer to the section on Quantization Aware Training, as the same guidelines, recommendations & limitations apply to QuantCalibrateModule.<br>
+- An additional limitation is that multi-GPU processing with DataParallel / DistributedDataParallel is not supported for QuantCalibrateModule (or for QuantTestModule). In our example training scripts train_classification.py and train_pixel2pixel.py in pytorch_jacinto_ai/engine, we do not wrap the model in DataParallel if it is a QuantCalibrateModule or QuantTestModule. The original floating point training (without quantization) can use multiple GPUs as usual and has no such restriction. (Multi-GPU support with DataParallel does work for QuantTrainModule - more details on this in the QAT section).<br>
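
The conditional wrapping described above can be sketched as follows. This is a simplified illustration, not the actual code from train_classification.py: the `QuantCalibrateModule`, `QuantTestModule`, and `DataParallel` classes below are lightweight stand-ins (in the real scripts they come from pytorch_jacinto_ai.xnn and torch.nn), and `wrap_for_multi_gpu` is a hypothetical helper name.

```python
# Sketch: skip DataParallel wrapping for calibration/test wrappers.
# Stand-in classes - in the real scripts these are the wrappers from
# pytorch_jacinto_ai.xnn and torch.nn.DataParallel respectively.
class QuantCalibrateModule:
    pass

class QuantTestModule:
    pass

class PlainModel:
    pass

class DataParallel:
    def __init__(self, module):
        self.module = module

def wrap_for_multi_gpu(model):
    # Calibration and quantized-test wrappers maintain quantization state
    # that replication across GPUs would break, so leave them unwrapped.
    if isinstance(model, (QuantCalibrateModule, QuantTestModule)):
        return model
    return DataParallel(model)

# A plain float model gets wrapped; the quantization wrappers do not.
calib_model = wrap_for_multi_gpu(QuantCalibrateModule())
float_model = wrap_for_multi_gpu(PlainModel())
print(type(calib_model).__name__)  # QuantCalibrateModule
print(type(float_model).__name__)  # DataParallel
```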