updated quantization modules to support mmdetection, using Hardtanh for fixed range activation, other minor fixes, updated docs
re-implemented QuantTestModule using QuantTrainModule. constrain_bias added.
release commit
default of model_surgery_quantize is now True for QuantTestModule.
model_surgery_quantize must be True if the pretrained is a QAT or Calib module.
To test accuracy of a purely float model, set this flag to False.
support DataParallel for QuantTrainModule
quantization docs update
release commit
cosmetic change to quantization shell script
minor update to quantization docs & scripts. support for external models in classification.
quantization_example - RandomSampler is used when epoch_size!=0. epoch_size=0.1 means 10% of the images in the dataset are used.
epoch_size - meaning has changed to the number of images instead of the number of iterations, due to the use of RandomSampler to implement it. updated the scripts accordingly.
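A minimal sketch of the fractional epoch_size behavior described above, using torch.utils.data.RandomSampler; the dataset, variable names, and fraction handling here are illustrative, not the repository's own code:

```python
import torch
from torch.utils.data import TensorDataset, RandomSampler, DataLoader

dataset = TensorDataset(torch.arange(1000, dtype=torch.float32).unsqueeze(1))

# epoch_size=0.1 -> draw 10% of the dataset's images per epoch
epoch_size = 0.1
num_samples = int(epoch_size * len(dataset)) if epoch_size < 1 else int(epoch_size)

# RandomSampler with num_samples limits how many samples one epoch sees
sampler = RandomSampler(dataset, replacement=True, num_samples=num_samples)
loader = DataLoader(dataset, batch_size=10, sampler=sampler)

print(len(sampler))            # 100 samples per epoch
print(sum(1 for _ in loader))  # 10 batches of 10
```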
added more options to the main scripts
doc update - to clarify the use of model.train() and model.eval()
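The train/eval distinction being clarified can be shown with plain PyTorch (the model here is a placeholder, not one of the repository's networks):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8), nn.BatchNorm1d(8), nn.Dropout(0.5))

# model.train(): BatchNorm uses batch statistics, Dropout is active.
# Use this for training and for quantization calibration passes.
model.train()
assert model.training

# model.eval(): BatchNorm uses running statistics, Dropout is a no-op.
# Use this when measuring accuracy (float or quantized).
model.eval()
assert not model.training
x = torch.randn(4, 8)
y = model(x)
assert torch.equal(y, model(x))  # eval mode is deterministic here
```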
constrain_weights - in default scenario, this is not used with per_channel_q.
epoch_size and shuffling arguments for validation
class weights were not being used in segmentation loss due to a bug. fixed it.
quantization - fix for ConvTranspose2d and BatchNorm merge. Do not merge weights upfront in QuantCalibrateModule; it will be done in PAct2.
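For reference, folding a BatchNorm into a preceding convolution's weights works as below for Conv2d; this is a generic sketch, not the repository's implementation. For ConvTranspose2d the output-channel axis of the weight tensor is dim 1 (not dim 0), which is the kind of detail the merge fix above concerns.

```python
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold an eval-mode BatchNorm2d into a Conv2d (generic sketch)."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    bias = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
    with torch.no_grad():
        # Conv2d weight layout is (out, in, kH, kW): scale along dim 0.
        fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
        fused.bias.copy_((bias - bn.running_mean) * scale + bn.bias)
    return fused

conv, bn = nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8)
bn.running_mean.uniform_(-1, 1); bn.running_var.uniform_(0.5, 2.0)
conv.eval(); bn.eval()
x = torch.randn(2, 3, 16, 16)
assert torch.allclose(bn(conv(x)), fuse_conv_bn(conv, bn)(x), atol=1e-5)
```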
calibration - fix bugs that were introduced during the recent restructure
pretrained model names update
changing extension of checkpoint files to .pth instead of .pth.tar
model path update
minor document update
cleanup of several modules in the repository
documentation update
bugfix for evaluation
better implementation for epoch_size
quantization docs update
docs update and minor fixes
quantization cleanup and minor fixes
minor doc update
release commit
minor doc update
release commit
quantization fixes, docs update, resize_with()
quantization aware training - bugfix for merged weights becoming 0 (typically due to one bn weight becoming 0)
torch.nn.ReLU is the recommended activation module. removed the custom-defined module called ReLUN - if a fixed-range activation module is needed, torch.nn.Hardtanh can be used.
support Hardtanh activation function also in quantization aware training
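As noted above, a fixed-range activation can be expressed with the stock torch.nn.Hardtanh; for example, Hardtanh(0, 6) behaves like ReLU6:

```python
import torch
import torch.nn as nn

# Clip activations to the fixed range [0, 6]
act = nn.Hardtanh(min_val=0.0, max_val=6.0)

x = torch.tensor([-2.0, 3.0, 10.0])
y = act(x)
print(y)  # tensor([0., 3., 6.])
assert torch.equal(y, nn.ReLU6()(x))  # same fixed-range behavior as ReLU6
```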
ResizeWith, UpsampleWith classes that can export to onnx with scale_factor in opset_version>=11 if pytorch>=1.5/nightly is installed. (opset_version=9 was already okay)
simpler resize/upsample modules using scale_factor
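A minimal sketch of such a scale_factor-based upsample module; the class body below is illustrative, and the repository's actual ResizeWith/UpsampleWith may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpsampleWith(nn.Module):
    """Upsample by a fixed scale_factor via F.interpolate (illustrative sketch)."""
    def __init__(self, scale_factor=2, mode='bilinear'):
        super().__init__()
        self.scale_factor = scale_factor
        self.mode = mode

    def forward(self, x):
        return F.interpolate(x, scale_factor=self.scale_factor,
                             mode=self.mode, align_corners=False)

up = UpsampleWith(scale_factor=2)
x = torch.randn(1, 4, 8, 8)
print(up(x).shape)  # torch.Size([1, 4, 16, 16])
```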
shufflenetv2 model loading fix
renamed the low complexity pixel2pixel models with suffix "lite"
remove unused files
release commit
improved speed in training pixel2pixel models, added unet, other fixes
release commit
depth - doc and script update
release commit