rajgarg021/quantization
quantization

Implementing various quantization techniques for neural networks using PyTorch

What is Quantization and why is it important?

Quantization is the process of converting continuous values to a discrete set of values using linear or non-linear scaling techniques.

It refers to techniques for performing computations and storing tensors at lower bit widths than floating-point precision. A quantized model executes some or all operations on tensors with integers rather than floating-point values. This allows for a more compact model representation and the use of high-performance vectorized operations on many hardware platforms. Quantization is particularly useful at inference time, since it greatly reduces computation cost without sacrificing much accuracy.
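As a minimal sketch of the idea (not code from this repository), the snippet below implements asymmetric linear quantization of a float tensor to 8-bit unsigned integers in PyTorch: it derives a scale and zero-point from the tensor's range, rounds to integers, and dequantizes back to floats. The function names are illustrative.

```python
import torch

def linear_quantize(x: torch.Tensor, num_bits: int = 8):
    """Asymmetric linear quantization of a float tensor to unsigned integers."""
    qmin, qmax = 0, 2 ** num_bits - 1
    x_min, x_max = x.min().item(), x.max().item()
    # Scale maps the float range [x_min, x_max] onto the integer range [qmin, qmax]
    scale = (x_max - x_min) / (qmax - qmin)
    # Zero-point is the integer that represents the float value 0.0
    zero_point = int(round(qmin - x_min / scale))
    zero_point = max(qmin, min(qmax, zero_point))
    # Quantize: scale, shift, round to nearest integer, clamp to the valid range
    q = torch.clamp(torch.round(x / scale + zero_point), qmin, qmax).to(torch.uint8)
    return q, scale, zero_point

def linear_dequantize(q: torch.Tensor, scale: float, zero_point: int) -> torch.Tensor:
    """Recover an approximation of the original float tensor."""
    return scale * (q.float() - zero_point)

x = torch.randn(4, 4)
q, scale, zp = linear_quantize(x)
x_hat = linear_dequantize(q, scale, zp)
# The per-element reconstruction error is at most half a quantization step (scale / 2)
max_err = (x - x_hat).abs().max().item()
```

Storing `q` (one byte per element) instead of `x` (four bytes per element in float32) gives the 4x memory saving mentioned above, at the cost of a bounded rounding error.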

Resources:
