Neural Network Compression – Dmitri Puzyrev

Neural networks are growing in size almost exponentially. At the same time, we want to run neural network models on devices such as smartphones, tablets, and wearables. One way to make that possible is to compress them. In this video, we cover commonly used methods for reducing the size of neural networks: weight sharing, pruning, decomposition, knowledge distillation, and quantization.

0:00 Intro
0:19 Neural network sizes
1:44 Neural networks for smartphones
2:25 Deep learning research
3:00 Weight sharing
5:50 Pruning
10:07 Decomposition
11:34 Knowledge distillation
13:35 Quantization
15:07 Summary
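By way of illustration (this is not code from the video), below is a minimal NumPy sketch of two of the listed techniques, magnitude pruning and uniform 8-bit quantization, applied to a single weight matrix. The function names and the 50% sparsity level are illustrative assumptions.

# Minimal sketch: magnitude pruning + symmetric int8 quantization of one weight matrix.
# Names and the 50% sparsity level are illustrative, not taken from the video.
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero out the smallest-magnitude weights until `sparsity` of them are zero."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def quantize_int8(weights: np.ndarray):
    """Map float weights to int8 with a single linear scale (symmetric quantization)."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale  # store 1-byte values plus one float scale instead of float32 weights

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for inference."""
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.5)   # about half the entries become exact zeros
q, scale = quantize_int8(w_pruned)            # 1 byte per weight instead of 4
print("sparsity:", np.mean(w_pruned == 0),
      "max abs error:", np.abs(dequantize(q, scale) - w_pruned).max())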