HPC Approaches to Training Neural Networks in Deep Learning
Patrick Legresley, Baidu
Parallel computing is critical to achieving cost-effective, fast turnaround when training deep learning models. In this talk I will give a brief overview of algorithms for deep learning using neural networks and describe the parallelization of model training for speech recognition. Our work uses a High Performance Computing (HPC) approach: a cluster of multi-GPU servers linked via an InfiniBand interconnect, using CUDA-aware Message Passing Interface (MPI) for communication.
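To make the communication pattern concrete, below is a minimal sketch of the kind of data-parallel gradient exchange such a setup enables, assuming one MPI rank per GPU and local gradients already resident in device memory after the backward pass. The names (allreduce_gradients, d_grads, num_params) are illustrative, not taken from the talk.

// Minimal sketch: data-parallel gradient averaging with CUDA-aware MPI.
// Assumes one MPI rank per GPU; each rank holds its local gradients in
// device memory (d_grads) after computing its mini-batch shard.
#include <mpi.h>
#include <cuda_runtime.h>

// Scale the summed gradients by 1/world_size so every rank applies the
// same averaged update.
__global__ void scale_kernel(float *data, int n, float scale)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= scale;
}

void allreduce_gradients(float *d_grads, int num_params, MPI_Comm comm)
{
    int world_size;
    MPI_Comm_size(comm, &world_size);

    // With a CUDA-aware MPI, device pointers can be passed directly to
    // MPI calls; the library moves the data over InfiniBand without an
    // explicit staging copy through host memory.
    MPI_Allreduce(MPI_IN_PLACE, d_grads, num_params,
                  MPI_FLOAT, MPI_SUM, comm);

    int threads = 256;
    int blocks  = (num_params + threads - 1) / threads;
    scale_kernel<<<blocks, threads>>>(d_grads, num_params,
                                      1.0f / (float)world_size);
    cudaDeviceSynchronize();
}

After this call, every rank holds the same averaged gradient and can apply an identical weight update, keeping model replicas synchronized across the cluster.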