Infoshare 2019: Mateusz Malinowski - From Images to Graphs: Modeling Invariances with Deep Learning

In recent years, Deep Learning has become the dominant paradigm for learning representations of images and sequential data. This 'revolution' started with AlexNet's remarkable results on the ImageNet competition and has continued with more modern architectures such as ResNet. Similarly, Recurrent Neural Networks are often used to represent language. Both types of architectures rely on inductive biases that encode weight symmetries, either on a grid (images) or on a chain (language), and more recently such biases have been extended to graphs.
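The weight symmetries mentioned above can be made concrete with a small sketch (illustrative only, not from the talk): a 1D convolution applies the same weights at every position, which makes its output shift with the input, i.e. the layer is translation-equivariant.

```python
import numpy as np

def conv1d(x, w):
    """'Valid' cross-correlation: the same weights w are reused
    at every position of x (weight sharing along the grid/chain)."""
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

rng = np.random.default_rng(0)
x = rng.normal(size=10)   # toy 1D signal
w = rng.normal(size=3)    # shared filter weights

# Translation equivariance: shifting the input by one step shifts
# the output by one step (on the region unaffected by the boundary).
y = conv1d(x, w)
y_shifted = conv1d(np.roll(x, 1), w)
print(np.allclose(y_shifted[1:], y[:-1]))  # True
```

A fully connected layer has no such constraint, which is why convolutional weight sharing is described as an inductive bias: it bakes the symmetry of the data (here, translations on a grid) directly into the architecture.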