NIPS 2017 - presentations from the Theory and Probabilistic Methods sessions

• On Structured Prediction Theory with Calibrated Convex Surrogate Losses
• REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models
• Variance-based Regularization with Convex Objectives
• More powerful and flexible rules for online FDR control with memory and weights
• Submultiplicative Glivenko-Cantelli and Uniform Convergence of Revenues
• Fast Black-box Variational Inference through Stochastic Trust-Region Optimization
• A Universal Analysis of Large-Scale Regularized Least Squares Solutions
• A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning
• Accelerated Stochastic Greedy Coordinate Descent by Soft Thresholding Projection onto Simplex
• Early stopping for kernel boosting algorithms: A general analysis with localized complexities
• Spectrally-normalized margin bounds for neural networks
• The Scaling Limit of High-Dimensional Online Independent Component Analysis