NIPS was pretty fantastic this year. There were a number of breakthroughs in the areas that interest me most: Markov Decision Processes, Game Theory, Multi-Armed Bandits, and Deep Belief Networks. Here is the list of papers, workshops, and presentations I found the most interesting or potentially useful:
- Representation, Inference and Learning in Structured Statistical Models
- Stochastic Search and Optimization
- Quantum information and the Brain
- Relax and Randomize: From Value to Algorithms (Great)
- Classification with Deep Invariant Scattering Networks
- Discriminative Learning of Sum-Product Networks
- On the Use of Non-Stationary Policies for Stationary Infinite-Horizon Markov Decision Processes
- A Unifying Perspective of Parametric Policy Search Methods for Markov Decision Processes
- Regularized Off-Policy TD-Learning
- Multi-Stage Multi-Task Feature Learning
- Graphical Models via Generalized Linear Models (Great)
- No voodoo here! Learning discrete graphical models via inverse covariance estimation (Great)
- Gradient Weights help Nonparametric Regressors
- Dropout: A simple and effective way to improve neural networks (Great; see the short sketch at the end of this post)
- Efficient Monte Carlo Counterfactual Regret Minimization in Games with Many Player Actions
- A Better Way to Pre-Train Deep Boltzmann Machines
- Bayesian Optimization and Decision Making
- Practical Bayesian Optimization of Machine Learning Algorithms
- Modern Nonparametric Methods in Machine Learning
- Deep Learning and Unsupervised Feature Learning
Unfortunately, when you have 30 full-day workshops packed into a two-day period, you miss most of them; I could only attend the three workshops listed above. There were many other great ones.
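The dropout idea from that talk is simple enough to sketch in a few lines: randomly zero out hidden units during training so they cannot co-adapt, then use the full network at test time. Below is a minimal NumPy sketch of the inverted variant, which rescales the surviving units during training instead of scaling weights at test time as described in the talk; the function name, interface, and default rate are my own illustrative choices, not code from the paper.

```python
import numpy as np

def dropout_forward(x, p_drop=0.5, train=True, rng=None):
    """Inverted dropout on a layer's activations (illustrative sketch).

    During training, each unit is zeroed with probability p_drop and the
    survivors are scaled by 1 / (1 - p_drop), so no extra rescaling is
    needed at test time.
    """
    if not train or p_drop == 0.0:
        return x
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) >= p_drop        # keep each unit with prob 1 - p_drop
    return x * mask / (1.0 - p_drop)

# Example: apply dropout to a batch of hidden activations.
h = np.random.randn(4, 8)
h_train = dropout_forward(h, p_drop=0.5, train=True)
h_test = dropout_forward(h, train=False)        # identity at test time
```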