The Thirtieth Annual Conference on Neural Information Processing Systems (NIPS) was a multi-track machine learning and computational neuroscience conference that included talks, demonstrations, symposia, and oral and poster presentations of refereed papers, held at the Centre Convencions Internacional Barcelona in Barcelona, Spain, from Monday, December 5 to Saturday, December 10, 2016. This year was all about Generative Adversarial Networks (GANs): recent advances in training and using GANs, improvements in reinforcement learning and deep learning, as well as more broadly used machine learning practices and their applications. The poster says it all: it was astonishing, with more than 6,000 researchers at NIPS 2016 and amazing papers and talks.


Generative Adversarial Networks

These minimax game-playing networks have by now won the favor of many luminaries in the field; Yann LeCun hails them as the most exciting development in ML in recent years. GANs are neural networks that allow for generating data via adversarial training, initially proposed in 2014 by Goodfellow et al. GANs simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G. The training procedure for G is to maximize the probability of D making a mistake, so the framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. When G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation; there is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples. GANs should therefore internally learn a good model representation of what an image or a sentence is. The success of GANs so far has been limited mostly to computer vision, due to the difficulty of modeling discrete rather than continuous data.
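To make the two-player game concrete, here is a minimal training-loop sketch in PyTorch. The toy fully connected networks, hyperparameters, and random placeholder "real" data are my own illustrative choices, not anything from the papers discussed here:

```python
# Minimal sketch of the adversarial training loop: D learns to tell real from
# fake, while G learns to fool D (placeholder data and toy networks).
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

# Generator G: maps latent noise z to a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator D: outputs the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(batch, data_dim)          # stand-in for real training data
    fake = G(torch.randn(batch, latent_dim))

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: maximize D's mistake, i.e. push D(fake) toward 1.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The alternating updates are exactly the minimax game described above, trained end to end with backpropagation and no Markov chains.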

OK, switching back to NIPS 2016: this year witnessed InfoGAN, a new type of generative adversarial network based on mutual information, presented by Xi Chen; a workshop on adversarial training; and the most amazing work of the year, Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space. PPGNs are composed of

  1. a generator network G that is capable of drawing a wide range of image types and
  2. a replaceable "condition" network C that tells the generator what to draw. The authors demonstrate the generation of images conditioned on a class (when C is an ImageNet or MIT Places classification network) and also conditioned on a caption (when C is an image captioning network). It would be really interesting to see how GANs work on NLP; surely we will see that at NIPS 2017. Below are a few images and videos from the paper, followed by a simplified sketch of the sampling idea:
Images synthetically generated by Plug and Play Generative Networks at high resolution (227×227) for four ImageNet classes. Not only are many images nearly photo-realistic, but samples within a class are diverse. For more detail, visit: http://www.evolvingai.org/ppgn
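To give a feel for the "plug and play" idea, here is a heavily simplified, hypothetical sketch: gradient ascent in the generator's latent space, driven by whatever condition network you plug in. The real PPGN sampler also includes a learned prior term and injected noise, which are omitted here, and the latent size and step counts below are purely illustrative:

```python
# Simplified sketch: nudge the generator's latent code so the (replaceable)
# condition network assigns higher probability to the desired class.
import torch

def ppgn_style_sample(G, C, target_class, latent_dim=4096, steps=200, lr=1e-2):
    """G: latent code -> image, C: image -> class logits (any differentiable classifier)."""
    h = torch.zeros(1, latent_dim, requires_grad=True)    # latent code to optimize
    opt = torch.optim.SGD([h], lr=lr)
    for _ in range(steps):
        img = G(h)                                        # draw an image from the code
        logp = torch.log_softmax(C(img), dim=1)[0, target_class]
        loss = -logp                                      # ascend log p(class | G(h))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(h).detach()
```

Swapping in a different C (say, a captioning network instead of an ImageNet classifier) changes what gets drawn without touching the generator, which is the "replaceable condition network" idea in a nutshell.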

RNN & Phased LSTM

The conference also featured a symposium dedicated to Recurrent Neural Networks (RNNs), which coincided with the 20-year anniversary of the LSTM. One highlight was the Phased LSTM, whose authors add a new time gate to the traditional LSTM. With it, computation time is reduced by an order of magnitude and the network becomes suitable for inputs with different sampling rates. As per the authors, this newly added gate is controlled by a parametrized oscillation with a frequency range that produces updates of the memory cell only during a small percentage of the cycle. Even with the sparse updates imposed by the oscillation, the Phased LSTM achieves faster convergence than regular LSTMs on tasks that require learning long sequences. The model naturally integrates inputs from sensors of arbitrary sampling rates, thereby opening new areas of investigation for processing asynchronous sensory events that carry timing information. It also greatly improves the performance of LSTMs in standard RNN applications, and does so with an order of magnitude fewer computations at runtime. Another notable RNN work was Sequential Neural Models with Stochastic Layers, which combines ideas from state space models (formerly best in class for stochastic sequences such as audio) and RNNs, leveraging the best of both worlds.
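As a rough illustration of the oscillating time gate (my own simplified reading of the idea, with illustrative parameter names), each unit has a period, a phase shift, and an "open ratio", and the memory cell only updates appreciably while the gate is open:

```python
# Rough NumPy sketch of a Phased-LSTM-style time gate: open for a small
# fraction of each oscillation cycle, with a tiny leak while "closed".
import numpy as np

def time_gate(t, tau, shift, r_on, alpha=1e-3):
    """Openness k_t in [0, 1] of the oscillating time gate at time t.

    tau   : oscillation period (per unit)
    shift : phase shift (per unit)
    r_on  : fraction of the period during which the gate is open
    alpha : small leak applied while the gate is closed
    """
    phase = ((t - shift) % tau) / tau                 # position in the cycle, in [0, 1)
    rising = phase < 0.5 * r_on                       # first half of the open window
    falling = (phase >= 0.5 * r_on) & (phase < r_on)  # second half of the open window
    return np.where(rising, 2.0 * phase / r_on,
           np.where(falling, 2.0 - 2.0 * phase / r_on,
                    alpha * phase))                   # leaky "closed" regime

# Example: the cell state is only refreshed when the gate opens.
tau, shift, r_on = np.array([5.0]), np.array([0.0]), 0.05
c_prev, c_candidate = np.array([1.0]), np.array([3.0])
for t in np.arange(0.0, 10.0, 0.25):
    k = time_gate(t, tau, shift, r_on)
    c_prev = k * c_candidate + (1.0 - k) * c_prev     # sparse update of the memory cell
```

Because updates are skipped for most time steps, the runtime cost drops sharply, and irregularly sampled inputs can simply be evaluated at whatever timestamps they arrive.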

DeepMind's Open-Source Reinforcement Learning Platform and OpenAI Universe

The conference started off with two exciting announcements on open-sourcing collections of environments for training and testing general AI capabilities: DeepMind Lab and OpenAI Universe. Among other things, this is promising for testing safety properties of ML algorithms. OpenAI has already used their Universe environment to give an entertaining and instructive demonstration of reward hacking, illustrating the challenge of designing robust reward functions.
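For a sense of what the Universe API looked like, the basic loop was roughly the sketch below. The environment name and the hard-coded "ArrowUp" action follow the project's own starter example; a real agent would of course pick its actions from the observations:

```python
# Minimal Universe loop: observations stream from a VNC-backed Docker remote,
# and the agent replies with keyboard/mouse events.
import gym
import universe  # registers the universe environments with gym

env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)          # spins up one local Docker-backed remote
observation_n = env.reset()

while True:
    # Dummy policy: hold the up arrow; replace with a learned agent.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```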

I have managed to collect a list of amazing work published at NIPS 2016; I know there will be a lot of interested people like me, so have fun. I plan to write more on GANs, so stay tuned!