deeplearning-papernotes

Summaries and notes on Deep Learning research papers

2017-04

  • General Video Game AI: Learning from Screen Capture [arXiv]
  • Learning to Skim Text [arXiv]

2017-03

  • Evolution Strategies as a Scalable Alternative to Reinforcement Learning [arXiv]
  • Controllable Text Generation [arXiv]
  • Neural Episodic Control [arXiv]
  • A Structured Self-attentive Sentence Embedding [arXiv]
  • Multi-step Reinforcement Learning: A Unifying Algorithm [arXiv]
  • Neural Machine Translation and Sequence-to-sequence Models: A Tutorial [arXiv]
  • Large-Scale Evolution of Image Classifiers [arXiv]
  • FeUdal Networks for Hierarchical Reinforcement Learning [arXiv]
  • Evolving Deep Neural Networks [arXiv]

2017-02

  • The Shattered Gradients Problem: If resnets are the answer, then what is the question? [arXiv]
  • Neural Map: Structured Memory for Deep Reinforcement Learning [arXiv]
  • Bridging the Gap Between Value and Policy Based Reinforcement Learning [arXiv]
  • Deep Voice: Real-time Neural Text-to-Speech [arXiv]
  • Beating the World's Best at Super Smash Bros. with Deep Reinforcement Learning [arXiv]
  • The Game Imitation: Deep Supervised Convolutional Networks for Quick Video Game AI [arXiv]
  • Learning to Parse and Translate Improves Neural Machine Translation [arXiv]
  • All-but-the-Top: Simple and Effective Postprocessing for Word Representations [arXiv]
  • Deep Learning with Dynamic Computation Graphs [arXiv]
  • Skip Connections as Effective Symmetry-Breaking [arXiv]
  • Semi-Supervised QA with Generative Domain-Adaptive Nets [arXiv]

2017-01

  • Wasserstein GAN [arXiv]
  • Deep Reinforcement Learning: An Overview [arXiv]
  • DyNet: The Dynamic Neural Network Toolkit [arXiv]
  • DeepStack: Expert-Level Artificial Intelligence in No-Limit Poker [arXiv]
  • NIPS 2016 Tutorial: Generative Adversarial Networks [arXiv]

2016-12

  • A recurrent neural network without chaos [arXiv]
  • Language Modeling with Gated Convolutional Networks [arXiv]
  • How Grammatical is Character-level Neural Machine Translation? Assessing MT Quality with Contrastive Translation Pairs [arXiv]
  • Improving Neural Language Models with a Continuous Cache [arXiv]
  • DeepMind Lab [arXiv]
  • Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning [arXiv]
  • Overcoming catastrophic forgetting in neural networks [arXiv]

2016-11 (ICLR Edition)

Reinforcement Learning

  • Learning to reinforcement learn [arXiv]

Machine Translation & Dialog

2016-10

2016-09

  • Towards Deep Symbolic Reinforcement Learning [arXiv]
  • HyperNetworks [arXiv]
  • Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation [arXiv]
  • Safe and Efficient Off-Policy Reinforcement Learning [arXiv]
  • Playing FPS Games with Deep Reinforcement Learning [arXiv]
  • SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient [arXiv]
  • Episodic Exploration for Deep Deterministic Policies: An Application to StarCraft Micromanagement Tasks [arXiv]
  • Energy-based Generative Adversarial Network [arXiv]
  • Stealing Machine Learning Models via Prediction APIs [arXiv]
  • Semi-Supervised Classification with Graph Convolutional Networks [arXiv]
  • WaveNet: A Generative Model For Raw Audio [arXiv]
  • Hierarchical Multiscale Recurrent Neural Networks [arXiv]
  • End-to-End Reinforcement Learning of Dialogue Agents for Information Access [arXiv]
  • Deep Neural Networks for YouTube Recommendations [paper]

2016-08

  • Machine Comprehension Using Match-LSTM and Answer Pointer [arXiv]
  • Stacked Approximated Regression Machine: A Simple Deep Learning Approach [arXiv]
  • Decoupled Neural Interfaces using Synthetic Gradients [arXiv]
  • WikiReading: A Novel Large-scale Language Understanding Task over Wikipedia [arXiv]
  • Temporal Attention Model for Neural Machine Translation [arXiv]
  • Residual Networks of Residual Networks: Multilevel Residual Networks [arXiv]
  • Learning Online Alignments with Continuous Rewards Policy Gradient [arXiv]

2016-07

2016-06

  • Sequence-to-Sequence Learning as Beam-Search Optimization [arXiv]
  • Sequence-Level Knowledge Distillation [arXiv]
  • Policy Networks with Two-Stage Training for Dialogue Systems [arXiv]
  • Towards an integration of deep learning and neuroscience [arXiv]
  • On Multiplicative Integration with Recurrent Neural Networks [arXiv]
  • Wide & Deep Learning for Recommender Systems [arXiv]
  • Online and Offline Handwritten Chinese Character Recognition [arXiv]
  • Tutorial on Variational Autoencoders [arXiv]
  • Concrete Problems in AI Safety [arXiv]
  • Deep Reinforcement Learning Discovers Internal Models [arXiv]
  • SQuAD: 100,000+ Questions for Machine Comprehension of Text [arXiv]
  • Conditional Image Generation with PixelCNN Decoders [arXiv]
  • Model-Free Episodic Control [arXiv]
  • Progressive Neural Networks [arXiv]
  • Improved Techniques for Training GANs [arXiv]
  • Memory-Efficient Backpropagation Through Time [arXiv]
  • InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets [arXiv]
  • Zero-Resource Translation with Multi-Lingual Neural Machine Translation [arXiv]
  • Key-Value Memory Networks for Directly Reading Documents [arXiv]
  • Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation [arXiv]
  • Learning to learn by gradient descent by gradient descent [arXiv]
  • Learning Language Games through Interaction [arXiv]
  • Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations [arXiv]
  • Smart Reply: Automated Response Suggestion for Email [arXiv]
  • Virtual Adversarial Training for Semi-Supervised Text Classification [arXiv]
  • Deep Reinforcement Learning for Dialogue Generation [arXiv]
  • Very Deep Convolutional Networks for Natural Language Processing [arXiv]
  • Neural Net Models for Open-Domain Discourse Coherence [arXiv]
  • Neural Architectures for Fine-grained Entity Type Classification [arXiv]
  • Gated-Attention Readers for Text Comprehension [arXiv]
  • End-to-end LSTM-based dialog control optimized with supervised and reinforcement learning [arXiv]
  • Iterative Alternating Neural Attention for Machine Reading [arXiv]
  • Memory-enhanced Decoder for Neural Machine Translation [arXiv]
  • Multiresolution Recurrent Neural Networks: An Application to Dialogue Response Generation [arXiv]
  • Natural Language Comprehension with the EpiReader [arXiv]
  • Conversational Contextual Cues: The Case of Personalization and History for Response Ranking [arXiv]
  • Adversarially Learned Inference [arXiv]
  • Neural Network Translation Models for Grammatical Error Correction [arXiv]

2016-05

  • Hierarchical Memory Networks [arXiv]
  • Deep API Learning [arXiv]
  • Wide Residual Networks [arXiv]
  • TensorFlow: A system for large-scale machine learning [arXiv]
  • Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention [arXiv]
  • Aspect Level Sentiment Classification with Deep Memory Network [arXiv]
  • FractalNet: Ultra-Deep Neural Networks without Residuals [arXiv]
  • Learning End-to-End Goal-Oriented Dialog [arXiv]
  • One-shot Learning with Memory-Augmented Neural Networks [arXiv]
  • Deep Learning without Poor Local Minima [arXiv]
  • AVEC 2016 - Depression, Mood, and Emotion Recognition Workshop and Challenge [arXiv]
  • Data Programming: Creating Large Training Sets, Quickly [arXiv]
  • Deeply-Fused Nets [arXiv]
  • Deep Portfolio Theory [arXiv]
  • Unsupervised Learning for Physical Interaction through Video Prediction [arXiv]
  • Movie Description [arXiv]

2016-04

2016-03

2016-02

2016-01

2015-12

NLP

Vision

2015-11

NLP

Programs

  • Neural Random-Access Machines [arXiv]
  • Neural Programmer: Inducing Latent Programs with Gradient Descent [arXiv]
  • Neural Programmer-Interpreters [arXiv]
  • Learning Simple Algorithms from Examples [arXiv]
  • Neural GPUs Learn Algorithms [arXiv]
  • On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models [arXiv]

Vision

  • ReSeg: A Recurrent Neural Network for Object Segmentation [arXiv]
  • Deconstructing the Ladder Network Architecture [arXiv]
  • Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks [arXiv]

General

2015-10

2015-09

2015-08

2015-07

2015-06

2015-05

2015-04

  • Correlational Neural Networks [arXiv]

2015-03

2015-02

2015-01

2014-12

2014-11

2014-10

2014-09

2014-08

  • Convolutional Neural Networks for Sentence Classification [arXiv]

2014-07

2014-06

2014-05

2014-04

  • A Convolutional Neural Network for Modelling Sentences [arXiv]

2014-03

2014-02

2014-01

2013

  • Visualizing and Understanding Convolutional Networks [arXiv]
  • DeViSE: A Deep Visual-Semantic Embedding Model [pub]
  • Maxout Networks [arXiv]
  • Exploiting Similarities among Languages for Machine Translation [arXiv]
  • Efficient Estimation of Word Representations in Vector Space [arXiv]

2011

  • Natural Language Processing (almost) from Scratch [arXiv]
