---
layout: page
---
<center>
<h1>Deep Latent-Variable Models <br> for Natural Language</h1>
<br>
<h3>Yoon Kim, Sam Wiseman, Alexander Rush</h3>
<br>
<h3>EMNLP 2018</h3>
<img src="http://nlp.seas.harvard.edu/images/vae.png">
</center>
Tutorial: <a href="https://arxiv.org/abs/1812.06834">PDF</a>
<br>
<br>
<br>
Tutorial Slides: <a href="https://github.com/harvardnlp/DeepLatentNLP/raw/master/tutorial_deep_latent.pdf">PDF</a>
<br>
<br>
<br>
Live Questions: <a href="http://cs61.seas.harvard.edu/jk/cs281/">Anonymous</a> | <a href="https://twitter.com/harvardnlp">Tweet</a>
<br>
<br>
<br>
Tutorial Code (PyTorch, Pyro): <a href="https://colab.research.google.com/drive/1b522fQGUdFI8Wl840D_XsevYYdbIAlrl">Notebook</a>
<br><br><br>
This tutorial covers deep latent-variable models both in the case where exact inference over the latent variables is tractable and in the case where it is not. The former includes neural extensions of unsupervised tagging and parsing models. Our discussion of the latter case, where inference cannot be performed tractably, will be restricted to continuous latent variables. In particular, we will discuss recent developments in neural variational inference (e.g., relating to Variational Auto-encoders). We will highlight the challenges of applying these families of methods to NLP problems, and discuss recent successes and best practices.
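<br><br>
As a pointer to the kind of material the notebook covers, below is a minimal sketch of the variational auto-encoder objective (the evidence lower bound) in PyTorch. The encoder, decoder, and dimensions are illustrative assumptions, not the tutorial's own code.
<pre><code>
# A minimal ELBO sketch for a Gaussian-latent VAE (illustrative only;
# all module names and dimensions here are assumptions).
import torch
import torch.nn as nn

latent_dim, vocab_size, hidden_dim = 32, 10000, 256

# Hypothetical encoder: maps a bag-of-words vector to q(z|x) parameters.
encoder = nn.Sequential(nn.Linear(vocab_size, hidden_dim), nn.ReLU())
to_mu = nn.Linear(hidden_dim, latent_dim)
to_logvar = nn.Linear(hidden_dim, latent_dim)
# Hypothetical decoder: maps z to logits over the vocabulary.
decoder = nn.Linear(latent_dim, vocab_size)

def elbo(x_counts):
    """Single-sample Monte Carlo ELBO with the reparameterization trick."""
    h = encoder(x_counts)
    mu, logvar = to_mu(h), to_logvar(h)
    # Reparameterize: z = mu + sigma * eps, with eps drawn from N(0, I).
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    # Reconstruction term: log p(x|z), a multinomial over the vocabulary.
    log_probs = torch.log_softmax(decoder(z), dim=-1)
    recon = (x_counts * log_probs).sum(-1)
    # KL(q(z|x) || N(0, I)), available in closed form for diagonal Gaussians.
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
    return (recon - kl).mean()
</code></pre>
Maximizing this quantity by gradient ascent trains the generative model and the inference network jointly, which is the setting the tutorial's discussion of intractable inference focuses on.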