Jekyll2021-11-17T20:23:50+00:00https://keyonvafa.github.io/feed.xmlKeyonVafaA blog about technology and related topicsRationales for Sequential Predictions2021-10-16T16:00:00+00:002021-10-16T16:00:00+00:00https://keyonvafa.github.io/sequential-rationales<hr /> <p>[<a href="https://arxiv.org/pdf/2109.06387.pdf">PDF</a>] [<a href="https://github.com/keyonvafa/sequential-rationales">Code</a>] [<a href="https://colab.research.google.com/drive/1l33I0BDOXtPMdQVqB8Y24DJUp7K52qDz#scrollTo=KdN0dxky7nMw">Colab Example</a>] [<a href="https://www.youtube.com/watch?v=4Nvy2AVkKVA">Video</a>]</p> <p>Sequence models are a critical component of modern NLP systems, but their predictions are difficult to explain. We consider model explanations through rationales, subsets of context that can explain individual model predictions. We find sequential rationales by solving a combinatorial optimization problem: the best rationale is the smallest subset of input tokens that would predict the same output as the full sequence. Enumerating all subsets is intractable, so we propose an efficient greedy algorithm to approximate this objective. The algorithm, called greedy rationalization, applies to any model. For this approach to be effective, the model should form compatible conditional distributions when making predictions on incomplete subsets of the context. This condition can be enforced with a short fine-tuning step. We study greedy rationalization on language modeling and machine translation. Compared to existing baselines, greedy rationalization is best at optimizing the combinatorial objective and provides the most faithful rationales. 
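</p>

<p>As a concrete illustration of the greedy search described above, here is a minimal sketch (a toy interface of my own, not the authors’ released implementation): starting from an empty rationale, repeatedly add the context token that most increases the probability of the target prediction, and stop once the target becomes the model’s argmax. The <code>predict</code> function and the tokens below are hypothetical stand-ins for a real sequence model.</p>

```python
def greedy_rationale(predict, context, target):
    """Greedily grow a rationale until `target` is the model's argmax.

    `predict(subset)` is assumed to return a dict mapping candidate
    tokens to probabilities given only the tokens in `subset`.
    """
    rationale, remaining = [], list(context)
    while remaining:
        # Add the token whose inclusion best supports the target.
        best = max(remaining,
                   key=lambda tok: predict(rationale + [tok]).get(target, 0.0))
        rationale.append(best)
        remaining.remove(best)
        probs = predict(rationale)
        if max(probs, key=probs.get) == target:
            return rationale  # target is now the top prediction
    return rationale

# Toy model: "Paris" only becomes the argmax once both informative
# tokens are present in the subset.
def toy_predict(subset):
    score = ("capital" in subset) + ("France" in subset)
    p = [0.1, 0.4, 0.9][score]
    return {"Paris": p, "other": 1.0 - p}

rationale = greedy_rationale(toy_predict, ["the", "capital", "of", "France"], "Paris")
# rationale contains only the two informative tokens.
```

<p>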
<a href="https://github.com/keyonvafa/sequential-rationales">On a new dataset of annotated sequential rationales</a>, greedy rationales are most similar to human rationales.</p> <p><a href="https://colab.research.google.com/drive/1l33I0BDOXtPMdQVqB8Y24DJUp7K52qDz#scrollTo=KdN0dxky7nMw">Notebook with example available on Colab</a>.</p> <center> <img src="/assets/gifs/sequential_rationalization.gif" width="500" /> </center> <hr /> <p>K. Vafa, Y. Deng, D. Blei, and A. Rush. <a href="https://arxiv.org/abs/2109.06387"><strong>Rationales for Sequential Predictions</strong></a>. In <em>Proceedings of EMNLP</em>, 2021.</p>keyonvafaText-Based Ideal Points2020-05-28T16:00:00+00:002020-05-28T16:00:00+00:00https://keyonvafa.github.io/text-based-ideal-points<hr /> <p>[<a href="https://www.aclweb.org/anthology/2020.acl-main.475.pdf">PDF</a>] [<a href="https://github.com/keyonvafa/tbip">Code</a>] [<a href="https://colab.research.google.com/drive/1_KkVI2lGtPdgsHSKDIMhSLCKkHvBQ4LO?usp=sharing">Tutorial</a>] [<a href="/assets/slides/tbip_slides.pdf">Slides</a>] [<a href="https://slideslive.com/38929238/textbased-ideal-points">Video</a>]</p> <p>Ideal point models analyze lawmakers’ votes to quantify their political positions, or ideal points. But votes are not the only way to express a political position. Lawmakers also give speeches, release press statements, and post tweets. <a href="https://www.aclweb.org/anthology/2020.acl-main.475/">In this paper</a>, we introduce the text-based ideal point model (TBIP), an unsupervised probabilistic topic model that analyzes texts to quantify the political positions of their authors. We demonstrate the TBIP with two types of politicized text data: U.S. Senate speeches and senator tweets. Though the model does not analyze their votes or political affiliations, the TBIP separates lawmakers by party, learns interpretable politicized topics, and infers ideal points close to the classical vote-based ideal points. 
One benefit of analyzing texts, as opposed to votes, is that the TBIP can estimate ideal points of anyone who authors political texts, including non-voting actors. To this end, we use it to study tweets from the 2020 Democratic presidential candidates. Using only the text of their tweets, the TBIP identifies the candidates along an interpretable progressive-to-moderate spectrum.</p> <!-- [PyTorch](https://github.com/keyonvafa/tbip/blob/master/pytorch/tbip.py) and [Tensorflow](https://github.com/keyonvafa/tbip/blob/master/tbip.py) implementations available on [Github](https://github.com/keyonvafa/tbip). --> <p><a href="https://colab.research.google.com/drive/1_KkVI2lGtPdgsHSKDIMhSLCKkHvBQ4LO?usp=sharing">Notebook with tutorial available on Colab</a>.</p> <p>The plots below show examples of ideological topics for U.S. Senate speeches (2015-2017). Move the slider to see how the ideological topic changes as a function of ideal point:</p> <iframe width="900" height="600" frameborder="0" scrolling="no" src="//plotly.com/~keyonvafa/256.embed?&amp;link=false"></iframe> <iframe width="900" height="600" frameborder="0" scrolling="no" src="//plotly.com/~keyonvafa/252.embed?&amp;link=false"></iframe> <iframe width="900" height="600" frameborder="0" scrolling="no" src="//plotly.com/~keyonvafa/250.embed?&amp;link=false"></iframe> <iframe width="900" height="600" frameborder="0" scrolling="no" src="//plotly.com/~keyonvafa/254.embed?&amp;link=false"></iframe> <hr /> <p>K. Vafa, S. Naidu, and D. Blei. <a href="https://www.aclweb.org/anthology/2020.acl-main.475/"><strong>Text-Based Ideal Points</strong></a>. 
In <em>Proceedings of ACL</em>, 2020.</p> <!-- --- <iframe width="900" height="600" frameborder="0" scrolling="no" src="//plotly.com/~keyonvafa/228.embed"></iframe> --> <!-- ![Senate speech ideal point comparisons](https://keyonvafa.github.io/assets/images/projects/senate_ideal_point_comparisons.jpg) <figcaption class="caption">The ideal points learned by the TBIP for senator speeches and tweets are highly correlated with the classical vote ideal points. Senators are coded by their political party (Democrats in blue circles, Republicans in red x’s). Although the algorithm does not have access to these labels, the TBIP almost completely separates parties.</figcaption> -->keyonvafaDiscrete Flows: Invertible Generative Models of Discrete Data2019-10-09T15:28:00+00:002019-10-09T15:28:00+00:00https://keyonvafa.github.io/discrete-flows<hr /> <p>While normalizing flows have led to significant advances in modeling high-dimensional continuous distributions, their applicability to discrete distributions remains unknown. <a href="https://papers.nips.cc/paper/9612-discrete-flows-invertible-generative-models-of-discrete-data">In this work</a>, we show that flows can in fact be extended to discrete events—and under a simple change-of-variables formula not requiring log-determinant-Jacobian computations. Discrete flows have numerous applications. We consider two flow architectures: discrete autoregressive flows that enable bidirectionality, allowing, for example, tokens in text to depend on both left-to-right and right-to-left contexts in an exact language model; and discrete bipartite flows that enable efficient non-autoregressive generation as in RealNVP. Empirically, we find that discrete autoregressive flows outperform autoregressive baselines on synthetic discrete distributions, an addition task, and Potts models; and bipartite flows can obtain competitive performance with autoregressive baselines on character-level language modeling for Penn Tree Bank and text8.</p> <p>D. 
Tran, K. Vafa, K. K. Agrawal, L. Dinh, and B. Poole. <a href="https://papers.nips.cc/paper/9612-discrete-flows-invertible-generative-models-of-discrete-data"><strong>Discrete Flows: Invertible Generative Models of Discrete Data</strong></a>. In <em>Proceedings of NeurIPS</em>, 2019.</p> <hr /> <p><img src="https://keyonvafa.github.io/assets/images/projects/gaussian_mixture_flows.png" alt="Gaussian mixture with flows" /></p> <figcaption class="caption">Learning a discretized mixture of Gaussians with maximum likelihood. Discrete flows help capture the multi-dimensional modes, which a factorized distribution cannot.</figcaption>keyonvafaBlack Box Variational Inference for Logistic Regression2017-04-01T14:00:00+00:002017-04-01T14:00:00+00:00https://keyonvafa.github.io/logistic-regression-bbvi<p>A couple of weeks ago, I wrote about <a href="http://keyonvafa.com/variational-inference-probit-regression/">variational inference for probit regression</a>, which involved some pretty ugly algebra. Although variational inference is a powerful method for approximate Bayesian inference, it can be tedious to come up with the variational updates for every model (which aren’t always available in closed-form), and these updates are model-specific.</p> <p><a href="http://www.cs.columbia.edu/~blei/papers/RanganathGerrishBlei2014.pdf">Black Box Variational Inference</a> (BBVI) offers a solution to this problem. Instead of computing all the updates in closed form, BBVI uses <em>sampling</em> to approximate the gradient of our bound, and then uses stochastic optimization to optimize this bound. Below, I’ll briefly go over the main ideas behind BBVI, and then demonstrate how easy it makes inference for Bayesian logistic regression. 
I want to emphasize that the <a href="http://www.cs.columbia.edu/~blei/papers/RanganathGerrishBlei2014.pdf">original BBVI paper</a> describes the method better than I ever could, so I encourage you to read the paper as well.</p> <h2 id="black-box-variational-inference-a-brief-overview">Black Box Variational Inference: A Brief Overview</h2> <p>In the context of Bayesian statistics, we’re frequently modeling the distribution of observations, $$x$$, conditioned on some (random) latent variables $$z$$. We would like to evaluate $$p(z \vert x)$$, but this distribution is often intractable. The idea behind variational inference is to introduce a family of distributions over $$z$$ that depend on <em>variational parameters</em> $$\lambda$$, $$q(z \vert \lambda)$$, and find the values of $$\lambda$$ that minimize the KL divergence between $$q(z \vert \lambda)$$ and $$p(z \vert x)$$. One of the most common forms of $$q$$ comes from the <em>mean-field variational family</em>, where $$q$$ factors into conditionally independent distributions each governed by some set of parameters, $$q(z \vert \lambda) = \prod_{j=1}^m q_j(z_j \vert \lambda)$$. Minimizing the KL divergence is equivalent to maximizing the <em>Evidence Lower Bound</em> (ELBO), given by</p> $L(\lambda) = E_{q_{\lambda}(z)}[\log p(x,z) - \log q(z)].$ <p>It can involve a lot of tedious computation to evaluate the gradient in closed form (when a closed form expression exists). The key insight behind BBVI is that it’s possible to write the gradient of the ELBO as an expectation:</p> $\nabla_{\lambda}L(\lambda) = E_q[(\nabla_{\lambda} \log q(z \vert \lambda)) (\log p(x,z) - \log q(z \vert \lambda))].$ <p>So instead of evaluating a closed form expression for the gradient, we can use Monte Carlo samples and take the average to get a noisy estimate of the gradient. 
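</p>

<p>To make this concrete, here is a small self-contained demonstration (my own toy example, not from the paper): take $$q(z \vert \lambda) = \mathcal N(\lambda, 1)$$ and target $$p(z) = \mathcal N(3, 1)$$, for which the exact ELBO gradient is $$3 - \lambda$$, and check that averaging the score-function expression over Monte Carlo samples recovers it.</p>

```python
import numpy as np

def log_normal_pdf(z, mean):
    # log N(z | mean, 1)
    return -0.5 * np.log(2 * np.pi) - 0.5 * (z - mean) ** 2

def bbvi_gradient(lam, num_samples, rng):
    """Score-function estimate of the ELBO gradient for q = N(lam, 1), p = N(3, 1)."""
    z = rng.normal(lam, 1.0, size=num_samples)  # z_s ~ q(z | lam)
    score = z - lam                             # grad_lam log q(z | lam)
    return np.mean(score * (log_normal_pdf(z, 3.0) - log_normal_pdf(z, lam)))

rng = np.random.default_rng(0)
grad_estimate = bbvi_gradient(lam=0.0, num_samples=100_000, rng=rng)
# The exact gradient at lam = 0 is 3; the noisy estimate lands near it.
```

<p>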
That is, for our current set of parameters $$\lambda$$, we can sample $$z_s \sim q(z \vert \lambda)$$ for $$s \in 1, \dots, S$$, and for each of these samples evaluate the above expression, replacing $$z$$ with the sample $$z_s$$. If we take the mean over all samples, we will have a (noisy) estimate for the gradient. Finally, by applying an appropriate step-size at every iteration, we can optimize the ELBO with stochastic gradient descent.</p> <p>The above expression may look daunting, but it’s straightforward to evaluate. The first term is the gradient of $$\log q(z \vert \lambda)$$, which is also known as the score function. As we’ll see in the logistic regression example, this expression is straightforward to evaluate for many distributions, but we can even use automatic differentiation to streamline this process if we have a more complicated model (or if we’re feeling lazy). The next two terms are log-likelihoods that we specify, so we can compute them with a sample $$z_s$$.</p> <h2 id="bbvi-for-bayesian-logistic-regression">BBVI for Bayesian Logistic Regression</h2> <p>Consider data $$\boldsymbol X \in \mathbb{R}^{N \times P}$$ with binary outputs $$\boldsymbol y \in \{0,1\}^{N}$$. We can model $$P(y_i \vert \boldsymbol x_i, \boldsymbol z) \sim \text{Bern}(\sigma(\boldsymbol z^T \boldsymbol x_i))$$, with $$\sigma(\cdot)$$ the inverse-logit function and $$\boldsymbol z$$ drawn from a $$P$$-dimensional multivariate normal with independent components, $$\boldsymbol z \sim \mathcal N(\boldsymbol 0, \boldsymbol I_P)$$. We would like to evaluate $$p(\boldsymbol z \vert \boldsymbol X, \boldsymbol y)$$, but this is not available in closed form. Instead, we posit a variational distribution over $$\boldsymbol z$$, $$q(\boldsymbol z \vert \lambda) = \prod_{j=1}^P \mathcal N(z_j \vert \mu_j, \sigma_j^2)$$. 
To be clear, we model each $$z_j$$ as an independent Gaussian with mean $$\mu_j$$ and variance $$\sigma_j^2$$, and we use BBVI to learn the optimal values of $$\lambda = \{\mu_j,\sigma_j^2\}_{j=1}^P$$. We’ll use the shorthand $$\boldsymbol \mu = (\mu_1, \dots, \mu_P)$$ and $$\boldsymbol \sigma^2 = (\sigma_1^2, \dots, \sigma_P^2)$$.</p> <p>Since $$\sigma_j^2$$ is constrained to be positive, we will instead optimize over $$\alpha_j = \log(\sigma_j^2)$$. First, evaluating the score function, it’s straightforward to see</p> $\nabla_{\mu_j}\log q(\boldsymbol z \vert \lambda ) = \nabla_{\mu_j} \sum_{i=1}^P -\frac{\log(\sigma_i^2)}{2}-\frac{(z_i-\mu_i)^2}{2\sigma_i^2} = \frac{(z_j-\mu_j)}{\sigma^2_j}.\\ \nabla_{\alpha_j}\log q(\boldsymbol z \vert \lambda ) = \nabla_{\sigma_j^2} \left(\sum_{i=1}^P -\frac{\log(\sigma_i^2)}{2}-\frac{(z_i-\mu_i)^2}{2\sigma_i^2}\right) * \nabla_{\alpha_j}(\sigma_j^2) = \left(-\frac{1}{2\sigma_j^2} + \frac{(z_j-\mu_j)^2}{2(\sigma_j^2)^2}\right) * (\sigma_j^2).$ <p>Note that we use the chain rule in the derivation for $$\nabla_{\alpha_j}\log q(\boldsymbol z \vert \lambda )$$. For the complete data log-likelihood, we can decompose $$\log p( \boldsymbol y, \boldsymbol X, \boldsymbol z) = \log p( \boldsymbol y \vert \boldsymbol X, \boldsymbol z) + \log p(\boldsymbol z)$$, using the chain rule of probability (and noting that $$\boldsymbol X$$ is a constant). Thus, it’s straightforward to calculate</p> $\log p(\boldsymbol y, \boldsymbol X, \boldsymbol z) = \sum_{i=1}^N [y_i \log(\sigma(\boldsymbol z^T \boldsymbol x_i)) + (1-y_i)\log(1-\sigma(\boldsymbol z^T \boldsymbol x_i))] + \sum_{j=1}^P \log \varphi(z_j \vert 0, 1).\\ \log q(\boldsymbol z \vert \lambda) = \sum_{j=1}^P \log \varphi(z_j \vert \mu_j, \sigma_j^2).$ <p>The notation $$\varphi(z_j \vert \mu, \sigma^2)$$ refers to evaluating the normal pdf with mean $$\mu$$ and variance $$\sigma^2$$ at the point $$z_j$$.</p> <p>And that’s it. 
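</p>

<p>As a quick sanity check on the two score-function derivatives above (my own verification, not part of the original post), we can compare the closed-form expressions against central finite differences of $$\log q$$:</p>

```python
import numpy as np

def log_q(z, mu, alpha):
    # log N(z | mu, sigma^2) with alpha = log(sigma^2).
    sigma2 = np.exp(alpha)
    return np.sum(-0.5 * np.log(2 * np.pi * sigma2) - (z - mu) ** 2 / (2 * sigma2))

z, mu, alpha = 0.7, 0.2, -0.5
sigma2 = np.exp(alpha)

# Closed-form gradients from the derivation above.
grad_mu = (z - mu) / sigma2
grad_alpha = (-1 / (2 * sigma2) + (z - mu) ** 2 / (2 * sigma2 ** 2)) * sigma2

# Central finite differences.
eps = 1e-6
fd_mu = (log_q(z, mu + eps, alpha) - log_q(z, mu - eps, alpha)) / (2 * eps)
fd_alpha = (log_q(z, mu, alpha + eps) - log_q(z, mu, alpha - eps)) / (2 * eps)
```

Both pairs agree to several decimal places, which is good evidence the chain-rule step was carried out correctly.

<p>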
Thus, given a sample <code class="language-plaintext highlighter-rouge">z_sample</code> from $$q(\boldsymbol z \vert \lambda) \sim \mathcal N(\boldsymbol \mu, \text{diag}(\boldsymbol \sigma^2))$$ and current variational parameters <code class="language-plaintext highlighter-rouge">mu</code> $$= \boldsymbol \mu$$ and <code class="language-plaintext highlighter-rouge">sigma</code> $$= \boldsymbol \sigma^2$$, we can approximate the gradient using the following Python code:</p> <div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import numpy as np
from scipy.stats import norm

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def elbo_grad(z_sample, mu, sigma):
    # Score functions for mu and alpha = log(sigma^2); `sigma` holds the variances.
    score_mu = (z_sample - mu) / sigma
    score_logsigma = (-1 / (2 * sigma)
                      + np.power(z_sample - mu, 2) / (2 * np.power(sigma, 2))) * sigma
    # Complete-data log-likelihood log p(y, X, z) and variational density log q(z).
    log_p = np.sum(y * np.log(sigmoid(np.dot(X, z_sample)))
                   + (1 - y) * np.log(1 - sigmoid(np.dot(X, z_sample)))) \
            + np.sum(norm.logpdf(z_sample, np.zeros(P), np.ones(P)))
    log_q = np.sum(norm.logpdf(z_sample, mu, np.sqrt(sigma)))
    return np.concatenate([score_mu, score_logsigma]) * (log_p - log_q)
</code></pre></div></div> <p>To test this out, I simulated data from the model with $$N = 100$$ and $$P = 4$$. I set the step-size with <a href="http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf">AdaGrad</a>, used 10 samples at every iteration, and stopped optimizing when the distance between successive variational means was less than 0.01. The following plot shows the true values of $$z_1, \dots, z_4$$, along with their learned variational distributions (the curves belonging to each parameter are a different color):</p> <p><img src="/assets/images/logistic_regression_bbvi_blog/densities.png" alt="Variational densities" /></p> <p>It appears that BBVI does a pretty decent job of picking up the distribution over the true values. The following plots depict the value of each variational mean at every iteration (left), along with the change in variational means (right):</p> <p><img src="/assets/images/logistic_regression_bbvi_blog/trace_plots.png" alt="Trace plots" /></p> <p>Again, I highly recommend checking out the <a href="http://www.cs.columbia.edu/~blei/papers/RanganathGerrishBlei2014.pdf">original paper</a>. 
This <a href="http://people.seas.harvard.edu/~dduvenaud/papers/blackbox.pdf">Python tutorial</a> by <a href="https://www.cs.toronto.edu/~duvenaud/">David Duvenaud</a> and <a href="http://people.seas.harvard.edu/~rpa/">Ryan Adams</a>, which uses BBVI to train Bayesian neural networks in only a few lines of Python code, is also a great resource.</p> <p>All my code is available <a href="https://github.com/keyonvafa/logistic-reg-bbvi-blog">here</a>.</p>keyonvafaA couple of weeks ago, I wrote about variational inference for probit regression, which involved some pretty ugly algebra. Although variational inference is a powerful method for approximate Bayesian inference, it can be tedious to come up with the variational updates for every model (which aren’t always available in closed-form), and these updates are model-specific.US Senators and PCA2017-03-28T00:00:00+00:002017-03-28T00:00:00+00:00https://keyonvafa.github.io/voting-record-pca<p>A couple of weeks ago, I wrote a <a href="http://keyonvafa.com/ideal-points/">blog post about modeling ideal points of US senators</a>. I wanted to follow up (very briefly), since I was curious about comparing the Bayesian method there with Principal Component Analysis (PCA).</p> <p>Here are the (new) results performing PCA on the voting record:</p> <iframe width="1000" height="300" frameborder="0" scrolling="no" src="https://plot.ly/~keyonvafa/114.embed"></iframe> <p>Here are the (older) results using ideal point modeling:</p> <iframe width="1000" height="300" frameborder="0" scrolling="no" src="https://plot.ly/~keyonvafa/58.embed"></iframe> <p>It’s interesting to compare the methods (the scale on the x-axis is irrelevant). 
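</p>

<p>For reference, the PCA step itself is only a few lines. Here is a toy sketch (synthetic votes for illustration, not the actual Senate data shown above) that projects a senator-by-vote matrix onto its first principal component via the SVD:</p>

```python
import numpy as np

# Rows are senators, columns are roll-call votes (1 = yea, 0 = nay).
# Two artificial voting blocs, purely for illustration.
votes = np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 1, 1, 0, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

centered = votes - votes.mean(axis=0)        # center each roll call
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt[0]                    # first principal component scores
# The two blocs land on opposite sides of zero on the first component.
```

<p>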
Both models do a good job of capturing the more moderate senators, since <a href="https://en.wikipedia.org/wiki/Susan_Collins">Susan Collins</a>, <a href="https://en.wikipedia.org/wiki/Lisa_Murkowski">Lisa Murkowski</a>, and <a href="https://en.wikipedia.org/wiki/Kelly_Ayotte">Kelly Ayotte</a> are in the middle in both methods. The furthest left senator using PCA is <a href="https://en.wikipedia.org/wiki/Maria_Cantwell">Maria Cantwell</a>, who is also pretty far left with ideal points. Meanwhile, the furthest right senator with PCA is <a href="https://en.wikipedia.org/wiki/Tom_Coburn">Tom Coburn</a> (whose <a href="https://en.wikipedia.org/wiki/Tom_Coburn">Wikipedia page</a> describes him as “the godfather of the modern conservative, austerity movement”), yet he is further left than 8 senators with ideal point modeling.</p> <p>Overall, I was surprised by how similar these results were, given how differently the two methods are motivated. Ideal point modeling yields scores for every bill and senator (along with a predictive interpretation), while PCA can reduce the voting data to any dimension to capture senator voting habits (not to mention it’s much faster). I would definitely be interested in exploring these methods with more rigor.</p>keyonvafaA couple of weeks ago, I wrote a blog post about modeling ideal points of US senators. I wanted to follow up (very briefly), since I was curious about comparing the Bayesian method there with Principal Component Analysis (PCA).Variational Inference for Bayesian Probit Regression2017-03-16T20:00:00+00:002017-03-16T20:00:00+00:00https://keyonvafa.github.io/variational-inference-probit-regression<p>Variational inference has become one of the most important approximate inference techniques for Bayesian statistics, but it has taken me a long time to wrap my head around the central ideas (and I’m still learning). 
Since I’ve found that going through examples is the most efficient way to learn, I thought I would go through a single example in this post, performing variational inference on Bayesian probit regression.</p> <p>I’m going to assume the reader is somewhat familiar with the basic ideas behind variational inference. If you’ve never seen variational inference before, I strongly recommend <a href="https://arxiv.org/pdf/1601.00670.pdf">this tutorial</a> by <a href="http://www.cs.columbia.edu/~blei/">David Blei</a>, <a href="http://www.proditus.com/">Alp Kucukelbir</a>, and <a href="https://www.stat.berkeley.edu/~jon/">Jon McAuliffe</a>. These <a href="https://www.cs.princeton.edu/courses/archive/fall11/cos597C/lectures/variational-inference-i.pdf">course notes</a> from David Blei are also very <a href="https://www.youtube.com/watch?v=eXiwYUCe_bY">handy</a>.</p> <h2 id="variational-inference-a-very-brief-overview">Variational Inference: A (Very) Brief Overview</h2> <p>Bayesian statistics often requires computing the conditional density $$p(\boldsymbol z \vert \boldsymbol x)$$ of latent variables $$\boldsymbol z = z_{1:m}$$ given observed variables $$\boldsymbol x = x_{1:n}$$. Since this distribution is typically intractable, variational inference learns an approximate distribution $$q(\boldsymbol z)$$ that is meant to be “close” to $$p(\boldsymbol z \vert \boldsymbol x)$$, using <a href="https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence">Kullback-Leibler divergence</a> as a measure.</p> <p>Thus, there are two steps. The first comes from providing a form for the variational distribution, $$q(\boldsymbol z)$$. The most frequently used form comes from the <em>mean-field variational family</em>, where $$q$$ factors into conditionally independent distributions each governed by some set of parameters, $$q(\boldsymbol z) = \prod_{j=1}^m q_j(z_j)$$. 
Once we have specified the factorization of the distribution, we are still required to figure out the optimal form of each factor, both in terms of its family and parameters (although these can be considered the same thing). Thus, the second step is optimizing $$KL(q \vert \vert p)$$.</p> <p>It turns out the optimal form of each factor is straightforward: $$q_j^*(z_j) \propto \exp\left\{E_{-j}[\log p(\boldsymbol z, \boldsymbol x)]\right\}$$, where $$E_{-j}[\cdot]$$ refers to the expectation when omitting variable $$z_j$$. To minimize $$KL(q \vert \vert p)$$, we cycle between latent factors $$q_j$$ and update the mean (with respect to the current parameters) according to the equation above. If these results are unfamiliar, definitely check out <a href="https://arxiv.org/pdf/1601.00670.pdf">the tutorial</a> I mentioned earlier.</p> <h2 id="variational-inference-for-bayesian-probit-regression">Variational Inference for Bayesian Probit Regression</h2> <p>Consider a probit regression problem, where we have data $$\boldsymbol x \in \mathbb{R}^{n \times 1}$$ and a binary outcome $$\boldsymbol y \in \{0,1\}^{n}$$. In probit regression, we assume $$p(y_i = 1) = \Phi(a + bx_i)$$, where $$a$$ and $$b$$ are unknown and random, with a uniform prior, and $$\Phi(\cdot)$$ is the standard normal CDF. To simplify things, we can introduce variables $$z_i \sim \mathcal{N}(a+bx_i,1)$$ so $$y_i = 1$$ if $$z_i &gt; 0$$ and $$y_i = 0$$ if $$z_i \leq 0$$.</p> <p>The first step is writing down the log posterior density $$\log p(a,b,\boldsymbol z \vert \boldsymbol y)$$ up to a constant. It is straightforward to see</p> $\log p(a, b, \boldsymbol z \vert \boldsymbol y) \propto \sum_{i=1}^n y_i \log I(z_i &gt; 0) + (1-y_i)\log(I(z_i \leq 0)) - \frac{\sum_{i=1}^n (z_i - (a+bx_i))^2}{2}.$ <p>The next step is defining our variational distribution $$q$$. We will provide one factor for each $$z_i$$, along with independent factors for $$a$$ and $$b$$. 
Therefore, $$q$$ consists of $$n + 2$$ independent factors:</p> $q(a, b, \boldsymbol z) = q_a(a) q_b(b) \prod_{j=1}^n q_j(z_j).$ <p>To learn the optimal form of each factor, we use the rule described above. That is, consider a single $$z_j$$. The optimal distribution is therefore $$q_j^*(z_j) \propto \exp \left\{E_{a,b,\boldsymbol z_{-j}}[\log p(a, b, \boldsymbol z \vert \boldsymbol y)]\right\}$$. Writing this out, we see</p> $E_{a,b,\boldsymbol z_{-j}}[\log p(a, b, \boldsymbol z \vert \boldsymbol y)] \propto y_j \log I(z_j &gt; 0) + (1-y_j)\log I(z_j \leq 0) - \frac{E_{a,b}(z_j-(a+bx_j))^2}{2}.$ <p>Thus, after exponentiating, we have that the ideal form is a truncated normal distribution. That is, $$q_j(z_j) \sim \mathcal N^+(E(a)+E(b)x_j,1)$$ if $$y_j = 1$$ and $$q_j(z_j) \sim \mathcal N^-(E(a)+E(b)x_j,1)$$ if $$y_j = 0$$, where $$\mathcal N^+$$ and $$\mathcal N^-$$ are normal distributions truncated to be positive and negative, respectively.</p> <p>Similarly, for $$a$$, we have $$E_{b,\boldsymbol z}[\log p(a, b, \boldsymbol z \vert \boldsymbol y)] \propto E_{b,\boldsymbol z}\left(-\frac{\sum_{i=1}^n (z_i - (a+bx_i))^2}{2}\right)$$. Removing terms that do not depend on $$a$$ and completing the square, we have the optimal form as $$q_a(a) \sim \mathcal N\left(\frac{\sum_{i=1}^n [E(z_i)-E(b)x_i]}{n},\frac{1}{n}\right)$$.</p> <p>Finally, for $$b$$, we have $$E_{a,\boldsymbol z}[\log p(a, b, \boldsymbol z \vert \boldsymbol y)] \propto E_{a, \boldsymbol z}\left(-\frac{\sum_{i=1}^n (z_i - (a+bx_i))^2}{2}\right)$$. Again removing the terms that do not depend on $$b$$ and completing the square, we have the following optimal form:</p> $q_b(b) \sim \mathcal N \left(\frac{\sum_{i=1}^n x_i[E(z_i)-E(a)]}{\sum_{i=1}^n x_i^2}, \frac{1}{\sum_{i=1}^n x_i^2}\right).$ <p>Now that we know the form of all the factors, it’s time to optimize. To do this, we set each parameter to the mean of its optimal factored distribution. 
The updates can take the following form in R:</p> <div class="language-R highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">update_M_zj</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="k">function</span><span class="p">(</span><span class="n">M_a</span><span class="p">,</span><span class="n">M_b</span><span class="p">,</span><span class="n">j</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="n">mu</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">M_a</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">M_b</span><span class="o">*</span><span class="n">x</span><span class="p">[</span><span class="n">j</span><span class="p">]</span><span class="w"> </span><span class="k">if</span><span class="w"> </span><span class="p">(</span><span class="n">y</span><span class="p">[</span><span class="n">j</span><span class="p">]</span><span class="w"> </span><span class="o">==</span><span class="w"> </span><span class="m">1</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nf">return</span><span class="p">(</span><span class="n">mu</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="n">dnorm</span><span class="p">(</span><span class="m">-1</span><span class="o">*</span><span class="n">mu</span><span class="p">)</span><span class="o">/</span><span class="p">(</span><span class="m">1</span><span class="o">-</span><span class="n">pnorm</span><span class="p">(</span><span class="m">-1</span><span class="o">*</span><span class="n">mu</span><span class="p">)))</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="k">else</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span 
class="nf">return</span><span class="p">(</span><span class="n">mu</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">dnorm</span><span class="p">(</span><span class="m">-1</span><span class="o">*</span><span class="n">mu</span><span class="p">)</span><span class="o">/</span><span class="p">(</span><span class="n">pnorm</span><span class="p">(</span><span class="m">-1</span><span class="o">*</span><span class="n">mu</span><span class="p">)))</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="n">update_M_a</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="k">function</span><span class="p">(</span><span class="n">M_z</span><span class="p">,</span><span class="n">M_b</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nf">return</span><span class="p">(</span><span class="nf">sum</span><span class="p">(</span><span class="n">M_z</span><span class="o">-</span><span class="n">M_b</span><span class="o">*</span><span class="n">x</span><span class="p">)</span><span class="o">/</span><span class="n">n</span><span class="p">)</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="n">update_M_b</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="k">function</span><span class="p">(</span><span class="n">M_z</span><span class="p">,</span><span class="n">M_a</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nf">return</span><span class="p">(</span><span class="nf">sum</span><span class="p">(</span><span class="n">x</span><span class="o">*</span><span class="p">(</span><span class="n">M_z</span><span class="o">-</span><span class="n">M_a</span><span class="p">))</span><span class="o">/</span><span 
class="nf">sum</span><span class="p">(</span><span class="n">x</span><span class="o">^</span><span class="m">2</span><span class="p">))</span><span class="w"> </span><span class="p">}</span><span class="w"> </span></code></pre></div></div> <p>Therefore, a single updating step would look like</p> <div class="language-R highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">for</span><span class="w"> </span><span class="p">(</span><span class="n">i</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="m">1</span><span class="o">:</span><span class="n">n</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="n">M_z</span><span class="p">[</span><span class="n">i</span><span class="p">]</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">update_M_zj</span><span class="p">(</span><span class="n">M_a</span><span class="p">,</span><span class="n">M_b</span><span class="p">,</span><span class="n">i</span><span class="p">)</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="n">M_a</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">update_M_a</span><span class="p">(</span><span class="n">M_z</span><span class="p">,</span><span class="n">M_b</span><span class="p">)</span><span class="w"> </span><span class="n">M_b</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">update_M_b</span><span class="p">(</span><span class="n">M_z</span><span class="p">,</span><span class="n">M_a</span><span class="p">)</span><span class="w"> </span><span class="n">as</span><span class="p">[</span><span class="n">iteration</span><span class="p">]</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">M_a</span><span class="w"> </span><span class="n">bs</span><span
class="p">[</span><span class="n">iteration</span><span class="p">]</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">M_b</span><span class="w"> </span></code></pre></div></div> <p>Again, variational inference is an incredibly powerful tool, and I cannot overstate how helpful the links I posted above are in understanding all of this. Hopefully this tutorial clears up some of the confusion about variational inference.</p>keyonvafaVariational inference has become one of the most important approximate inference techniques for Bayesian statistics, but it has taken me a long time to wrap my head around the central ideas (and I’m still learning). Since I’ve found that going through examples is the most efficient way to learn, I thought I would go through a single example in this post, performing variational inference on Bayesian probit regression.Ideal Points of US Senators2017-03-09T02:00:00+00:002017-03-09T02:00:00+00:00https://keyonvafa.github.io/ideal-points<p><a href="http://k7moa.com/pdf/Upside_Down-A_Spatial_Model_for_Legislative_Roll_Call_Analysis_1983.pdf">Popularized by Keith Poole and Howard Rosenthal</a>, ideal point modeling is a powerful way to extract the relative ideologies of politicians based solely on their voting records. <a href="http://www.acrwebsite.org/search/view-conference-proceedings.aspx?Id=9188">A</a> <a href="http://www.stat.columbia.edu/~gelman/research/published/171.pdf">lot</a> <a href="https://www.cs.princeton.edu/~blei/papers/GerrishBlei2011.pdf">has</a> <a href="http://pablobarbera.com/static/barbera_twitter_ideal_points.pdf">been</a> <a href="https://www.jstor.org/stable/1558585">written</a> on ideal point models, so I’m not going to add anything new, but I wanted to give a brief overview of the Bayesian perspective.</p> <p>First, some results. 
The following plot shows the ideal points (essentially inferred ideologies) of US senators based solely on roll call voting from 2013-2015 (scroll over the points to see names):</p> <iframe width="1000" height="300" frameborder="0" scrolling="no" src="https://plot.ly/~keyonvafa/58.embed"></iframe> <p>More extreme scores (i.e. away from zero) represent more extreme political views. While the liberal-conservative spectrum is not explicitly encoded into the model, the model picks this up naturally from voting patterns. On the far left are some of the most liberal members of the US Senate, such as <a href="https://en.wikipedia.org/wiki/Brian_Schatz">Brian Schatz</a>, while the far right has some of the most conservative members, such as <a href="https://en.wikipedia.org/wiki/Jim_Risch">Jim Risch</a> and <a href="https://en.wikipedia.org/wiki/Ted_Cruz">Ted Cruz</a>. In the middle are senators sometimes referred to as <a href="https://en.wikipedia.org/wiki/Democrat_In_Name_Only">DINOs</a> and <a href="https://en.wikipedia.org/wiki/Republican_In_Name_Only">RINOs</a>, such as <a href="https://en.wikipedia.org/wiki/Joe_Manchin">Joe Manchin</a>, <a href="https://en.wikipedia.org/wiki/Susan_Collins">Susan Collins</a>, and <a href="https://en.wikipedia.org/wiki/Lisa_Murkowski">Lisa Murkowski</a>.</p> <p>The basic model is as follows. Consider a legislator $$u$$ and a particular bill $$d$$. The vote $$u$$ places on $$d$$ is denoted as a binary variable, $$v_{ud} = 1$$ for Yea and $$v_{ud} = 0$$ for Nay. Each legislator has an <em>ideal point</em> $$x_u$$; a value of 0 is political neutrality, whereas large values in either direction indicate more political extremism in the respective direction. Every bill has its own <em>discrimination</em> $$b_d$$, which is on the same scale as the ideal points for legislators. If $$x_u*b_d$$ is high, the legislator is likely to vote for the bill, and if the value is low, the legislator is less likely to vote for it. 
Finally, each bill also has an offset $$a_d$$ that indicates how popular the bill is overall, regardless of political affiliation. Formally, the model is as follows:</p> $P(v_{ud} = 1) = \sigma(x_ub_d + a_d),$ <p>where $$\sigma(\cdot)$$ is some sigmoidal function, such as the inverse-logit or the standard normal CDF. If a senator didn’t vote on a particular bill, this data is considered missing at random.</p> <p>Inference requires learning the vectors $$X, B$$, and $$A$$. I took a Bayesian approach and put (independent) normal priors on each variable. I then used an EM algorithm derived by <a href="http://imai.princeton.edu/research/files/fastideal.pdf">Kosuke Imai et al</a>. The E-Step and M-Step are described in full detail in the paper, and I followed their setup, except I removed senators with fewer than 50 votes, and I stopped after 500 iterations.</p> <p>All my code is available <a href="https://github.com/keyonvafa/ideal-point-blog">here</a>.</p>keyonvafaPopularized by Keith Poole and Howard Rosenthal, ideal point modeling is a powerful way to extract the relative ideologies of politicians based solely on their voting records. A lot has been written on ideal point models, so I’m not going to add anything new, but I wanted to give a brief overview of the Bayesian perspective.The Box-Muller Transform2017-02-27T21:00:00+00:002017-02-27T21:00:00+00:00https://keyonvafa.github.io/box-muller-transform<p>Every statistician has a favorite way of generating samples from a distribution (not sure if I need a citation for this one). From <a href="https://en.wikipedia.org/wiki/Rejection_sampling">rejection sampling</a> to <a href="https://arxiv.org/pdf/1206.1901.pdf">Hamiltonian Monte Carlo</a>, there are countless methods to choose from (my personal favorite is <code class="language-plaintext highlighter-rouge">rnorm</code>).</p> <p>One of the most interesting and counterintuitive sampling techniques is the Box-Muller transform. 
I’m not sure how widely it’s used today, but given two samples from a uniform distribution, it can generate two <em>independent</em> samples from a standard normal distribution.</p> <!--Given a uniform sample $$U \sim \text{Unif}(0,1)$$, we can generally sample from a distribution with cdf $$F$$ by taking $$F^{-1}(U)$$. Since we cannot write the normal cdf in closed form, we must rule out the inverse cdf method.--> <p>The idea behind the Box-Muller transform is to imagine two independent samples $$X, Y \sim \mathcal{N}(0,1)$$ plotted in the Cartesian plane, and then represent these points as polar coordinates. Recall that to transform to polar, we need the distance $$R$$ between $$(X,Y)$$ and the origin along with $$\theta$$, the angle this line segment makes with the x-axis.</p> <p>We start with the distance from the origin, $$R = \sqrt{X^2 + Y^2}$$. For simplicity, we work with $$R^2 = X^2 + Y^2$$. The sum of two independent squared standard normals follows a <a href="https://en.wikipedia.org/wiki/Chi-squared_distribution">chi-squared distribution</a> with 2 degrees of freedom. It is also a <a href="https://en.wikipedia.org/wiki/Chi-squared_distribution#Gamma.2C_exponential.2C_and_related_distributions">known fact</a> that a chi-squared distribution with 2 degrees of freedom is equivalent to a $$\text{Gamma}(1,\frac{1}{2})$$ random variable, which is itself <a href="http://stats.stackexchange.com/questions/27908/sum-of-exponential-random-variables-follows-gamma-confused-by-the-parameters">equivalent</a> to an $$\text{Expo}(\frac{1}{2})$$ variable. Finally, we can express an exponential random variable as the <a href="http://math.stackexchange.com/questions/199614/distribution-of-log-x-if-x-is-uniform">log of a uniform</a>. More succinctly,</p> $R^2 \sim \chi^2_{df=2} \sim \text{Gamma}\left(1,\frac{1}{2}\right) \sim \text{Expo}\left(\frac{1}{2}\right) \sim -2\log U_1$ <p>where $$U_1 \sim \text{Unif}(0,1).$$</p> <p>What about the angle, $$\theta$$? 
If we write the joint density of $$X$$ and $$Y$$, we can see</p> $f_{X,Y}(x,y) = \frac{1}{2\pi} e^{-\frac{X^2}{2}}e^{-\frac{Y^2}{2}} = \frac{1}{2\pi}e^{-\frac{(X^2+Y^2)}{2}} = \frac{1}{2\pi}e^{-\frac{R^2}{2}}.$ <p>Thus, once we have $$R^2$$, the squared distance between $$(X,Y)$$ and the origin, the joint distribution of $$X$$ and $$Y$$ is uniform. That is, as long as $$(X,Y)$$ is a pair satisfying $$X^2 + Y^2 = R^2$$, it can be any point on the circle with radius $$R$$. As a result, we can simply take $$\theta = 2\pi U_2$$, where $$U_2 \sim \text{Unif}(0,1).$$</p> <p>Putting all these results together, if we take $$R = \sqrt{-2\log U_1}$$ and $$\theta = 2\pi U_2$$ for $$U_1, U_2 \sim \text{Unif}(0,1)$$, we have the polar coordinates for two independent standard normal draws. Thus, converting back to Cartesian, we have</p> $X = R\cos\theta = \sqrt{-2\log U_1}\cos(2\pi U_2)\\ Y = R\sin\theta = \sqrt{-2\log U_1}\sin(2\pi U_2).$ <p>This is straightforward to implement in R:</p> <div class="language-R highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">nsims</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">10000</span><span class="w"> </span><span class="n">samples</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">rep</span><span class="p">(</span><span class="kc">NA</span><span class="p">,</span><span class="n">nsims</span><span class="o">*</span><span class="m">2</span><span class="p">)</span><span class="w"> </span><span class="k">for</span><span class="w"> </span><span class="p">(</span><span class="n">sim</span><span class="w"> </span><span class="k">in</span><span class="w"> </span><span class="m">1</span><span class="o">:</span><span class="n">nsims</span><span class="p">)</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="n">us</span><span class="w"> </span><span class="o">=</span><span class="w"> 
</span><span class="n">runif</span><span class="p">(</span><span class="m">2</span><span class="p">)</span><span class="w"> </span><span class="n">R</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nf">sqrt</span><span class="p">(</span><span class="m">-2</span><span class="o">*</span><span class="nf">log</span><span class="p">(</span><span class="n">us</span><span class="p">[</span><span class="m">1</span><span class="p">]))</span><span class="w"> </span><span class="n">theta</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="m">2</span><span class="o">*</span><span class="nb">pi</span><span class="o">*</span><span class="n">us</span><span class="p">[</span><span class="m">2</span><span class="p">]</span><span class="w"> </span><span class="n">samples</span><span class="p">[</span><span class="m">2</span><span class="o">*</span><span class="n">sim</span><span class="p">]</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">R</span><span class="o">*</span><span class="nf">cos</span><span class="p">(</span><span class="n">theta</span><span class="p">)</span><span class="w"> </span><span class="n">samples</span><span class="p">[</span><span class="m">2</span><span class="o">*</span><span class="n">sim</span><span class="m">-1</span><span class="p">]</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">R</span><span class="o">*</span><span class="nf">sin</span><span class="p">(</span><span class="n">theta</span><span class="p">)</span><span class="w"> </span><span class="p">}</span><span class="w"> </span></code></pre></div></div> <p>Using the above code, I compared the histogram of Box-Muller samples to those using <code class="language-plaintext highlighter-rouge">rnorm</code>, which were nearly identical:</p> <p><img src="/assets/images/box_muller_blog/box_muller_samples.png" alt="Box-Muller Samples" 
/></p> <p><em>Interesting, but this is nothing more than a cool sampling trick, right?</em> Wrong. If we represent normal random variables in Box-Muller form, it can become easier to prove results about the normal distribution.</p> <p>For example, consider the problem of proving that for independent draws $$X,Y \sim \mathcal{N}(0,1)$$, $$X+Y$$ is independent of $$X-Y$$, and both distributed as $$\mathcal{N}(0,2)$$. A proof that doesn’t require the use of pdfs involves representing $$X$$ and $$Y$$ in Box-Muller form (I first saw this solution in <a href="http://www.people.fas.harvard.edu/~blitz/Site/Home.html">Joe Blitzstein’s</a> class <a href="https://locator.tlt.harvard.edu/course/colgsas-111696">Stat 210</a>, which I encourage any Harvard student who’s reading this to take). Let $$R^2 \sim \chi^2_{df=2}$$ and $$U \sim \text{Unif}(0,1)$$, as in the representation above. Thus, $$X = R\cos(\theta) = R\cos(2\pi U)$$, and $$Y = R\sin(\theta) = R\sin(2\pi U)$$. This form gives us</p> $X + Y = R\cos(2\pi U) + R\sin(2\pi U) = \sqrt{2}R\sin(2\pi U + \pi/4)\\ X - Y = R\cos(2\pi U) - R\sin(2\pi U) = \sqrt{2}R\cos(2\pi U + \pi/4)$ <p>Note that we use the trigonometric identities for $$\cos(\alpha + \beta)$$ and $$\sin(\alpha + \beta)$$ in the derivation. The final form should look familiar – we’ve recovered the Box-Muller representation, albeit with some modifications. The $$\sqrt{2}$$ in front scales the standard normal so it now has a variance of 2. Additionally, note that we are using $$2\pi U + \pi/4$$ as $$\theta$$ instead of $$2\pi U$$. However, we do not have to worry about it as it still results in a uniform sample over the possible angles.</p> <p>Thus, $$X+Y$$ and $$X-Y$$ are independent draws from the distribution $$\mathcal{N}(0,2)$$.</p> <!--between the x-axis and the line segment connecting the origin and $$(X,Y)$$. 
--> <!--I first came across the method in a class taught by <a href='http://www.people.fas.harvard.edu/~blitz/Site/Home.html'>Joe Blitzstein</a>, and a conversation today with another PhD student inspired me to write up a short tutorial.-->keyonvafaEvery statistician has a favorite way of generating samples from a distribution (not sure if I need a citation for this one). From rejection sampling to Hamiltonian Monte Carlo, there are countless methods to choose from (my personal favorite is rnorm).Lies, Damned Lies, and Causal Inference2017-02-18T17:00:00+00:002017-02-18T17:00:00+00:00https://keyonvafa.github.io/smoking-causal-inference-paradox<p>To paraphrase <a href="https://en.wikipedia.org/wiki/Lies,_damned_lies,_and_statistics">Benjamin Disraeli</a>, statistics makes it easy to lie. In this post, I’ll go over an example from Judea Pearl’s excellent textbook, <a href="https://www.amazon.com/Causality-Reasoning-Inference-Judea-Pearl/dp/052189560X">Causality</a>, that shows how different statistical approaches can lead to different estimates of the causal effect of smoking on lung cancer.</p> <p>First, the (fictional) data, which is taken from Section 3.3 of <a href="https://www.amazon.com/Causality-Reasoning-Inference-Judea-Pearl/dp/052189560X">Causality</a>. Say we have results from an observational (i.e. non-randomized) study that aims to assess the effect of smoking on developing lung cancer. For every person, we have a binary variable $$X$$ that indicates whether that person is a smoker and a binary outcome variable $$Y$$ that indicates whether that person developed lung cancer. 
Additionally, we have a binary variable $$Z$$ that indicates whether each person had a significant amount of tar in their lungs.</p> <p>The results from the (fictional) study are depicted in the table below:</p> <p>\begin{array}{c|c|c|c} \text{Smoker } (X) &amp; \text{Tar }(Z) &amp; \text{Group Size (% of population)} &amp; \text{Cancer Prevalence (% of group)} \<br /> \hline 0 &amp; 0 &amp; 47.5\% &amp; 10\%\<br /> 0 &amp; 1 &amp; 2.5\% &amp; 5\%\<br /> 1 &amp; 0 &amp; 2.5\% &amp; 90\%\<br /> 1 &amp; 1 &amp; 47.5\% &amp; 85\%\<br /> \end{array}</p> <p>At first glance, it seems that smoking is likely to cause cancer. Ignoring $$Z$$, both groups of $$X = 1$$ have a far larger prevalence of cancer than $$X = 0$$. Even considering $$Z$$, smokers with tar buildup are more likely to have cancer than nonsmokers with tar buildup, and smokers without tar buildup are still more likely to have cancer than nonsmokers without tar buildup.</p> <p>Indeed, simple calculations using Bayes’ rule verify $$P(Y = 1 \vert X =0) = .10$$ and $$P(Y = 1 \vert X = 1) = .85$$, indicating one is much more likely to have lung cancer if that person is also a smoker.</p> <p>However, this might be misleading. The Bayes’ rule calculation above corresponds to a <em>prediction</em> problem: What’s the probability someone has cancer if she’s a smoker? In real life, we may be more curious about the <em>causal</em> problem: What’s the probability that smoking will <em>cause</em> someone to have cancer? The distinction may seem like a subtle one but it’s important. It may be possible that lung cancer and smoking are correlated due to a common cause, but that smoking does not directly (or indirectly) cause lung cancer. Since we’re concerned with an intervention (i.e. 
choosing to smoke or not), we would like to estimate the causal effect of this intervention.</p> <p>This problem came up in a <a href="http://www.cs.columbia.edu/~blei/seminar/2017_applied_causality/index.html">causal inference class</a> I’m taking this semester, and our professor likes to say it’s easy to go down philosophical rabbit holes when defining causality. I’ll leave that to the experts (there are excellent textbooks by <a href="https://www.amazon.com/Causal-Inference-Statistics-Biomedical-Sciences/dp/0521885884">Guido Imbens and Don Rubin</a> along with <a href="https://www.amazon.com/Counterfactuals-Causal-Inference-Principles-Analytical/dp/0521671930">Stephen Morgan and Christopher Winship</a>).</p> <p>An intuitive approach for me is through the use of causal graphs. I won’t go over all the details, but the main idea is that every node in the graph represents a variable in the causal problem of interest, and the arrows between each node show the causal direction. Nodes can either be observed (shaded) or latent (unshaded).</p> <p>For example, in the smoking example, we would depict $$X$$, $$Y$$, and $$Z$$ with observed nodes. It’s fair to imagine that the decision to smoke will cause the amount of tar buildup in the lungs, and we can also assume that lung cancer is only caused by tar in the lungs. In this case, we would have an arrow from $$X$$ to $$Z$$ followed by another arrow from $$Z$$ to $$Y$$.</p> <p>This is unrealistic, however, as there are likely unknown, unobserved causes that <em>confound</em> these variables. For example, genetics can influence our decision to smoke, and it can also determine our predisposition to cancer. It wouldn’t be a stretch to assume that tar buildup is determined only by smoking. (These assumptions are definitely simplifying and unrealistic, but that’s beside the point for this example.) 
Accounting for this <em>confounder</em> illuminates the difficulties posed by the causal approach: people who are genetically inclined to smoke may also be more genetically likely to have cancer, correlating these two variables without a causal relationship.</p> <p>Denoting genetics as the latent variable $$U$$, the causal graph is depicted in subfigure (a) below:</p> <p><img src="/assets/images/causal_inference_lies_blog/observed_do_model.png" alt="Causal Graphs" /></p> <p>If we’re interested in the causal effect of $$X$$ on $$Y$$, we are thinking in terms of interventions; that is, $$X$$ would no longer depend on $$U$$ if someone is forced to smoke or to not smoke. Thus, Pearl introduces the $$do(\cdot)$$ operator, which imagines the causal graph under intervention. If $$do(X = 1)$$, we force $$X$$ to be 1, and imagine that $$X$$ is only caused by the “do-er” as opposed to any of its causal predecessors, since we can intervene. Thus, the causal effect of interest becomes $$P(Y = 1 \vert do(X = 1))$$ as opposed to $$P(Y = 1 \vert X = 1)$$. This scenario is depicted in subfigure (b) above.</p> <p>Because of the confounding variable $$U$$, the numbers at the beginning of this post do not accurately reflect the causal effect. There are several sets of criteria for calculating causal effects based off causal graphs, most notably the <a href="http://bayes.cs.ucla.edu/BOOK-2K/ch3-3.pdf">back-door and front-door criteria</a>. Using the front-door criterion (which I won’t elaborate on here but deserves its own post), we can see that $$Z$$ is an intermediate causal effect. That is, $$X$$ only affects $$Y$$ through $$Z$$.</p> <p>We can then calculate the effect of $$Z$$ on $$Y$$; however, there exists what’s called a <em>back-door path</em> from $$Z$$ to $$Y$$ through $$X$$. That is, if we just calculate the causal effect of $$Z$$ on $$Y$$, because of the confounder $$U$$, we would include spurious effects that are due to $$X$$. 
Therefore, we must <em>block</em> $$X$$ by accounting for it when calculating the causal effect. Chapter 3.3 of <a href="https://www.amazon.com/Causality-Reasoning-Inference-Judea-Pearl/dp/052189560X">Pearl’s textbook</a> goes through these derivations in more depth.</p> <p>Mathematically, then, we can calculate</p> $P(Y = 1 \vert do(X = x)) = \sum_{z=0}^1 P(Z = z \vert X = x) \sum_{x'=0}^{1} P(Y = 1 \vert X = x', Z = z)P(X = x').$ <p>The $$P(Z = z \vert X = x)$$ term accounts for the intermediate causal effect of $$X$$ on $$Z$$. The term in the sum estimates $$P(Y = 1 \vert do(Z = z))$$ by conditioning on $$X$$ to account for the final causal effect of $$Z$$ on $$Y$$. Using this formula with the same data (re-posted below), we can calculate $$P(Y = 1 \vert do(X = 1)) = 0.45$$ and $$P(Y = 1 \vert do(X = 0)) = 0.50$$, indicating that smoking would actually <em>decrease</em> the chance of lung cancer.</p> <p>Intuitively, what’s going on? It appears that smoking increases the amount of tar buildup in the lungs, which is easily verified in the table below, since $$P(Z = 1 \vert X = 1) = 0.95$$ and $$P(Z = 1 \vert X = 0) = 0.05$$. However, we can see that conditioning on $$X$$, tar buildup <em>decreases</em> your likelihood of getting lung cancer. That is, $$P(Y = 1 \vert X = 1, Z = 0) &gt; P(Y = 1 \vert X = 1, Z =1)$$ and $$P(Y = 1 \vert X = 0, Z = 0) &gt; P(Y = 1 \vert X = 0, Z = 1).$$ Thus, combining these results: smoking causes a larger amount of tar buildup in the lungs, and large tar buildups in the lungs prevent cancer.</p> <p>\begin{array}{c|c|c|c} \text{Smoker } (X) &amp; \text{Tar }(Z) &amp; \text{Group Size (% of population)} &amp; \text{Cancer Prevalence (% of group)} \<br /> \hline 0 &amp; 0 &amp; 47.5\% &amp; 10\%\<br /> 0 &amp; 1 &amp; 2.5\% &amp; 5\%\<br /> 1 &amp; 0 &amp; 2.5\% &amp; 90\%\<br /> 1 &amp; 1 &amp; 47.5\% &amp; 85\%\<br /> \end{array}</p> <p>I want to stress this data is fictional, and the arguments are simplistic. 
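Still, the front-door formula above is easy to check numerically. Here is a minimal sketch in Python (the post's own snippets are in R); the probabilities are read straight off the table, with the joint $$P(X, Z)$$ coming from the "Group Size" column:

```python
# Joint P(X = x, Z = z) from the "Group Size" column, and
# P(Y = 1 | X = x, Z = z) from the "Cancer Prevalence" column.
p_xz = {(0, 0): 0.475, (0, 1): 0.025, (1, 0): 0.025, (1, 1): 0.475}
p_y1_given_xz = {(0, 0): 0.10, (0, 1): 0.05, (1, 0): 0.90, (1, 1): 0.85}

p_x = {x: p_xz[(x, 0)] + p_xz[(x, 1)] for x in (0, 1)}  # marginal P(X = x)

def p_z_given_x(z, x):
    return p_xz[(x, z)] / p_x[x]

def p_y1_do_x(x):
    """Front-door adjustment: sum_z P(z|x) sum_x' P(Y=1|x',z) P(x')."""
    return sum(
        p_z_given_x(z, x)
        * sum(p_y1_given_xz[(xp, z)] * p_x[xp] for xp in (0, 1))
        for z in (0, 1)
    )

# Conditional (prediction) probabilities P(Y = 1 | X = x), for comparison.
p_y1_given_x = {
    x: sum(p_y1_given_xz[(x, z)] * p_z_given_x(z, x) for z in (0, 1))
    for x in (0, 1)
}
print(round(p_y1_given_x[1], 4), round(p_y1_given_x[0], 4))  # 0.8525 0.0975
print(round(p_y1_do_x(1), 4), round(p_y1_do_x(0), 4))        # 0.4525 0.4975
```

The conditional probabilities round to the 0.85 and 0.10 quoted earlier, while the interventional ones round to 0.45 and 0.50, reproducing the reversal described above.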
One could easily come up with another causal diagram to show that smoking increases the likelihood of cancer. However, I think this example illustrates the importance of being careful when performing causal inference analyses, along with the differences between causal inference and prediction problems.</p>keyonvafaTo paraphrase Benjamin Disraeli, statistics makes it easy to lie. In this post, I’ll go over an example from Judea Pearl’s excellent textbook, Causality, that shows how different statistical approaches can lead to different estimates of the causal effect of smoking on lung cancer.Tweet Counts as Poisson GLMs2017-02-10T17:00:00+00:002017-02-10T17:00:00+00:00https://keyonvafa.github.io/tweet-counts-poisson-glm<p><em>Last week, <a href="http://keyonvafa.com/tweet-counts-poisson-processes/">I wrote about modeling tweet counts as a simple Poisson process</a>. In this post, I’ll dive into a slightly more sophisticated method, so check out the previous post for some background.</em></p> <p>I’m interested in estimating the number of tweets President Trump will post in a given week so I can use the model to <a href="https://www.predictit.org/Market/2956/How-many-tweets-will-%40realDonaldTrump-post-from-noon-Feb-8-to-noon-Feb-15">bet on PredictIt</a>. <a href="http://keyonvafa.com/tweet-counts-poisson-processes/">My post last week</a> demonstrated that a stationary Poisson process had some weaknesses – the rate wasn’t constant everywhere, and Trump’s tweets seemed to self-excite (i.e. if he’s in the middle of a tweet storm, he’s likely to keep tweeting).</p> <p>In this post, I’ll focus on modeling tweet counts as a Poisson <em>generalized linear model</em> (GLM). 
(You probably won’t need to know much about GLMs to understand this post, but if you’re interested, the <a href="https://www.amazon.com/Generalized-Chapman-Monographs-Statistics-Probability/dp/0412317605">canonical text</a> is by <a href="https://galton.uchicago.edu/~pmcc/">Peter McCullagh</a> and <a href="https://en.wikipedia.org/wiki/John_Nelder">John Nelder</a>. I also highly recommend <a href="http://www.stat.ufl.edu/~aa/">Alan Agresti’s</a> <a href="https://www.amazon.com/Foundations-Linear-Generalized-Probability-Statistics/dp/1118730038">textbook</a>, which I used in his class.) The model will be autoregressive, as I will include the tweet counts for the previous few days among my set of predictors.</p> <p>First I’ll go over the results, so <a href="#model">jump ahead</a> if you’re interested in the more technical model details.</p> <h2 id="results">Results</h2> <p>In short, my model uses simulations to predict the weekly tweet count probabilities. That is, it simulates 5,000 possible versions of the week, and counts how many of these simulations are in each <a href="https://www.predictit.org/Market/2956/How-many-tweets-will-%40realDonaldTrump-post-from-noon-Feb-8-to-noon-Feb-15">PredictIt bucket</a>. It uses these counts to assign probabilities to each bucket.</p> <p>I ran the model last night and compared the results to the probabilities on PredictIt – all of my predictions were within three percentage points of those online, with the exception of one bucket that was eight points off (the “55 or more” bucket, which my model thought was less likely than the market). Running it again this morning, however, something was off – the odds in the market had shifted considerably toward preferring fewer tweets, at odds with my model.</p> <p>Confused, I read the comments, which indicated that seven tweets had been removed from Trump’s account this morning. 
However, the removed tweets were from a while ago, so I was confused why they would make a difference in this week’s count. Then I read the market rules:</p> <blockquote> <p><em>“The number of total tweets posted by the Twitter account realDonaldTrump shall exceed 34,455 by the number or range identified in the question…The number by which the total tweets at expiration exceeds 34,455 may not equal the number of tweets actually posted over that time period … [since] <strong>tweets may be deleted prior to expiration of this market</strong>.”</em></p> </blockquote> <p>D’oh. That didn’t seem like the smartest rule. It meant the number of weekly tweets could be negative if Trump deleted a whole bunch of tweets from before the week. There weren’t many options for modeling these purges with the data at hand. Therefore, I decided to assume that no more tweets would be deleted this week, and subtracted the 7 missing tweets from the simulation.</p> <p>I ran the model on Friday evening, with the following histogram depicting the distribution of simulated total weekly tweet counts:</p> <p><img src="/assets/images/tweet_counts_poisson_glm_blog/simulated_tweet_hist.png" alt="Simulated tweet histogram" /></p> <p>The following plot shows the simulated trajectories for the week, with 4 paths randomly colored for emphasis:</p> <p><img src="/assets/images/tweet_counts_poisson_glm_blog/simulated_tweet_paths.png" alt="Simulated tweet paths" /></p> <p>Finally, the following table shows my model probabilities, compared to those on PredictIt as of this writing:</p> <p>\begin{array}{c|cccc} \text{Number of tweets} &amp; \text{“Yes” Price} &amp; \text{Model “Yes” Probability} &amp; \text{“No” Price} &amp; \text{Model “No” Probability} \<br /> \hline\text{24 or fewer} &amp; $0.11 &amp; 1\% &amp;$0.90 &amp; 99\%\<br /> \text{25 - 29} &amp; $0.14 &amp; 7\% &amp;$0.88 &amp; 93\%\<br /> \text{30 - 34} &amp; $0.23 &amp; 24\% &amp;$0.79 &amp; 76\%\<br /> \text{35 - 39} &amp; $0.31 &amp; 35\% 
&amp;$0.73 &amp; 65\%\<br /> \text{40 - 44} &amp; $0.19 &amp; 23\% &amp;$0.84 &amp; 77\%\<br /> \text{45 - 49} &amp; $0.09 &amp; 9\% &amp;$0.93 &amp; 91\%\<br /> \text{50 - 54} &amp; $0.05 &amp; 2\% &amp;$0.96 &amp; 98\%\<br /> \text{55 or more} &amp; $0.04 &amp; 0.3\% &amp;$0.97 &amp; 99.7\%\<br /> \end{array}</p> <p>Thus, compared to my model, the market believes Trump will have a quiet week. This may reflect the possibility of Trump deleting more tweets, or it could be some market knowledge that Trump will be preoccupied by various presidential engagements.</p> <p>In general, however, the market prices align nicely with the model; no bucket (besides the first two) disagrees with the model probability by more than 4%. I think this is definitely a more robust model than the simple Poisson process, as the probabilities align quite well with the market. Thus, not expecting much in returns, I bought shares of “No” for “24 or fewer” and “25-29” and “Yes” for “35-39” and “40-44”.</p> <h2 id="model">Model</h2> <p>For this analysis, I thought it made sense to predict tweets as daily counts as opposed to weekly counts, so the predictions would be more fine-tuned. Thus, denote by $$y_t$$ the number of tweets made by Trump on day $$t$$. Given a vector of predictors $$\boldsymbol x_t$$ for day $$t$$ and a vector of (learned) coefficients $$\boldsymbol \beta$$, the model I used was</p> $y_t \sim \text{Pois}(\exp(\boldsymbol x_t^T \boldsymbol \beta)).$ <p>Note that because we are exponentiating $$\boldsymbol x_t^T \boldsymbol\beta$$, the rate parameter will never be negative, so there are no constraints on the sign of $$\boldsymbol \beta$$.</p> <p>To keep the model simple, I was fairly limited in my set of predictors. 
I included an intercept term, the day of the week, and binary variables indicating whether the day fell after Trump won the election and whether it fell after the inauguration (the graph from <a href="http://keyonvafa.com/tweet-counts-poisson-processes/">my previous post</a> indicates a significant changepoint after the election). I also included an indicator for whether there was a presidential or vice presidential debate that day – although these won’t happen again, they explain spikes in the existing data.</p> <p>It also seemed reasonable that the number of Trump’s tweets today would depend on how many he posted in the previous few days. Thus, as a first attempt, I included the past 5 days of history, and used the following model:</p> $y_t |\boldsymbol x_t,y_{t-1}, \dots, y_{t-5} \sim \text{Pois}\left(\exp\left(\boldsymbol\beta^T \boldsymbol x_t + \sum_{k=1}^5 \gamma_k y_{t-k} \right)\right).$ <p>Here, $$\boldsymbol x_t$$ is the vector of aforementioned predictors, i.e. intercept, day of week, etc. At time $$t$$, the scalars $$y_{t-1}, \dots, y_{t-5}$$ are the counts of the previous 5 days, and each has its own parameter to be estimated, $$\gamma_k$$. Thus, this model requires estimating $$\boldsymbol \beta$$ along with $$\gamma_1, \dots, \gamma_5$$.</p> <p>I used the built-in <code class="language-plaintext highlighter-rouge">glm</code> function in R to estimate these parameters by maximum likelihood. If you’re unfamiliar with maximum likelihood, the basic idea is that we maximize $$\sum_{t=1}^T \log p(y_t\vert x_t,y_{t-1}, \dots, y_{t-5})$$ by taking the gradient with respect to the parameters $$\boldsymbol \gamma$$ and $$\boldsymbol \beta$$ and using an iterative method to set the gradient to 0.
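<p>The original analysis fit this with R’s built-in <code class="language-plaintext highlighter-rouge">glm</code>. As a rough illustration of the same maximum-likelihood idea, here is a self-contained Python sketch that fits a small Poisson GLM by gradient descent on the negative log-likelihood. Everything below – the predictors, the coefficient values, and the data – is made up for the example, and it uses plain gradient descent rather than the iteratively reweighted least squares that <code class="language-plaintext highlighter-rouge">glm</code> uses:</p>

```python
import math
import random

def sample_poisson(rng, rate):
    # Knuth's algorithm; fine for the moderate daily rates here.
    threshold = math.exp(-rate)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def neg_log_lik(beta, X, y):
    # Poisson negative log-likelihood, dropping the constant log(y!) term:
    # sum_t [exp(eta_t) - y_t * eta_t], where eta_t = beta . x_t.
    total = 0.0
    for x_t, y_t in zip(X, y):
        eta = sum(b * v for b, v in zip(beta, x_t))
        total += math.exp(eta) - y_t * eta
    return total

def fit_poisson_glm(X, y, lr=0.1, steps=2500):
    # Maximum likelihood via gradient descent; the gradient of the
    # negative log-likelihood is sum_t (exp(eta_t) - y_t) * x_t.
    beta = [0.0] * len(X[0])
    n = len(X)
    for _ in range(steps):
        grad = [0.0] * len(beta)
        for x_t, y_t in zip(X, y):
            eta = sum(b * v for b, v in zip(beta, x_t))
            resid = math.exp(eta) - y_t
            for j, v in enumerate(x_t):
                grad[j] += resid * v
        beta = [b - lr * g / n for b, g in zip(beta, grad)]
    return beta

# Made-up data mimicking the structure described above: an intercept,
# a sparse "debate day" indicator, and a "post-election" indicator.
rng = random.Random(0)
true_beta = [2.0, 0.3, -0.44]
X, y = [], []
for t in range(300):
    x_t = [1.0, 1.0 if t % 30 == 0 else 0.0, 1.0 if t >= 150 else 0.0]
    rate = math.exp(sum(b * v for b, v in zip(true_beta, x_t)))
    X.append(x_t)
    y.append(sample_poisson(rng, rate))

beta_hat = fit_poisson_glm(X, y)
print([round(b, 2) for b in beta_hat])  # fitted coefficients, near true_beta
```

<p>The same convexity that makes this work is why R’s <code class="language-plaintext highlighter-rouge">glm</code> reliably converges: the Poisson log-likelihood with a log link is concave in the coefficients, so any iterative method that follows the gradient finds the global maximum.</p>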
(I’d like to get a blog post up someday about GLMs in general so I could focus on maximum likelihood estimation and discuss some other nice properties.)</p> <p>After fitting to the current data, I found that among the $$\boldsymbol\gamma$$, only $$\gamma_1$$ and $$\gamma_2$$ were deemed statistically significant (and even these estimates were quite small). Besides the intercept and debate indicator, the most statistically significant $$\boldsymbol\beta$$ coefficient was for the indicator of being after the election, at $$-0.44$$ (recall that these end up getting exponentiated). Thus, I re-ran the model using only the past two days of history (as opposed to five) in the autoregressive component. The following graph shows how the model mean fits the training data:</p> <p><img src="/assets/images/tweet_counts_poisson_glm_blog/trained_tweet_data.png" alt="Trained tweet data" /></p> <p>Not perfect, but reasonable given the basic set of predictors, and it appears to get the general trends right. Note that the four spikes correspond exactly to the debates.</p> <p>I was initially worried about overdispersion – recall that in a Poisson model, the variance of the output $$y_t$$ is equal to the mean, so if the variance in reality is larger than the mean, a Poisson would be a poor approximation. Thus, I also tried using a negative binomial to model the data, which performed worse in training log-likelihood and training error. As a result, I stuck with the original Poisson model.</p> <p>After estimating all the coefficients, it was time to model the probability of finishing in each <a href="https://www.predictit.org/Market/2956/How-many-tweets-will-%40realDonaldTrump-post-from-noon-Feb-8-to-noon-Feb-15">bucket on PredictIt</a>. Because the number of tweets in one day affects the number of tweets the next day, I couldn’t model these probabilities analytically.
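<p>One way around this is Monte Carlo: roll the fitted model forward day by day, feeding each simulated count back in as a lagged predictor, then tally which bucket each simulated weekly total lands in. The Python sketch below is illustrative rather than the post’s actual R code; the coefficients, predictors, and starting counts are invented:</p>

```python
import math
import random

def sample_poisson(rng, rate):
    # Knuth's algorithm; fine for moderate rates.
    threshold = math.exp(-rate)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def simulate_bucket_probs(beta, gamma, x_days, y_prev, n_sims=5000, seed=0):
    # Roll the AR(2) Poisson GLM forward over the remaining days and return
    # the fraction of simulated weekly totals in each PredictIt bucket:
    # <=24, 25-29, 30-34, 35-39, 40-44, 45-49, 50-54, 55+.
    edges = [24, 29, 34, 39, 44, 49, 54]
    counts = [0] * (len(edges) + 1)
    rng = random.Random(seed)
    for _ in range(n_sims):
        y2, y1 = y_prev  # observed counts for the two days before the window
        total = 0
        for x_t in x_days:
            eta = (sum(b * v for b, v in zip(beta, x_t))
                   + gamma[0] * y1 + gamma[1] * y2)
            y_t = sample_poisson(rng, math.exp(eta))
            total += y_t
            y2, y1 = y1, y_t
        bucket = 0
        while bucket < len(edges) and total > edges[bucket]:
            bucket += 1
        counts[bucket] += 1
    return [c / n_sims for c in counts]

# Invented values: predictors are [intercept, post-election indicator];
# gamma holds the two autoregressive coefficients.
beta = [1.7, -0.44]
gamma = [0.02, 0.01]
x_days = [[1.0, 1.0]] * 7  # 7 full days left in the market window
probs = simulate_bucket_probs(beta, gamma, x_days, y_prev=(5, 6))
```

<p>With the actual fitted coefficients and day-of-week predictors plugged in, each entry of <code class="language-plaintext highlighter-rouge">probs</code> plays the role of a “Model Probability” entry in the table above; this is the shape of the simulation described next.</p>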
Thus, I ran 5,000 simulations to approximate the probability of landing in each bucket by Wednesday at noon.</p> <p>One final note about the model – it predicts tweets for full-day intervals, i.e. noon Monday to noon Tuesday. However, what if it’s 8 pm on Sunday, and we’re curious how often Trump will tweet before Wednesday at noon? Predicting 2 more days would not be enough (ending Tuesday at 8 pm), and 3 would be too much (ending Wednesday at 8 pm). Thus, I ran an additional model that rounds to the next noon. That is, I duplicated the above model, except I used the number of tweets between the current time of day and the next noon as the response variable. For example, if I were running the program at 8 pm on Sunday, I would model how often Trump tweeted between 8 pm and the following day’s noon for every day in the history. Then, I would use this set of coefficients to predict the tweets between now and the next noon, and finish off all remaining full days with the coefficients from the aforementioned model. (If none of this paragraph makes sense, don’t worry about it, as it’s a pretty minor detail.)</p> <p>In the future, I’d be interested in more complicated variations, such as modeling tweet deletions or using a larger set of predictors (along with performing a more rigorous dispersion analysis).</p> <p>All code is available <a href="https://github.com/keyonvafa/tweet-count-poisson-blog">here</a>.</p> <h2 id="update">Update</h2> <p>I bought shares in four markets (two Yes’s and two No’s). The tweet count ended up in one of the Yes markets, good for a 25% return. That’s a great return, but it’s too early to say anything conclusive about the model because $$N = 1$$. That being said, I’ll continue to use the GLM because the results seem promising so far.</p> <h2 id="acknowledgments">Acknowledgments</h2> <p>Thanks to <a href="http://www.columbia.edu/~swl2133/">Scott Linderman</a> for suggesting an autoregressive GLM model.
Also thanks to <a href="https://medium.com/@Teddy__Kim">Teddy Kim</a> for various suggestions and brainstorming help. A final thank you to <a href="http://stat.columbia.edu/department-directory/name/owen-ward/">Owen Ward</a> for suggesting the connection between spikes and debates in the model.</p>keyonvafaLast week, I wrote about modeling tweet counts as a simple Poisson process. In this post, I’ll dive into a slightly more sophisticated method, so check out the previous post for some background.