From 03c14dae66af70883bc1dd0b10a83509daa76ff6 Mon Sep 17 00:00:00 2001
From: Nicolas GUILLARD <151741326+nicolasguillard@users.noreply.github.com>
Date: Fri, 13 Sep 2024 22:55:35 +0200
Subject: [PATCH] BugFix #402: Update 06.5-example-based-influence-fct.Rmd

The LaTeX code of the equation for the minimizer of the loss with the
upweighted point z in section 10.5.2 was wrong: the empirical risk term
carried a spurious $(1-\epsilon)$ factor. Remove it so the equation
matches the pure upweighting described in the surrounding text.
---
 manuscript/06.5-example-based-influence-fct.Rmd | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/manuscript/06.5-example-based-influence-fct.Rmd b/manuscript/06.5-example-based-influence-fct.Rmd
index cb154bc9d..a7d139dd5 100644
--- a/manuscript/06.5-example-based-influence-fct.Rmd
+++ b/manuscript/06.5-example-based-influence-fct.Rmd
@@ -340,7 +340,7 @@ The following section explains the intuition and math behind influence functions
 
 The key idea behind influence functions is to upweight the loss of a training instance by an infinitesimally small step $\epsilon$, which results in new model parameters:
 
-$$\hat{\theta}_{\epsilon,z}=\arg\min_{\theta{}\in\Theta}(1-\epsilon)\frac{1}{n}\sum_{i=1}^n{}L(z_i,\theta)+\epsilon{}L(z,\theta)$$
+$$\hat{\theta}_{\epsilon,z}=\arg\min_{\theta{}\in\Theta}\frac{1}{n}\sum_{i=1}^n{}L(z_i,\theta)+\epsilon{}L(z,\theta)$$
 
 where $\theta$ is the model parameter vector and $\hat{\theta}_{\epsilon,z}$ is the parameter vector after upweighting z by a very small number $\epsilon$. L is the loss function with which the model was trained, $z_i$ is the training data and z is the training instance which we want to upweight to simulate its removal.
 
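
Reviewer note, not part of the patch: a brief sketch of the derivation the corrected equation feeds into, assuming the setup of Koh and Liang (2017, "Understanding Black-box Predictions via Influence Functions"), which this formulation follows. Starting from the upweighted minimizer

$$\hat{\theta}_{\epsilon,z}=\arg\min_{\theta\in\Theta}\frac{1}{n}\sum_{i=1}^n L(z_i,\theta)+\epsilon\,L(z,\theta)$$

differentiating the first-order optimality condition at $\epsilon=0$ gives the classical influence of upweighting z on the parameters:

$$\left.\frac{d\hat{\theta}_{\epsilon,z}}{d\epsilon}\right|_{\epsilon=0}=-H_{\hat{\theta}}^{-1}\,\nabla_\theta L(z,\hat{\theta}),\qquad H_{\hat{\theta}}=\frac{1}{n}\sum_{i=1}^n\nabla^2_\theta L(z_i,\hat{\theta})$$

where $\hat{\theta}$ is the minimizer of the unperturbed empirical risk and the Hessian $H_{\hat{\theta}}$ is assumed positive definite.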