From f9aa102b47d97860f0a109425ea276f08c955e89 Mon Sep 17 00:00:00 2001
From: Srihari Thyagarajan
Date: Fri, 16 Aug 2024 08:50:15 +0530
Subject: [PATCH] Update README

---
 easy/Log_Softmax/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/easy/Log_Softmax/README.md b/easy/Log_Softmax/README.md
index 5f590e0..e00a759 100644
--- a/easy/Log_Softmax/README.md
+++ b/easy/Log_Softmax/README.md
@@ -35,7 +35,7 @@ $$\text{softmax}(x_i) = \frac{e^{x_i}}{\sum_{j=1}^n e^{x_j}}$$
 However, directly applying the logarithm to the softmax function can lead to numerical instability, especially when dealing with large numbers. To prevent this, we use the log-softmax function, which incorporates a shift by subtracting the maximum value from the input vector:
 
 $$
-\text{log-softmax}(x_i) = x_i - \max(x) - \log\left(\sum_{j=1}^n e^{x_j - \max(x)}\right)
+\text{log\_softmax}(x_i) = x_i - \max(x) - \log\left(\sum_{j=1}^n e^{x_j - \max(x)}\right)
 $$
 
 This formulation helps to avoid overflow issues that can occur when exponentiating large numbers. The log-softmax function is particularly useful in machine learning for calculating probabilities in a stable manner, especially when used with cross-entropy loss functions.
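
For reference, below is a minimal sketch of the shifted log-softmax formula this patch touches. It assumes NumPy, and the `log_softmax` function name is illustrative only, not part of the repository.

```python
import numpy as np

def log_softmax(x):
    """Log-softmax with the max-subtraction shift for numerical stability."""
    x = np.asarray(x, dtype=float)
    shifted = x - np.max(x)  # subtract max(x) so exp() never sees large positive values
    # x_i - max(x) - log(sum_j exp(x_j - max(x)))
    return shifted - np.log(np.sum(np.exp(shifted)))

# Example: large inputs that would overflow a naive log(softmax(x))
print(log_softmax([1000.0, 1001.0, 1002.0]))
```

With the shift, `np.exp(shifted)` stays in [0, 1], so the sum never overflows even for inputs like 1000, and `np.exp(log_softmax(x))` still sums to 1.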