Fixed: #12233 #12249

kl_loss won't return -inf if y_true is 0 Issue Fixed
Ved Prakash Vishwakarma authored and committed Oct 23, 2024
commit d400b6339c92d7df49dd7305399737cb7fdccc27
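
For context, the failure the commit title describes is easy to reproduce. When an entry of y_true is 0, np.log(y_true / y_pred) yields -inf at that position, and multiplying it by the 0 in y_true turns it into nan, so the summed loss is unusable either way. A minimal illustration (the arrays here are made up for demonstration):

    import numpy as np

    y_true = np.array([0.0, 1.0])
    y_pred = np.array([0.5, 0.5])

    # log(0 / 0.5) is -inf; 0 * -inf is nan, so the first term poisons the sum.
    terms = y_true * np.log(y_true / y_pred)
    print(terms)          # [nan 0.69314718]
    print(np.sum(terms))  # nan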
machine_learning/loss_functions.py: 4 additions & 1 deletion
@@ -659,7 +659,10 @@ def kullback_leibler_divergence(y_true: np.ndarray, y_pred: np.ndarray) -> float:
     if len(y_true) != len(y_pred):
         raise ValueError("Input arrays must have the same length.")

-    kl_loss = y_true * np.log(y_true / y_pred)
+    kl_loss = 0
+    if y_true != 0:
+        kl_loss = y_true * np.log(y_true / y_pred)

     return np.sum(kl_loss)

Review comment on the added line "if y_true != 0:":

Shouldn't you check that the length of the y_true array is greater than 0?
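
As committed, the guard "if y_true != 0:" compares a NumPy array against a scalar, which produces a boolean array; using that array in an if statement raises "The truth value of an array with more than one element is ambiguous" whenever y_true has more than one element. A minimal sketch of one way to apply the zero-handling elementwise instead, using a boolean mask (an illustrative alternative, not the code from this PR):

    import numpy as np

    def kullback_leibler_divergence(y_true: np.ndarray, y_pred: np.ndarray) -> float:
        if len(y_true) != len(y_pred):
            raise ValueError("Input arrays must have the same length.")

        # By the usual convention 0 * log(0 / q) = 0, entries where y_true
        # is 0 contribute nothing, so mask them out before taking the log.
        mask = y_true != 0
        kl_loss = y_true[mask] * np.log(y_true[mask] / y_pred[mask])
        return float(np.sum(kl_loss))

For example, with y_true = np.array([0.0, 0.3, 0.7]) and y_pred = np.array([0.2, 0.3, 0.5]), this returns roughly 0.2355 (only the third entry contributes, 0.7 * log(0.7 / 0.5)), whereas the unmasked expression sums to nan.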

