
Commit a9c0f29

Likelihood should actually be between 0 and 1 as claimed
1 parent 3223b6c commit a9c0f29

1 file changed, 1 insertion(+), 1 deletion(-)


Chapter5_LossFunctions/LossFunctions.ipynb

Lines changed: 1 addition & 1 deletion
@@ -73,7 +73,7 @@
 "- $ L( \\theta, \\hat{\\theta} ) = \\frac{ | \\theta - \\hat{\\theta} | }{ \\theta(1-\\theta) }, \\; \\; \\hat{\\theta}, \\theta \\in [0,1] $ emphasizes an estimate closer to 0 or 1 since if the true value $\\theta$ is near 0 or 1, the loss will be *very* large unless $\\hat{\\theta}$ is similarly close to 0 or 1. \n",
 "This loss function might be used by a political pundit whose job requires him or her to give confident \"Yes/No\" answers. This loss reflects that if the true parameter is close to 1 (for example, if a political outcome is very likely to occur), he or she would want to strongly agree so as not to look like a skeptic. \n",
 "\n",
-"- $L( \\theta, \\hat{\\theta} ) = 1 - \\exp \\left( (\\theta - \\hat{\\theta} )^2 \\right) $ is bounded between 0 and 1 and reflects that the user is indifferent to sufficiently-far-away estimates. It is similar to the zero-one loss above, but not quite as penalizing to estimates that are close to the true parameter. \n",
+"- $L( \\theta, \\hat{\\theta} ) = 1 - \\exp \\left( -(\\theta - \\hat{\\theta} )^2 \\right) $ is bounded between 0 and 1 and reflects that the user is indifferent to sufficiently-far-away estimates. It is similar to the zero-one loss above, but not quite as penalizing to estimates that are close to the true parameter. \n",
 "\n",
 "- Complicated non-linear loss functions can be programmed: \n",
 "\n",
 "    def loss(true_value, estimate):\n",
