Chapter5_LossFunctions/LossFunctions.ipynb: 1 addition & 1 deletion
@@ -73,7 +73,7 @@
 "- $ L( \\theta, \\hat{\\theta} ) = \\frac{ | \\theta - \\hat{\\theta} | }{ \\theta(1-\\theta) }, \\; \\; \\hat{\\theta}, \\theta \\in [0,1] $ emphasizes an estimate closer to 0 or 1 since if the true value $\\theta$ is near 0 or 1, the loss will be *very* large unless $\\hat{\\theta}$ is similarly close to 0 or 1. \n",
 "This loss function might be used by a political pundit whose job requires him or her to give confident \"Yes/No\" answers. This loss reflects that if the true parameter is close to 1 (for example, if a political outcome is very likely to occur), he or she would want to strongly agree so as not to look like a skeptic. \n",
 "\n",
-"- $L( \\theta, \\hat{\\theta} ) = 1 - \\exp \\left( (\\theta - \\hat{\\theta} )^2 \\right) $ is bounded between 0 and 1 and reflects that the user is indifferent to sufficiently-far-away estimates. It is similar to the zero-one loss above, but not quite as penalizing to estimates that are close to the true parameter. \n",
+"- $L( \\theta, \\hat{\\theta} ) = 1 - \\exp \\left( -(\\theta - \\hat{\\theta} )^2 \\right) $ is bounded between 0 and 1 and reflects that the user is indifferent to sufficiently-far-away estimates. It is similar to the zero-one loss above, but not quite as penalizing to estimates that are close to the true parameter. \n",
 "- Complicated non-linear loss functions can be programmed: \n",