Human perception tends to split probabilities into above 50% and below. For most probabilistic models this is not how the output behaves. The resulting probabilities are frequently neither distributed symmetrically between 0 and 1 around a mean of 0.5 nor correct in terms of absolute values. This often depends on the dataset and is frequently tied to the presence of a minority class.
For example, if a probabilistic model estimates a 40% probability of having an accident given a blood alcohol level of 0.5‰, that does not necessarily mean you should predict this case as "no accident".
Examining the probability distribution, you may notice that the probabilities are skewed toward zero. This is not necessarily wrong, but you can easily check whether it is better to adjust your cutoff criterion by lowering it, or raising it. A ROC curve helps here as well.
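One simple way to run such a check is to sweep a few candidate cutoffs and compare accuracy. The sketch below uses made-up labels and probabilities (a minority class whose scores are skewed toward zero, loosely mirroring the accident example); the data and the candidate cutoffs are assumptions for illustration, not from the text.

```python
import random

random.seed(42)

# Hypothetical data standing in for model output: ~30% positives,
# with predicted probabilities skewed toward zero.
y_true = [random.random() < 0.3 for _ in range(1000)]
y_prob = [min(1.0, max(0.0, (0.45 if y else 0.20) + random.gauss(0, 0.15)))
          for y in y_true]

def accuracy(cutoff):
    """Fraction of correct predictions when classifying p >= cutoff as positive."""
    return sum((p >= cutoff) == y for p, y in zip(y_prob, y_true)) / len(y_true)

for c in (0.3, 0.4, 0.5):
    print(f"cutoff {c:.1f}: accuracy {accuracy(c):.3f}")
```

On data shaped like this, a cutoff below 0.5 typically scores better, which is exactly the situation the rest of this section addresses.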
If you have doubts about the shape of the probabilities, you can adjust the whole distribution as follows:
Suppose you found that the cutoff should be at 40% instead of 50%. Then you know three things:
- p = 0 should remain 0
- p = 1 should remain 1
- p = 40% should be 50%
A power function of the form p^x satisfies the first two requirements for any positive exponent (for a cutoff below 0.5 the exponent is less than 1, i.e. a root function). The third requirement determines x. The rest is simple mathematics.
0.5 = 0.4^x
x = log(0.5) / log(0.4) ≈ 0.756
In general: x = log(0.5) / log(cutoff)
With this exponent you can adjust the whole set of predicted probabilities by raising each one to the power x. At least to some extent, this removes the skew of the probability distribution.
Give it a try: in many cases this works better than simply lowering the cutoff criterion.
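As a sanity check, here is the transform applied to a handful of made-up probabilities (the values are assumptions for illustration). Because p^x is monotone and maps the cutoff to 0.5, the adjusted values cross 0.5 exactly where the raw values cross the old cutoff, so the hard decisions stay consistent while the distribution itself is reshaped:

```python
import math

cutoff = 0.4
x = math.log(0.5) / math.log(cutoff)

# Made-up model outputs for illustration.
probs = [0.05, 0.20, 0.39, 0.41, 0.60, 0.95]
adjusted = [p ** x for p in probs]

for p, a in zip(probs, adjusted):
    print(f"raw {p:.2f} -> adjusted {a:.3f}")
```

Note that values just below the old cutoff land just below 0.5, and values just above it land just above 0.5.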