Frequentist inference with Linear Models
Estimate ‘true’ slope and intercept
State confidence in our estimate
Evaluate probability of obtaining data or more extreme data given a hypothesis
Likelihood inference with Linear Models
Estimate ‘true’ slope and intercept
State confidence in our estimate
Evaluate likelihood of the data under one hypothesis versus an alternate hypothesis
Bayesian inference with Linear Models
Estimate probability of a parameter
State degree of belief in specific parameter values
Evaluate probability of hypothesis given the data
Incorporate prior knowledge
And so, because the joint probability p(a,b) can be factored either way…
\[\huge p(a)p(b|a) = p(b)p(a|b) \]
And thus…
\[\huge p(a|b) = \frac{p(b|a)p(a)}{p(b)} \]
\[\huge p(H|D) = \frac{p(D|H)p(H)}{p(D)} \]
where p(H|D) is your posterior probability of a hypothesis
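As a concrete sketch of these pieces, here is a minimal Python example (all numbers made up) turning priors p(H), likelihoods p(D|H), and the marginal p(D) into posteriors for two competing hypotheses:

```python
# Bayes' theorem for two competing hypotheses; all numbers are for illustration.
priors = {"H1": 0.5, "H2": 0.5}        # p(H): prior belief in each hypothesis
likelihoods = {"H1": 0.8, "H2": 0.2}   # p(D|H): probability of the data under each H

# p(D): total probability of the data, summed over hypotheses
p_data = sum(likelihoods[h] * priors[h] for h in priors)

# p(H|D): posterior probability of each hypothesis
posteriors = {h: likelihoods[h] * priors[h] / p_data for h in priors}
print(posteriors)  # {'H1': 0.8, 'H2': 0.2} with these numbers
```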
The probability that the parameter is 13 is 0.4
The probability that the parameter is 10 is 0.044
The probability that the parameter is between 12 and 13 is 0.3445473
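As a sketch of where numbers like these come from, suppose (purely for illustration) the posterior were approximately Normal with mean 13 and SD 1; the values above come from an actual fitted posterior, so they will not match exactly:

```python
from scipy.stats import norm

# Hypothetical posterior for the parameter: Normal(mean = 13, sd = 1)
posterior = norm(loc=13, scale=1)

print(posterior.pdf(13))                      # posterior density at 13 (~0.399)
print(posterior.pdf(10))                      # posterior density at 10 (~0.004)
print(posterior.cdf(13) - posterior.cdf(12))  # P(12 <= parameter <= 13) (~0.341)
```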
Area that contains 95% of the probability mass of the posterior distribution
In Bayesian analyses, the 95% Credible Interval is the region containing 95% of the probability mass of the posterior distribution: we believe, with probability 0.95, that the parameter lies within it. For approximately normal posteriors:
\[\hat{\beta} - 2\hat{SD} \le \beta \le \hat{\beta} + 2\hat{SD}\]
where \(\hat{SD}\) is the SD of the posterior distribution of the parameter \(\beta\). Note that for non-normal posteriors, the interval need not be symmetric about \(\hat{\beta}\).
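A minimal sketch of computing this interval from posterior samples (fake Normal draws stand in for real MCMC output): the normal approximation above, plus the quantile version that works for any posterior shape:

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for real posterior draws of beta (e.g., from an MCMC sampler)
beta_draws = rng.normal(loc=13, scale=1, size=10_000)

# Normal approximation: posterior mean +/- 2 posterior SDs
m, s = beta_draws.mean(), beta_draws.std()
print(m - 2 * s, m + 2 * s)

# Quantile-based 95% credible interval: works for any posterior shape
print(np.percentile(beta_draws, [2.5, 97.5]))
```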
Compare to the frequentist confidence interval:
\[\hat{\beta} - t(\alpha, df)SE_{\beta} \le \beta \le \hat{\beta} + t(\alpha, df)SE_{\beta}\]
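For contrast, a sketch of computing the frequentist interval, assuming a hypothetical estimate, standard error, and residual degrees of freedom:

```python
from scipy.stats import t

beta_hat, se_beta, df = 13.0, 0.5, 28  # hypothetical estimate, SE, and df
t_crit = t.ppf(1 - 0.05 / 2, df)       # two-tailed critical value at alpha = 0.05

print(beta_hat - t_crit * se_beta, beta_hat + t_crit * se_beta)
```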
\[\huge p(H|D) = \frac{p(D|H)p(H)}{p(D)} \]
where p(H|D) is your posterior probability of a hypothesis
This is why Bayes is different from Likelihood!
We know/assume:
p(Sun Explodes) = 0.0001, p(Yes \(|\) Sun Explodes) = 35/36
So, by the law of total probability…
p(Yes) = p(Yes \(|\) Sun Explodes)p(Sun Explodes) + p(Yes \(|\) Sun Doesn't Explode)p(Sun Doesn't Explode)
where the first term is 35/36 * 0.0001 \(\approx 9.7 \times 10^{-5}\)
credit: Amelia Hoover
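Putting the whole calculation together, a minimal Python sketch. Note that p(Yes \(|\) Sun Doesn't Explode) = 1/36 is an assumption taken from the xkcd detector setup (the machine lies when it rolls double sixes); it is not stated above:

```python
from fractions import Fraction

# Given above
p_explodes = Fraction(1, 10000)          # p(Sun Explodes) = 0.0001
p_yes_given_explodes = Fraction(35, 36)  # p(Yes | Sun Explodes)

# Assumed from the xkcd detector setup, not given above:
p_yes_given_not = Fraction(1, 36)        # p(Yes | Sun Doesn't Explode)

# Law of total probability for the denominator p(Yes)
p_yes = (p_yes_given_explodes * p_explodes
         + p_yes_given_not * (1 - p_explodes))

# Bayes' theorem: p(Sun Explodes | Yes)
posterior = p_yes_given_explodes * p_explodes / p_yes

print(float(p_yes))      # ~0.027872
print(float(posterior))  # ~0.003488
```

Even after the detector says Yes, the posterior probability that the sun exploded is only about 0.35%: the prior dominates, which is exactly why Bayes differs from pure likelihood here.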