4 Bayesian Statistics
Bayesian estimation is an approach for incorporating prior knowledge into the estimation of unknown parameters.
The Bayesian philosophy represents a fundamental shift from the Frequentist philosophy that we’ve worked with so far in this class, and that you’ve seen in your other stats classes. An important distinction between the two philosophies lies in how they treat unknown parameters. In particular:
- Frequentists believe that parameters are unknown but fixed constants
- Bayesians believe that parameters are random variables (so they have a probability distribution); see the sketch of Bayes’ Theorem after this list
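To make the Bayesian view concrete, here is a minimal statement of the updating rule. The notation below, with \(\pi(\theta)\) for the prior and \(f(x \mid \theta)\) for the likelihood, is one common convention and may differ slightly from the textbook’s. Given observed data \(x\), Bayes’ Theorem combines the prior with the likelihood to produce the posterior distribution:
\[
\pi(\theta \mid x) \;=\; \frac{f(x \mid \theta)\,\pi(\theta)}{\int f(x \mid t)\,\pi(t)\,dt} \;\propto\; f(x \mid \theta)\,\pi(\theta),
\]
that is, the posterior is proportional to the likelihood times the prior. The denominator is the marginal pdf of the data and does not depend on \(\theta\), which is why it can be dropped when identifying the posterior up to a normalizing constant.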
Learning Goals
- Understand the difference between the Frequentist and Bayesian philosophies
- Derive the posterior distribution for an unknown parameter based on a specified prior and likelihood
- Derive the Bayes estimator for a given loss function
- Evaluate the properties of Bayes estimators
- Understand the impact of the choice of prior on Bayesian estimation
4.1 Prior and Posterior Distributions
Textbook Reading Guide
Read: first half of Section 5.8 (pages 329–337)
Definitions and Theorems:
- Bayes’ Theorem
- prior distribution
- posterior distribution
- noninformative prior
- conjugate prior
- marginal pdf
Questions:
- What is the difference between the Bayesian and Frequentist philosophies?
- What are the typical steps to deriving a posterior distribution? (A worked sketch follows this list.)
- How is the posterior distribution impacted by the observed data and our choice of prior? What sorts of considerations should we keep in mind in choosing a prior?
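As a sketch of the typical steps, consider the standard Beta–Binomial pairing (used here as an illustration; it is not necessarily the textbook’s example). Suppose \(X \mid \theta \sim \text{Binomial}(n, \theta)\) and the prior is \(\theta \sim \text{Beta}(\alpha, \beta)\). Multiplying likelihood and prior and keeping only the factors involving \(\theta\):
\[
\pi(\theta \mid x) \;\propto\; f(x \mid \theta)\,\pi(\theta) \;\propto\; \theta^{x}(1-\theta)^{n-x}\,\theta^{\alpha-1}(1-\theta)^{\beta-1} \;=\; \theta^{\alpha + x - 1}(1-\theta)^{\beta + n - x - 1},
\]
which is the kernel of a \(\text{Beta}(\alpha + x,\ \beta + n - x)\) distribution. Because the posterior is in the same family as the prior, the Beta prior is conjugate to the binomial likelihood; recognizing a known kernel and reading off its parameters is the usual final step.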
Corresponding Videos
- Bayesian vs Frequentist Philosophy
- Prior and Posterior Distributions Intro
- Prior and Posterior Distributions Example
4.2 Bayes Estimators
Textbook Reading Guide
Read: second half of Section 5.8 (pages 337–341)
Definitions:
- posterior mode
- posterior median
- posterior mean
- loss function
- risk
- Bayes estimate
Questions:
- Once we have a posterior distribution for \(\theta\), how can we provide an estimate for \(\theta\)?
- What are some examples of commonly-used loss functions?
- What are the typical steps to finding a Bayes estimate?
- What are the Bayes estimates for absolute error loss and squared error loss? (See the sketch after this list.)
- How are the Bayes and maximum likelihood estimators typically related?
- What criteria do we consider in choosing an “optimal” estimator under the Bayesian paradigm? How do Bayes estimators stand up against Frequentist optimality criteria (e.g., bias, asymptotic bias, consistency)?
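As a sketch of how a loss function leads to a Bayes estimate (continuing the Beta–Binomial illustration from Section 4.1, which is an assumed running example rather than the textbook’s): under squared error loss \(L(\theta, a) = (\theta - a)^2\), the Bayes estimate minimizes the posterior risk
\[
E\!\left[(\theta - a)^2 \mid x\right] \;=\; E[\theta^2 \mid x] - 2a\,E[\theta \mid x] + a^2,
\]
and setting the derivative with respect to \(a\) equal to zero gives \(a = E[\theta \mid x]\), the posterior mean. For the \(\text{Beta}(\alpha + x,\ \beta + n - x)\) posterior this is
\[
\hat{\theta}_{\text{Bayes}} \;=\; \frac{\alpha + x}{\alpha + \beta + n} \;=\; \frac{\alpha + \beta}{\alpha + \beta + n}\cdot\frac{\alpha}{\alpha + \beta} \;+\; \frac{n}{\alpha + \beta + n}\cdot\frac{x}{n},
\]
a weighted average of the prior mean and the maximum likelihood estimate \(x/n\), which illustrates the typical relationship between Bayes and maximum likelihood estimators: as \(n\) grows, the data dominate the prior. Under absolute error loss \(L(\theta, a) = |\theta - a|\), the minimizer of the posterior risk is instead the posterior median.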
Corresponding Videos
- Bayesian Estimation Intro
- Bayesian Estimation Proofs
- Properties of Bayes Estimators