# 2059: Modified Bayes' Theorem


Bayes' theorem is:

<math>P(H \mid X) = \frac{P(X \mid H) \, P(H)}{P(X)}</math>,

where

*<math>P(H \mid X)</math> is the probability that <math>H</math>, the hypothesis, is true given observation <math>X</math>. This is called the ''posterior probability''.

*<math>P(X \mid H)</math> is the probability that observation <math>X</math> will appear given the truth of hypothesis <math>H</math>. This term is often called the ''likelihood''.

*<math>P(H)</math> is the probability that hypothesis <math>H</math> is true before any observations. This is called the ''prior'', or ''belief''.

*<math>P(X)</math> is the probability of the observation <math>X</math> regardless of which hypothesis might have produced it. This term is called the ''marginal likelihood''.
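As a hypothetical worked example (the numbers are invented for illustration, not taken from the comic): suppose a disease affects 1% of a population (<math>P(H) = 0.01</math>), a test detects it 90% of the time (<math>P(X \mid H) = 0.9</math>), and the test also comes back positive for 5% of healthy people. The marginal likelihood of a positive result is <math>P(X) = 0.9 \times 0.01 + 0.05 \times 0.99 = 0.0585</math>, so the posterior is <math>P(H \mid X) = (0.9 \times 0.01) / 0.0585 \approx 0.154</math>. Even after a positive test, the hypothesis is probably false, because the prior was so low.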

The purpose of Bayesian inference is to discover something we want to know (how likely is it that our explanation is correct given the evidence we've seen) by mathematically expressing it in terms of things we can find out: how likely are our observations, how likely is our hypothesis ''a priori'', and how likely are we to see the observations we've seen assuming our hypothesis is true. A Bayesian learning system will iterate over available observations, each time using the likelihood of new observations to update its priors (beliefs) with the hope that, after seeing enough data points, the prior and posterior will converge to a single model.
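A minimal sketch of that update loop, assuming an invented coin-flip setting (the grid of hypotheses, the data, and all names here are illustrative, not from the comic or this article):

```python
import numpy as np

# Hypothetical example: infer a coin's bias from observed flips.
# Each hypothesis H is a candidate bias value on a discrete grid.
hypotheses = np.linspace(0.0, 1.0, 101)                   # candidate values of P(heads)
prior = np.full(len(hypotheses), 1.0 / len(hypotheses))   # uniform prior P(H)

observations = [1, 0, 1, 1, 1, 0, 1]  # 1 = heads, 0 = tails (made-up data)

for x in observations:
    # Likelihood P(X | H) of this single flip under each hypothesis
    likelihood = hypotheses if x == 1 else (1.0 - hypotheses)
    unnormalized = likelihood * prior
    # Marginal likelihood P(X): sum over all hypotheses
    marginal = unnormalized.sum()
    posterior = unnormalized / marginal   # Bayes' theorem
    prior = posterior                     # this posterior is the next step's prior

print("Most probable bias:", hypotheses[np.argmax(prior)])
```

After each flip, the posterior over the grid of hypotheses becomes the prior for the next flip; with enough data the distribution concentrates around the coin's true bias, which is the convergence the paragraph above describes.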