2059: Modified Bayes' Theorem

Bayes' theorem is:

:<math>P(H \mid X) = \frac{P(X \mid H) \, P(H)}{P(X)}</math>,

where

*<math>P(H \mid X)</math> is the probability that <math>H</math>, the hypothesis, is true given observation <math>X</math>. This is called the ''posterior probability''.
*<math>P(X \mid H)</math> is the probability that observation <math>X</math> will appear given the truth of hypothesis <math>H</math>. This term is often called the ''likelihood''.
*<math>P(H)</math> is the probability that hypothesis <math>H</math> is true before any observations. This is called the ''prior'', or ''belief''.
*<math>P(X)</math> is the probability of the observation <math>X</math> regardless of which hypothesis might have produced it. This term is called the ''marginal likelihood''.
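
To make the terms concrete, here is a minimal numerical sketch; the prior, likelihood, and false-positive rate are made-up illustrative values (not anything from the comic), chosen only to show how the pieces of the formula combine into a posterior:

<pre>
# Illustrative only: these probabilities are made-up values, not real data.

p_h = 0.01                # P(H): prior probability that the hypothesis is true
p_x_given_h = 0.95        # P(X | H): likelihood of the observation if H is true
p_x_given_not_h = 0.10    # P(X | not H): likelihood of the observation if H is false

# P(X): marginal likelihood, summing over both ways the observation can occur
p_x = p_x_given_h * p_h + p_x_given_not_h * (1 - p_h)

# Bayes' theorem: P(H | X) = P(X | H) * P(H) / P(X)
p_h_given_x = p_x_given_h * p_h / p_x

print(f"P(H | X) = {p_h_given_x:.3f}")   # about 0.088 with these numbers
</pre>

Even with a very reliable observation (95% likelihood under <math>H</math>), the small prior keeps the posterior modest, which is exactly the trade-off the formula expresses.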
  
 
The purpose of Bayesian inference is to discover something we want to know (how likely is it that our explanation is correct given the evidence we've seen) by mathematically expressing it in terms of things we can find out: how likely are our observations, how likely is our hypothesis ''a priori'', and how likely are we to see the observations we've seen assuming our hypothesis is true. A Bayesian learning system will iterate over available observations, each time using the likelihood of new observations to update its priors (beliefs) with the hope that, after seeing enough data points, the prior and posterior will converge to a single model.
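
As a rough illustration of that update loop, the following sketch assumes a simple coin-flip setup with three candidate hypotheses for the coin's bias; the observation sequence is made up, and the posterior after each flip becomes the prior for the next:

<pre>
# A sketch of iterative Bayesian updating (assumed coin-flip example):
# each hypothesis is a candidate bias of the coin, and the posterior after
# one observation is used as the prior for the next.

hypotheses = [0.25, 0.5, 0.75]                         # candidate values for P(heads)
prior = {h: 1 / len(hypotheses) for h in hypotheses}   # uniform prior belief

observations = ["H", "H", "T", "H", "H"]               # made-up data

for obs in observations:
    # likelihood of this observation under each hypothesis
    likelihood = {h: (h if obs == "H" else 1 - h) for h in hypotheses}
    # marginal likelihood P(X): weighted average over all hypotheses
    marginal = sum(likelihood[h] * prior[h] for h in hypotheses)
    # posterior via Bayes' theorem, which becomes the new prior
    prior = {h: likelihood[h] * prior[h] / marginal for h in hypotheses}

print(prior)   # belief concentrates on the hypothesis most consistent with the data
</pre>

After a handful of observations, the belief mass shifts toward whichever bias best explains the flips seen so far, which is the convergence of prior and posterior described above.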
 
