2295: Garbage Math
==Explanation==
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}
This comic explains the "{{w|garbage in, garbage out}}" concept using arithmetical expressions. As the comic says, if garbage enters any part of your workflow, you get garbage as a result, except when you multiply by zero: that one always fixes everything.

Some of these rules correspond to the rules of {{w|floating point arithmetic}}, while others may be inspired by the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a "garbage" number corresponds to an estimate with a high degree of uncertainty, and the uncertainty of the result of an arithmetic operation tends to be dominated by the term with the highest uncertainty. The rule about N pieces of statistically independent garbage reflects the {{w|central limit theorem}}, which predicts that the uncertainty (or {{w|standard error}}) of an estimate shrinks when independent estimates are averaged.

This comic is about the propagation of errors in numerical analysis and statistics, described in much more colloquial terms: numbers with low precision are termed "garbage" and numbers with high precision are labeled "precise".
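The averaging rule mentioned above can be demonstrated numerically. The following is a minimal Monte Carlo sketch (an illustration, not from the comic; the true value, spread, and sample count are arbitrary choices):

```python
import random
import statistics

# Sketch: averaging N independent "garbage" estimates of a true value
# shrinks the spread by roughly a factor of sqrt(N) (central limit theorem).
random.seed(0)
TRUE_VALUE = 10.0   # arbitrary quantity being estimated
SIGMA = 2.0         # standard deviation of one garbage estimate
N = 25              # pieces of independent garbage per average
TRIALS = 20000      # number of simulated experiments

# Spread of single garbage estimates.
single = [random.gauss(TRUE_VALUE, SIGMA) for _ in range(TRIALS)]

# Spread of averages of N independent garbage estimates.
averaged = [statistics.fmean(random.gauss(TRUE_VALUE, SIGMA) for _ in range(N))
            for _ in range(TRIALS)]

sd_single = statistics.stdev(single)    # close to SIGMA
sd_avg = statistics.stdev(averaged)     # close to SIGMA / sqrt(N)
print(sd_single, sd_avg)
```

The averaged estimates come out roughly five times tighter than the single ones, matching the 1/√N prediction for N = 25.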
{| class="wikitable"
!Formula as shown
!Resulting standard deviation
!Explanation
|-
|Precise number + Precise number = Slightly less precise number
|<math>\sigma(X+Y)=\sqrt{\sigma(X)^2+\sigma(Y)^2}</math>
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10<sup>-6</sup>) and 1 (±10<sup>-6</sup>), then our sum is 2 (±2·10<sup>-6</sup>). It is possible to lose a lot of relative precision if the resulting sum is close to zero, as happens when a number is added to something close to its negative; this phenomenon is known as catastrophic cancellation. The comic presumably assumes that all numbers involved are positive, in which case this phenomenon does not occur.
|-
|Precise number × Precise number = Slightly less precise number
|<math>\sigma(X\times Y)=\sqrt{\sigma(X)^2\times Y^2+\sigma(Y)^2\times X^2}</math>
|Here, instead of absolute errors, relative errors are added. For example, if our precise numbers are 1 (±10<sup>-6</sup>) and 1 (±10<sup>-6</sup>), then our product is 1 (±2·10<sup>-6</sup>).
|-
|-
|Precise number × Garbage = Garbage
|<math>\sigma(X\times Y)=\sqrt{\sigma(X)^2\times Y^2+\sigma(Y)^2\times X^2}</math>
|Likewise, if one of the numbers has a high relative error, then this error propagates to the product, independently of the sizes of the numbers.
|-
|√<span style="border-top:1px solid; padding:0 0.1em;">Garbage</span> = Less bad garbage
|<math>\sigma(\sqrt X)=\frac{\sigma(X)}{2\sqrt X}</math>
|When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.
|-
|Garbage<sup>2</sup> = Worse garbage
|<math>\sigma(X^2)=2\times X\times\sigma(X)</math>
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.
|-
|<math>\frac{1}{N}\sum(</math>N pieces of statistically independent garbage<math>)</math> = Better garbage
|<math>\sigma_{\bar{x}} = \frac{\sigma_x}{\sqrt{N}}</math>
|By aggregating many statistically independent observations (for instance, surveying many individuals), it is possible to reduce the relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.
|-
|Precise number<sup>Garbage</sup> = Much worse garbage
|<math>\sigma(b^X)=b^X\times\ln b\times\sigma(X)</math>
|The exponent is very sensitive to changes, and the magnitude of the precise base can further magnify the effect.
|-
|-
|<math>\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}</math> = Much worse garbage, possible division by zero
|<math>\sigma\left(\frac{a}{X-Y}\right)=\frac{|a|}{(X-Y)^2}\times\sqrt{\sigma(X)^2+\sigma(Y)^2}</math>
|Indeed, as above, if the error bars of the two garbage terms overlap, the denominator can be near zero, and we might end up dividing by zero.
|-
|}
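The first-order propagation formulas in the table can be spot-checked by simulation. Below is a rough Python sketch (an illustration, with arbitrarily chosen example means and standard deviations) comparing the predicted standard deviations for a sum and a product against sampled values:

```python
import math
import random
import statistics

# Spot-check two rules from the table against Monte Carlo sampling.
random.seed(1)
TRIALS = 100000
mx, my = 3.0, 5.0    # example means of X and Y
sx, sy = 0.01, 0.02  # example absolute standard deviations of X and Y

xs = [random.gauss(mx, sx) for _ in range(TRIALS)]
ys = [random.gauss(my, sy) for _ in range(TRIALS)]

# Sum rule: sigma(X+Y) = sqrt(sx^2 + sy^2)
predicted_sum = math.hypot(sx, sy)
observed_sum = statistics.stdev(x + y for x, y in zip(xs, ys))

# Product rule: sigma(X*Y) = sqrt(sx^2 * my^2 + sy^2 * mx^2)
predicted_prod = math.sqrt(sx**2 * my**2 + sy**2 * mx**2)
observed_prod = statistics.stdev(x * y for x, y in zip(xs, ys))

print(predicted_sum, observed_sum)
print(predicted_prod, observed_prod)
```

For small relative errors like these, the sampled spreads agree with the first-order predictions to within a fraction of a percent; the formulas are approximations that degrade as the inputs become true garbage.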
The title text refers to the computer science maxim of "garbage in, garbage out," which states that, when it comes to computer code, supplying incorrect initial data will produce incorrect results even if the code itself accurately does what it is supposed to do. As shown above, however, plugging data into mathematical formulas can magnify the error of the input data, though there are also ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.
==Transcript==