<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://www.explainxkcd.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=172.69.23.27</id>
		<title>explain xkcd - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://www.explainxkcd.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=172.69.23.27"/>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php/Special:Contributions/172.69.23.27"/>
		<updated>2026-04-17T05:49:06Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.30.0</generator>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190990</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190990"/>
				<updated>2020-04-20T16:34:25Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190989</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190989"/>
				<updated>2020-04-20T16:33:24Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Precise number&lt;br /&gt;
&amp;lt;hr style='display:inline-block; width:115px; height:1px; background:black' /&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190988</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190988"/>
				<updated>2020-04-20T16:32:44Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Precise number&lt;br /&gt;
&amp;lt;hr style='display:inline-block; width:110px; height:2px; background:black' /&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190987</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190987"/>
				<updated>2020-04-20T16:32:06Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Precise number&lt;br /&gt;
&amp;lt;hr style='display:inline-block; width:110px; height:1px; background:solid black' /&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190986</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190986"/>
				<updated>2020-04-20T16:31:14Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Precise number&lt;br /&gt;
&amp;lt;hr style='display:inline-block; 10px solid black; width:110px; height:5px' /&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190985</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190985"/>
				<updated>2020-04-20T16:29:54Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Precise number&lt;br /&gt;
&amp;lt;hr style='display:inline-block; 10px solid black; width:110px;' /&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190984</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190984"/>
				<updated>2020-04-20T16:29:25Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Precise number&lt;br /&gt;
&amp;lt;hr style='display:inline; 1px solid black; width:110px;' /&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190983</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190983"/>
				<updated>2020-04-20T16:24:11Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Precise number&lt;br /&gt;
&amp;lt;hr style='display:inline-block; 1px solid black; width:110px;' /&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190982</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190982"/>
				<updated>2020-04-20T16:23:47Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  Precise number&lt;br /&gt;
&amp;lt;hr style='display:inline-block; 1px solid black; width:110px;' /&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190981</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190981"/>
				<updated>2020-04-20T16:22:50Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
Precise number&lt;br /&gt;
&amp;lt;hr style='display:inline-block; width:100px;' /&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p style=&amp;quot;border-bottom: 1px solid black; width: 110px&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190980</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190980"/>
				<updated>2020-04-20T16:21:24Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
Precise number &lt;br /&gt;
    &amp;lt;p style=&amp;quot;border-bottom: 1px solid black; width: 110px&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;/p&amp;gt;&lt;br /&gt;
    = Much worse garbage, possible division by zero&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
&amp;lt;hr style='display:inline-block; width:100px;' /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190979</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190979"/>
				<updated>2020-04-20T16:19:48Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
Precise number &lt;br /&gt;
    &amp;lt;p style=&amp;quot;border-bottom: 1px solid black; width: 110px&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;/p&amp;gt;&lt;br /&gt;
    = Much worse garbage, possible division by zero&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190978</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190978"/>
				<updated>2020-04-20T16:18:40Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
Precise number &lt;br /&gt;
&amp;lt;div&amp;gt;&lt;br /&gt;
    &amp;lt;p style=&amp;quot;border-bottom: 1px solid black; width: 110px&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;/p&amp;gt;&lt;br /&gt;
    = Much worse garbage, possible division by zero&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190977</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190977"/>
				<updated>2020-04-20T16:15:58Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
Precise number = Much worse garbage, possible division by zero&lt;br /&gt;
&amp;lt;div&amp;gt;&lt;br /&gt;
    &amp;lt;p style=&amp;quot;border-bottom: 1px solid black; width: 110px&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190976</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190976"/>
				<updated>2020-04-20T16:15:18Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
&amp;lt;SPAN STYLE=&amp;quot;text-decoration:underline&amp;quot;&amp;gt;Precise number   &amp;lt;/SPAN&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
&amp;lt;div&amp;gt;&lt;br /&gt;
    &amp;lt;p style=&amp;quot;border-bottom: 1px solid black; width: 100px&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190975</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190975"/>
				<updated>2020-04-20T16:14:44Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
&amp;lt;SPAN STYLE=&amp;quot;text-decoration:underline&amp;quot;&amp;gt;Precise number   &amp;lt;/SPAN&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
&amp;lt;div&amp;gt;&lt;br /&gt;
    &amp;lt;p style=&amp;quot;border-bottom: 1px solid black; width: 10%&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190974</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190974"/>
				<updated>2020-04-20T16:11:22Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
&amp;lt;SPAN STYLE=&amp;quot;text-decoration:underline&amp;quot;&amp;gt;Precise number   &amp;lt;/SPAN&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
&amp;lt;div&amp;gt;&lt;br /&gt;
    &amp;lt;p style=&amp;quot;border-bottom: 10px solid black;&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190973</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190973"/>
				<updated>2020-04-20T16:06:02Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
&amp;lt;SPAN STYLE=&amp;quot;text-decoration:underline&amp;quot;&amp;gt;Precise number   &amp;lt;/SPAN&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
&amp;lt;div&amp;gt;&lt;br /&gt;
    &amp;lt;p style=&amp;quot;border-bottom: 1px solid black;&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;/p&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190972</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190972"/>
				<updated>2020-04-20T16:01:19Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
&amp;lt;SPAN STYLE=&amp;quot;text-decoration:underline&amp;quot;&amp;gt;Precise number   &amp;lt;/SPAN&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190971</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190971"/>
				<updated>2020-04-20T16:00:39Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
&amp;lt;SPAN STYLE=&amp;quot;text-decoration:underline&amp;quot;&amp;gt;Precise number&amp;lt;/SPAN&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190970</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190970"/>
				<updated>2020-04-20T15:56:17Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
Precise number&lt;br /&gt;
&lt;br /&gt;
_______________ = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190969</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190969"/>
				<updated>2020-04-20T15:55:48Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Test&lt;br /&gt;
test&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
Precise number&lt;br /&gt;
______________   = Much worse garbage, possible division by zero&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number&lt;br /&gt;
&lt;br /&gt;
= Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190968</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190968"/>
				<updated>2020-04-20T15:55:13Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
Precise number&lt;br /&gt;
______________   = Much worse garbage, possible division by zero&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number&lt;br /&gt;
&lt;br /&gt;
= Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190967</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190967"/>
				<updated>2020-04-20T15:54:44Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
Precise number&lt;br /&gt;
______________   = Much worse garbage, possible division by zero&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number&lt;br /&gt;
= Much worse garbage, possible division by zero&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190966</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190966"/>
				<updated>2020-04-20T15:53:40Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Transcript */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
Precise number&lt;br /&gt;
______________   = Much worse garbage, possible division by zero&lt;br /&gt;
Garbage-Garbage&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190961</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190961"/>
				<updated>2020-04-20T08:04:13Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Explanation */ tagging suspect formula&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;{{fact}}&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190960</id>
		<title>2295: Garbage Math</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2295:_Garbage_Math&amp;diff=190960"/>
				<updated>2020-04-20T07:58:20Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Explanation */ correct and clarify&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2295&lt;br /&gt;
| date      = April 17, 2020&lt;br /&gt;
| title     = Garbage Math&lt;br /&gt;
| image     = garbage_math.png&lt;br /&gt;
| titletext = 'Garbage In, Garbage Out' should not be taken to imply any sort of conservation law limiting the amount of garbage produced.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a ZILOG Z80. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic illustrates the &amp;quot;{{w|garbage in, garbage out}}&amp;quot; concept using mathematical expressions. It shows how, if you have garbage as inputs to your calculations, then you will likely get garbage as a result, except when you multiply by zero, which eliminates all uncertainty of the result. &lt;br /&gt;
&lt;br /&gt;
The propagation of errors in {{w|arithmetic}}, other {{w|mathematical operations}}, and {{w|statistics}} is described in colloquial terms. Numbers with low precision are termed garbage, while numbers with high precision are called precise. The table below quantifies the change in precision from the operands to their result in terms of their {{w|variance}}, represented by &amp;amp;sigma;, the Greek lowercase letter sigma, equal to the {{w|standard deviation}}, or the square root of the variance. Variance or standard deviation are common specifications of uncertainty (as an alternative to, for example, a {{w|tolerance interval}}.)&lt;br /&gt;
&lt;br /&gt;
The {{w|accuracy and precision}} of mathematical operations correspond to the rules of {{w|Propagation_of_uncertainty#Example_formulae|propagation of uncertainty}}, where a &amp;quot;garbage&amp;quot; number would correspond to an estimate with a high degree of uncertainty, and a precise number has low uncertainty. The uncertainty of the result of such operations will usually correspond to the term with the highest uncertainty. The rule about N pieces of independent garbage used to calculate an {{w|arithmetic mean}} reflects how the {{w|central limit theorem}} predicts that the uncertainty (or {{w|standard error}}) of an estimate will be reduced when independent estimates are averaged.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Formula as shown&lt;br /&gt;
!Resulting uncertainty&lt;br /&gt;
!Explanation&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|{{Nowrap|If we know absolute error bars, then adding two precise numbers will}} at worst add the sizes of the two error bars. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our sum is 2 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;). It is possible to lose a lot of relative precision, if the resultant sum is close to zero as a result of adding a number to its approximate negation, a phenomenon known as {{w|catastrophic cancellation}}. Therefore, both of the numbers must be positive for the stated assertion to be true.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Precise number = Slightly less precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Here, instead of absolute error, relative error will be added. For example, if our precise numbers are 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;) and 1 (±10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;), then our product is 1 (±2·10&amp;lt;sup&amp;gt;-6&amp;lt;/sup&amp;gt;).&lt;br /&gt;
|-&lt;br /&gt;
|Precise number + Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X+Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|If one of the numbers has a high absolute error, and the numbers being added are of comparable size, then this error will be propagated to the sum. &lt;br /&gt;
|-&lt;br /&gt;
|Precise number × Garbage = Garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X\times Y)=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;\sqrt{\mathop\sigma(X)\times Y^2+\mathop\sigma(Y)\times X^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, if one of the numbers has a high relative error, then this error will be propagated to the product. Here, this is independent of the sizes of the numbers.&lt;br /&gt;
|-&lt;br /&gt;
|√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\sqrt X)=\frac{\mathop\sigma(X)}{2\times\sqrt X} &amp;lt;/math&amp;gt;&lt;br /&gt;
| When the square root of a number is computed, its relative error will be halved. Depending on the application, this might not be all that much ''better'', but it's at least ''less bad''.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; = Worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X^2)=2\times X\times\mathop\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|Likewise, when a number is squared, its relative error will be doubled. This is a corollary to multiplication adding relative errors.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{1}{N}\sum(&amp;lt;/math&amp;gt;N pieces of statistically independent garbage&amp;lt;math&amp;gt;)&amp;lt;/math&amp;gt; = Better garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;{\sigma}_\bar{x}\ = \frac{\sigma_x}{\sqrt{N}}&amp;lt;/math&amp;gt;&lt;br /&gt;
|By aggregating many pieces of statistically independent observations (for instance, surveying many individuals), it is possible to reduce relative error to the {{w|Standard_error#Standard_error_of_the_mean|standard error of the mean}}. This is the basis of statistical sampling and the {{w|central limit theorem}}.&lt;br /&gt;
|-&lt;br /&gt;
|Precise number&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(b^X)=b^{2\times X}\times\mathop{\mathrm{ln}}b\times\sigma(X)&amp;lt;/math&amp;gt;&lt;br /&gt;
|The exponent is very sensitive to changes, which may also magnify the effect based on the magnitude of the precise number.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage – Garbage = Much worse garbage&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(X-Y)=\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|This line involves catastrophic cancellation. If both pieces of garbage are about the same (e.g. if their error bars overlap), then it is possible that the answer is positive, zero, or negative.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;math&amp;gt;\frac{\text{Precise number}}{\text{Garbage}-\text{Garbage}}&amp;lt;/math&amp;gt; = Much worse garbage, possible division by zero&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(\frac{a}{X-Y})=&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;math&amp;gt;|\frac a{X-Y}|\times\sqrt{\mathop\sigma(X)^2+\mathop\sigma(Y)^2}&amp;lt;/math&amp;gt;&lt;br /&gt;
|Indeed, as with above, if error bars overlap then we might end up dividing by zero.&lt;br /&gt;
|-&lt;br /&gt;
|Garbage × 0 = Precise number&lt;br /&gt;
|&amp;lt;math&amp;gt;\mathop\sigma(0)=0&amp;lt;/math&amp;gt;&lt;br /&gt;
|Multiplying anything by 0 results in 0, an extremely precise number in the sense that it has no error whatsoever since we supply the 0 ourselves. This is equivalent to discarding garbage data from a statistical analysis.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The title text refers to the computer science maxim of &amp;quot;garbage in, garbage out,&amp;quot; which states that when it comes to computer code, supplying incorrect initial data will produce incorrect results, even if the code itself accurately does what it is supposed to do. As we can see above, however, when plugging data into mathematical formulas, this can possibly magnify the error of our input data, though there are ways to reduce this error (such as aggregating data). Therefore, the quantity of garbage is not necessarily conserved.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
[A series of mathematical equations are written from top to bottom]&lt;br /&gt;
&lt;br /&gt;
Precise number + Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number × Precise number = Slightly less precise number&lt;br /&gt;
&lt;br /&gt;
Precise number + Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
Precise number × Garbage = Garbage&lt;br /&gt;
&lt;br /&gt;
√&amp;lt;span style=&amp;quot;border-top:1px solid; padding:0 0.1em;&amp;quot;&amp;gt;Garbage&amp;lt;/span&amp;gt; = Less bad garbage&lt;br /&gt;
&lt;br /&gt;
Garbage² = Worse garbage&lt;br /&gt;
&lt;br /&gt;
1/N Σ (N pieces of statistically independent garbage) = Better garbage&lt;br /&gt;
&lt;br /&gt;
(Precise number)&amp;lt;sup&amp;gt;Garbage&amp;lt;/sup&amp;gt; = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Garbage – Garbage = Much worse garbage&lt;br /&gt;
&lt;br /&gt;
Precise number / ( Garbage – Garbage ) = Much worse garbage, possible division by zero&lt;br /&gt;
&lt;br /&gt;
Garbage × 0 = Precise number&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=1930:_Calendar_Facts&amp;diff=187777</id>
		<title>1930: Calendar Facts</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=1930:_Calendar_Facts&amp;diff=187777"/>
				<updated>2020-02-26T18:22:01Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: /* Table */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 1930&lt;br /&gt;
| date      = December 18, 2017&lt;br /&gt;
| title     = Calendar Facts&lt;br /&gt;
| image     = calendar_facts.png&lt;br /&gt;
| titletext = While it may seem like trivia, it (causes huge headaches for software developers / is taken advantage of by high-speed traders / triggered the 2003 Northeast Blackout / has to be corrected for by GPS satellites / is now recognized as a major cause of World War I).&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
[[Randall]] presents what appears to be a generator of 156,000 facts [20 x 13 x (8 + 6 x 7) x 12], about calendars, most of which are false or have little meaning{{Citation needed}}. The facts are seeded by a mishmash of common tidbits about the time of year.&lt;br /&gt;
&lt;br /&gt;
The formula for each generated fact goes as follows: &amp;quot;Did you know that '''[a recurring event]''' '''[occurs in an unusual manner]''' because of '''[phenomena or political decisions]'''? Apparently '''[wild card statement]'''.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This is [[:Category:Supermoon|the fifth time]] that Randall has referred to the phenomenon of a {{w|supermoon}}, which he typically makes fun of, most prominently in [[1394: Superm*n]].&lt;br /&gt;
&lt;br /&gt;
The title text continues the chart with supposed real-life consequences of the trivia in the comic.&lt;br /&gt;
&lt;br /&gt;
There are multiple online generators of Calendar 'facts' using this formula [https://www.pibweb.com/xkcd_calendar.php here] and [http://yahel.com/calendarfacts/ here].&lt;br /&gt;
&lt;br /&gt;
All 156 000 possible combinations can be found [https://www.dropbox.com/s/866fwtpwvd0z9hq/combinations%20xkcd%201930.txt?dl=0 here], lovingly assembled by hand (or rather, by a python script) for your entertainment. A random fact generator (including title text), written in Python, can be found [https://gist.github.com/petersohn/6c8f9d124bd961e909d2dc9a967ade2e here].&lt;br /&gt;
&lt;br /&gt;
==Table==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Entry&lt;br /&gt;
! What it is&lt;br /&gt;
! Relation to other entries&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Recurring Events&lt;br /&gt;
|-&lt;br /&gt;
| The [Fall/Spring] {{w|Equinox}}&lt;br /&gt;
| The time of year at which the apparent position of the overhead sun passes the equator. During the equinox, the time that the Sun is above the horizon is 12 hours across the globe.&lt;br /&gt;
| Before the adoption of the {{w|Gregorian calendar}} in 1582, the equinoxes fell on earlier and earlier dates as the centuries went by, due to the {{w|Julian calendar}} year being 365.25 days on average compared to the tropical Earth year of 365.2422 days. {{w|Pope Gregory}}'s decision to remove the leap days on years that were multiples of 100 but not 400 corrected the average length of the calendar year to 365.2425 days.&lt;br /&gt;
|-&lt;br /&gt;
| The [Winter/Summer] {{w|Solstice}}&lt;br /&gt;
| The time of year when the apparent position of the overhead sun reaches its most extreme latitude. During the Winter and Summer solstices the days are the shortest and longest respectively.&lt;br /&gt;
| Similar to the equinoxes, the solstices were also falling on earlier dates every year before the Gregorian calendar.&lt;br /&gt;
|-&lt;br /&gt;
| The [Winter/Summer] {{w|Olympics}}&lt;br /&gt;
| The Olympic Games occur during the summer and the winter, alternating between the two seasons every two years.&lt;br /&gt;
| The Olympic Games do not have any set dates, and seem to only be included humorously as something else that alternates between occurring during the summer and winter.&lt;br /&gt;
|-&lt;br /&gt;
| The [latest/earliest] [sunset/sunrise]&lt;br /&gt;
| The extremes of times that the sun crosses a horizon according to a clock that keeps a fixed 24 hours as opposed to varying with the sun like a sundial.&lt;br /&gt;
| The latest sunset and earliest sunrise occur around the summer solstice; the latest sunrise and earliest sunset occur around the winter solstice. They do not occur exactly on these dates due to the {{w|equation of time}} causing drift in the times that sunsets and sunrises occur.&lt;br /&gt;
|-&lt;br /&gt;
| Daylight [saving/savings] time&lt;br /&gt;
| {{w|Daylight saving time}}, commonly referred to as daylight savings time, is the practice of setting clocks ahead, typically by one hour, during the summer months of the year. &lt;br /&gt;
| Daylight saving time will push the time of certain events such as sunrise and sunset past their &amp;quot;natural&amp;quot; times. For example, solar noon will occur around 1:00 PM instead of 12:00 noon when daylight saving time is active, making it the &amp;quot;wrong&amp;quot; time.&lt;br /&gt;
|-&lt;br /&gt;
| Leap [day/year]&lt;br /&gt;
| Because the durations of celestial events are not generally nice multiples of each other, they will tend to fall out of sync with each other. Leap days are days inserted into specific years to bring the calendar back into sync, and the years on which these {{w|leap day}}s occur are called {{w|leap year}}s.&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| {{w|Easter}}&lt;br /&gt;
| Easter is a holiday celebrating the death and resurrection of {{w|Jesus}}. It is defined as the Sunday after the first full moon after the spring equinox. This complicated formula has a long tradition behind it, known as {{w|Computus}}.&lt;br /&gt;
| When Pope Gregory decided to change the calendar in 1582, it was because the spring equinox was putting Easter on unexpectedly early dates.&lt;br /&gt;
|-&lt;br /&gt;
| The [harvest/super/blood] moon&lt;br /&gt;
|&lt;br /&gt;
* The {{w|harvest moon}} is the full moon that appears closest to the autumnal equinox.&lt;br /&gt;
* The {{w|supermoon}} is a phenomenon in which the moon is full at its closest approach to the Earth.&lt;br /&gt;
* The {{w|blood moon}} is a moon that appears tinted red during a total lunar eclipse because of light refracted from the Earth's atmosphere. It can also refer to the {{w|hunter's moon}}, the full moon directly after the harvest moon.&lt;br /&gt;
| Each of these lunar events happens approximately once a year.&lt;br /&gt;
* The harvest moon appears exactly once because it has a particular definition based on the time of year.&lt;br /&gt;
* The cycle of the distance of the full moon lasts about 13.5 months (14 full moons). However, because a supermoon is defined as any full moon that is within 10 percent of the closest relative distance possible (with 0 being perigee and 1 being apogee), it happens multiple times a cycle, for a total of usually 3 to 4 times per year.&lt;br /&gt;
* The blood moon during a lunar eclipse appears between zero to two times a year. The hunter's moon appears exactly once like the harvest moon.&lt;br /&gt;
|-&lt;br /&gt;
| Toyota Truck Month&lt;br /&gt;
| Toyota offers a discount for {{w|Toyota Tacoma|Tacoma}} trucks one month a year. Mainly notable because radio and television ads hype this discount up as &amp;quot;Toyota Truck Month&amp;quot;.&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| {{w|Shark Week}}&lt;br /&gt;
| Every year, the {{w|Discovery Channel}} dedicates a week during the summer to programming featuring or about sharks.&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Unusual manners in which the events occur&lt;br /&gt;
|-&lt;br /&gt;
| happens [earlier/later/at the wrong time] every year&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | The solstices and equinoxes happened earlier every year ''before'' the decree by Pope Gregory in 1582. The earliest sunrise happens one hour later than it &amp;quot;should&amp;quot; happen due to daylight saving time having turned the clocks forward one hour.&lt;br /&gt;
|-&lt;br /&gt;
| drifts out of sync with the [sun/moon]&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | The Sun and Moon are generally what calendars are based on. If something were to drift out of sync, some corrective mechanism would have to be put in to put it back. This is the motivation behind leap years, leap months (in countries with lunisolar calendars) and leap seconds.  &lt;br /&gt;
|-&lt;br /&gt;
| drifts out of sync with the zodiac&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | The dates on which the Sun crosses the constellations in the traditional zodiac has shifted in the past centuries due to the precession of the Earth's axis. In the period of time traditionally known as {{w|Aries}} (March 21–April 20), for example, the Sun actually points to {{w|Pisces}} instead.&lt;br /&gt;
|-&lt;br /&gt;
| drifts out of sync with the [Gregorian/Mayan/lunar/iPhone] calendar&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &lt;br /&gt;
*The {{w|Gregorian calendar}} is a solar calendar with a mean calendar year length of 365.2425 days. &lt;br /&gt;
*The {{w|Mayan calendar}} is based on two cycles or counts, with a 260-day count combined with a 365-day &amp;quot;vague&amp;quot; solar year.&lt;br /&gt;
*A {{w|lunar calendar}} is based on Moon's phases, with each {{w|lunation}} being approximately 29.5 days, and a lunar year lasting roughly 354 days. An example of a lunar calendar is the {{w|Islamic calendar}}.&lt;br /&gt;
*The {{w|iPhone calendar}} is listed humorously due to its data synchronization issues.&lt;br /&gt;
|-&lt;br /&gt;
| drifts out of sync with the atomic clock in {{w|Colorado}}&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &lt;br /&gt;
{{w|NIST-F1}} is an {{w|Atomic clock}} used as a reference for official time in the USA.&lt;br /&gt;
|-&lt;br /&gt;
| might [not happen/happen twice] this year&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | Some events may have a period of slightly more or slightly less than one year. If an event has a period of slightly less than one year (e.g. the Islamic calendar), it can occur twice in the same year (e.g. the year 2000 had two {{w|Eid al-Fitr}}s—one on January 8, and one on December 28). If an event has a period of slightly more than one year, there can be a year in which it does not occur at all, instead occurring near the end of the previous year and the beginning of the next.&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Cause (phenomena or political decisions)&lt;br /&gt;
|-&lt;br /&gt;
| time zone legislation in [Indiana/Arizona/Russia]&lt;br /&gt;
| Some states or provinces have time zone legislation that sets the standard time to something other than what the natural longitude of that location would suggest.&lt;br /&gt;
|&lt;br /&gt;
* The state of {{w|Arizona}} generally does not observe daylight saving time, keeping their clocks on {{w|UTC-7:00}} Mountain Standard Time year round. However, the {{w|Navajo nation}} reservation inside Arizona does observe it, causing the two regions to have different times in the summer and the same time in the winter.&lt;br /&gt;
* Time zones in Russia are all one hour ahead of what their longitude would suggest, which puts them in a &amp;quot;permanent&amp;quot; state of daylight saving time. (For example, {{w|St. Petersburg}} is 30°E, which means that its natural time zone is {{w|UTC+2:00}}, but its time zone is actually {{w|UTC+3:00}}.) From 1981 until 2011 Russia used to have the daylight saving time on top of it as well. The other changes include the abolition of the one-hour shift in 1991 and a return it back in 1992, and an increase to two hours in 2011 and a restoration back to one hour in 2014.&lt;br /&gt;
* {{w|Indiana}} has {{w|Time in Indiana|a complicated history}} with daylight saving time, likely related to the state being split between two time zones.&lt;br /&gt;
|-&lt;br /&gt;
| a decree by the Pope in the 1500s&lt;br /&gt;
| In 1582, Pope Gregory introduced the Gregorian Calendar, the calendar we use today, to replace the Julian Calendar. The calendar applied retroactively to the birth of Jesus Christ, which means that they had to skip 10 days, going straight from October 4 to October 15, 1582, during the switchover.&lt;br /&gt;
| The introduction of the Gregorian calendar brought Easter and the dates that months started back in sync with what they were in the 3rd century AD.&lt;br /&gt;
|-&lt;br /&gt;
| the precession of&lt;br /&gt;
| The Earth's axis is slowly changing position, in a phenomenon called the {{w|Axial precession|precession of the equinoxes}}. &lt;br /&gt;
| The precession of the equinoxes causes the seasons to occur about 20 minutes earlier than would be expected with the Earth's position relative to the stars, which could be construed as the equinox happening &amp;quot;later every year&amp;quot; if you use the stars as your frame of reference.&lt;br /&gt;
|-&lt;br /&gt;
| the libration of&lt;br /&gt;
| The Moon is {{w|tidal locking|tidally locked}} to its orbit around the Earth, which means that the same side of it tends to face the Earth at any given point in time. However, there are slight variations in the angle over the course of a month, which are known as {{w|libration}}.&lt;br /&gt;
| The libration of the Moon does not affect anything else in the chart, and seems only be included humorously as another example of a celestial phenomenon.&lt;br /&gt;
|-&lt;br /&gt;
| the nutation of&lt;br /&gt;
| Besides precession, there is also a smaller wobbling effect called {{w|Astronomical nutation|nutation}}.&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| the libation of&lt;br /&gt;
| A {{w|libation}} is a drink, often used in the context of a ritual offering of liquid to a deity by pouring it onto the ground or into something that collects it.&lt;br /&gt;
| This entry seems to have been included simply as a humorous misspelling of the word &amp;quot;libration&amp;quot;. Certainly libation of any of the entities listed would be inadvisable.{{Citation needed}}&lt;br /&gt;
|-&lt;br /&gt;
| the eccentricity of&lt;br /&gt;
| Orbital eccentricity is the deviation of a body's orbit from a perfect circle. Orbital travel is faster when it's closer to the body being orbited and slower when farther away.&lt;br /&gt;
| The Earth's eccentric orbit causes the equinoxes and solstices to occur at irregular intervals. For example, summer in the northern hemisphere lasted 93 days in 2017, while fall only lasted 90 days.&lt;br /&gt;
|-&lt;br /&gt;
| the obliquity of&lt;br /&gt;
| The tilt of the Earth's axis relative to the ecliptic is also known as its obliquity.&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| the Moon&lt;br /&gt;
| The Moon is the primary satellite of the Earth.&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| the Sun&lt;br /&gt;
| The Sun is the star that the Earth orbits around.&lt;br /&gt;
| The Sun is the basis for many timekeeping events, such as the day and year.{{Citation needed}}&lt;br /&gt;
|-&lt;br /&gt;
| the Earth's axis&lt;br /&gt;
| The Earth's axis of rotation defines the Geographic North and South Pole, as well as the lines of latitude.&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| the Equator&lt;br /&gt;
| The Equator is the line on the Earth's surface which is equidistant from both poles of the Earth's axis.&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| the Prime Meridian&lt;br /&gt;
| The Prime Meridian is the line that starts at the geographic North Pole, runs through the {{w|Royal Observatory, Greenwich|Greenwich Royal Observatory}} in London, and ends at the South Pole. It is the basis for longitude when calculating coordinates for positions on the surface of the Earth.&lt;br /&gt;
| The Prime Meridian (and in particular the Greenwich Observatory) gives us Greenwich Mean Time (GMT), which is the basis for UTC and the time zone system.&lt;br /&gt;
|-&lt;br /&gt;
| the International Date Line&lt;br /&gt;
| The International Date Line is a line on the opposite side of the Earth as the Prime Meridian that separates regions that use time set behind UTC versus regions that are set ahead of UTC. It has many irregularities due to political changes that put certain countries or islands on either side of the divide contrary to their natural longitude.&lt;br /&gt;
| The irregular shape of the International Date Line means that certain regions of the Pacific Ocean (such as Kiribati) are more than 24 hours ahead of some other regions (such as Baker Island and American Samoa), which may cause problems with timekeeping.&lt;br /&gt;
|-&lt;br /&gt;
| the Mason-Dixon Line&lt;br /&gt;
| The Mason-Dixon line is a line delineating a portion of the border between Pennsylvania, Maryland, and Delaware.&lt;br /&gt;
| The Mason-Dixon line is included as a humorous example as another imaginary geographic line.&lt;br /&gt;
|-&lt;br /&gt;
| magnetic field reversal&lt;br /&gt;
| The Earth's magnetic field has been reversed several times in its geologic history, so that what we would currently call the &amp;quot;magnetic North Pole&amp;quot; was near the geographic South Pole about 780,000 years ago, before the most recent reversal.&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| an arbitrary decision by Benjamin Franklin&lt;br /&gt;
| Benjamin Franklin wrote [http://www.webexhibits.org/daylightsaving/franklin3.html a letter to the Journal of Paris in 1784] in which he advised them to rise with the sun in order to save candlelight, after he observed that the Parisians were getting up at the same time by the clock and burning a lot of candles in the winter as a result.&lt;br /&gt;
An &amp;quot;arbitrary decision by Benjamin Franklin&amp;quot; also likely refers humorously to Franklin having defined positive charge to be that which is left on a glass rod by rubbing it with silk. As described in [[567: Urgent Mission]], this had the unfortunate consequence of assigning a negative value to the charge of the electron, which was later identified as the fundamental carrier of electric charge.&lt;br /&gt;
| Benjamin Franklin is often touted as &amp;quot;the father of daylight saving time&amp;quot;, despite him never actually proposing to alter the clocks.&lt;br /&gt;
|-&lt;br /&gt;
| an arbitrary decision by Isaac Newton&lt;br /&gt;
| Possibly a reference to how Newton divided the colour spectrum into the now-familiar seven colours of the rainbow, on a somewhat arbitrary basis. Newton did spend time working on the problem of calendar reform, but it's unlikely that any decisions he made as a result would affect anything, since he never published his work, and by the time it gained attention the Gregorian Calendar had been widely adopted.&lt;br /&gt;
| The spectrum fact is one of those standard bits of trivia of the kind the chart alludes to. Although it has nothing to do with time-keeping, Newton is the sort of person who seems like he should have made decisions like this. &lt;br /&gt;
|-&lt;br /&gt;
| an arbitrary decision by FDR&lt;br /&gt;
| Franklin Delano Roosevelt set all time zones one hour ahead year-round during World War II. The law was repealed after the war ended.&lt;br /&gt;
Additionally, he changed the date of Thanksgiving from the last Thursday in November to the third Thursday in November as a way to increase the length of the Christmas shopping season. It was later changed to the fourth Thursday after his death.&lt;br /&gt;
| Setting the time permanently one hour ahead would make everything happen at the &amp;quot;wrong&amp;quot; time celestially.&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Related 'fact'&lt;br /&gt;
|-&lt;br /&gt;
| It causes a predictable increase in car accidents.&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | The week following daylight saving time, car accidents increase by about 5-7%&amp;lt;ref&amp;gt;http://www.cbc.ca/news/canada/end-of-daylight-saving-time-2015-6-eye-opening-facts-1.3296353&amp;lt;/ref&amp;gt;.&lt;br /&gt;
|-&lt;br /&gt;
| That's why we have leap seconds.&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | Leap seconds occur because the time required for one rotation of the Earth is actually slightly longer than the 86,400 seconds in a standard UTC day. The Earth's rotation is slowing down by about 2 × 10&amp;lt;sup&amp;gt;-5&amp;lt;/sup&amp;gt; seconds every year due to tidal friction caused by the Moon's gravity; however, this is not one of the possible entries in the list of phenomena.&lt;br /&gt;
|-&lt;br /&gt;
| Scientists are really worried.&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &lt;br /&gt;
|-&lt;br /&gt;
| it was even more extreme during the [Bronze Age/Ice Age/Cretaceous/1990s].&lt;br /&gt;
| This may be reference to debates over climate change, where global temperature changes during these periods are frequently cited as supposedly proving / disproving human-related change.&lt;br /&gt;
| Solar events, such as sunspot activity, are often invoked as explaining temperature change in these debates. However, while there are a number of potential sun-related 'facts' that could be generated, none touch on sunspots.&lt;br /&gt;
|-&lt;br /&gt;
| There's a proposal to fix it, but it [will never happen/actually makes things worse/is stalled in Congress/might be unconstitutional].&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | Time zone reform is surprisingly a very controversial and politicized issue, with special interests on either side looking to modify it to fit their needs. Examples of proposals to modify the scheme include:&lt;br /&gt;
&lt;br /&gt;
* Extend the duration of daylight saving time by one month, which was done in 2007 in many states as part of an energy-saving proposal by George W. Bush.&lt;br /&gt;
* Reduce the duration of daylight saving time back to its original span, or further.&lt;br /&gt;
* Eliminate daylight saving time altogether, going back to using standard time.&lt;br /&gt;
* Abolish daylight saving time but advance the time zone by one hour, effectively instating daylight saving time year round. This was done during World War II.&lt;br /&gt;
* Abolish daylight saving time and advance the time zone by 30 minutes, splitting the difference between the current standard time and daylight saving time.&lt;br /&gt;
* Abolish daylight saving time, but make government offices open one hour earlier in the summer, encouraging private businesses to do the same. This was done by Warren G. Harding in 1922 because he felt that changing the clocks was a &amp;quot;deception&amp;quot;, but was rolled back the next year as it caused mass chaos in terms of what businesses decided to do to adapt to the change in business hours.&lt;br /&gt;
* Reduce the number of time zones in the United States to two, consolidating Pacific time into Mountain time (UTC-7:00), and Eastern time into Central time (UTC-6:00). This was proposed in a [https://qz.com/142199/the-us-needs-to-retire-daylight-savings-and-just-have-two-time-zones-one-hour-apart/ 2013 article in Quartz] by Allison Schrager.&lt;br /&gt;
&lt;br /&gt;
At best, these time zone proposals will be fraught with controversy, with both sides arguing for the benefits of their time system. Some proposals, such as the 30-minute and 20-minute suggestions, would put the minute hands of the entire United States out of sync with the rest of the world, defeating the purpose of time zones with hourly UTC offsets in the first place, which could be construed as &amp;quot;making things worse&amp;quot;.&lt;br /&gt;
|-&lt;br /&gt;
| It's getting worse and no one knows why.&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | &lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;3&amp;quot; | Title Text: Consequences&lt;br /&gt;
|-&lt;br /&gt;
| causes huge headaches for software developers&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | Trying to support time zones correctly for all dates present and historic is a mishmash of different regional laws, time zones, and DST changes. The headache is best exemplified in [https://www.youtube.com/watch?v=-5wpm-gesOY this video] by Tom Scott.&lt;br /&gt;
|-&lt;br /&gt;
| is taken advantage of by high-speed traders&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | A leap second must be taken into account by trading software, and may cause bugs if not accounted properly. Because leap seconds happen at midnight UTC, it might happen in regular trading hours for somebody living in Seattle, where the time zone is UTC-08:00. Somehow, a high-frequency trader may try to take advantage of any bugs in the software if they are not built to handle this particular case. This scenario is relatively unlikely because the market software can keep its own &amp;quot;market-official time&amp;quot; and synchronize with the correct time while the market is closed.&lt;br /&gt;
|-&lt;br /&gt;
| triggered the 2003 Northeast Blackout&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | The {{w|Northeast blackout of 2003}} was caused by a race condition in the energy management software at a power plant in Ohio. In a race condition the result of a computation is different depending on the order of completion of the operations, even though the result is supposed to be independent of that order.  Race conditions can theoretically be caused by mismatched timestamps.&lt;br /&gt;
|-&lt;br /&gt;
| has to be corrected for by GPS satellites&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | Because {{w|Global Positioning System}} (GPS) satellites are further from the earth than surface receivers, their clocks run faster than clocks on the surface due to general relativity. But they are also slower because they are moving faster than surface receivers, as explained by special relativity.  Also, their clocks are not updated for leap seconds. All these factors mean that GPS satellites have a different timekeeping standard than clocks on the ground which are generally synchronized to Greenwich solar time.&lt;br /&gt;
|-&lt;br /&gt;
| is now recognized as a major cause of World War I.&lt;br /&gt;
| colspan=&amp;quot;2&amp;quot; | Daylight saving time was first implemented in World War I as a fuel-saving measure. Randall seems to be humorously implying that World War I was started in order to implement these fuel-saving measures during peacetime as well.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Examples of true complete statements==&lt;br /&gt;
&lt;br /&gt;
# Did you know that '''the spring equinox''' '''drifts out of sync with the zodiac''' because of '''the precession of the Earth's axis'''? Apparently '''it was even more extreme during the Ice Age'''.&lt;br /&gt;
# Did you know that '''daylight saving time''' '''might happen twice this year''' because of '''time zone regulation in Russia'''? Apparently '''there's a proposal to fix it, but it actually makes things worse'''. (True in Russia in 1981)&lt;br /&gt;
# Did you know that '''leap year''' '''might not happen this year''' because of '''a decree by the pope in the 1500s'''? Apparently '''there's a proposal to fix it, but''' '''it will never happen'''. While it may seem like trivia, '''it causes huge headaches for software developers'''. (The Pax calendar proposes that 2018 be a leap year. If anyone finds a calendar in which 2017 is a leap year, I'd love to see it!)&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;big&amp;gt;-Calendar Facts-&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:[Shown below is a branching flow chart of sorts that begins at the phrase &amp;quot;Did you know that&amp;quot;, then flows through various paths to build up a sentence. (Note that the &amp;quot;→&amp;quot; arrow symbol is used below to indicate a new branch with no intermediate text from a previous branch.)]&lt;br /&gt;
&lt;br /&gt;
:Did you know that:&lt;br /&gt;
::the ( Fall | Spring ) Equinox&lt;br /&gt;
::the ( Winter | Summer ) ( Solstice | Olympics )&lt;br /&gt;
::the ( Earliest | Latest ) ( Sunrise | Sunset )&lt;br /&gt;
::Daylight ( Saving | Savings ) Time&lt;br /&gt;
::Leap ( Day | Year )&lt;br /&gt;
::Easter&lt;br /&gt;
::the ( Harvest | Super | Blood ) Moon&lt;br /&gt;
::Toyota Truck Month&lt;br /&gt;
::Shark Week&lt;br /&gt;
:→&lt;br /&gt;
::happens ( earlier | later | at the wrong time ) every year&lt;br /&gt;
::drifts out of sync with the&lt;br /&gt;
:::Sun&lt;br /&gt;
:::Moon&lt;br /&gt;
:::Zodiac&lt;br /&gt;
:::( Gregorian | Mayan | Lunar | iPhone ) Calendar&lt;br /&gt;
:::atomic clock in Colorado&lt;br /&gt;
::might ( not happen | happen twice ) this year&lt;br /&gt;
:because of&lt;br /&gt;
::time zone legislation in ( Indiana | Arizona | Russia )&lt;br /&gt;
::a decree by the pope in the 1500s&lt;br /&gt;
::( precession | libration | nutation | libation | eccentricity | obliquity ) of the &lt;br /&gt;
:::Moon &lt;br /&gt;
:::Sun &lt;br /&gt;
:::Earth's axis &lt;br /&gt;
:::equator &lt;br /&gt;
:::prime meridian &lt;br /&gt;
:::( International Date | Mason-Dixon ) Line&lt;br /&gt;
::magnetic field reversal&lt;br /&gt;
::an arbitrary decision by ( Benjamin Franklin | Isaac Newton | FDR )&lt;br /&gt;
:?&lt;br /&gt;
:Apparently&lt;br /&gt;
::it causes a predictable increase in car accidents.&lt;br /&gt;
::that's why we have leap seconds.&lt;br /&gt;
::scientists are really worried.&lt;br /&gt;
::it was even more extreme during the&lt;br /&gt;
:::Bronze Age.&lt;br /&gt;
:::Ice Age.&lt;br /&gt;
:::Cretaceous.&lt;br /&gt;
:::1990s.&lt;br /&gt;
::there's a proposal to fix it, but it&lt;br /&gt;
:::will never happen.&lt;br /&gt;
:::actually makes things worse.&lt;br /&gt;
:::is stalled in congress.&lt;br /&gt;
:::might be unconstitutional.&lt;br /&gt;
::it's getting worse and no one knows why.&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Charts]]&lt;br /&gt;
[[Category:Comics featuring real people]]&lt;br /&gt;
[[Category:Daylight saving time]]&lt;br /&gt;
[[Category:Time]]&lt;br /&gt;
[[Category:Science]]&lt;br /&gt;
[[Category:Astronomy]]&lt;br /&gt;
[[Category:Supermoon]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=Talk:1363:_xkcd_Phone&amp;diff=187535</id>
		<title>Talk:1363: xkcd Phone</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=Talk:1363:_xkcd_Phone&amp;diff=187535"/>
				<updated>2020-02-20T01:38:38Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Special:Contributions/108.162.249.216|108.162.249.216]] 14:14, 4 August 2014 (UTC) Mobile phones haven't always been electronic. Remember the old 'bricks' from the '80s and '90s? More machine than computer.&lt;br /&gt;
&lt;br /&gt;
This seems like an SCP artifact [[Special:Contributions/108.162.249.220|108.162.249.220]] 10:09, 4 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
^someone get on this, please [[User:Whiskey07|Whiskey07]] ([[User talk:Whiskey07|talk]]) 16:28, 6 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
I really dislike the tone of the explanation. I mean it's so negative about the features! Not that they are all useful, but isn't this a wiki and should be neutral? It takes also the fun out of it. I would like a screaming while falling phone and the relativity thing would be great for teaching relativity! [[User:RecentlyChanged|RecentlyChanged]] ([[User talk:RecentlyChanged|talk]])&lt;br /&gt;
&lt;br /&gt;
Where can i get one of these? :D [[User:UniTrader|UniTrader]] ([[User talk:UniTrader|talk]]) 04:11, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
I'm pretty sure the &amp;quot;scream when falling&amp;quot; thing and the &amp;quot;flightaware&amp;quot; stuff can be done somehow with Tasker. [[Special:Contributions/141.101.103.206|141.101.103.206]] 04:23, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
;Designer?&lt;br /&gt;
&lt;br /&gt;
I suspect it was either Black Hat or Beret Guy, but I'm not sure which. A collaboration? [[Special:Contributions/173.245.54.45|173.245.54.45]] 04:47, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
This sounds like something straight out of aperture. {{unsigned ip|108.162.221.55}}&lt;br /&gt;
&lt;br /&gt;
;Simulates alternate speeds of light&lt;br /&gt;
&lt;br /&gt;
Yes, useless as a feature on all the time; but it would be a cool app. [[User:Markhurd|Mark Hurd]] ([[User talk:Markhurd|talk]]) 05:57, 2 May 2014 (UTC)&lt;br /&gt;
:Absolutely. Where can I get an app like that?[[Special:Contributions/108.162.225.157|108.162.225.157]] 06:22, 2 May 2014 (UTC)&lt;br /&gt;
::Here: [http://play.google.com/store/apps/details?id=com.pandorica.xkcdclock XKCD Clock] [[Special:Contributions/173.245.53.114|173.245.53.114]] 16:11, 7 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
Travelling at above the simulated speed of light should give an imaginary time dilation, not a negative time dilation.&lt;br /&gt;
gamma = 1/sqrt{1-v^2/c^2}&lt;br /&gt;
Thus, after such travel, the value of the clock would be a complex number. [[Special:Contributions/108.162.219.35|108.162.219.35]] 15:42, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
Does the alternate speed of light simulator also dynamically adjust the mass of the phone? Better yet, does it also dynamically adjust the mass of its comoving surroundings (person holding phone, vehicle phone is traveling in)?[[Special:Contributions/172.69.23.27|172.69.23.27]] 01:38, 20 February 2020 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Changed the speed of light to 2.99x10^8'''&lt;br /&gt;
:You guys should probably clarify that the relativisic affects actually depend on how long your trip is or how long you wait to sync your phone.  For relativity to be observable on a 12 hour trip, Minimum speed for a phone would have to be 300 m/s or 3000 m/s for the clock to measure even a microsecond/millisecond difference in time. This is well known thanks to the certain  [https://en.wikipedia.org/wiki/Time_dilation#Velocity_and_gravitational_time_dilation_combined-effect_tests time dilation experiments with planes]. Your GPS chip helps account for an error of [https://en.wikipedia.org/wiki/Error_analysis_for_the_Global_Positioning_System#Relativity 7 to 47 microseconds per day]. My point is in terms of time dilation, relativity mattering depends on how long a trip or waiting for synchronization is. By synching, I literally mean with the atomic time clock or with a GPS satellite. The synchronization of your phone with satellites is actually a couple of hundred microseconds, so normally even a light changing clock might not have as noticable changes as you might think. [[Special:Contributions/108.162.238.225|108.162.238.225]] 13:49, 2 May 2014 (UTC) --[[Special:Contributions/108.162.238.225|108.162.238.225]] 13:49, 2 May 2014 (UTC)&lt;br /&gt;
Yeah sorry forgot to login. does anyone know how to do the indices formatting other than eg 2.99x10(littlex) rather then 2.99x10^x? [[User:Jonv4n|Jonv4n]] ([[User talk:Jonv4n|talk]]) 06:29, 2 May 2014 (UTC)&lt;br /&gt;
: Whas&amp;lt;sup&amp;gt;sup&amp;lt;/sup&amp;gt;? [[Special:Contributions/141.101.89.220|141.101.89.220]] 07:43, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
; relativistic effect&lt;br /&gt;
Forgive me if I'm wrong, I'm not a physicist but the above explanation says that relativistic time dilation affects only occur at a significant fraction of the speed of light. It is my understanding that time dilation occurs at any speed, but is only perceptible/noticeable/measurable at very large fraction of the speed of light. Unless I'm mistaken the above it should reflect this. [[Special:Contributions/173.245.56.91|173.245.56.91]] 22:24, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
; putting &amp;quot;Relative&amp;quot; back into relativity&lt;br /&gt;
First time poster, please forgive my transgressions :)&lt;br /&gt;
My understanding regarding relativistic effects is that, for a given frame of reference (e.g. phone operator travelling at 0.9c) would be absolutely none. Relativistic effects (as I understand them) would only apply between two different frames of reference. The only effect I can see in this case is if you are moving towards, or away from the phone while operating, and red/blue shift of the radio frequencies. In general, wifi and bluetooth are used locally so wouldn't apply; only the phone network would be affected.&lt;br /&gt;
&lt;br /&gt;
Also, perhaps the adjustable speed of light is a reference to the the game &amp;quot;A slower speed of light&amp;quot; by MIT Game Lab http://gamelab.mit.edu/games/a-slower-speed-of-light/ (in which you walk around collecting objects; each object slows light down, and increases relativistic effects).&lt;br /&gt;
[[User:Jaybee|Jaybee]] ([[User talk:Jaybee|talk]])&lt;br /&gt;
&lt;br /&gt;
'''Phone may attract/trap insects; this is normal.'''&lt;br /&gt;
Funnier if you take it as a reference to the [http://www.slate.com/blogs/the_slatest/2014/04/07/mazda_issues_recall_because_spiders_invade_fuel_tank_causing_fire_risk.html spider problems] Mazda keeps on having... {{unsigned ip|108.162.215.64}}&lt;br /&gt;
&lt;br /&gt;
About the attracting insects ... I would expect this to be normal feature in night. Trapping, however ... -- [[User:Hkmaly|Hkmaly]] ([[User talk:Hkmaly|talk]]) 09:08, 2 May 2014 (UTC)&lt;br /&gt;
: There are other indications that the phone is at least partly biological, this being the strongest evidence of that. Insects could be the power source for the biological part(s). [[Special:Contributions/173.245.54.45|173.245.54.45]] 14:07, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
This could also be a reference to computer bugs and the [http://commons.wikimedia.org/wiki/File:H96566k.jpg Harvard Mark II]. [[Special:Contributions/199.27.128.249|199.27.128.249]] 08:31, 15 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Siri'''&lt;br /&gt;
&lt;br /&gt;
Could the Siri bit be a reference to Portal?  When I first read it, I remembered this GLaDOS quote: &amp;quot;Your Aperture Science Weighted Companion Cube will never threaten to stab you, and in fact cannot speak. If your Weighted Companion Cube does speak, please disregard its advice.&amp;quot;  Could be completely wrong; just a thought.  [[Special:Contributions/173.245.54.51|173.245.54.51]] 10:09, 2 May 2014 (UTC)&lt;br /&gt;
:Perhaps Siri is beling likened to the &amp;quot;ATMOS&amp;quot; device in the Doctor Who episode &amp;quot;The Sontaran Stratagem&amp;quot; [[User:Esp666|Esp666]] ([[User talk:Esp666|talk]]) 11:20, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
Lamest. Comic. Ever. And I'm not just saying that because he doesn't mention the Ubuntu Touch OS. ''– [[User:Tbc|tbc]] ([[User talk:Tbc|talk]]) 12:22, 2 May 2014 (UTC)''&lt;br /&gt;
&lt;br /&gt;
'''Realistic case'''&lt;br /&gt;
&lt;br /&gt;
Car telephones and the first cellphones were rather expensive, at least in Germany fake &amp;quot;realistic cases&amp;quot; were sold without any working electronics in it. Usage was to impress silly friends. {{unsigned ip|173.245.52.204}}&lt;br /&gt;
&lt;br /&gt;
I thought this was aimed at the iPhone.  Apparently these have an elegant case, but I have never actually seen one.  Everyone I know covers their iPhone with some hideous plastic monstrosity, since the design is not practical.--[[Special:Contributions/108.162.218.59|108.162.218.59]] 14:10, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
Could possibly be a reference to the &amp;quot;Realistic&amp;quot; brand, which was used on various products sold by Radio Shack (U.S. electronics retail chain) from 1954 to some time in the '90s.[[Special:Contributions/108.162.219.76|108.162.219.76]] 16:14, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
I am not a native speaker of English. I thought the joke was on the double meaning of &amp;quot;case&amp;quot;, meaning both &amp;quot;something that happened or might happen&amp;quot; (like &amp;quot;realistic scenario&amp;quot;) and &amp;quot;something that covers something else&amp;quot;. Does that make sense to you guys? [[Special:Contributions/108.162.219.17|108.162.219.17]] 10:06, 6 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Screaming when in free fall: my first Android app!'''&lt;br /&gt;
&lt;br /&gt;
I love the bit about screaming when in free fall: that was the first Android app I hacked together back in 2009 (based on the tricorder app).  [[User:Nealmcb|Nealmcb]] ([[User talk:Nealmcb|talk]]) 13:49, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
 I actually made this app that simulates that [https://play.google.com/store/apps/details?id=it.siluxmedia.frefall freeFall app][[Special:Contributions/108.162.212.32|108.162.212.32]] 19:03, 30 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Title Text'''&lt;br /&gt;
&lt;br /&gt;
Hover-over title text was truncated; love it.&lt;br /&gt;
14:43, 2 May 2014 (UTC)[[User:Pocono Chuck|Pocono Chuck]] ([[User talk:Pocono Chuck|talk]])&lt;br /&gt;
: you must have an really old firefox browser -- you should update !!! [[Special:Contributions/199.27.130.210|199.27.130.210]] 16:23, 2 May 2014 (UTC)&lt;br /&gt;
:: Happened to me.  Using whatever the latest IE is at the moment.  It cut off at nause-. [[Special:Contributions/173.245.54.54|173.245.54.54]] 17:13, 3 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Price includes 2-year Knicks contract.''' ... but a contract with the Knicks would only appeal to pro basketball players.&lt;br /&gt;
&lt;br /&gt;
Nonsense.  Lots of &amp;quot;regular&amp;quot; folks would buy this phone it it meant they got to play in the NBA. [[Special:Contributions/199.27.128.84|199.27.128.84]] 16:26, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
: I agree with this.  A whole lot of people who think they have &amp;quot;skillz&amp;quot; would buy the phone if they got into the NBA. [[Special:Contributions/173.245.54.54|173.245.54.54]] 17:14, 3 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
This may also be an indirect way of stating that it is incredibly expensive, seeing as those sort of contracts usually involve ''you'' getting compensated. --[[Special:Contributions/108.162.216.33|108.162.216.33]] 13:41, 3 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Your mobile world just went digital&amp;quot; is an inversion of the marketing-speak that was common when what we'd now regard as smartphones first began to be adopted by the mainstream (iPhone/G1 era, since Symbians, Blackberries, and early WinMo tended to be business or enthusiast devices). People already ubiquitously e-mailed, browsed the Web, etc...what was presented as 'new' was that you could now do it from your phone. [[Special:Contributions/173.245.54.58|173.245.54.58]] 19:09, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
I think the &amp;quot;Under certain circumstances, wireless transmitter may control God&amp;quot; statement might be a reference to how transmitting devices have to comply with FCC regulation and not interfere with aircraft or government communications. Perhaps this phone is intended to be noncompliant so as to control high-level electronics, even at supernatural levels. [[Special:Contributions/173.245.56.66|173.245.56.66]] 21:11, 2 May 2014 (UTC)Dbrak&lt;br /&gt;
&lt;br /&gt;
'''Frictionless''' &lt;br /&gt;
&lt;br /&gt;
You could hold a frictionless phone just by hooking your little finger under the bottom edge, regardless of friction gravity will hold it into your hand. Just like you could leave it in a bowl without it jumping out. [[Special:Contributions/108.162.229.72|108.162.229.72]] 19:12, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
Unless you held your pinky perfectly balanced, horizontal and motionless, a frictionless object would slide right off it, as it would off any flat surface that is not perfectly horizontal.&lt;br /&gt;
14:13, 3 May 2014 (UTC) [[Special:Contributions/108.162.242.4|108.162.242.4]] 13:15, 3 May 2014 (UTC)DCollins&lt;br /&gt;
&lt;br /&gt;
Wouldn't you be able to hold it somewhat like a normal phone, if you hold a finger under the bottom of it? [[Special:Contributions/173.245.54.54|173.245.54.54]] 17:08, 3 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
Saying a frictionless phone can't be held is like saying prisoners would slide out of prisons if they had frictionless surfaces.  [[Special:Contributions/108.162.237.218|108.162.237.218]] 14:50, 4 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
If it was frictionless, it would be only slightly harder to hold than a wet bar of soap. --[[Special:Contributions/108.162.219.186|108.162.219.186]] 22:56, 22 September 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
;Why the hell this funny phone isn't available at the xkcd store?&lt;br /&gt;
I would buy if the price would be in the range of other articles there. Just for fun...--[[User:Dgbrt|Dgbrt]] ([[User talk:Dgbrt|talk]]) 19:30, 2 May 2014 (UTC)&lt;br /&gt;
:Yeah, having a phone that causes symptoms of radiation poisoning would be very funny indeed. [[Special:Contributions/172.68.10.44|172.68.10.44]] 11:03, 16 July 2016 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Root needed'''&lt;br /&gt;
&lt;br /&gt;
I think that needing root for ajust the volume may be a allusion people needing to root Android to change fonts or to take screenshots (untill version 4.x). [[User:FlavianusEP|FlavianusEP]] ([[User talk:FlavianusEP|talk]]) 23:04, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Alternative meaning: The spirit of xkcd'''&lt;br /&gt;
&lt;br /&gt;
I think there's a secondary possible interpretation for this comic -- that the various features of the phone represent the overall &amp;quot;spirit&amp;quot; or &amp;quot;attitude&amp;quot; of xkcd, in a way reminiscent of an early strip -- http://xkcd.com/207/ -- about &amp;quot;what xkcd means.&amp;quot; More specifically, a common theme in xkcd is taking advanced concepts in science and technology, and applying them to whimsical, humorous, impractical, or outright impossible uses. Several of phone's features -- such as the simulated speed of light -- touch on the same theme. Wordplay, another common xkcd theme, is present as well; and the anthropomorphism of technology, along with making devices appear 'cute', is also present, and also is something that has come up in xkcd many times in the past.&lt;br /&gt;
&lt;br /&gt;
The comic is called &amp;quot;xkcd Phone&amp;quot;, after all -- I think simultaneously with being a parody of phone advertisements, the comic is also meant to show us what a phone that fits into the xkcd world would be like. [[Special:Contributions/108.162.241.114|108.162.241.114]] 17:52, 3 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
I agree; it seems like a basic comic at first glance, but I'm wondering if there's a meta-meaning if you put all of the pieces together. Each feature, and warning, is a clue to the overarching purpose of the phone, or to the true joke that this phone embodies. [[User:Imtrbl|imtrbl]] ([[User talk:Imtrbl|talk]]) 19:11, 8 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Blowing out candles....'''&lt;br /&gt;
&lt;br /&gt;
For the birthday candles thing: I do remember seeing a video ad for an app back when the iphone was first opened up to outside developers that would turn the phone into a fan, and it demonstrated that it was strong enough to blow out a birthday candle. Seemed quite useless at the time. Still does today for that matter {{unsigned ip|108.162.215.47}}&lt;br /&gt;
&lt;br /&gt;
'''Side-facing camera'''&lt;br /&gt;
&lt;br /&gt;
I thought the joke here was that the camera ''only'' contained a side-facing camera rather than a side camera in addition to a front and back camera.  While you can see the camera on the side, you don't see a camera on the front and they don't talk about a rear camera.  It'd be pretty annoying to use a side-facing camera for anything but the surreptitious case you described. [[User:S|S]] ([[User talk:S|talk]]) 16:58, 4 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Do not submerge in water'''&lt;br /&gt;
&lt;br /&gt;
I assumed this was also referencing the 4chan, etc pranks with the waterproof iPhone [[Special:Contributions/173.245.55.79|173.245.55.79]] 18:41, 5 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Wireless'''&lt;br /&gt;
&lt;br /&gt;
A completely wireless phone would not be unuseable. The only wire phones need nowadays is for recharging their battery, but this can be done by induction, like with the Qi system with which some Nokia and Google (Nexus) phones are compatible. [[User:Zoyd|Zoyd]] ([[User talk:Zoyd|talk]]) 12:25, 6 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''FoxTrot reference?''' I think this one could be a reference to th J-Pad in the foxtrot webcomic http://www.foxtrot.com/2012/03/03042012/ . Foxtrot's author has already published a guest comic for XKCD a few years ago, so Randall should know about it. [[Special:Contributions/108.162.212.203|108.162.212.203]] 16:31, 11 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Cell Phones Cure Cancer?'''&lt;br /&gt;
&lt;br /&gt;
I know, I know, the editor who added &amp;quot;makes fun of the WHO claiming that cell phones might cause cancer despite huge studies showing the opposite&amp;quot; probably didn't mean that, but it's kind of amusing to interpret it that way. [[User:Nyperold|Nyperold]] ([[User talk:Nyperold|talk]]) 19:42, 15 June 2016 (UTC)&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=Talk:1363:_xkcd_Phone&amp;diff=187534</id>
		<title>Talk:1363: xkcd Phone</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=Talk:1363:_xkcd_Phone&amp;diff=187534"/>
				<updated>2020-02-20T01:37:12Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Special:Contributions/108.162.249.216|108.162.249.216]] 14:14, 4 August 2014 (UTC) Mobile phones haven't always been electronic. Remember the old 'bricks' from the '80s and '90s? More machine than computer.&lt;br /&gt;
&lt;br /&gt;
This seems like an SCP artifact [[Special:Contributions/108.162.249.220|108.162.249.220]] 10:09, 4 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
^someone get on this, please [[User:Whiskey07|Whiskey07]] ([[User talk:Whiskey07|talk]]) 16:28, 6 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
I really dislike the tone of the explanation. I mean it's so negative about the features! Not that they are all useful, but isn't this a wiki and should be neutral? It takes also the fun out of it. I would like a screaming while falling phone and the relativity thing would be great for teaching relativity! [[User:RecentlyChanged|RecentlyChanged]] ([[User talk:RecentlyChanged|talk]])&lt;br /&gt;
&lt;br /&gt;
Where can i get one of these? :D [[User:UniTrader|UniTrader]] ([[User talk:UniTrader|talk]]) 04:11, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
I'm pretty sure the &amp;quot;scream when falling&amp;quot; thing and the &amp;quot;flightaware&amp;quot; stuff can be done somehow with Tasker. [[Special:Contributions/141.101.103.206|141.101.103.206]] 04:23, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
;Designer?&lt;br /&gt;
&lt;br /&gt;
I suspect it was either Black Hat or Beret Guy, but I'm not sure which. A collaboration? [[Special:Contributions/173.245.54.45|173.245.54.45]] 04:47, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
This sounds like something straight out of aperture. {{unsigned ip|108.162.221.55}}&lt;br /&gt;
&lt;br /&gt;
;Simulates alternate speeds of light&lt;br /&gt;
&lt;br /&gt;
Yes, useless as a feature on all the time; but it would be a cool app. [[User:Markhurd|Mark Hurd]] ([[User talk:Markhurd|talk]]) 05:57, 2 May 2014 (UTC)&lt;br /&gt;
:Absolutely. Where can I get an app like that?[[Special:Contributions/108.162.225.157|108.162.225.157]] 06:22, 2 May 2014 (UTC)&lt;br /&gt;
::Here: [http://play.google.com/store/apps/details?id=com.pandorica.xkcdclock XKCD Clock] [[Special:Contributions/173.245.53.114|173.245.53.114]] 16:11, 7 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
Travelling at above the simulated speed of light should give an imaginary time dilation, not a negative time dilation.&lt;br /&gt;
gamma = 1/sqrt{1-v^2/c^2}&lt;br /&gt;
Thus, after such travel, the value of the clock would be a complex number. [[Special:Contributions/108.162.219.35|108.162.219.35]] 15:42, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
Does the alternate speed of light simulator also dynamically adjust the mass of the phone? Better yet, does it also dynamically adjust the mass of its comoving surroundings (person holding phone, vehicle phone is traveling in)?&lt;br /&gt;
&lt;br /&gt;
'''Changed the speed of light to 2.99x10^8'''&lt;br /&gt;
:You guys should probably clarify that the relativisic affects actually depend on how long your trip is or how long you wait to sync your phone.  For relativity to be observable on a 12 hour trip, Minimum speed for a phone would have to be 300 m/s or 3000 m/s for the clock to measure even a microsecond/millisecond difference in time. This is well known thanks to the certain  [https://en.wikipedia.org/wiki/Time_dilation#Velocity_and_gravitational_time_dilation_combined-effect_tests time dilation experiments with planes]. Your GPS chip helps account for an error of [https://en.wikipedia.org/wiki/Error_analysis_for_the_Global_Positioning_System#Relativity 7 to 47 microseconds per day]. My point is in terms of time dilation, relativity mattering depends on how long a trip or waiting for synchronization is. By synching, I literally mean with the atomic time clock or with a GPS satellite. The synchronization of your phone with satellites is actually a couple of hundred microseconds, so normally even a light changing clock might not have as noticable changes as you might think. [[Special:Contributions/108.162.238.225|108.162.238.225]] 13:49, 2 May 2014 (UTC) --[[Special:Contributions/108.162.238.225|108.162.238.225]] 13:49, 2 May 2014 (UTC)&lt;br /&gt;
Yeah sorry forgot to login. does anyone know how to do the indices formatting other than eg 2.99x10(littlex) rather then 2.99x10^x? [[User:Jonv4n|Jonv4n]] ([[User talk:Jonv4n|talk]]) 06:29, 2 May 2014 (UTC)&lt;br /&gt;
: Whas&amp;lt;sup&amp;gt;sup&amp;lt;/sup&amp;gt;? [[Special:Contributions/141.101.89.220|141.101.89.220]] 07:43, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
; relativistic effect&lt;br /&gt;
Forgive me if I'm wrong, I'm not a physicist but the above explanation says that relativistic time dilation affects only occur at a significant fraction of the speed of light. It is my understanding that time dilation occurs at any speed, but is only perceptible/noticeable/measurable at very large fraction of the speed of light. Unless I'm mistaken the above it should reflect this. [[Special:Contributions/173.245.56.91|173.245.56.91]] 22:24, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
; putting &amp;quot;Relative&amp;quot; back into relativity&lt;br /&gt;
First time poster, please forgive my transgressions :)&lt;br /&gt;
My understanding regarding relativistic effects is that, for a given frame of reference (e.g. phone operator travelling at 0.9c) would be absolutely none. Relativistic effects (as I understand them) would only apply between two different frames of reference. The only effect I can see in this case is if you are moving towards, or away from the phone while operating, and red/blue shift of the radio frequencies. In general, wifi and bluetooth are used locally so wouldn't apply; only the phone network would be affected.&lt;br /&gt;
&lt;br /&gt;
Also, perhaps the adjustable speed of light is a reference to the the game &amp;quot;A slower speed of light&amp;quot; by MIT Game Lab http://gamelab.mit.edu/games/a-slower-speed-of-light/ (in which you walk around collecting objects; each object slows light down, and increases relativistic effects).&lt;br /&gt;
[[User:Jaybee|Jaybee]] ([[User talk:Jaybee|talk]])&lt;br /&gt;
&lt;br /&gt;
'''Phone may attract/trap insects; this is normal.'''&lt;br /&gt;
Funnier if you take it as a reference to the [http://www.slate.com/blogs/the_slatest/2014/04/07/mazda_issues_recall_because_spiders_invade_fuel_tank_causing_fire_risk.html spider problems] Mazda keeps on having... {{unsigned ip|108.162.215.64}}&lt;br /&gt;
&lt;br /&gt;
About the attracting insects ... I would expect this to be normal feature in night. Trapping, however ... -- [[User:Hkmaly|Hkmaly]] ([[User talk:Hkmaly|talk]]) 09:08, 2 May 2014 (UTC)&lt;br /&gt;
: There are other indications that the phone is at least partly biological, this being the strongest evidence of that. Insects could be the power source for the biological part(s). [[Special:Contributions/173.245.54.45|173.245.54.45]] 14:07, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
This could also be a reference to computer bugs and the [http://commons.wikimedia.org/wiki/File:H96566k.jpg Harvard Mark II]. [[Special:Contributions/199.27.128.249|199.27.128.249]] 08:31, 15 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Siri'''&lt;br /&gt;
&lt;br /&gt;
Could the Siri bit be a reference to Portal?  When I first read it, I remembered this GLaDOS quote: &amp;quot;Your Aperture Science Weighted Companion Cube will never threaten to stab you, and in fact cannot speak. If your Weighted Companion Cube does speak, please disregard its advice.&amp;quot;  Could be completely wrong; just a thought.  [[Special:Contributions/173.245.54.51|173.245.54.51]] 10:09, 2 May 2014 (UTC)&lt;br /&gt;
:Perhaps Siri is beling likened to the &amp;quot;ATMOS&amp;quot; device in the Doctor Who episode &amp;quot;The Sontaran Stratagem&amp;quot; [[User:Esp666|Esp666]] ([[User talk:Esp666|talk]]) 11:20, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
Lamest. Comic. Ever. And I'm not just saying that because he doesn't mention the Ubuntu Touch OS. ''– [[User:Tbc|tbc]] ([[User talk:Tbc|talk]]) 12:22, 2 May 2014 (UTC)''&lt;br /&gt;
&lt;br /&gt;
'''Realistic case'''&lt;br /&gt;
&lt;br /&gt;
Car telephones and the first cellphones were rather expensive, at least in Germany fake &amp;quot;realistic cases&amp;quot; were sold without any working electronics in it. Usage was to impress silly friends. {{unsigned ip|173.245.52.204}}&lt;br /&gt;
&lt;br /&gt;
I thought this was aimed at the iPhone.  Apparently these have an elegant case, but I have never actually seen one.  Everyone I know covers their iPhone with some hideous plastic monstrosity, since the design is not practical.--[[Special:Contributions/108.162.218.59|108.162.218.59]] 14:10, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
Could possibly be a reference to the &amp;quot;Realistic&amp;quot; brand, which was used on various products sold by Radio Shack (U.S. electronics retail chain) from 1954 to some time in the '90s.[[Special:Contributions/108.162.219.76|108.162.219.76]] 16:14, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
I am not a native speaker of English. I thought the joke was on the double meaning of &amp;quot;case&amp;quot;, meaning both &amp;quot;something that happened or might happen&amp;quot; (like &amp;quot;realistic scenario&amp;quot;) and &amp;quot;something that covers something else&amp;quot;. Does that make sense to you guys? [[Special:Contributions/108.162.219.17|108.162.219.17]] 10:06, 6 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Screaming when in free fall: my first Android app!'''&lt;br /&gt;
&lt;br /&gt;
I love the bit about screaming when in free fall: that was the first Android app I hacked together back in 2009 (based on the tricorder app).  [[User:Nealmcb|Nealmcb]] ([[User talk:Nealmcb|talk]]) 13:49, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
 I actually made this app that simulates that [https://play.google.com/store/apps/details?id=it.siluxmedia.frefall freeFall app][[Special:Contributions/108.162.212.32|108.162.212.32]] 19:03, 30 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Title Text'''&lt;br /&gt;
&lt;br /&gt;
Hover-over title text was truncated; love it.&lt;br /&gt;
14:43, 2 May 2014 (UTC)[[User:Pocono Chuck|Pocono Chuck]] ([[User talk:Pocono Chuck|talk]])&lt;br /&gt;
: you must have an really old firefox browser -- you should update !!! [[Special:Contributions/199.27.130.210|199.27.130.210]] 16:23, 2 May 2014 (UTC)&lt;br /&gt;
:: Happened to me.  Using whatever the latest IE is at the moment.  It cut off at nause-. [[Special:Contributions/173.245.54.54|173.245.54.54]] 17:13, 3 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Price includes 2-year Knicks contract.''' ... but a contract with the Knicks would only appeal to pro basketball players.&lt;br /&gt;
&lt;br /&gt;
Nonsense.  Lots of &amp;quot;regular&amp;quot; folks would buy this phone it it meant they got to play in the NBA. [[Special:Contributions/199.27.128.84|199.27.128.84]] 16:26, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
: I agree with this.  A whole lot of people who think they have &amp;quot;skillz&amp;quot; would buy the phone if they got into the NBA. [[Special:Contributions/173.245.54.54|173.245.54.54]] 17:14, 3 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
This may also be an indirect way of stating that it is incredibly expensive, seeing as those sort of contracts usually involve ''you'' getting compensated. --[[Special:Contributions/108.162.216.33|108.162.216.33]] 13:41, 3 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Your mobile world just went digital&amp;quot; is an inversion of the marketing-speak that was common when what we'd now regard as smartphones first began to be adopted by the mainstream (iPhone/G1 era, since Symbians, Blackberries, and early WinMo tended to be business or enthusiast devices). People already ubiquitously e-mailed, browsed the Web, etc...what was presented as 'new' was that you could now do it from your phone. [[Special:Contributions/173.245.54.58|173.245.54.58]] 19:09, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
I think the &amp;quot;Under certain circumstances, wireless transmitter may control God&amp;quot; statement might be a reference to how transmitting devices have to comply with FCC regulation and not interfere with aircraft or government communications. Perhaps this phone is intended to be noncompliant so as to control high-level electronics, even at supernatural levels. [[Special:Contributions/173.245.56.66|173.245.56.66]] 21:11, 2 May 2014 (UTC)Dbrak&lt;br /&gt;
&lt;br /&gt;
'''Frictionless''' &lt;br /&gt;
&lt;br /&gt;
You could hold a frictionless phone just by hooking your little finger under the bottom edge, regardless of friction gravity will hold it into your hand. Just like you could leave it in a bowl without it jumping out. [[Special:Contributions/108.162.229.72|108.162.229.72]] 19:12, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
Unless you held your pinky perfectly balanced, horizontal and motionless, a frictionless object would slide right off it, as it would off any flat surface that is not perfectly horizontal.&lt;br /&gt;
14:13, 3 May 2014 (UTC) [[Special:Contributions/108.162.242.4|108.162.242.4]] 13:15, 3 May 2014 (UTC)DCollins&lt;br /&gt;
&lt;br /&gt;
Wouldn't you be able to hold it somewhat like a normal phone, if you hold a finger under the bottom of it? [[Special:Contributions/173.245.54.54|173.245.54.54]] 17:08, 3 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
Saying a frictionless phone can't be held is like saying prisoners would slide out of prisons if they had frictionless surfaces.  [[Special:Contributions/108.162.237.218|108.162.237.218]] 14:50, 4 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
If it was frictionless, it would be only slightly harder to hold than a wet bar of soap. --[[Special:Contributions/108.162.219.186|108.162.219.186]] 22:56, 22 September 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
;Why the hell this funny phone isn't available at the xkcd store?&lt;br /&gt;
I would buy if the price would be in the range of other articles there. Just for fun...--[[User:Dgbrt|Dgbrt]] ([[User talk:Dgbrt|talk]]) 19:30, 2 May 2014 (UTC)&lt;br /&gt;
:Yeah, having a phone that causes symptoms of radiation poisoning would be very funny indeed. [[Special:Contributions/172.68.10.44|172.68.10.44]] 11:03, 16 July 2016 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Root needed'''&lt;br /&gt;
&lt;br /&gt;
I think that needing root for ajust the volume may be a allusion people needing to root Android to change fonts or to take screenshots (untill version 4.x). [[User:FlavianusEP|FlavianusEP]] ([[User talk:FlavianusEP|talk]]) 23:04, 2 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Alternative meaning: The spirit of xkcd'''&lt;br /&gt;
&lt;br /&gt;
I think there's a secondary possible interpretation for this comic -- that the various features of the phone represent the overall &amp;quot;spirit&amp;quot; or &amp;quot;attitude&amp;quot; of xkcd, in a way reminiscent of an early strip -- http://xkcd.com/207/ -- about &amp;quot;what xkcd means.&amp;quot; More specifically, a common theme in xkcd is taking advanced concepts in science and technology, and applying them to whimsical, humorous, impractical, or outright impossible uses. Several of phone's features -- such as the simulated speed of light -- touch on the same theme. Wordplay, another common xkcd theme, is present as well; and the anthropomorphism of technology, along with making devices appear 'cute', is also present, and also is something that has come up in xkcd many times in the past.&lt;br /&gt;
&lt;br /&gt;
The comic is called &amp;quot;xkcd Phone&amp;quot;, after all -- I think simultaneously with being a parody of phone advertisements, the comic is also meant to show us what a phone that fits into the xkcd world would be like. [[Special:Contributions/108.162.241.114|108.162.241.114]] 17:52, 3 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
I agree; it seems like a basic comic at first glance, but I'm wondering if there's a meta-meaning if you put all of the pieces together. Each feature, and warning, is a clue to the overarching purpose of the phone, or to the true joke that this phone embodies. [[User:Imtrbl|imtrbl]] ([[User talk:Imtrbl|talk]]) 19:11, 8 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Blowing out candles....'''&lt;br /&gt;
&lt;br /&gt;
For the birthday candles thing: I do remember seeing a video ad for an app back when the iphone was first opened up to outside developers that would turn the phone into a fan, and it demonstrated that it was strong enough to blow out a birthday candle. Seemed quite useless at the time. Still does today for that matter {{unsigned ip|108.162.215.47}}&lt;br /&gt;
&lt;br /&gt;
'''Side-facing camera'''&lt;br /&gt;
&lt;br /&gt;
I thought the joke here was that the camera ''only'' contained a side-facing camera rather than a side camera in addition to a front and back camera.  While you can see the camera on the side, you don't see a camera on the front and they don't talk about a rear camera.  It'd be pretty annoying to use a side-facing camera for anything but the surreptitious case you described. [[User:S|S]] ([[User talk:S|talk]]) 16:58, 4 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Do not submerge in water'''&lt;br /&gt;
&lt;br /&gt;
I assumed this was also referencing the 4chan, etc pranks with the waterproof iPhone [[Special:Contributions/173.245.55.79|173.245.55.79]] 18:41, 5 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Wireless'''&lt;br /&gt;
&lt;br /&gt;
A completely wireless phone would not be unuseable. The only wire phones need nowadays is for recharging their battery, but this can be done by induction, like with the Qi system with which some Nokia and Google (Nexus) phones are compatible. [[User:Zoyd|Zoyd]] ([[User talk:Zoyd|talk]]) 12:25, 6 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''FoxTrot reference?''' I think this one could be a reference to th J-Pad in the foxtrot webcomic http://www.foxtrot.com/2012/03/03042012/ . Foxtrot's author has already published a guest comic for XKCD a few years ago, so Randall should know about it. [[Special:Contributions/108.162.212.203|108.162.212.203]] 16:31, 11 May 2014 (UTC)&lt;br /&gt;
&lt;br /&gt;
'''Cell Phones Cure Cancer?'''&lt;br /&gt;
&lt;br /&gt;
I know, I know, the editor who added &amp;quot;makes fun of the WHO claiming that cell phones might cause cancer despite huge studies showing the opposite&amp;quot; probably didn't mean that, but it's kind of amusing to interpret it that way. [[User:Nyperold|Nyperold]] ([[User talk:Nyperold|talk]]) 19:42, 15 June 2016 (UTC)&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2034:_Equations&amp;diff=161297</id>
		<title>2034: Equations</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2034:_Equations&amp;diff=161297"/>
				<updated>2018-08-17T05:00:42Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2034&lt;br /&gt;
| date      = August 17, 2018&lt;br /&gt;
| title     = Equations&lt;br /&gt;
| image     = equations.png&lt;br /&gt;
| titletext = All electromagnetic equations: The same as all fluid dynamics equations, but with the 8 and 23 replaced with the permittivity and permeability of free space, respectively.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a BOT - Please change this comment when editing this page. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
{{incomplete transcript|Do NOT delete this tag too soon.}}&lt;br /&gt;
[TODO: Avoid using math markup here because the images of these equations isn't helpful in a transcript. Sigh.]&lt;br /&gt;
[Nine equations are listed and labeled as followed:]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;E = K_0t + \frac{1}{2}pvt^2&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL KINEMATICS EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;K_n = \sum_{i=0}^{\infty}\sum_{\pi=0}^{\infty}(n-\pi)(i-e^{\pi-\infty})&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL NUMBER THEORY EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\frac{\partial}{\partial t}\triangledown\cdot p = \frac{8}{23}&lt;br /&gt;
\int\!\!\!\!\int\!\!\!\!\!\!\!\!\!\!\!\!\!\!\;\;\;\bigcirc\,\,&lt;br /&gt;
\rho\,ds\,dt\cdot \rho\frac{\partial}{\partial\triangledown}&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL FLUID DYNAMIC EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;?placeholder?&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL QUANTUM MECHANIC EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;CH+4 + OH + HEAT \rightarrow H_2O + CH_2 + H_2 EAT&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL CHEMISTRY EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;?placeholder?&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL QUANTUM GRAVITY EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;?placeholder?&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL GAUGE THEORY EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;H(t) + \Omega + G \cdot \land \, ... \begin{cases} ... &amp;gt; 0 &amp;amp; \text{(HUBBLE MODEL)} \\ ... = 0 &amp;amp; \text{(FLAT SPHERE MODEL)} \\ ... &amp;lt; 0  &amp;amp; \text{(BRIGHT DARK MATTER MODEL)} \end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL COSMOLOGY EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\hat H - \mu_{0} = 0&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL TRULY DEEP PHYSICS EQUATIONS&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Math]]&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2034:_Equations&amp;diff=161295</id>
		<title>2034: Equations</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2034:_Equations&amp;diff=161295"/>
				<updated>2018-08-17T04:54:27Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: Fleshing out the transcript a bit more, adding last two equations.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2034&lt;br /&gt;
| date      = August 17, 2018&lt;br /&gt;
| title     = Equations&lt;br /&gt;
| image     = equations.png&lt;br /&gt;
| titletext = All electromagnetic equations: The same as all fluid dynamics equations, but with the 8 and 23 replaced with the permittivity and permeability of free space, respectively.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a BOT - Please change this comment when editing this page. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
{{incomplete transcript|Do NOT delete this tag too soon.}}&lt;br /&gt;
[Nine equations are listed and labeled as followed:]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;E = K_0t + \frac{1}{2}pvt^2&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL KINEMATICS EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;K_n = \sum_{i=0}^{\infty}\sum_{\pi=0}^{\infty}(n-\pi)(i-e^{\pi-\infty})&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL NUMBER THEORY EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\frac{\partial}{\partial t}\triangledown\cdot p = \frac{8}{23}&lt;br /&gt;
\int\!\!\!\!\int\!\!\!\!\!\!\!\!\!\!\!\!\!\!\;\;\;\bigcirc\,\,&lt;br /&gt;
\rho\,ds\,dt\cdot \rho\frac{\partial}{\partial\triangledown}&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL FLUID DYNAMIC EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;?placeholder?&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL QUANTUM MECHANIC EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;?placeholder?&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL CHEMISTRY EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;?placeholder?&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL QUANTUM GRAVITY EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;?placeholder?&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL GAUGE THEORY EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;H(t) + \Omega + G \cdot \land \, ... \begin{cases} ... &amp;gt; 0 &amp;amp; \text{(HUBBLE MODEL)} \\ ... = 0 &amp;amp; \text{(FLAT SPHERE MODEL)} \\ ... &amp;lt; 0  &amp;amp; \text{(BRIGHT DARK MATTER MODEL)} \end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL COSMOLOGY EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\hat H - \mu_{0} = 0&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL TRULY DEEP PHYSICS EQUATIONS&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{{comic discussion}}&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2034:_Equations&amp;diff=161294</id>
		<title>2034: Equations</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2034:_Equations&amp;diff=161294"/>
				<updated>2018-08-17T04:47:31Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: Fleshing out the transcript a bit more.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2034&lt;br /&gt;
| date      = August 17, 2018&lt;br /&gt;
| title     = Equations&lt;br /&gt;
| image     = equations.png&lt;br /&gt;
| titletext = All electromagnetic equations: The same as all fluid dynamics equations, but with the 8 and 23 replaced with the permittivity and permeability of free space, respectively.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a BOT - Please change this comment when editing this page. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
{{incomplete transcript|Do NOT delete this tag too soon.}}&lt;br /&gt;
[Nine equations are listed and labeled as followed:]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;E = K_0t + \frac{1}{2}pvt^2&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL KINEMATICS EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;K_n = \sum_{i=0}^{\infty}\sum_{\pi=0}^{\infty}(n-\pi)(i-e^{\pi-\infty})&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL NUMBER THEORY EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\frac{\partial}{\partial t}\triangledown\cdot p = \frac{8}{23}&lt;br /&gt;
\int\!\!\!\!\int\!\!\!\!\!\!\!\!\!\!\!\!\!\!\;\;\;\bigcirc\,\,&lt;br /&gt;
\rho\,ds\,dt\cdot \rho\frac{\partial}{\partial\triangledown}&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL FLUID DYNAMIC EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;?placeholder?&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL QUANTUM MECHANIC EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;?placeholder?&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL CHEMISTRY EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;?placeholder?&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL QUANTUM GRAVITY EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;?placeholder?&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL GAUGE THEORY EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;?placeholder?&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL COSMOLOGY EQUATIONS&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\hat H - \mu_{0} = 0&amp;lt;/math&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
ALL TRULY DEEP PHYSICS EQUATIONS&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{{comic discussion}}&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2034:_Equations&amp;diff=161293</id>
		<title>2034: Equations</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2034:_Equations&amp;diff=161293"/>
				<updated>2018-08-17T04:23:27Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: Adding the third equations to transcript.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2034&lt;br /&gt;
| date      = August 17, 2018&lt;br /&gt;
| title     = Equations&lt;br /&gt;
| image     = equations.png&lt;br /&gt;
| titletext = All electromagnetic equations: The same as all fluid dynamics equations, but with the 8 and 23 replaced with the permittivity and permeability of free space, respectively.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a BOT - Please change this comment when editing this page. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
{{incomplete transcript|Do NOT delete this tag too soon.}}&lt;br /&gt;
&amp;lt;math&amp;gt;E = K_0t + \frac{1}{2}pvt^2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;K_n = \sum_{i=0}^{\infty}\sum_{\pi=0}^{\infty}(n-\pi)(i-e^{\pi-\infty})&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;\frac{\partial}{\partial t}\triangledown\cdot p = \frac{8}{23}&lt;br /&gt;
\int\!\!\!\!\int\!\!\!\!\!\!\!\!\!\!\!\!\!\!\;\;\;\bigcirc\,\,&lt;br /&gt;
\rho\,ds\,dt\cdot \rho\frac{\partial}{\partial\triangledown}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2034:_Equations&amp;diff=161292</id>
		<title>2034: Equations</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2034:_Equations&amp;diff=161292"/>
				<updated>2018-08-17T04:12:49Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: Adding the first two equations to transcript.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2034&lt;br /&gt;
| date      = August 17, 2018&lt;br /&gt;
| title     = Equations&lt;br /&gt;
| image     = equations.png&lt;br /&gt;
| titletext = All electromagnetic equations: The same as all fluid dynamics equations, but with the 8 and 23 replaced with the permittivity and permeability of free space, respectively.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a BOT - Please change this comment when editing this page. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
{{incomplete transcript|Do NOT delete this tag too soon.}}&lt;br /&gt;
&amp;lt;math&amp;gt;E = K_0t + \frac{1}{2}pvt^2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;K_n = \sum_{i=0}^{\infty}\sum_{\pi=0}^{\infty}(n-\pi)(i-e^{\pi-\infty})&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=Talk:692:_Dirty_Harry&amp;diff=160565</id>
		<title>Talk:692: Dirty Harry</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=Talk:692:_Dirty_Harry&amp;diff=160565"/>
				<updated>2018-07-27T18:07:13Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;There's &amp;quot;instantly&amp;quot; twice in a sentence. Because I'm not a native english speaker, I don't know if this is acceptable, and for the same reason I'll not edit it. {{unsigned ip|108.162.212.196}}&lt;br /&gt;
:You were correct, it isn't. Actually that whole sentence bothers me, but I suppose it gets the point across well enough. [[Special:Contributions/108.162.221.50|108.162.221.50]] 07:23, 25 January 2014 (UTC)&lt;br /&gt;
*Technically even directed-energy weapons would run out of shots eventually, since they tend to have batteries, and batteries don't last forever. I suppose you could get around this by using solar power or something, but you would need solar panels larger than the gun itself, most likely. [[User:Jake|Jake]] ([[User talk:Jake|talk]]) 15:27, 11 May 2015 (UTC)&lt;br /&gt;
::It's not that it would have an unlimited amount of shots, it's that they wouldn't be limited in the same way that conventional existing firearms are. Most guns you can find out how many bullets are in them, even with made up firearms you can make an educated guess, based on the size of various things about the gun. With an energy weapon you could hypothetically have a gun with five shots or five hundred shots in the same size battery, depending on whatever factors your sci-fi bothers with (although pretty much all guns in media have [plot] number of shots regardless). -Pennpenn [[Special:Contributions/108.162.250.162|108.162.250.162]] 06:49, 20 July 2015 (UTC)&lt;br /&gt;
::Plus, rather than firing individual bullets, it would shoot a steady beam. That way there's nothing to count. [[Special:Contributions/108.162.238.77|108.162.238.77]] 17:03, 6 May 2017 (UTC)&lt;br /&gt;
Nice try, title text; clearly you have yet to meet The Borg. Bonus points for shields annoying Rainman by adapting to plot velocity instead of total count. [[User:Elvenivle|Elvenivle]] ([[User talk:Elvenivle|talk]]) 03:46, 1 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
I removed the sentence, &amp;quot;It should be noted that revolvers don't actually use the 6th slot for reasons of safety&amp;quot; because it's funny but people might think it's true when as far as I can tell it's completely silly. 5 &amp;amp; 7 round capacity revolvers exist, but they're designed that way from the get go--[[Special:Contributions/172.69.23.27|172.69.23.27]] 18:07, 27 July 2018 (UTC)&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=Talk:2021:_Software_Development&amp;diff=160232</id>
		<title>Talk:2021: Software Development</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=Talk:2021:_Software_Development&amp;diff=160232"/>
				<updated>2018-07-19T03:36:30Z</updated>
		
		<summary type="html">&lt;p&gt;172.69.23.27: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--Please sign your posts with ~~~~ and don't delete this text. New comments should be added at the bottom.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It seems to me that the cannon is a metaphor for powerful hardware. The drill is a metaphor for elegant and efficient code. The computer is so powerful that the fact that the elegance or efficiency of the code is irrelevant to how it is actually used.[[User:Zeimusu|Zeimusu]] ([[User talk:Zeimusu|talk]]) 15:48, 18 July 2018 (UTC)&lt;br /&gt;
&lt;br /&gt;
Hi, first time posting ;)&lt;br /&gt;
To me it seems that the Title text is an example how after some time and many updates the original solution becomes some kind of abomination. Used in abstruse ways for something it was never intended for just because it works and is a quick and simple fix. After some time one always ends up doing unnecessary and arbitrary things in order to get what you actually wanted to achive. Like loading projectiles into a cannon just to use it as a battering ram. {{unsigned ip|162.158.91.137}}&lt;br /&gt;
&lt;br /&gt;
Don't forget the fact that no one wants to figure out how to use the elegant drill, but instead use it for its most obvious if least elegant piece--the stationary pointy bit. -Todd 7/18/2018 17:32 UTC {{unsigned ip|172.69.69.88}}&lt;br /&gt;
&lt;br /&gt;
The way I understand this, Hairy had the cannon done already to make holes in the wall, the typical brute force solution to the problem. But he needed ammo of a certain weight and gave that task to Cueball. Cueball then made a drill, an elegant solution that would do the job better than the canon. Hairy sees the drill and doesn't care about all the fancy functions, all he needed was an object of the proper weight to put 500 of them in the already built cannon. In programming, this shows either a reluctance from Hairy to adapt to the better solution and insist on using the brute force approach. Or, it shows how often programmers tend to make things way more complicated than is needed. Cueball went to remake a new solution for the problem when all he was supposed to do was make a cannonball of the proper weight.-Vince23 17:46, 18 July 2018 (UTC) {{unsigned|Vince23}}&lt;br /&gt;
&lt;br /&gt;
This also shows the results of not clearly defining terms. Cueball interpreted 'drill' to mean 'a hand held drilling machine' whilst Hairy toolkit to mean a 'drill bit'. So when Cueball delivers his component, Hairy just uses it as a 'dumb' piece of ammo. [[User:RIIW - Ponder it|RIIW - Ponder it]] ([[User talk:RIIW - Ponder it|talk]]) 22:31, 18 July 2018 (UTC)&lt;br /&gt;
&lt;br /&gt;
Automatic-Drill Cannon is my new favorite impractical weapon. [[User:ProphetZarquon|ProphetZarquon]] ([[User talk:ProphetZarquon|talk]]) 01:44, 19 July 2018 (UTC)&lt;br /&gt;
&lt;br /&gt;
Sorry if this is amazingly off topic, but is that an automatic-drill cannon or an automatic drill-cannon? Like a Gatling gun for power tools?&lt;/div&gt;</summary>
		<author><name>172.69.23.27</name></author>	</entry>

	</feed>