2323: Modeling Study

Title text: You've got questions, we've got assumptions.

Explanation

This explanation may be incomplete or incorrect: Created by an ABSTRACTLY MODELED BOT. Please mention here why this explanation isn't complete. Do NOT delete this tag too soon.
If you can address this issue, please edit the page! Thanks.

This comic draws a humorous comparison between two common types of scientific studies: empirical research, in which an experiment is designed to test a scientific theory, and mathematical modeling, in which mathematical formulations are produced to predict how physical systems behave under given circumstances. In empirical studies, hard questions about the limitations of existing theory tend to be addressed in the abstract, the brief summary presented at the beginning of most scientific articles. In modeling studies, assumptions based on existing theory are built into the model, and any problems with those assumptions tend to be discussed in the methods section, which describes how the experiment (for an empirical study) or the model (for a modeling study) was designed and the reasoning behind the choices made. In the empirical study, the proverbial "big red problem box" is stated up front, where everyone who finds the paper will read it; in the modeling study, it is buried in the middle of the paper, where it is less likely to be read.

The caption opens like a typical statement in favor of modeling studies, "A mathematical model is a powerful tool for taking hard problems," but while a researcher who works with models might go on to say "...and breaking them down," or "...and studying them in ways that would be impractical for empirical studies," Randall concludes that they can't actually make hard problems any easier. His title text, "You've got questions, we've got assumptions," plays on the usual platitude of "You've got questions, we've got answers" by pointing out that any answers provided are built on assumptions by the modelers. In other words, garbage in, garbage out.

For a more concrete example, consider the 2020 COVID-19 pandemic. Empirical studies measure things like infections, hospitalizations, and deaths, along with the circumstances that lead to those events, and attempt to answer questions about how COVID-19 spreads, which measures are effective in preventing its transmission, what those measures' other costs and side effects are, and which therapies are effective in treating cases. Such studies are made difficult by gaps in testing capability, the imperfections of the tests that are available, and the fact that all of the conditions of society are interconnected and constantly changing: there is no "control universe" and no way to go back and try different ideas. Modeling studies offer the possibility of simulating thousands or millions of possible pandemics, in the hope of working out those variables' effects in advance and offering guidance to governments and health workers. But without specific knowledge of COVID-19's properties, especially in the early days of the pandemic, modelers must make assumptions about how COVID-19 spreads, kills, and is (or is not) treated. For pandemics especially, which grow exponentially until they are brought under control (or the pathogen burns through its host population), even small changes in model assumptions can lead to orders-of-magnitude differences between equally plausible predictions (such as predicted deaths falling from half a million to 20,000). Even if all such predictions are made earnestly, with the best available information, this divergence can lead to distrust of the models and their results, especially if the models are presented to non-experts with too much certainty.
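As a minimal sketch of that sensitivity, consider a naive exponential-growth toy, far simpler than any real epidemiological model; the reproduction numbers and case counts below are illustrative assumptions, not actual COVID-19 estimates:

 # Toy model only: every number here is an illustrative assumption,
 # not an estimate from any real COVID-19 study.
 def predicted_infections(r0, generations, initial_cases=100):
     """Naive exponential growth: each case infects r0 others per generation."""
     return initial_cases * r0 ** generations

 for r0 in (1.5, 2.0, 2.5):  # three equally plausible assumed values
     print(f"R0 = {r0}: ~{predicted_infections(r0, 15):,.0f} infections after 15 generations")
 # The projections span more than three orders of magnitude (roughly 4e4 to 9e7),
 # even though the assumed R0 values differ by less than a factor of two.

The point survives in real models with many more parameters: when growth is exponential, modest disagreements about inputs compound into enormous disagreements about outputs.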

Transcript

This transcript is incomplete. Please help by editing it! Thanks.

(There are two columns)

(The column on the left is a piece of paper labeled "EMPIRICAL STUDY". The paper consists of the sections "ABSTRACT", "INTRODUCTION", "METHODS", "RESULTS", and "DISCUSSION". Each section consists of several horizontal lines meant to represent blocks of text. In the middle of the "ABSTRACT" section, there is a large red rectangle. Inside this rectangle is the word "PROBLEM" in large red letters.)

(The column on the right is a piece of paper labeled "MODELING STUDY". It consists of the same sections, but the large red rectangle with the word "PROBLEM" is in the "METHODS" section instead of the "ABSTRACT" section.)

(There is a curvy arrow pointing from the red box in the paper on the left to the red box in the paper on the right.)

(The caption reads "A MATHEMATICAL MODEL IS A POWERFUL TOOL FOR TAKING HARD PROBLEMS AND MOVING THEM TO THE METHODS SECTION.")



Discussion

Also known as

I still have no clue about my subject, partly because I devised this study when I knew even less, but I need to write a paper anyway or I can never finish my PhD programme ...

vs.

I have now fiddled for four years with my model assumptions to get the data to fit without, well, fiddling with the data, so please bear with me and my paper, and for heaven's sake graduate me so I can save what is left of my soul and sanity ...  ;-) --162.158.94.94 20:23, 22 June 2020 (UTC)

One of my friends who studied thermal engineering remarked that if his model agreed with the test data to within ten degrees, it was acceptable, but if it agreed to less than five degrees, he was suspicious, because it was probably over-fit to the peculiarities of his thermal chamber, thermocouple placement, and so on, and less applicable for the system's real operational environment. --NotaBene (talk) 23:40, 22 June 2020 (UTC)
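(A toy demonstration of that suspicion, using invented numbers rather than anything from an actual thermal model: a very flexible fit can hug one chamber's noise while gaining nothing on independent measurements.)

 # Hypothetical over-fitting illustration; the "temperatures" are invented.
 import numpy as np

 rng = np.random.default_rng(0)
 x = np.linspace(0.0, 1.0, 20)
 truth = 25 + 30 * x                       # pretend true temperature response
 train = truth + rng.normal(0, 2, x.size)  # chamber A: noisy measurements
 test = truth + rng.normal(0, 2, x.size)   # chamber B: same physics, fresh noise

 for degree in (1, 9):
     fit = np.polyval(np.polyfit(x, train, degree), x)
     print(f"degree {degree}: train RMS = {np.sqrt(np.mean((fit - train) ** 2)):.2f}, "
           f"test RMS = {np.sqrt(np.mean((fit - test) ** 2)):.2f}")
 # The degree-9 fit tracks its own chamber's noise (smaller train error)
 # but does no better on the independent measurements.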
We got trolled by our physics teacher in high school, during a calorimetry experiment (where you measure the changes in temperature of a system). All our measurements were way off from theoretical results, so we "adjusted" the reported values to make them fit the expected curve. Unfortunately, the prof knew that the thermometers were too inaccurate to produce precise results, so it was more of a test of our honesty, which we all failed miserably :-/ 162.158.158.167 13:19, 23 June 2020 (UTC)
In our Physics A-Level (normal post-Secondary and pre-University stage, for non-UKians) class, the 'trick' played on us was a 'black box' of components whose resistance/impedance/whatever we had to record when raising the voltage (can't recall if DC or AC) from zero up to a level and then back down again. One of a number of such tests, to be done by rotating around the lab, you'd be tempted to just run it up and record the way down as a mirror image, or fudgingly near. Except that there was some sort of latching trip, once a given voltage went through the box, that changed the circuit significantly on the return trip. Only the honest (and possibly honest enough to show the 'error' that crept in, when thinking they'd messed it up - and no time to rerun it from scratch!) gave in the two-slope graph or whatever the true record was. Can't tell you whether I was a Goody-Two-Shoes or not, though I like to think I was (and would have known about Zener diodes and self-reinforcing flip-flop circuits, which this may have crudely used). 141.101.98.130 23:44, 23 June 2020 (UTC)
Looks like your teacher was trying to illustrate the principle of hysteresis
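(For the curious, a tiny sketch of that behaviour; the trip thresholds and the latch below are invented for illustration, not the actual circuit from the class.)

 # Toy latch with hysteresis: the output depends on sweep direction.
 def latched_output(voltages, trip_up=7.0, trip_down=3.0):
     """Flip on above trip_up, off below trip_down, otherwise hold state."""
     state, out = False, []
     for v in voltages:
         if v >= trip_up:
             state = True
         elif v <= trip_down:
             state = False
         out.append(state)
     return out

 sweep = list(range(0, 11)) + list(range(9, -1, -1))  # 0..10 V up, then back down
 for v, s in zip(sweep, latched_output(sweep)):
     print(f"V={v:2d} -> {'HIGH' if s else 'LOW'}")
 # Going up, the output flips HIGH at 7 V; coming down, it stays HIGH
 # until 3 V, so the two traces are not mirror images.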
In A-level Chemistry, between the filthy glassware, M/1000 silver nitrate, and contaminated reagents, you would always get brown gunge instead of a red precipitate. But you quickly learned that you were supposed to falsify the data if you wanted a good mark. 162.158.78.92 16:06, 24 June 2020 (UTC)
That's the point. If the teacher knows this (why wouldn't they?) and wants to honour scientific behaviour, which includes negative results and sometimes even reflecting on the unexpected outcomes of an experiment, they can set up an experiment that will always yield results that appear wrong, to see who still reports the expected result. --Lupo (talk) 05:24, 25 June 2020 (UTC)
The scientific Kobayashi Maru. 172.69.35.185 00:03, 24 December 2020 (UTC)
I once proofread a master's thesis for which an experimental setup was built to optimize a problem in certain network arrangements (basically a laboratory with 15 desktop PCs, communicating with each other on a specific protocol, etc.). The guy who wrote it found out on the first afternoon after setting it up that the professor who had found and described the problem he was about to tackle had made a mistake, and the problem didn't exist. By that time he had already - due to university standards - handed in the title of his thesis. While negative results in research are also good results, the problem is that, by the same university's standards, his master's thesis had to be a certain length - if I remember correctly, at least 50 pages in small font, excluding data and images. He managed to stretch his afternoon's work and some subsequent tests on it to the required number of pages, though. I am sure there is a lesson to be learned here, but... I haven't figured it out yet. --Lupo (talk) 05:37, 23 June 2020 (UTC)
Well, I guess the most important lesson would be that "minimum length of text" is not a good requirement for any academic work. ;) Elektrizikekswerk (talk) 06:50, 23 June 2020 (UTC)
No. The most important lesson is "always name your thesis vaguely enough that you can scale the content between 5% and 2000% of what you originally planned to do". -- Hkmaly (talk) 22:15, 23 June 2020 (UTC)
The lesson is that people are so used to blindly following rules, instead of considering whether the reasons for the rules are relevant and appropriate, that this community produced a thesis that fulfilled few of the purposes of writing one. Usually you would simply explain that you need to change the title of the paper, and this would be accepted because it makes so much sense. If it isn't, there is some higher-up who would support you over anything that ensued. 162.158.62.179 23:49, 25 June 2020 (UTC)
That is true for most assignments, but not for a master's thesis, which is - at least here in Germany - a very strict process that has to hold up legally for your whole career. So fiddling with the process can result in someone sabotaging you for it decades later. --Lupo (talk) 10:32, 6 July 2020 (UTC)

Various "<Problem> Denier" groups, (Climate Change, Covid, other things not necessarily starting with "C") do tend to lose their shit over "models" that aren't right (whether 1% out or 50%, they'll take any 'error', or just the failure to model what happened later because the model was heeded and behaviours changed to avoid the outcome) ironically using their clutched-at-straws to model all future models as wrong/intentionally-misleading-for-nefarious-intent. They also misunderstand the models (witness them dragging out old "85% chance Hillary will win" predictions against the roughly(-and-slightly-more-than) 50% of the votes she got - a different measure and far from incompatible with the other), whether innocently or deliberately, to 'prove' their point. And that's just done by regular Joes/Josephines. I'm sure you can be far more competently incompetent in your modelling (i.e. sneak sneaky shit past more and more learned people) if you're an actual modeller yourself who feels the need to drive towards an end for which you then look for the means. (Or modes, or medians.) 162.158.155.168 11:58, 23 June 2020 (UTC)

I'm nearly 18 hours late reading this comic, but the above is exactly why I'm so surprised to see it. Given Randall's apparent faith in mathematical modeling from other comics that this should be linked to (including the infamous vertical hockey stick temperature graph stretching back several millennia, and all the pro-Hillary bandwagon comics) I found this comic shocking in the extreme- he clearly knows the limitation of the method, and yet is still a true believer. Either that or he's finally growing up on the "A man who is not a liberal when he is young has no heart, a man who is not a conservative when he is old has no brains" spectrum. Seebert (talk) 13:27, 23 June 2020 (UTC)
You seem to have taken the exact opposite of the message of the post above you. The point was that the science is accurate--the problem was people interpreting it wrong. They didn't get that Trump's 85 percent chance of losing meant he'd win roughly 1 in 7 times--only a little less than the probability that you roll a 1 on a single die. People mixed up his chances of winning with what percentage of the vote he'd get. Plus they lack an intuitive sense of how percentages work, which is why FiveThirtyEight moved to using "1 in X" numbers instead.
And I have no idea what any of this has to do with political beliefs: thinking models are inaccurate wouldn't make you change political philosophies. Plus, well, the aphorism you gave has been found to be untrue--it's quite uncommon for liberals or progressives to become more conservative as they age. What does happen is that what counts as progressive changes, which makes sense. The whole concept is trying to make progress, of continually changing. Saying women should be able to vote was progressive in the 1920s, for example. It's not now.
Anyways, I hope I've fought some misconceptions. I find a lot of our disagreements are based on these sorts of things, so I make it my goal to clear this stuff up--even if it means I sometimes come off like a know-it-all. Trlkly (talk) 07:03, 24 June 2020 (UTC)
Well, whoever makes statements like the one paraphrased above from the 2016 US election, or merely one like "there is a 75% chance of rain tomorrow", is a moronic pseudoscientist, and ought to be flogged, tarred, feathered, and sentenced to clean out public toilets 8h/d for two months, in that order. Such "measures" (of course they aren't; they are merely a statement about how firmly one believes in his model extrapolating past measurement results into the future) have only one advantage for the "statistician" and newspapers: they can never be proved wrong. --162.158.92.44 20:38, 23 June 2020 (UTC)
How utterly ridiculous! There is NO way I want the person cleaning any toilet seat that I am going to use to be covered in tar and feathers. That stuff is catching. If anyone is getting treated like that, it HAS to be in a different order. 162.158.159.66 08:07, 25 August 2020 (UTC)
Ummm. If you run a thousand related but variously hedging weather simulations and 750 of them suggest rain (for a given set of criteria - temporal, geographical and terminological limits), then there's a 75% chance of rain. This doesn't mean it'll rain only 75% of the typical raincloud or be raining steadily for just 45 minutes in any hour. And the same with polling. No, you can't prove it "wrong" (unless you said a 0% or 100% chance and it did or did not happen; anyone who said such things would be taking their own risk), and that's the point. If the models suggest a majority of at least one vote (EC, ideally, but based on the balloting levels) for one party in 85% of circumstances, it is valid to suggest an 85% chance. However tightly packed the scatter is across all half-reasonable patterns. (Which can be enumerated, for those that understand the enumerations, but how many who don't understand the original figure would understand any additional ones?) So you can't prove it wrong, just an unfortunate 'miss' (like a bet that two dice won't come up snake-eyes; even more certain, but it still sometimes fails to go the promised way), and yet some would say it invalidates all modelling - at least, all modelling they don't like the look of. They'll happily use spurious/selective models that seem to share their viewpoint. (As will many different people with many different viewpoints, of course. Hopefully enough people consider enough competent models to appreciate enough of the true uncertainty. But I'm not sure the models support the more optimistic levels of 'enough'.) 141.101.98.130 23:44, 23 June 2020 (UTC)
You'd judge a model by how well it predicts reality. If there is rain 75% of the time the weatherman says there is a 75% chance of rain, then they are using a good model and are right. You can write down their predictions and check this. (you have to combine both approaches). See comment by Seebert below. 162.158.62.179 23:59, 25 June 2020 (UTC)
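(A sketch of that bookkeeping, with simulated days standing in for a real forecast log; the 75% figure and day count are arbitrary choices for illustration.)

 # Simulate a well-calibrated forecaster to show how a "75% chance" claim
 # can be scored against reality in aggregate, though never on a single day.
 import random

 random.seed(1)
 days = 1000
 rained = [random.random() < 0.75 for _ in range(days)]  # days with a 75% forecast
 print(f"Forecast 75% on {days} days; it rained on {sum(rained) / days:.1%} of them.")
 # A calibrated forecaster's "75%" days should see rain close to 75% of the
 # time; persistent deviation over many such days means the model is off.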

Dilbert makes the same point the next morning, in a slightly different way. --Seebert (talk) 13:30, 23 June 2020 (UTC)