3026: Linear Sort
Title text: The best case is O(n), and the worst case is that someone checks why.
Explanation
Sorting algorithms are a fundamental part of computer science, with various methods differing in efficiency, ease of implementation, and resource usage. Efficiency is often described using Big O notation, which expresses how the runtime of an algorithm scales with the size of the input. For example, "O(n)" ("linear time") means the runtime grows proportionally to the size of the input, while "O(n log n)" means it grows slightly faster than linearly. Slower-growing runtimes are generally preferred for large datasets, so an O(n) sort would beat an O(n log n) one. However, no general-purpose sorting algorithm that works by comparing elements can run in linear time; linear-time sorts such as counting sort exist only for restricted inputs, such as small integer keys.
The comic presents a humorous "linear time" sorting algorithm that first uses merge sort, a well-known O(n log n) algorithm, to sort the list. It then "sleeps" for an additional amount of time to artificially make the runtime scale linearly with the size of the input. Specifically, it pauses for (1 million) * length(list) - (time spent sorting) seconds. For any list of practical size, the million seconds per item dwarf the time the merge sort actually takes, so the total runtime is dominated by the sleep and grows linearly with the length of the list. This is a joke because the actual sorting is still O(n log n); the additional sleep is simply wasted time that creates the illusion of linear time. It is also a joke because it makes the sort uselessly slow, with a "sort" of one item taking upwards of 11 days, two items about 23 days, three items about 35 days, and so on. Another "sort" that technically works but is built around deliberately wasted time is the sleep sort.
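As a rough illustration, a minimal Python sketch of the panel's pseudocode might look like the following (the built-in sort stands in for MergeSort, and the clamp to zero is an added safeguard, since time.sleep() rejects negative arguments; running it as written would block for about 11.6 days per list element):

    import time

    def linear_sort(lst):
        start = time.monotonic()
        lst.sort()                      # stand-in for MergeSort(list); sorts in place
        elapsed = time.monotonic() - start
        # Pad the total runtime to roughly 1e6 seconds per element, as in the comic.
        time.sleep(max(0.0, 1e6 * len(lst) - elapsed))
        return lst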
The humor lies in the absurdity of intentionally slowing down a sorting algorithm to match a desired runtime profile. This defeats the purpose of optimization, as the goal of sorting algorithms is typically to minimize time spent, not to pad it with unnecessary delays. (Delays may be necessary for other functional reasons, but they are the antithesis of the kind of optimality sought here.) If the artificial sleep were removed, the algorithm would revert to its true O(n log n) behavior, making the "linear sort" label both deceptive and pointless.
The title text extends the joke by referencing "best" and "worst" cases, concepts in algorithm analysis that describe how the runtime varies with different inputs. For the "linear sort," the best and worst cases are identical because the sleep function forces the runtime to always be O(n), regardless of the input. The "worst case for the author," however, is when someone examines the code, exposes the fraud, and damages their reputation—a humorous twist on the idea of worst-case scenarios.
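For a sense of how safely the padding dominates in practice, here is a back-of-the-envelope check; the per-step merge-sort cost used below is an assumed illustrative figure, not something stated in the comic:

    import math

    def sleep_seconds(n, c=1e-7):
        # c is an assumed merge-sort cost in seconds per n*log2(n) "step"
        return 1e6 * n - c * n * math.log2(n)

    # Even for a billion-element list the padding is essentially the full 1e6*n:
    print(sleep_seconds(10**9))   # about 1e15 seconds
    # The padding only goes negative once log2(n) > 1e6/c, i.e. n > 2**(10**13).

Under these assumptions the Sleep() argument stays positive for any list that could conceivably be stored.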
Transcript
- [The panel shows five lines of code:]
- function LinearSort(list):
- StartTime=Time()
- MergeSort(list)
- Sleep(1e6*length(list)-(Time()-StartTime))
- return
- [Caption below the panel:]
- How to sort a list in linear time



Discussion
First in linear time! Mr. I (talk) 13:28, 18 December 2024 (UTC)
Due to the fact that O(nlog(n)) outgrows O(n), the Linear Sort is not actually linear. 162.158.174.227 14:21, 18 December 2024 (UTC)
- If your sleep() function can handle negative arguments "correctly", then I guess it could work. 162.158.91.91 16:27, 18 December 2024 (UTC)
- Yes, on a machine where sleep() allowed negative values (somewhat similar to, but more limited than, TwoDucks), the algorithm would take linear time regardless of the constant used in place of 1e6. Also, with a smaller constant, the so-called linear optimization is not completely dissimilar to Radix sort, which has time complexity O(mn), where m is the bit length of the items; this becomes linear for items of limited bit length (such as int64_t). In school we were taught that this is effectively linear, but that is deceptive, since the actual sort time grows as log(n) by virtue of requiring more memory per item to fit more items in such a list: a radix sort of 16-bit integers would be limited to useful lists of up to 65536 unique values, and you'd need to grow them to 32-bit integers beyond that. If the sleep constant were chosen precisely to match the worst case Timsort would take - and I pick Timsort because, in addition to having an O(n) best case, equal items won't be swapped or take time for such swaps - the time-complexity deception would be identical to that of Radix sort: the algorithm would be linear, but only until you exceed e^(sleeping steps) unique items in the list (same as radix sort, although radix sort becomes unusable, while LinearSort() only becomes slower), and the time wasted is comparable, as it is in both cases bounded by a number proportional to the bit length of the (longest) value, which is usually larger than log(n'), and never smaller, if n' is the number of distinct values. So, in some ways, 1e6 corresponds to m in a radix sort. 172.68.190.145 12:10, 23 December 2024 (UTC)
- It relies on 1 second being long enough to outcompete the maximum input length provided. The joke is that most sort operations that take an entire second or more are considered too slow to be worth doing. 02:30, 22 December 2024 (UTC)
That was fast... Caliban (talk) 15:35, 18 December 2024 (UTC)
Do I even want to know what Randall's thinking nowadays? ⯅A dream demon⯅ (talk) 16:02, 18 December 2024 (UTC)
- Does anyone ever want to know what Randall is thinking nowadays? :P 198.41.227.177 22:02, 19 December 2024 (UTC)
The title text would be more correct if Randall used e.g. Timsort instead of Mergesort. They both have the same worst-case complexity O(n*log(n)), but the former is linear if the list was already in order, so best-case complexity is O(n). Mergesort COULD also be implemented this way, but its standard version is never linear. Bebidek (talk) 16:35, 18 December 2024 (UTC)
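- A quick way to see the best-case behaviour described above is to count comparisons made by Python's built-in sort, which is Timsort (the counting wrapper below is purely illustrative):

    import random

    class Counted:
        count = 0
        def __init__(self, v):
            self.v = v
        def __lt__(self, other):
            Counted.count += 1
            return self.v < other.v

    def comparisons(values):
        Counted.count = 0
        sorted(Counted(v) for v in values)
        return Counted.count

    n = 10_000
    print(comparisons(range(n)))                    # roughly n-1 on presorted input
    print(comparisons(random.sample(range(n), n)))  # roughly n*log2(n) on shuffled input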
According to my estimates extrapolated from timing the sorting of 10 million random numbers on my computer, the break-even point where the algorithm becomes worse than linear is beyond the expected heat death of the universe. I did neglect the question of where to store the input array. --162.158.154.35 16:37, 18 December 2024 (UTC)
- If the numbers being sorted are unique, each would need a fair number of bits to store. (Fair meaning that the time to do the comparison would be non-negligible.) If they aren't, you can just bucket-sort them in linear time. Since we're assuming absurdly large memory capacity. 162.158.186.253 17:14, 18 December 2024 (UTC)
What system was the person writing the description using where Sleep(n) takes a parameter in whole seconds rather than the usual milliseconds? 172.70.216.162 17:20, 18 December 2024 (UTC)
- First, I don't recognize the language, but sleep() takes seconds for python, C (et al.), and no doubt many others. Second, the units don't have to be seconds, they just have to be whatever `TIME()` returns, and multiplicable by 1e6 to yield a "big enough" delay. Of course, no coefficient is big enough for this to actually be linear in theory for any size list, so who cares? To be truly accurate, sleep for `e^LENGTH(LIST)`, and it really won't much matter what the units are, as long as they're big enough for `SLEEP(e)` to exceed the difference in the time it takes to sort two items versus one item. Use a language-dependent coefficient as needed. Jlearman (talk) 18:02, 18 December 2024 (UTC)
- Usual where, is that the Windows API? The sleep function in the POSIX standard takes seconds. See https://man7.org/linux/man-pages/man3/sleep.3.html . 162.158.62.194 18:57, 18 December 2024 (UTC)
If I had a nickel for every time I saw an O(n) sorting algorithm using "sleep"… But this one is actually different. The one I usually see feeds the to-be-sorted value into the sleep function, so it schedules "10" to be printed in 10 seconds, then schedules "3" to be printed in 3 seconds, etc., which would theoretically be linear time, if the sleep function was magic. Fabian42 (talk) 17:25, 18 December 2024 (UTC)
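- A toy sketch of that "sleep sort" in Python (purely illustrative: it only behaves for small non-negative numbers, and only because the scheduler happens to cooperate):

    import threading, time

    def sleep_sort(values, scale=0.1):
        result = []
        def emit(v):
            time.sleep(v * scale)    # each value schedules its own output
            result.append(v)
        threads = [threading.Thread(target=emit, args=(v,)) for v in values]
        for t in threads: t.start()
        for t in threads: t.join()
        return result

    print(sleep_sort([10, 3, 7, 1]))   # usually [1, 3, 7, 10]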
This comic also critiques/points out the pitfalls of measuring time complexity using Big-O notation, such as an algorithm or solution that runs in linear time still being too slow for its intended use case. Sophon (talk) 17:46, 18 December 2024 (UTC)
Current text is incorrect, but I'm not sure how best to express the correction -- there do exist O(n) sorting algorithms, they're just not general-purpose, since they don't work with an arbitrary comparison function. See counting sort. 172.69.134.151 18:25, 18 December 2024 (UTC)
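- For reference, a minimal counting sort sketch in Python: it runs in O(n + k) time for n integers in the range 0..k-1, so it is linear when k is bounded, but it cannot take an arbitrary comparison function:

    def counting_sort(values, k):
        counts = [0] * k                 # one bucket per possible key 0..k-1
        for v in values:
            counts[v] += 1
        out = []
        for v, c in enumerate(counts):
            out.extend([v] * c)
        return out

    print(counting_sort([3, 1, 4, 1, 5, 2], k=6))   # [1, 1, 2, 3, 4, 5]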
Hi! I'm just gonna say this before everyone leaves and goes on their merry way. Significant comic numbers coming soon: Comics 3100, 3200, 3300, etc, Comic 3094 (The total number of frames in 'time'), Comic 4000, Comic Whatever the next April fools day comic will be, and Comic 4096. Wait for it...DollarStoreBa'al (talk) 20:42, 18 December 2024 (UTC)
- Comic 3141.592654 172.70.163.144 09:16, 19 December 2024 (UTC)
As everyone observed, the stated algorithm is not theoretically linear, but only practically linear (in that the time and space to detect O(n log n) exceeds reasonable (time, space) bounds for this universe). Munroe's solution is much deeper than that though - it trivially generalises to a _constant_ O(1) bound. [run a sort algorithm, wait 20 years, give the answer]. That's the preferred way of repaying loans, too. 172.69.195.27 (talk) 21:46, 18 December 2024 (UTC) (please sign your comments with ~~~~)
Continues comic 3017's theme of worst-case optimization. 172.70.207.115 00:32, 19 December 2024 (UTC)
It looks as though this function does not actually do the sort in linear time, it only returns in linear time. The MergeSort function itself appears to take only one parameter and has no obvious return value, indicating that it performs an in-place sort on the input mutable list. This means that the list is sorted at the speed of the MergeSort function, but flow control is only returned after linear time. For a single-threaded program calling this function there is no practical difference, but it would make a difference if some other thread was concurrently querying the list. A clearer linear time sort might look like this:
function LinearSort(list):
    StartTime=Time()
    SortedList=MergeSort(list)
    Sleep(1e6*length(list)-(Time()-StartTime))
    return SortedList
Leon 172.70.162.70 (talk) 17:31, 19 December 2024 (please sign your comments with ~~~~)
- There's such a thing as pass-by-reference, variously implemented depending upon the actual programming language used. It's even possible to accept both list (non-reference, to force a return of sorted_list) and listRef (returns nothing, or perhaps a result such as number_of_shuffles made), for added usefulness, though of course that'd need even more pseudocode to describe. For the above/comic pseudocode, it's not so arbitrary that a programmer shouldn't know how to implement it in their instance.
- I might even set about to do something like use a SetStartTime() and CheckElapsedTime() function, if there's possible use; the former making a persistent (private variable) note of what =Time() it is, perhaps to an arbitrary record scoped to any parameterID it is supplied, and the latter returning the 'now' time minus the stored (default or explicitly IDed) moment of record. I could then have freely pseudocoded the extant outline in even briefer format, on the understanding of what these two poke/peek functions are. Which is already left open to the imagination for MergeSort(). 172.69.43.182 18:04, 19 December 2024 (UTC)
There are situations where you want to return in O(1) time or some other time that is not dependent on the input data to prevent side-channel data leaks. While the run-time of Randall's "O(n)" algorithm has an obvious dependencies on the input data, using the "Randall Algorithm" to obscure a different algorithm can reduce the side-channel opportunities. A more sure-fire way would be to have the algorithm return in precisely i seconds, where i is the number of seconds between now and the heat death of the universe. 172.71.167.89 17:49, 19 December 2024 (UTC)
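- A sketch of that padding idea in Python (the 0.5-second budget and the function names are arbitrary illustrative choices, and real constant-time code has to worry about far more than wall-clock padding):

    import time

    def pad_to_constant_time(operation, budget=0.5):
        start = time.monotonic()
        result = operation()                         # secret-dependent work
        remaining = budget - (time.monotonic() - start)
        if remaining < 0:
            raise RuntimeError("budget exceeded; timing would leak")
        time.sleep(remaining)                        # caller always waits ~budget seconds
        return result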
- Please write an explanation for non-programmers!
I don't understand this explainxkcd. The comic itself was less confusing. Can please someone who really gets this stuff write a section of the explanation that explains the joke to people like me who do not have a theoretical programming degree? I know that is a tall task but right now it reads as rambling and a bunch of 0(n) that makes no sense to me. I can cut and paste a bash script together and make it work. I can understand that putting a sleep for a million seconds in a loop somewhere makes it slow. But a layperson explanation of what makes a sort linear, what is linear, what is funny about that approach, would be better than all the arguing about 0(n) because we don't get it. Thanks in advance! You folks are awesome! 172.71.147.210 20:51, 19 December 2024 (UTC)
- Maybe this would be a good start:
- --cut here--
- An algorithm is a step-by-step way of doing things.
- A sorting algorithm is a step-by-step way to sort things.
- There are several commonly used sorting algorithms. Some have very little "overhead" (think: set-up time or requiring lots of extra memory) or what I call "molasses" (yes, I just made that up) (think "taking a long time or lots of extra memory for each step") but they really bog down if you have a lot of things that need sorting. These are better if you have a small list of items to sort.
- Others have more "overhead" or "molasses" but don't bog down as much when you have a lot of things that need sorting. These are better if you have a lot of things to sort.
- A linear sorting algorithm would take twice as long to sort twice as many unsorted items. If it took 100 seconds to sort 100 items, then it would take 200 seconds to sort 200, 300 seconds to sort 300, and so on. Algorithms that take "twice as long to do twice as much" are said to run in "Order(n)" or "O(n)" time, where "n" is the number of items they are working on, or in the case of a sorting algorithm, the number of items to be sorted.
- For traditional sorting algorithms that don't use "parallel processing" (that is, they don't do more than one thing in any given moment), a linear sorting algorithm with very little "overhead" or "molasses" would be the "holy grail" of sorting algorithms. For example, a hypothetical linear sorting algorithm that took 1/1000th of a second to "set things up" (low "overhead") and an additional 1 second to sort 1,000,000 numbers (not much "molasses") would be able to sort 2,000,000 numbers in just over 2 seconds, 10,000,000 numbers in just over 10 seconds, and 3,600,000,000 numbers in a hair over an hour.
- The reality is that there is no such thing as a general-purpose linear sorting algorithm that has very little overhead (in both time and memory) and very little "molasses." All practical general-purpose sorting algorithms either use parallel processing, have a lot of overhead (set-up time or lots of memory), have a lot of "molasses" (a long time or lots of memory for EACH item in the list), or are "slower than linear," which means they bog down when you give them a huge list of things to sort. For example, let's say the "mergesort" in Randall's algorithm doesn't have much "overhead" or "molasses" and it sorts 1,000,000 items in 1 second. Its time is "O(nlog(n))" which is a fancy way of saying if you double the number, you'll more than double the time. This means sorting 2,000,000 items will take more than 2 seconds, and sorting 4,000,000 items will take more than twice as long as it takes to sort 2,000,000. Eventually all of those "more than's" add up and things slow to a crawl.
- The joke is that Randall "pretends" to be the "holy grail" by being a linear sorting algorithm, but he has lots of "molasses" because his linear sorting algorithm takes 1 million seconds for each item in the list, compared to the 1,000,000 items per second in the hypothetical "linear sorting algorithm" I proposed.
- As others in the discussion point out, Randall's "algorithm" is "busted" (breaks, doesn't work, gives undefined results) if the mergesort (which is a very fast sort if you have a large list of items) is sorting a list so big that it takes over 1 million seconds per item to sort anyway. I'll spare you the math, but if the mergesort part of Randall's "algorithm" could do 1,000,000 numbers in 1 second with a 1/1000th of a second to "set things up," it would take a huge list to get it to "bust" Randall's "algorithm."
- --cut here--
- 162.158.174.202 21:44, 19 December 2024 (UTC)
- Layman's guide to O(n) time, second try:
- --cut here--
- First, "O" is "Order of" as in "order of magnitude." It's far from exact.
- O(1) is "constant time" - the time it takes me to give you a bag that contains 5000 $1 bills doesn't depend on how many bills there are in the bag. It would take the same amount of time if the bag had only 500, 50, or even 5 bills in it.
- O(log(n)) is "logarithmic time" - the time is the time it takes me to write down how many bills are in the bag. If it's 5000, I have to write down 4 digits, if it's 500, 3, if it's 50, 2, if it's 5, only 1.
- O(n) is "linear time" - the time it takes me to count out each bill in the bag depends on how many bills there are. It takes a fixed amount of time to count each bill. If there's 5000 $1 bills it may take me 5000 seconds to count them. If there's 500 $1 bills, it will take me only 500 seconds.
- O(nlog(n)) is "linear times logarithmic time" - the time it takes me to sort a pre-filled bag of money by serial number using a good general-purpose sorting algorithm (most good general-purpose sorting algorithms are O(nlog(n)) time). If it takes me 2 seconds to sort two $1 bills, it will take me about 3 or 4 times 5000 seconds to sort 5000 $1 bills. The "3 or 4" is very approximate, the important thing is that "logarithm of n" (in this case, logarithm of 5000) is big enough to make a difference (by a factor of 3 or 4 in this case) but far less than "n" (in this case, 5000).
- O(n²) is "n squared" time, which is a special case of "polynomial time." "Polynomial time" includes things like O(n³) and O(n^1,000,000). Many algorithms including many "naive" sorting algorithms are in this category. If I used a "naive" sorting algorithm to sort 5000 $1 bills by serial number, instead of it taking about 15,000-20,000 seconds, it would take about 5,000 times 5,000 seconds. I don't know about you, but I've got better things to do with 25,000,000 seconds than sort paper money.
- It gets worse (O(2ⁿ) anyone? No thanks!), but you wanted to keep it simple.
- 198.41.227.177 23:30, 19 December 2024 (UTC)
- Personally, I've got better things to do than sort dollar bills, full stop.172.70.91.130 09:37, 20 December 2024 (UTC)
- O() notation is about behavior with large values, not small values. Try the "handing a bag of bills" algorithm with a few million dollar bills. You're going to need a forklift. Getting a forklift is not, in practice, instantaneous. Big O notation is almost always a joke for people trying to solve real problems. It only works on an abstract machine with some really weird (not physically achievable) properties. 162.158.155.141 20:54, 20 December 2024 (UTC)
Friendly reminder that some users of this site are just here to learn what the joke is, and not to read the entire Wikipedia article on Big O Notation. Perhaps the actual explanation could be moved up a bit, and some of the fiddly Big-O stuff could be moved down? I'd do it myself, but I'm not really sure which is which. 172.70.176.28 06:42, 20 December 2024 (UTC)
- I mean, it is fairly fundamental to the joke, and therefore to the explanation. It might be possible to slim it down a bit, but I don't think you can explain the joke without some explanation of Big O.172.70.91.130 09:37, 20 December 2024 (UTC)
I've just come to the conclusion that I will never 100% understand 3026. Dogman15 (talk) 10:14, 20 December 2024 (UTC)
- Tell me that again when you've actually tried the official process...
function Understand(comic):
    StartTime=Time()
    ReadExplanation(comic)
    Sleep(1e12*length(comic)-(Time()-StartTime))
    return
- 172.70.162.56 11:10, 20 December 2024 (UTC)
- The article should start off "This is a joke about Big-O notation and sorting algorithms, a topic in introductory computer science education." then continue with something like "An algorithm is computer code for solving a general problem. Big-O notation is a method for describing the efficiency of algorithms." and maybe something like "Randall has designed an algorithm that appears more efficient than commonly considered possible, claiming to solve a popular challenge of many decades, by trying to game how the Big-O approach to analysis ignores the real speed of an algorithm, instead considering how it changes when the data is changed." 172.68.54.209 02:43, 22 December 2024 (UTC)
Here's my crack at a shorter explanation of the joke, without explaining the entirety of the Big-O notation Wikipedia article and without getting unnecessarily pedantic. (Please keep this in mind when critiquing this explanation! I probably know whatever simplification you notice.)
- The joke here consists of two parts: (1) a linear-time sort of a list is mathematically impossible, and yet (2) a linear-time algorithm is presented, with it being roughly correct because Big-O notation hides the full picture on purpose. The title-text joke is that someone realizes (1) and investigates (2) because of the purposeful full-picture-hiding.
- Let's start with part (2): how Big-O notation is a bit handwavy and inexact. This is not to say it's not useful in computer science research and explaining differences between algorithms, but it inherently and on purpose hides the full picture. It's kind of like rounding away unnecessary digits when doing a back-of-the-envelope physics calculation, except in Big-O, the thing that is rounded is a mathematical formula. The formula is for calculating the time it takes for an algorithm to run (whether in (nano)seconds or something abstract like "number of operations"), and it will be in terms of n, which is basically "how many things does your algorithm need to process" (in this case, it's the size of a list). An algorithm might be calculated to have a running time of something complicated like 32n^2.796 + 1.31n + 6500, but its Big-O "rounding" would be expressed as O(n^2.796). This is because as n grows larger and larger (into the billions), the extra stuff is irrelevant: except in special cases, an algorithm with a running time of O(n) will take less time than an algorithm taking O(n²) time, because no matter what the stuff you "rounded away" was, the former will eventually be less than the latter once n grows big enough.
- With the relevant bits of Big-O notation explained, we can look at the problem of sorting a list. This is a classic problem in computer science and it comes up in coursework all the time, so Randall assumes a lot of his audience will be familiar with it. Part (1) of the joke is that a linear, i.e. O(n) time, sorting of a list is mathematically impossible: just checking whether a list is sorted in the first place requires comparing every adjacent pair of elements at least once, taking O(n) time, and after this you have to swap elements that are out of place and check again. If you build an algorithm carefully you can get away with doing log(n) "scans" back and forth along the list, ending up doing log(n) scans of n time each, which comes to O(n × log(n)) time. This "O(n log n)" time is accepted as the lowest average-case run-time for a general sorting algorithm, and all improvements to sorting algorithms are in improving the stuff that Big-O notation hides – remember how we rounded away all those factors as unnecessarily complicated and irrelevant? Turns out they're actually relevant in practice! They can be fine-tuned for real computers and practical inputs; the mergesort in the comic is special because it's guaranteed to always take roughly the same time, no matter the input.
- Putting both parts together: the "linear sort" presented is "linear", taking O(n) time, not because it has actually magically found a way to cheat at math and do sorting faster than is possible, but because O(n) notation hides the fact that it just waits for a million (milli)seconds for each item in the list: O(n) looks faster than O(n log n), but what's actually going on is that 1,000,000n is way slower than mergesort's O(n log n).
- Curiously, this is actually a thing that does happen in computer science, although not as blatantly as this: there are some problems for which there exists an algorithm with a "better" Big-O-notated time, but which for whatever technical reason run worse in practice on real computers than apparently-slower algorithms.
(And again, please remember that I've on purpose left out irrelevant technical details! I know about radix sort etc., and I know the difference between O, Θ and Ω, and I know about space complexity also; I do actually have a master's degree in this stuff and know what I'm talking about.) 172.69.136.141 16:09, 23 December 2024 (UTC)
- Actually I already thought of an improvement to this explanation (if it were to be used as the main explanation): it's unnecessary to bring up Big-O notation in the first place, until explaining the title text. The comic itself just talks about linear time, and mergesort (and sorting in general) could just be explained as requiring "more than linear time" because of the repeated comparisons I already mentioned. (O(n log n) is "quasilinear" or "log-linear" time but introducing that term can – and in my opinion should – be avoided). The title text explanation requires explaining that "O(n) means linear", and a bit about how Big-O notation is "rounding" away the complicated parts of the formula. 172.69.136.165 16:42, 23 December 2024 (UTC)
Why is the prose so terrible on this site? Who writes "As one can image in most contexts one would wish for...." and thinks other people can understand it? Please run your text through ChatGPT, it's free now. 172.71.147.54 17:31, 23 December 2024 (UTC)
Regarding this edit, I have my reservations. While log_2 may be 'logical' in a system using binary, there's no reason why the algorithm cannot be implemented upon a trinary-based "machine code" system, or one in quad (and I actually have created a four-instruction 'ultra-RISC' microcode kernel, of sorts, that used base-4 principles, not base-2), or decimal/BCD, and then byte-size is commonly 8-bit with 16-bit, 32-bit and 64-bit extensions to the basic unit and could translate to an equivalent higher-base if you are bothered about n vs. log_x n efficiency at lower ns and higher xs. Could even be natural-log (for reasons entirely unrelated to the hardware/firmware/software it is implemented on, just what it needs to do). Most of the time, we don't care if it's O(log_10 n) or O(ln n) or whatever, because the difference is an appropriate constant multiple. That's something generally not retained... we may have O(m log n), for independent variable m, but something like O(2 log n) is treated as O(log n) equivalent, like O(2) is just O(1) in the final analysis, and why the O(1e8*n) reality sneaks through here as O(n). So, by actual implementation, you can't actually say that O(n log n) will be gte O(n) always. With 1<n<log_base, it won't be. ...Whether or not we're free to consider either generality, though, I definitely won't ask you how algorithms actually stack up next to each other where run under n=0 conditions! But at least O(n-1) isn't a common thing... ;) 172.69.194.19 18:01, 23 December 2024 (UTC)
- What does this possibly have to do with explaining the comic? 172.68.23.81 18:07, 23 December 2024 (UTC)
- Hi, I made this edit. I removed that bit because it's too pedantic for an ExplainXKCD explanation, and in my opinion completely unnecessary. We don't need to explain Big-O, we need to explain the comic. In addition, while the math is strictly speaking true in that x>x*loga(x) when x<a, in computer science literature and discussion base-2 logs are typically assumed to be the default; if it isn't, it needs to be marked as such. Additionally the whole point of Big-O is growth: sure, that inequality holds when x is small, but we're not interested in it being small. As for ternary or whatever, those never come up. A few ternary machines existed in the 60s, sure, whatever, and occasionally someone experiments with something weird, but of the billions of computers that are in use, all are binary. So basically my message is to remember what forum we're speaking in. 172.69.136.141 18:43, 23 December 2024 (UTC)
- Interesting, and it probably can do without saying the bit that is removed, but not for the reason given in the removal. As pointed out, above, practical timing issues don't depend upon the base computer being binary or not ("assume log-base-2 because computers are all binary" isn't really a useful argument). Some theoretical function creating and using a basic binary tree might (according to its operational needs) scale its operational speed by log-base-2 as a key factor, but a version that uses a k-d tree by log-base-k (you can imagine them being ultimately functionally equivalent by way of a Cantor-pairing(/tripling/etc) equivalence, if you want justification for this hypothetical choice between). They'd both be considered O(... log N ...), give or take any other accompanying factors (or overriding terms). But you can't say that an O(... log N ...) solution will take a-specific-base-of-N multiple of time, certainly not base-2. 172.70.58.45 20:01, 23 December 2024 (UTC)
- I do understand your point, I have a degree in this stuff also, but the useful argument here was succinctly put in above as "What does this possibly have to do with explaining the comic?" 162.158.134.148 21:36, 23 December 2024 (UTC)
Thank you to whomever put us out of our misery. 172.70.211.83 21:07, 23 December 2024 (UTC)
- Could do with more info... BRB... /goes to put +2345 bytes extra in with 'useful' additional information. 172.68.205.178 21:22, 23 December 2024 (UTC)
- Please don't. 162.158.134.148 21:36, 23 December 2024 (UTC)
Here’s what I don’t get about this joke: why would the recipient of this program ever initially think it was O(n)? Are there people who check the complexity of an algorithm by literally running it on a bunch of different inputs then fitting the time taken to a curve? That should be illegal! 162.158.154.104 (talk) 10:14, 24 December 2024 (please sign your comments with ~~~~)
- It's clearly targeted at those who take a statement that it's O(n) at face value. e.g. sorting(!) any comparison list by the 'best sorting performance'. You'd have it pop up at the top just like a search engine's Sponsored Link, and there'll be people who'll 'click on it' through sheer lack of analysis. And it'd not be an outright lie, so an uncritical analysis wouldn't mark it as actually wrong/delete it (it could sneak by an AI's basic 'understanding', perhaps, if not a human with any concept of algorithmic trolling) when it's probably far easier to establish its dubiously established bona fides than most sorting algorithm claims to O()ness.
- But, really, it's just a Wicker Man suggestion. (Like a Straw Man one, where you destroy it specifically by burning.) 172.70.91.30 13:10, 24 December 2024 (UTC)
What about true linear sort? Will it work?
Function TrueLinearSort(list):
    MergeSort(list)
    Return
Now, what efficiency does this variant have in Big O notation? 172.71.98.138 (talk) 08:23, 25 December 2024 (please sign your comments with ~~~~)
- Might depend upon the implementation of MergeSort(), as intrinsic parallelisation is possible (but becomes less relevant as you pass into n>2*threads), though it tends to be O(n log n) for worst and average cases, which a wrapper that merely calls it won't do much to change. 172.70.162.170 12:40, 25 December 2024 (UTC)