Talk:1666: Brain Upload

I don't think you can assume Cueball is doing it to live longer. I figured he did it just to allow the researcher to experiment with the system. If I uploaded my consciousness into a computer, *it* might 'live' longer than me, but *I* will not live any longer for having done it. demiller9 162.158.68.47 16:15, 11 April 2016 (UTC)

I think that the bit in the middle speculating about why is irrelevant to the explanation of the comic. All that the explanation really needs to say is that the computer crashes because the consciousness being uploaded to it is stupid. It doesn't matter why he is doing it so long as people understand what he is doing.

03:25, 12 April 2016 (UTC)

Are we completely certain that it's Cueball? Initially, from reading it, I would be more inclined to believe that it's Beret Guy. 162.158.133.138 16:59, 11 April 2016 (UTC)

I don't see how we can ascertain the identity of the individual with the headgear, since the defining characteristics of the recurring characters are their hats or hair, and this person has neither visible. 173.245.54.41 18:27, 11 April 2016 (UTC)

Given no other visible clues (hair showing from under the headgear, a hat lying on a bench or nearby, etc.), I'd call him or her Cueball, as that's the default when nothing else is known. If he were Beret Guy, I'd expect more off-beat or non sequitur dialog. -boB (talk) 19:17, 11 April 2016 (UTC)
Also the beret is stapled to his head. Not sure if it can be taken off. A Cueball seems more reasonable. -Pennpenn 108.162.250.155 01:53, 12 April 2016 (UTC)

The irony of the comic is that uploading human consciousness is supposed to be one possible milestone toward reaching the technological singularity--either by optimizing the conscious mind to enhance its capabilities to beyond human levels and letting it enhance itself further, or, at the very least, by being able to deploy mental labor at a massive scale at the cost of hardware components (no costs in raising and educating a biological human), which would presumably increase the quality of living for people above a certain threshold of wealth (Elysium type scenario)--yet, the conscious mind that has been uploaded appears to be limited by a terribly wasteful focus on unimportant details. 162.158.142.217 21:34, 11 April 2016 (UTC)

We don't know what the parameters of the environment to which the consciousness has been uploaded are. The apparent triviality may be relevant within that frame, i.e., what if a mouse in a maze suddenly suspected the scientists watching it also worked on a reward system? If the scientists suspend the hypothesis that the mouse has become absurd, overthinking could then be the parameter being tested. Elvenivle (talk) 02:18, 13 April 2016 (UTC)


Title text

Randall's title text seems to have an error. It says "I spent ..." where it should say "It spent ...", as the computer is supposed to behave like the human. 162.158.83.240 21:08, 11 April 2016 (UTC)
It makes sense to me - I think he means that, as he has taken 20 minutes to make the choice, his mind is working slowly, so the mind uploaded to the computer would also look like it froze. komadori (talk) 21:17, 11 April 2016 (UTC)

No, it is not the computer but probably Randall who used this time (or is joking that he did), trying to show by this why such an upload would probably fail... Kynde (talk) 09:30, 12 April 2016 (UTC)
It seems perfectly reasonable to refer to a copy of your consciousness as "I" if you were intimately aware of (and narrating) your copy's actions. Interface freeze from the perspective of the expected controls may indicate autonomy (the "rogue" agent continues while master control chokes on its own timeout). Also, the observer (now the dual "I") may have the privilege of watching time go by much faster in the system's reference frame. Elvenivle (talk) 02:18, 13 April 2016 (UTC)
Copy?

Now it gets quite philosophical, but what, actually, is the difference between a copy and a transfer - at least in this case? If you see a transfer as copying followed by deleting the original, there's no difference (think: Star Trek Transporter technology https://en.wikipedia.org/wiki/Transporter_%28Star_Trek%29). So from this point of view I see no reason why the copy shouldn't "feel" like the original - given that the copy we are talking about is not just the "hard" data (e.g. memories) but the "soft" consciousness as well. There's a nice blog article about this topic: http://waitbutwhy.com/2014/12/what-makes-you-you.html There's no reason why either the original or the copy wouldn't feel "real". So, all I want to say is, I think the wording "though since it's a copy rather than a transfer it's doubtful the human would feel like the copy is really them" is not quite accurate, or even outright wrong. Elektrizikekswerk (talk) 07:38, 12 April 2016 (UTC)

I would feel like the original and wouldn't feel the copy is real. The copy would feel like the original and may not feel I'm real. -- Hkmaly (talk) 12:38, 12 April 2016 (UTC)
Lurker popping in, the title text:
"I just spent 20 minutes deciding whether to start an email with 'Hi' or 'Hey', so I think it transferred correctly."
explains why the AI version of Cueball isn't responding. It hasn't decided what the first words of the first AI should be, just as he spent 20 minutes deciding on the first word of a mundane email. 108.162.246.112 08:59, 12 April 2016 (UTC)

This may have something to do with 269. 162.158.214.139 15:32, 12 April 2016 (UTC)

Also, upload != move, so I don't think crashing would affect anything. 162.158.214.139 15:34, 12 April 2016 (UTC)

Reboot and not responding

The human brain is actually constructed in a way that makes it extremely hard for it to lock up. In normal operating conditions, even while you are thinking about whether to start an email with 'Hi' or 'Hey', your brain is also handling breathing and heartbeat, keeping your position stable (the only position you can keep stable without a brain is lying on the floor), and processing signals to check whether you are thirsty, hungry or sleepy ... a lot of work. A computer, on the other hand, can lock up so hard that it can't even keep its internal clocks running. Although, if it's an application rather than the operating system that is locked, you can often still see the mouse moving - which requires a lot of processing if it's on USB. On a related note, it's not true that the brain can't reboot - in most cases, the human brain will automatically reboot itself by going to sleep after some period of time. -- Hkmaly (talk) 12:38, 12 April 2016 (UTC)

That's not how computers are designed, nor how the brain works :-) Besides the central processing unit there are many peripherals that behave autonomously, from discrete electronic devices like fans to the myriad of components with their own embedded processor, including the ethernet chip, keyboard, mouse, graphics card, etc. They continue working; it's just that the main CPU no longer listens to them. Similarly, the brain is not a single entity but multiple interconnected areas, one being the medulla oblongata, which controls breathing and reflexes. In the absence of input or overrides, it continues its job automatically, as can be seen in any patient in a coma. Ralfoide (talk) 16:51, 12 April 2016 (UTC)
I recently saw the physics concept that "time slows down as your velocity approaches C" means that at C your frame is timeless (one tick runs forever). If we apply that idea to a multicore computer with a master clock and individual (synchronized) CPU clocks, then time flows at the same rate for every core. What happens if a core desyncs from the master clock (the velocity of time is still the same everywhere, but the local counters skew)? A core that never ticks would appear to be locked up (or perhaps spinning forever at maximum clip), but what if that core "figures out" how to do multiple operations per tick and then only updates its ticks in relation to the other cores sometimes? Is it locked up, or just hard to see moving? Elvenivle (talk) 02:18, 13 April 2016 (UTC)
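To illustrate the application-versus-whole-system lockup distinction discussed above, here is a minimal Python sketch; the thread names, timings and the watchdog-style timeout are made up for illustration and are not anything from the comic. A "main" thread blocks on a lock it already holds while an autonomous "heartbeat" thread keeps running - roughly like the mouse cursor still moving, or the medulla oblongata keeping breathing going, while the main unit is stuck; the timeout stands in for the brain's forced "reboot" by falling asleep.

 import threading
 import time
 
 lock = threading.Lock()
 
 def heartbeat():
     # Keeps going regardless of what the main thread does, like breathing
     # or a USB controller updating the cursor position.
     for _ in range(6):
         print("heartbeat: still ticking")
         time.sleep(1)
 
 threading.Thread(target=heartbeat, daemon=True).start()
 
 lock.acquire()
 print("main: deciding between 'Hi' and 'Hey'...")
 # Re-acquiring a non-reentrant lock would block forever; the timeout acts
 # as a crude watchdog so the sketch terminates instead of hanging.
 if not lock.acquire(timeout=4):
     print("main: locked up; watchdog timeout fired ('rebooting')")

Running it prints the heartbeat every second while the main thread sits blocked, then the timeout message after about four seconds - the "application" is frozen, but the autonomous part never stops.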

Rather than immortality, this technology could be used to 'look forward' into someone's future. What happens when this consciousness is in this situation, or that one? Will it be good or bad, would it commit a crime, can it exceed its parameters and break out of the system? Can you rehabilitate a consciousness? What works? Is it psychotic? Can it be (and is it plastic enough to rebound)? Can it be tricked into revealing secrets, because if it can, you can? Can you test whether it knows if it should talk about the test, or if it even knows of one? Assuming you know yourself best, can you use it to set up your own personal improvement grounds (call it hell if you like)? All in infinitesimal real time. Lots of possibilities, while the real consciousness...all copies would think they're real...sits and waits for the work to finish. Elvenivle (talk) 03:38, 13 April 2016 (UTC)

I'm moving the following from the article to the discussion, as I don't believe a lengthy discussion of the benefits and pitfalls of brain transfer really explains this comic:

It's unclear why they are doing this, and this is clearly in some future setting [citation needed]. It could be a new procedure Megan as a scientist has just invented, and she asked Cueball to be the subject. Or it could be a common procedure in this future setting (though it is perhaps not as common that the recipient computer would become unresponsive after the transfer) -- perhaps as a way of "backing up" a person's brain. If the process can work in reverse, perhaps a person could recover from brain damage. It could also be a way to become immortal in some sense, though since it's a copy rather than a transfer it's doubtful the human would feel like the copy is really them. Nevertheless, upon a person's death someone enough like him (whether on a computer or perhaps transferred to a "blank" body) could continue to live, so that he would consider that immortality.

TheHYPO (talk) 14:21, 14 April 2016 (UTC)

I propose removing the mention of the halting problem as it's completely irrelevant to the comic. 162.158.102.217 22:00, 22 April 2016 (UTC)