Talk:1002: Game AIs


Mornington Crescent would be impossible for a computer to play, let alone win... -- 188.29.119.251 (talk) (please sign your comments with ~~~~)

It is unclear which side of the line Jeopardy! falls upon. Why so close to the line, I wonder. DruidDriver (talk) 01:04, 16 January 2013 (UTC)

Because of Watson (computer). (Anon) 13 August 2013 24.142.134.100 (talk) (please sign your comments with ~~~~)

Could "CounterStrike" be referring instead to the computer game Counter-Strike, which can have computer-controlled players? --131.187.75.20 15:49, 29 April 2013 (UTC)

I agree, this is far more likely. 100.40.49.22 10:21, 11 September 2013 (UTC)

On the old blog version of this article, a comment mentioned Ken tweeting his method right after this comic was posted. He joked that they would asphyxiate themselves to actually see heaven for seven minutes. I don't know how to search for tweets, or if they even save them after so much time, but I thought it should be noted. 108.162.237.161 07:11, 27 October 2014 (UTC)

I disagree about the poker part. Reading someone's physical tells is just a small part of the game. Theoretically there is a Nash equilibrium for the game; the reason it hasn't been found is that the number of ways a deck can be shuffled is astronomical (even if you count only the cards in play), and you also have to take into account the various bet sizes. A near-perfect solution for two-player limit poker has been found by the Cepheus Poker Project: http://poker.srv.ualberta.ca/.
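To put a number on "astronomical": the count of distinct orderings of a standard 52-card deck is 52 factorial. A quick sketch (just the arithmetic, nothing from the comment beyond the claim itself):

```python
# The number of distinct orderings of a 52-card deck is 52!,
# on the order of 10^67 -- far beyond exhaustive enumeration.
import math

deck_orderings = math.factorial(52)
print(f"52! ~ {deck_orderings:.2e}")  # roughly 8.07e+67
```

Even before bet sizes enter the picture, this is why brute-forcing full no-limit poker is out of reach.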


~ Could the description of tic-tac-toe link to xkcd 832 which explains the strategy? 162.158.152.173 13:13, 27 January 2016 (UTC)
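For context on why a complete strategy chart like the one in xkcd 832 can exist at all: tic-tac-toe is small enough to solve exhaustively, and optimal play from both sides is a draw. A minimal minimax sketch (a hypothetical illustration, not the method behind the comic) that verifies this:

```python
# Brute-force minimax over the full tic-tac-toe game tree.
# Score: +1 if X wins, -1 if O wins, 0 for a draw.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Game value with `player` to move; X maximizes, O minimizes."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if all(board):          # board full, no winner: draw
        return 0
    scores = []
    for i in range(9):
        if not board[i]:
            board[i] = player
            scores.append(minimax(board, 'O' if player == 'X' else 'X'))
            board[i] = None
    return max(scores) if player == 'X' else min(scores)

# Value 0 means best play by both sides ends in a draw.
print(minimax([None] * 9, 'X'))  # → 0
```

Since the full tree is only a few hundred thousand positions, the search finishes in seconds, which is exactly why the comic can depict a complete optimal-move map.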

Not shown in this comic: 1000 Blank White Cards, and, even further down, Nomic. KangaroOS 01:22, 19 May 2016 (UTC)

Saying that computers are very close to beating top humans as of January 2016 is misleading at best. There are not enough details in the BBC article, but it sounds like the Facebook program has about a 50% chance of beating 5-dan amateurs. In other words, it needs a 4-stone handicap (read: 4 free moves) to have a 50% chance of winning against top-level amateurs, to say nothing of professionals. If a robotic team had a 50% chance of beating Duke University at football (a skilled amateur team), would you say it was very close to being able to consistently beat the Patriots (a top-level professional team)? If anything that underestimates the skill difference in Go, but the general point stands. 173.245.54.38 (talk) (please sign your comments with ~~~~)

How about beating one of the top players five times in a row and being scheduled to play against the world champion in March? http://www.engadget.com/2016/01/27/google-s-ai-is-the-first-to-defeat-a-go-champion/ Mikemk (talk) 06:18, 28 January 2016 (UTC)
However, DeepMind ranked AlphaGo close to Fan Hui 2P, with the distributed version at the upper tier of Fan's level. http://www.nature.com/nature/journal/v529/n7587/fig_tab/nature16961_F4.html
The official games went 5-0, but the unofficial games went 3-2, for a combined 8-2 in favor of AlphaGo.
Looking at http://www.goratings.org/, Fan Hui is ranked 631, while Lee Sedol 9P, who is playing in March, is in the top 5. 108.162.218.47 06:12 5 February 2016 (UTC)
Original poster here (sorry, not sure how to sign). Okay, you are all right. Go AI has advanced a lot more than I had understood. I'm still curious how the games against Lee Sedol will go, but the fact that it's even an interesting question shows how much Go AI has improved. 173.245.54.33 (talk) (please sign your comments with ~~~~)

Google's AlphaGo won, 3 games to none! Mid-March 2016. Saspic45 (talk) 03:06, 14 March 2016 (UTC)

Okay, for some reason they played all five games. Lee Sedol won game 4, and the computer triumphed 4-1 overall. Saspic45 (talk) 08:44, 17 March 2016 (UTC)