Talk:1002: Game AIs


Mornington Crescent would be impossible for a computer to play, let alone win... -- 188.29.119.251 (talk) (please sign your comments with ~~~~) It is unclear which side of the line Jeopardy falls upon. Why so close to the line, I wonder. DruidDriver (talk) 01:04, 16 January 2013 (UTC)

Because of Watson (computer). (Anon) 13 August 2013 24.142.134.100 (talk) (please sign your comments with ~~~~)

Could the "CounterStrike" be referring instead to the computer game which can have computer-controlled players? --131.187.75.20 15:49, 29 April 2013 (UTC)

I agree, this is far more likely. 100.40.49.22 10:21, 11 September 2013 (UTC)

On the old blog version of this article, a comment mentioned Ken tweeting his method right after this comic was posted. He joked that they would asphyxiate themselves to actually see heaven for seven minutes. I don't know how to search for tweets, or if they even save them after so much time, but I thought it should be noted. 108.162.237.161 07:11, 27 October 2014 (UTC)

I disagree about the poker part. Reading someone's physical tells is just a small part of the game. Theoretically there is a Nash equilibrium for the game; the reason it hasn't been found is that the number of ways a deck can be shuffled is astronomical (even if you count only the cards that get used), and you also have to take into account the various bet sizes. A near-perfect solution for 2-player limit poker has been found by the Cepheus Poker Project: http://poker.srv.ualberta.ca/.
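Cepheus itself used a large-scale variant of counterfactual regret minimization (CFR+) on limit hold'em, which is far too big to show here. As a toy illustration of the underlying regret-minimization idea, here is a minimal sketch (not Cepheus's actual code) of regret matching in self-play on rock-paper-scissors, where both players' average strategies converge toward the 1/3-1/3-1/3 Nash equilibrium:

```python
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # row player's payoff

def strategy(regrets):
    """Mix over actions in proportion to positive accumulated regret."""
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    if total == 0:
        return [1.0 / ACTIONS] * ACTIONS
    return [p / total for p in pos]

def train(iterations, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    strategy_sum = [[0.0] * ACTIONS, [0.0] * ACTIONS]
    for _ in range(iterations):
        strats = [strategy(regrets[p]) for p in (0, 1)]
        moves = [rng.choices(range(ACTIONS), weights=s)[0] for s in strats]
        for p in (0, 1):
            opp = moves[1 - p]
            sign = 1 if p == 0 else -1  # player 1's payoff is negated
            got = sign * PAYOFF[moves[0]][moves[1]]
            for a in range(ACTIONS):
                # regret = what action a would have earned, minus what we got
                alt = sign * (PAYOFF[a][opp] if p == 0 else PAYOFF[opp][a])
                regrets[p][a] += alt - got
            for a in range(ACTIONS):
                strategy_sum[p][a] += strats[p][a]
    # the *average* strategy is what converges to equilibrium
    return [[s / iterations for s in strategy_sum[p]] for p in (0, 1)]

avg = train(100_000)
print(avg[0])  # each probability approaches 1/3
```

Real poker adds hidden cards and sequential betting, which is why CFR has to reason over information sets rather than single states, but the regret bookkeeping is the same in spirit.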


Could the description of tic-tac-toe link to xkcd 832, which explains the strategy? 162.158.152.173 13:13, 27 January 2016 (UTC)

Saying that computers are very close to beating top humans as of January 2016 is misleading at best. There are not enough details in the BBC article, but it sounds like the Facebook program has about a 50% chance of beating 5-dan amateurs. In other words, it needs a 4-stone handicap (read: 4 free moves) to have a 50% chance of winning against top-level amateurs, to say nothing of professionals. If a robotic team had a 50% chance of beating Duke University at football (a skilled amateur team), would you say it was very close to being able to consistently beat the Patriots (a top-level professional team)? If anything that underestimates the skill difference in Go, but the general point stands. 173.245.54.38 (talk) (please sign your comments with ~~~~)

How about beating one of the top players five times in a row and being scheduled to play against the world champion in March? http://www.engadget.com/2016/01/27/google-s-ai-is-the-first-to-defeat-a-go-champion/ Mikemk (talk) 06:18, 28 January 2016 (UTC)
However, DeepMind ranked AlphaGo close to Fan Hui 2P, and the distributed version has been rated at the upper tier of Fan's level. http://www.nature.com/nature/journal/v529/n7587/fig_tab/nature16961_F4.html
The official games were 5-0; however, the unofficial games were 3-2, for a combined 8-2 in favor of AlphaGo.
Looking at http://www.goratings.org/ Fan Hui is ranked 631, while Lee Sedol 9P, who is playing in March, is in the top 5. 108.162.218.47 06:12, 5 February 2016 (UTC)
Original poster here (sorry, not sure how to sign). Okay, you all are right. Go AI has advanced a lot more than I had understood. I'm still curious how the game against Lee Sedol will go, but that that is even an interesting question shows how much Go AI has improved. 173.245.54.33 (talk) (please sign your comments with ~~~~)

Google's Alpha Go won, 3 games to none! Mid March 2016. Saspic45 (talk) 03:06, 14 March 2016 (UTC)

Okay, for some reason they played all five games. Lee Sedol won game 4, but the computer triumphed 4-1. Saspic45 (talk) 08:44, 17 March 2016 (UTC)

Not shown in this comic: 1000 Blank White Cards, and, even further down, Nomic. KangaroOS 01:22, 19 May 2016 (UTC)

Is the transcript (currently in table format) accessible for blind users? Should it be? 162.158.58.237 10:48, 19 February 2017 (UTC)

At the very least the transcript needs to be fixed so that it factually represents the comic. Jeopardy is in the wrong spot, judging from just a quick glance, which is all I have time for here at work. 162.158.62.231 16:58, 24 August 2017 (UTC)

AlphaStar beat Mana pretty decisively, but it was cheating and Mana won the game where it wasn't, and it could only play on a certain map in Protoss vs Protoss. However, that was a while ago. Google dropped AlphaStar on ladder under barcode usernames, and it's been doing rather well... but Serral (one of the world's best players) recently beat it pretty decisively. 172.68.211.244 01:49, 19 September 2019 (UTC)

In case anyone is checking up on the AlphaStar thing: AlphaStar definitely plays at a high human level in Starcraft 2 now (2020), without doing much that seems 'humanly impossible' (i.e., nothing like the cheating it did during the MaNa matchup), but it plays on a relatively limited set of maps, and it not only loses fairly regularly (if not most of the time) to top-ranked humans like Serral, it also occasionally loses to essentially random grab-bags of very good players like Lowko. Like, Lowko's much better than me, but he's not tournament-level good and he beat it.
Also, technically, AlphaStar isn't even *one* program. It's an ensemble of many programs, each one specific to a different SC2 race and specializing in different strategies. Maybe if there were a 'seamless' amalgam where it was 'choosing' a strategy it could arguably be one program, but it's literally a totally separately trained neural network for each 'agent'.
Furthermore, when you watch it play sometimes it does extremely stupid things like trap its tanks in its own base. SC2 is, at least this year, still a human endeavor at high tiers. 172.68.38.44 01:21, 26 July 2020 (UTC)

Who is Ken Jennings?

I feel like it should be relatively easy to make a computer program that can learn the rules of Mao without knowing them to begin with. There has to be some feedback: a player gets penalties if they break the rules. This can be used to write a self-learning algorithm.

The tricky part is that rules in Mao aren't limited to a function that states whether or not you can play a card based on the cards already played. Rules can be about how you play the card, how you sit, what you say, what you do if you play a certain card, etc. Rules can also apply out of turn. You could be required to do something in reaction to another player doing something (e.g. congratulate a player if they play a King), or penalised for e.g. speaking to the player whose turn it currently is. In order for a computer to compete successfully, it would need to ingest a lot of peripheral information and run some sophisticated learning that accounts for far more than simply the state of the cards. Particularly within a regular group of players, there are rules that will be reused a lot, e.g. certain cards acting as Uno special cards, but there is no guarantee these will appear and players can make up arbitrary rules. --Tom
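The penalty-feedback idea in the comments above can at least be sketched for the simple card-legality case. This is a hypothetical toy, not a full Mao player: the "secret" house rule here (match suit or rank, crazy-eights style) and the feature scheme are made-up assumptions, and it deliberately ignores the out-of-turn and verbal rules described above. The learner never sees the rule's code; it only tallies which kinds of plays drew penalties:

```python
import random

RANKS = list(range(1, 14))
SUITS = "CDHS"
DECK = [(r, s) for r in RANKS for s in SUITS]

def secret_rule(top, card):
    """The dealer's unstated rule; the agent never reads this function."""
    return card[0] == top[0] or card[1] == top[1]

class PenaltyLearner:
    """Learns play legality purely from observed penalty signals."""

    def __init__(self):
        # counts keyed by the feature pair (same_rank, same_suit)
        self.penalties = {}
        self.trials = {}

    def features(self, top, card):
        return (card[0] == top[0], card[1] == top[1])

    def legal_guess(self, top, card):
        f = self.features(top, card)
        n = self.trials.get(f, 0)
        if n == 0:
            return True  # unexplored: try it and accept the penalty
        return self.penalties.get(f, 0) / n < 0.5

    def observe(self, top, card, penalized):
        f = self.features(top, card)
        self.trials[f] = self.trials.get(f, 0) + 1
        if penalized:
            self.penalties[f] = self.penalties.get(f, 0) + 1

rng = random.Random(1)
agent = PenaltyLearner()
for _ in range(2000):  # watch random plays and whether they were penalized
    top, card = rng.sample(DECK, 2)
    agent.observe(top, card, not secret_rule(top, card))

top = (7, "H")
print(agent.legal_guess(top, (7, "C")))  # same rank -> True
print(agent.legal_guess(top, (2, "H")))  # same suit -> True
print(agent.legal_guess(top, (2, "C")))  # neither -> False
```

The hard part, as noted above, is that real Mao rules range over speech, gestures, and out-of-turn events, so the feature space a learner would need is vastly larger than the (rank, suit) pair used here.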

Sorry, but I just undid a new editor's addition of the text (for Snakes And Ladders) of "except that as the game has to be played on software so that an AI can participate, a (pseudo)-random number generator takes the place of physical dice in dictating players' movement, making it no longer truly random. Also, while physical snakes-and-ladders boards are fixed in their design, the graphical representation of these boards can be altered at will." - Firstly because it was added as a bullet point when it wasn't really supposed to be, but then I decided it needed much more editing than I thought worthwhile. To bullet my thoughts, though:

  • See the Beer Pong example for "AI in reality",
  • Even if there's not a robot arm shaking a physical die (which is pseudorandom itself, at least from a Newtonian perspective, but tends to be OK), as long as the supposed PRNG is not controlled/filtered by the AI then it's perfectly valid for use,
  • Machine-vision is a thing. I'm sure there's a trivial way to make a generalised board-image-decoder to get around the artistic differences (possibly even machine learning, to make actual use of the AI).

....The main issue is that solving all these 'problems' leaves little for the backend 'gameplay and strategy' AI to do, as it must simply pursue the path to victory (or not) that the dice/whatever formulaically dictate. So I wanted to honour the edit by mentioning it, but ultimately decided it should be undone. Pending perhaps a different approach to it, if anyone decides I was not fair (or correct) in my decision to do it this way. 172.70.162.147 13:09, 12 February 2022 (UTC)
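The point that there is nothing left for a gameplay AI to decide can be made concrete with a minimal sketch: in Snakes and Ladders, given the same dice stream, any player reaches exactly the same outcome. The board layout below is a made-up example, not any standard board:

```python
import random

# ladders and snakes: landing on a key jumps you to its value
JUMPS = {3: 22, 8: 30, 28: 84, 17: 4, 54: 34, 62: 19, 95: 75}

def play(dice):
    """Advance one token along a 100-square board using a dice iterator."""
    pos = 0
    turns = 0
    for roll in dice:
        turns += 1
        if pos + roll <= 100:  # overshooting rolls are wasted
            pos = JUMPS.get(pos + roll, pos + roll)
        if pos == 100:
            return turns
    return None  # ran out of dice

rng = random.Random(42)
rolls = [rng.randint(1, 6) for _ in range(1000)]
# Two "strategies" fed the same dice: there is nothing to choose.
print(play(iter(rolls)) == play(iter(rolls)))  # True: identical outcome
```

Since `play` contains no decision point, the only "AI" problems left are the peripheral ones (dice generation, board recognition) listed in the bullets above.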

If Snakes and Ladders and Seven Minutes in Heaven are mentioned, why not Game of Life? --162.158.159.132 17:24, 3 March 2024 (UTC)

That's (as I suspected) a disambiguation page, making me still wonder whether you mean Conway's Game of Life (which is a bit open-ended) or The Game of Life (which, if I remember the gameplay well enough, is basically the same luck-based 'challenge' as Snakes And Ladders with an expanded set of thematic bells and whistles added).
Such a confluence of names, however, makes me want to suggest Mastermind (board game) (well within 'solved' for just brute-force logic) or Mastermind (British game show) (you'd definitely need a Watson-level AI, though you could probably zhoosh it up with a ChatGPT front-end). 172.71.242.160 19:04, 3 March 2024 (UTC)