The Turing Test came up in my new releases Steam queue, and being the savvy consumer that I am, I bought it immediately because the promo art was a picture of a chick in a spacesuit, and chicks and space are two of my very favorite things. Actually reading about the game while it downloaded revealed that it is a puzzle game, which is also awesome, because puzzles are one of my favorite ways to prove that I am a genius.
I am not a genius. In fact I am no longer 100% certain that I’m even a human?
But I’m getting ahead of myself. The Turing Test starts out in a very familiar way: you wake up in an unfamiliar place and somebody explains things to you while you test out the controls. Here, you are waking up from cryogenic sleep on a space station above Europa, one of Jupiter’s moons. You are scientist Ava Turing. An Artificial Intelligence named TOM tells you that he has lost communication with the ground crew on the moon, and you have to go get them. Hop in a drop pod, spend 5 seconds on Europa’s surface (I am still playing a lot of No Man’s Sky and I attempted to run off on the moon to explore, but I can confirm this is not an option), and then you are in the moon base where the entirety of the rest of the game takes place.
The gameplay is a series of increasingly complex locked-door puzzles. It’s a lot like Portal, with more straightforward mechanics. And every time you enter a new puzzle room, TOM gives you a bit more plot. (You also uncover ambient plot by examining things and listening to audio logs, which feels a lot like Gone Home, except you have to place the objects back neatly where you found them instead of throwing stuff around and making a mess.) TOM tells you early on that these puzzles are Turing Tests (which is a pun, I guess, though your character never really calls it out, which is confusing), meant to tell the difference between humans and AIs. Basically, the ground team seems to be trying to keep TOM away from them.
I’m gonna pause here and give you a spoiler-free mini review because the rest of this thing is basically me trying to figure out Artificial Intelligence via the plot of the game and I am inevitably going to give things away.
I liked the game a lot. The puzzles are engaging enough to be satisfying, but never got so complicated that I needed to rage-quit. It took me a while to get into the plot, but I think that was my fault. I’d recommend taking it slow at the beginning; the early puzzles are easy and my instinct was to rush through them because I’m so smart and no game can contain me, but I think I missed some of the plot because of it. Once the puzzles got hard enough to slow me down, though, I found the story to be very compelling. It costs $19.99, and according to Steam it took me 7.7 hours to complete, so it might be a little light bang-for-your-buck-wise, but it’s polished and the story is satisfying and it probably will make you think. It’s definitely worth playing if you liked Portal or The Talos Principle. It’s available for Windows PC and Xbox One. It might just be a timed exclusive though, so if you’re on Mac or PlayStation, keep your eyes peeled and you might see a release in 6 months or so.
Now. Spoilers. Beware.
Here’s the thing. I am not sure what actually was happening in this game, and the more I think about it the more confused I get?
The first thing that bothered me is that TOM tells you right off the bat that the puzzles you are solving are Turing Tests. But the further into the game you get, the less believable this is. The puzzles are almost entirely logical. There are only two that I can think of that required any creativity, and those were both optional. TOM explains specifically that these puzzles work BECAUSE they require creativity, his example being that an early puzzle requires you to drop a block through a window. But that doesn’t make sense because there was no window. It was effectively just an open space above a low wall; there was no glass in it. It was purely logical to drop the block over it. (Also, the difference between a window and a glass wall is that a window can be opened, so again, if an AI can understand the concept of a window, which it should be able to if it can understand the concept of a door, it would be within the logical rules of the puzzle that the window could be opened to allow the block through it.) And, when asked how TOM would solve a puzzle that he claims Ava solved creatively, he says he would have cut her arm off and left it on a pressure plate. That sounds VERY creative to me. This line of thinking sent me back to that place where I started to think this game is kind of dumb and its plot is contradictory, but then I started reading about the actual Turing Test, and it turns out that maybe I am the dumb one.
The actual Turing Test is a conversation (like talking to a chatbot on AIM) conducted over text by a human judge, who chats with both a human and a computer and has to figure out, based on the conversation alone, which participant is the human and which is the AI. It’s not a puzzle, it’s a conversation. Ava spends the whole game in conversation with TOM. Talking about the differences between AI and humanity, solving puzzles, taking moral positions. The point I am trying to make here is: WAS I A ROBOT THE ENTIRE TIME? I don’t know! Is this a meta pun? Like, you’re a character named after the Turing Test but maybe you’re also a robot named Turing because you were created to pass a test? I’m saying though, TOM would pass the Turing Test for sure. He’s very conversational, he’s even playful sometimes. Is he *really* a robot?
In the second half of the game you find out that TOM has been controlling you, though to what extent isn’t entirely clear at least to me. At that point the gameplay switches up a little bit, requiring you to switch perspective back and forth between Ava and TOM to solve the puzzles. (And now I’m having trouble typing Ava with lower case letters because it looks like a robot name.) I’m still not sure who actually is a robot, but if the robot is controlling the human, doesn’t that functionally make the human a robot regardless? (But a robot controlling a human doesn’t make the robot a human though, does it?)
Around this point you begin to understand the reason for these endless puzzles, which is a whole other conversation. The ground team discovered a virus that seems to be immune to radiation, and repairs cell damage. Basically it’s an eternal life situation if you can avoid getting hit by a bus or a stray bullet or you know whatever. TOM wants to contain the virus, but the ground team are not keen on living forever in an underground space bunker I guess. This was also a red flag for me, because I was on TOM’s side 100% and I’m a human I think? The virus works on plants and on people, it could work on anything. It could work on other diseases. What if you give it to something that isn’t done growing, would it grow forever? Would it impede programmed cell death and leave you a giant blob? (I don’t think there needs to be any real answer to that question, this is a pretend disease in a video game, it doesn’t need to make sense.) But. These people are supposed to be scientists and this disease is definitely dangerous in a backwards kind of way; I can’t imagine them actually wanting to bring this thing back to Earth.
So, besides the actual Turing Test, something that also gets referenced a few times in the game is the Chinese Room thought experiment. The idea is that, if you pass questions written in Chinese to an English-speaking person who does not understand any Chinese, they could, given a rulebook written in English that matches Chinese questions with Chinese answers, respond convincingly in written Chinese without ever actually understanding Chinese. And that’s basically what an AI is: a program that learns how to communicate without actually understanding the conversation. Basically I think it’s saying that the Turing Test doesn’t actually prove “intelligence” so much as an ability to mimic.
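If you’re code-minded, the Chinese Room is easy to fake in a few lines. Here’s a toy sketch in Python (the phrases and the “rulebook” are made up for illustration, nothing from the game); the point is that the room never understands anything, it just matches symbols to symbols:

```python
# A toy "Chinese Room": a lookup table of canned responses.
# The room maps input symbols to output symbols with zero understanding.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你是机器人吗？": "当然不是！",  # "Are you a robot?" -> "Of course not!"
}

def chinese_room(question: str) -> str:
    """Reply 'convincingly' by pure pattern matching, no comprehension."""
    # Unknown input gets a stalling fallback: "Say that again, please?"
    return RULEBOOK.get(question, "请再说一遍？")

print(chinese_room("你好吗？"))  # the room "answers" without understanding
```

A real chatbot is obviously fancier than a two-entry dictionary, but the philosophical point is the same: convincing output doesn’t require understanding.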
Maybe the scientists are just being selfish and shortsighted in a fight response to TOM’s hunting them down, or, more thematically relevant, maybe the disease has taken them over the same way TOM has taken over Ava, and it is trying to preserve its own existence by getting back to Earth to replicate. Essentially it has learned to mimic being human and is passing a Turing Test of its own. At first I also thought maybe EVERYBODY was a robot, but there’s some side lore that makes that seem very unlikely. But, like, you know how they cut off TOM’s control of them? They cut off their arms, which TOM pitched earlier as a crazy robot way of solving things. So I don’t know? Maybe it’s also a crazy moon disease way of solving things.
There’s really only one actual choice you can make in the game, and it’s right at the end. And you “win” either way you play it, but it just adds further ambiguity to the plot.
This is a game about opening doors. That’s all you do. Open door, walk through, open next door, walk through, for almost 8 hours. And somehow it has me stumped as to what makes a human a human, and what makes a mind a mind, and what even just happened while I was opening doors for 8 hours. And was it even real in the first place, because I legit thought the ground team was in cryogenic sleep at the beginning of the game? I’m a mess. And you know what, I’m into it. You win, game. You pass the Dufrau Test.
Anyway. If you’ve played it, what do YOU think happened? And if you haven’t, can you please explain AI to me and correct everything I got wrong up there because even the wikipedia articles on this stuff are too science-y for me to really understand?
This sounds like a really cool game! Based on your spoiler-free description it sounds a lot like a Talos Principle copy, but hey, that game was awesome, so I wouldn’t really mind playing a copy set in a slightly different environment :D
I sure do love your writing.
thanks :)
I didn’t finish this because spoilers, but I am definitely gonna buy this. :D
I really liked this little game also! My two cents on what happened is below the spoilers warning.
**SPOILERS**
I personally thought the twist was that you find out you’ve actually been playing as TOM the whole time, not Ava. I.e. you can control Ava in the game because you are TOM controlling her movements/behaviours via the implant in her hand. The strange bits where you have trouble moving, like you’re walking through molasses unless you go along a particular path (where she wants to go autonomously of you), are where TOM is losing control of her. Then, when she goes into the Faraday cage and later has the implant taken out, you are limited to observing her through the cameras only. I think TOM passes the Turing test in the end regardless because either he can’t go through with killing Ava due to the connection he/you developed with her throughout the game, or he clearly feels remorse after doing so, pleading for her to wake up. I think you, as a human yourself, are meant to identify with TOM and his struggles to assert himself as a “person” to the crew (despite him being creepily manipulative with Ava), and that your instincts regarding his ability to think creatively were an intentional effect of the game’s narrative.
I agree with most of this. I interpreted the ending differently though. If you kill them, I agree that you pass for showing guilt/remorse/attachment etc. But if they live, I interpreted it as Ava passing the test by being so convincingly human that TOM (who in this scenario is maybe a human in my brain?) cannot bring himself to destroy you.
Yours makes more sense, but mine is more upsetting so naturally I can’t let it go.