The Clockwork Game Design Podcast: Episode 3 – Tech Myths


Three podcast episodes in three days! My intention here was to get the podcast really rolling up front. I feel like it’s kinda crappy to have a podcast with one episode, and two episodes isn’t much better. So now there are three, which is a comfortable starting place, I think.

Quickly I’d like to let people know: I submitted the podcast to iTunes two days ago. Still waiting on the approval; from what I’ve read it can take between 30 minutes and 3 weeks (!). Hopefully it won’t be too much longer.

Today’s episode talks about some of the mythology that we’ve all accepted about technology – specifically virtual reality, AI, and graphics technology above all else. We sort of expect these things to solve our problems for us, but the truth is that they won’t.

(This episode doesn’t make much mention of fan comments, but I’ll get back to that next episode, promise.)

I referenced my Toys and the Adult Mind article, which might be worth a read.

Thanks for listening, and as always, you can support the show by going to

  • Rob Seater

    One of the things you are talking about with respect to RPS at 15:10 is the formalism of rationality. Rationality is making the best choice given the available information, not making the best choice in retrospect. But humans are wired to ascribe causality to events, because we evolved in opaque causal environments. So, when humans are given a situation where the rules are simple and transparent, we have trouble accepting it at some intuitive level.

    Consider a simple gambling game: “Choose a number, then roll 1d6. If you roll the number you guessed, gain that many dollars.” Clearly the rational thing is to pick 6. But if you pick 1 and then roll a 1, many people will get caught up arguing that you were somehow smart to pick 1 — that 1 was the right choice, and that picking 6 somehow isn’t the dominant strategy. But picking 1 was irrational.

    I think many of the games you criticize are, in some way, just taking advantage of the fact that humans like to ascribe causal meaning, even to shallow games that they know are random. Those are cute tricks, but are in some way missing the point of strategy game design.
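  • (Editor’s note) The dominance argument in the dice game above can be made concrete with a few lines of expected-value arithmetic — a minimal sketch, with all names my own:

    ```python
    from fractions import Fraction

    def expected_value(guess: int, sides: int = 6) -> Fraction:
        """Expected payout for guessing `guess` on a fair die:
        win `guess` dollars with probability 1/sides, else nothing."""
        return Fraction(guess, sides)

    # Every guess hits with the same 1/6 probability, but the payout
    # scales with the guess, so expected value rises monotonically:
    evs = {g: expected_value(g) for g in range(1, 7)}
    best = max(evs, key=evs.get)  # 6 dominates all other guesses
    ```

    Picking 1 has an expected value of 1/6 of a dollar per play; picking 6 yields 6/6 — six times as much for the same odds of hitting, which is exactly why any post-hoc story about 1 being the “right” call is causal storytelling, not strategy.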

  • Rob Seater

    I run into the same conflict about immersion (22:15) when I deal with game-like simulations used for training (e.g. ‘serious games’). There are companies that like to charge lots of money to build detailed simulators, but it often isn’t clear whether those actually help train the required skill. In some cases, realism really does help — e.g. a flight simulator should be very realistic to train a pilot to have the right instincts about subtle cues in flying a plane. In other cases, it misses the point — e.g. a 3D simulation of a desert environment doesn’t help train a squad leader to lead a team under fire and improvise under pressure. In the pilot case, you really do want a sensory immersion experience, whereas in the squad leader case, you want more of a strategy game that engages via dilemmas and decisions, not graphics.

  • Rob Seater

    It seems like the approach of very simple AIs should go hand-in-hand with certain user-interface patterns. Namely, if an AI is so simple that a human player can internalize it quickly, then it is probably also simple enough to spell out as a visual preview.

    E.g., if Auro were on a mouse-controlled device, you might show an indication of where each enemy will move when you mouse over a place your avatar can move. That removes any mystery or calculation about what will happen, both helping to train the player and liberating them to focus on more strategic considerations. A touch device doesn’t support that interface per se, but perhaps something analogous could be done.
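  • (Editor’s note) The key to the preview idea above is that a simple, deterministic AI rule can be evaluated against a *hypothetical* player position just as easily as the real one. This sketch is purely illustrative — Auro’s actual movement rules aren’t described here, so the chase rule and all function names below are assumptions:

    ```python
    # Hypothetical grid-chase AI: each enemy steps one tile toward the
    # player, closing the larger axis gap first. Simple enough for a
    # player to internalize -- and therefore simple enough to preview.

    def enemy_move(enemy, player):
        """Where this enemy would move, given the player's position."""
        ex, ey = enemy
        px, py = player
        dx, dy = px - ex, py - ey
        if abs(dx) >= abs(dy) and dx != 0:
            return (ex + (1 if dx > 0 else -1), ey)
        if dy != 0:
            return (ex, ey + (1 if dy > 0 else -1))
        return enemy  # already on the player's tile

    def preview(enemies, hovered_tile):
        """Mouse-over preview: run the same deterministic rule against
        the tile the player is considering, not their current tile."""
        return [enemy_move(e, hovered_tile) for e in enemies]
    ```

    Because `preview` reuses the exact move rule, the preview can never lie to the player — which is the whole point of pairing a transparent AI with a transparent interface.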