Clockwork Criteria: 6 Guidelines for Ideal Strategy Game Design

What are the criteria that make something a good “Clockwork Game”?

The Clockwork Game Design model is something I have been working on for the last five years or so. It is specifically an effort to figure out how to make the most elegant and effective strategy games possible. There are certainly practical reasons why you might not want a specific game to be a Clockwork game. But to the extent that you want your strategy game to be elegant, you should adopt as many of these principles as possible.


Below is a list of criteria that strategy games should strive for. I am sorting them by how controversial they are. In other words, I am putting the stuff people pretty much agree upon towards the bottom.

These are not ordered by priority. I am making no statements about which of these is more or less important; just that they are all something to strive for.

CGD Podcast Episode 32: Contests of Understanding, and Questioning Gun Worship in Games


Hello everyone! A new episode, finally. This one is a distinct two-parter, coming in at about 45 minutes. In the first part, I talk about how games are better described as contests of understanding than as contests of decisions; the “decisions” aspect of games tends to be a bit overstated.

The second thing I talk about is a new IGN article that asks the question, “Are Guns In Video Games Holding The Medium Back?”

(Above is a screenshot from a new satirical VR game called The American Dream.)

Thanks for listening, and let me know what you think of the episode below.

Liked the episode? Please consider becoming a patron at Patreon.com. You’ll gain access to previews of new episodes, articles, and even get the first glimpses at my games and prototypes.

Improving Go (Not Really)

My official position is that you can't really “improve” Go. There might be something in there worth salvaging, but you can't just tweak some rules and make Go better. That's not because Go is so great, but because tweaking the rules of an existing system like that tends to produce far worse results.

With that said, it might be an interesting intellectual exercise to try grafting the Clockwork Game Design concepts onto Go and see what you get.

Every day I throw out failed game design ideas. Today I thought I'd share one with you guys just to get a little game design conversation going. (With the election and everything, things have been a little slow on that front recently.)

Here's as far as I got, just to get you guys started.
Some basic ideas for it:
  • 13×13 board, as a starting place. Would scale up or down as necessary. Maybe the board shouldn’t even be square, not sure.
  • Fog of war. Basically my idea was that you get a vision range of 2, but this doesn't actually make sense in practice, for a few reasons. One is that you shouldn't be able to place pieces out on the perimeter at random, let alone across the board in some random fogged spot you can't even see. The second is that at some point (possibly 10-12 moves in) you're going to see the whole board anyway – bye bye, hidden information (see the rough sketch after this list). There may be solutions to these problems, but I don't know. (I'll come back to this at the end.)
  • At least 1 piece down already, probably more like 3-4 in a random, non-mirrored configuration (this is to avoid guessing what the opponent is doing in the fog).
  • Grey pieces are down in a mirrored configuration. Grey pieces turn your color when you put a piece next to them. Or maybe they do something different?
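To put a rough number on the “bye bye, hidden information” problem, here is a quick Python sketch. It assumes “vision range 2” means each of your stones reveals every cell within a Chebyshev distance of 2 (a 5×5 square, clipped at the edges), and that stones get placed at random; both of those are my assumptions for the sake of the exercise, not part of the idea above.

```python
import random

SIZE = 13      # 13x13 board, per the first bullet above
VISION = 2     # assumption: a stone reveals everything within Chebyshev distance 2

def reveal(stone, seen):
    """Add every cell within VISION of `stone` to the `seen` set."""
    r, c = stone
    for rr in range(max(0, r - VISION), min(SIZE, r + VISION + 1)):
        for cc in range(max(0, c - VISION), min(SIZE, c + VISION + 1)):
            seen.add((rr, cc))

def avg_fraction_visible(moves, trials=2000):
    """Average fraction of the board visible after `moves` randomly placed stones."""
    cells = [(r, c) for r in range(SIZE) for c in range(SIZE)]
    total = 0.0
    for _ in range(trials):
        seen = set()
        for stone in random.sample(cells, moves):
            reveal(stone, seen)
        total += len(seen) / (SIZE * SIZE)
    return total / trials

if __name__ == "__main__":
    for n in (6, 10, 12, 16):
        print(f"{n:2d} stones -> ~{avg_fraction_visible(n):.0%} of the board visible")
```

Even with purely random placement, most of the board is revealed within ten to twelve stones, and a player who deliberately spreads out needs only about nine stones (a 3×3 grid spaced five apart) to see everything. So under these assumptions, the hidden information really does evaporate fast.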

A few thoughts I had as I was giving up on this:

– Maybe this could be single player somehow? Like having to do with the grey pieces? Probably not.

Back to the fog of war and the problems with it: the funny thing is, it doesn't really work, and one of the reasons it doesn't work is Go's “I can just lay down pieces willy-nilly wherever I want, with no restrictions” element to begin with.

Anyway, like I said, I make these kinds of failed little concepts all the time and since things have been slow around these parts recently, I thought I’d share this one totally non-working, bad idea with you.

How would you apply the Clockwork Game Design methodology to Go? Just to review, here are some of the demands:
– No memorized openings/closings
– Some source of hidden information
– Ideally, something that looks like a core mechanism
– The game should be no longer than it needs to be (Go’s pretty long)
I’d love to hear your thoughts.

CGD Podcast Ep. 31 – permadeath, structure, the death of game design writing, and more

Hello everyone. Today I'm talking about a new article I read about permadeath and grinding, as well as what I perceive as the death, or at least the tapering off, of the world of game design writing.

I also read and responded to a Frank Lantz quote (now on the Dinofarm Forums!) on the topic of structure in games and win rates.

You should also check out the game design subreddit if you haven’t already: http://www.reddit.com/r/gamedesign

(By the way… beware the term “beautiful”.)

As always, you can support the show by visiting my Patreon page.

CGD Podcast Episode 30 – Deepities, a new Frank Lantz article, and updates


In this episode I discuss the concept of deepities and how it applies to game design writing. I also discuss a new Frank Lantz article on Ian Bogost‘s new book—an article that, it seems to me, pushes against progress in game design in some ways.

(Don’t forget to check out episodes 23 and 24 where I talked with Frank on the show, if you haven’t already.)

Finally, I talk a little bit about some personal updates with me, my 2-3 upcoming games, and Codex (which I’m still playing).

Thanks for listening! If you like the show, show your support by making a pledge on my Patreon page.

Minimize calculation (in games worth playing)

This is a short follow-up to my article, “Uncapped Look-Ahead and the Information Horizon“, in which I proposed the concept of an information horizon: the distance between the current turn and the point at which information becomes known to a player (usually, but not always, this means that it has become “public information”).

A simpler way to word it is, “how much time do players have to react to new information?” In the case of rolling a die to hit, you have zero time to respond, so in this case the “information horizon” would be right up in the player’s face. Alternatively, drawing cards to a public market or revealing new terrain via fog of war tends to lend the player a few turns / some time to respond to that new information before it affects the gamestate.

I also discussed the issue in Episode 6 of the 3 Minute Game Design YouTube series.

This concept is important because, as I talk about in the video, one of my guidelines for strategy game design is that if the information horizon is too close, the line of causality is cut and the final outcome quickly becomes dissociated from the player's performance, which is what we're trying to measure in a strategy game, after all. If the information horizon is too far away, we get a “look-ahead contest” situation, where the result largely comes down to who calculated (solved) more of the available game state. This is mostly a brief review of things I've talked about in the articles and videos linked above.

The new thing I want to suggest today is: Assuming a reasonable degree of goal feedback efficiency, we should strive for as little calculation as possible. To phrase it another way, in any game that’s good enough to be worth playing, you should try to minimize the amount of calculation that’s possible.

A reasonable degree of goal feedback efficiency

When we look at a game, “goal feedback efficiency” is a rough approximation describing how accurately the end state of the game reflects player performance. A game with perfect goal feedback efficiency would give the win to the player who made stronger inputs 100% of the time. A game with good goal feedback efficiency would give the win to that player somewhere around 90% of the time.

Every game needs to have a pretty high degree of this, without exception. I'm not sure exactly what the number is, but I would say that if it gets much lower than, say, 85-ish%, it starts becoming hard to “trust” the game. With less efficiency than that, it becomes hard to defend playing the game.

I would not play a 75% efficiency game. Why? Because a quarter of my matches are sending me false signals about my performance. That might not sound like too big a deal, but it becomes a very big deal when the player has no way of really knowing which matches are the false signals and which aren’t.
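To put a rough number on that, here is a toy calculation of my own (not from the original articles). It treats each match result as “honest” with probability equal to the efficiency rating, and asks how many matches you would need before the majority verdict can be trusted at 95% confidence.

```python
from math import comb

def p_majority_correct(efficiency, n_matches):
    """Probability that more than half of n_matches results reflect the player's
    true performance, if each match is 'honest' with probability `efficiency`."""
    need = n_matches // 2 + 1
    return sum(
        comb(n_matches, k) * efficiency**k * (1 - efficiency) ** (n_matches - k)
        for k in range(need, n_matches + 1)
    )

def matches_needed(efficiency, confidence=0.95, max_matches=999):
    """Smallest odd number of matches whose majority result you can trust
    at the given confidence level (odd to avoid ties)."""
    for n in range(1, max_matches + 1, 2):
        if p_majority_correct(efficiency, n) >= confidence:
            return n
    return None

if __name__ == "__main__":
    for eff in (0.95, 0.90, 0.85, 0.75):
        print(f"{eff:.0%} efficiency: trust the majority after ~{matches_needed(eff)} matches")
```

Under these assumptions, a 95% efficiency game gives you a trustworthy read almost immediately, while at 75% you need on the order of nine matches before the majority result is even worth believing, and you pay that cost again for every new strategy you want to evaluate.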

The classic answer to this problem is that figuring out which matches to believe and which to chalk up to randomness is part of the skill of the game. First, this strikes me as an attempt to make excuses for what exists, rather than an actual suggestion about what makes for good game design.

But beyond that, I don't think this is possible in an unsolved game. It's hard for me to believe that a person could play a complex game, barely cling to enough understanding to pull off a win, and then also, on top of that, have enough additional systemic understanding to determine whether that win was due to random effects or their own agency. In other words: if you have a balanced game, players will understand the system just well enough to win or lose – they will be playing at their maximum capacity. So it's unreasonable to expect them to also be able to judge whether a given win or loss was based on randomness or not.

Ideally, games would have a 95+% efficiency rating, and I actually don't think that's too hard to pull off. It doesn't mean you can't have some random variance; it just means that the random variance should mostly be input variance, so that players can account for it, and to the extent that there is output variance, each instance should be small enough in impact, and frequent enough, that it all mostly averages out. Hundreds of small (±10%) random damage rolls over the course of a match are probably OK, but a couple of critical card-draw failures in a match probably are not.
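As an illustration of the “averaging out” point, here is a small Monte Carlo sketch. All of the numbers in it (200 hits per match, base damage of 10, a 2% skill edge, 50-damage card swings) are made up for the sake of the example; the point is only to compare how much each noise profile erodes goal feedback efficiency.

```python
import random

TRIALS = 20_000

def match_total(base_damage, hits=200, small_noise=False, big_draws=0, draw_swing=0):
    """Total damage one player deals in a match under a given noise profile."""
    total = 0.0
    for _ in range(hits):
        dmg = base_damage
        if small_noise:
            dmg *= random.uniform(0.9, 1.1)   # +-10% variance on every hit
        total += dmg
    for _ in range(big_draws):                # a few swingy draws: all or nothing
        total += draw_swing if random.random() < 0.5 else 0.0
    return total

def efficiency(**noise):
    """How often the slightly stronger player (base 10.2 vs 10.0) actually wins."""
    wins = 0
    for _ in range(TRIALS):
        if match_total(10.2, **noise) > match_total(10.0, **noise):
            wins += 1
    return wins / TRIALS

if __name__ == "__main__":
    print(f"200 small +-10% rolls:    {efficiency(small_noise=True):.1%}")
    print(f"3 swingy 50-damage draws: {efficiency(big_draws=3, draw_swing=50):.1%}")
```

With these made-up numbers, the per-hit variance almost never flips the result, while the three big draws hand the weaker player the win roughly a third of the time. The issue isn't randomness per se; it's a small number of high-impact random events that never get a chance to average out.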

This is actually a pretty practical concern. Having a low efficiency rating means it will simply take the player too long to explore your system. In the finite number of hours they're going to give your game, the amount of “effective depth” (depth the player can actually access) drops quickly once efficiency falls below 95%, and then plummets when you go much lower.

Of course, today's game players who are used to playing stuff like Hearthstone will probably play such games anyway, but for game designers who want to potentially someday make something better than Hearthstone, it is critical that we understand and internalize this idea.

 

…As little calculation as possible

So, if you've got this roughly 95% efficiency rating (which you should!), then we can ask the question: how much calculation should your game allow for? Or, put another way: where should your information horizon be?

Quickly, a couple of terms: Calculation, for the purposes of this article, means solving. It means literally following logical courses of action to their deterministically guaranteed outcomes. When players do “look-ahead” in games, they are typically doing calculation. Sifting through public, deterministic game states in Connect Four is a great example of calculation.

Analysis, on the other hand, is a word I use for the kind of “thinking” in games that doesn’t fall in that category. When you can’t calculate, you use a looser, heuristic estimation process, and I call that analysis.
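To make the two terms concrete, here is a minimal look-ahead sketch. The GameState interface and its method names are my own illustration (not any particular library or game): the exhaustive tree walk is calculation, and the heuristic() call at the depth cutoff is where analysis takes over.

```python
from typing import Iterable, Protocol

class GameState(Protocol):
    """Minimal interface for a deterministic, public-information game.
    All names here are illustrative."""
    def legal_moves(self) -> Iterable[object]: ...
    def apply(self, move: object) -> "GameState": ...
    def is_terminal(self) -> bool: ...
    def score(self) -> float: ...       # exact value of a finished game, for the player to move
    def heuristic(self) -> float: ...   # loose estimate of an unfinished position

def negamax(state: GameState, depth: int) -> float:
    """Depth-limited look-ahead: calculation up to `depth`, analysis beyond it."""
    if state.is_terminal():
        return state.score()            # fully solved: pure calculation
    if depth == 0:
        return state.heuristic()        # horizon reached: fall back to analysis
    # Follow every legal move to its deterministic consequences.
    return max(-negamax(state.apply(m), depth - 1) for m in state.legal_moves())
```

In a fully public, deterministic game like Connect Four, nothing stops a player from pushing that depth as far as their patience allows, which is exactly the look-ahead contest problem. A well-placed information horizon caps how deep that tree can meaningfully go before hidden information forces the player back into analysis.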

So back to my claim:

Assuming a reasonable degree of goal feedback efficiency, we should strive for as little calculation as possible.

Obviously we can create a game with zero calculation – perhaps something like the card game War, or maybe (a single match of) Rock, Paper, Scissors. In these, we’ve brought the information horizon to “right up in your face” – once the other player has played Rock, you can’t do anything about that. But we’ve also destroyed our goal feedback efficiency. Wins have nothing to do with player performance.

The point is, you do need some degree of determinism in games; some “causal line” that goes from the player’s input and stretches out into the system to some extent. But by using input randomness smartly and carefully selecting the position of the information horizon, you can (and should) reduce the calculate-able (solvable) parts of your game down to a reasonable level.

This isn't just a matter of “balancing” goal feedback efficiency and calculation. It's much more that goal feedback efficiency has a floor that it really just can't go below (95%, mayyyybe 90%) no matter what, whereas calculation is much more flexible.

This is because the downside to too much calculation is that the game is a little too solvable, but still totally skill based. In short, it’s a little bit too much like Chess. It’s kind of OK for games to lean into being a little bit Chess-like.


The downside to too little goal feedback efficiency is that the game becomes indistinguishable from noise, and totally unplayable to anyone who's alert to this kind of problem. Granted, there are a lot of people who will happily play this kind of game anyway, as so many popular games fall into this category these days, but my writing has never been about “game design guidelines that help you make games people won't know better than to play”. My game design guidelines are about helping you make good games.

If you build a strong system, with a well-placed information horizon, this new guideline is going to be met somewhat naturally. But it’s another way to test a system you’re already working with and to understand the information horizon concept.

Enjoyed this article? Consider supporting my work on Patreon.com!