CGD Podcast Bonus – Introducing The Dinofarm Community Podcast

Hi everyone! This week, instead of a normal Clockwork Game Design Podcast episode, I bring you an episode of another podcast that I was very recently on – the Dinofarm Community Podcast. This podcast is hosted and run by members of the Dinofarm Games community, over on the Discord and forums. I came on this episode, #3, to discuss core mechanisms, and we contrasted them with Redless’ idea about core decisions. Overall, it was a good conversation, one that I think Clockwork Game Design Podcast listeners will get a lot out of.

Enjoy! And subscribe to the Dinofarm Community Podcast, which will have new episodes weekly.

Minimize calculation (in games worth playing)

This is a short follow-up to my article, “Uncapped Look-Ahead and the Information Horizon”, in which I proposed the concept of an information horizon: the distance between the current turn and the point at which information becomes known to a player (usually, but not always, this means that it has become “public information”).

A simpler way to word it is, “how much time do players have to react to new information?” In the case of rolling a die to hit, you have zero time to respond, so the “information horizon” is right up in the player’s face. By contrast, drawing cards to a public market or revealing new terrain through fog of war tends to give the player a few turns to respond to that new information before it affects the gamestate.
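To make that concrete, here’s a toy sketch of a game loop in which each piece of random information is revealed some number of turns before it resolves; that reveal distance is the information horizon. The event names and numbers are invented purely for illustration.

```python
import random
from collections import deque

def play(horizon: int, turns: int = 6) -> None:
    """Toy loop: each random event is revealed `horizon` turns
    before it actually affects the gamestate."""
    pending = deque()  # events the player can already see but that haven't hit yet
    for turn in range(1, turns + 1):
        # New information enters the game now, but only resolves later.
        incoming = random.choice(["monster", "treasure", "storm"])
        pending.append((turn + horizon, incoming))
        print(f"turn {turn}: revealed '{incoming}' (resolves on turn {turn + horizon})")

        # Resolve anything whose turn has come.
        while pending and pending[0][0] <= turn:
            _, event = pending.popleft()
            print(f"turn {turn}: '{event}' now affects the gamestate")

play(horizon=0)   # die-roll-to-hit feel: zero time to react
play(horizon=3)   # fog-of-war / public-market feel: a few turns to plan around it
```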

I also discussed the issue in Episode 6 of the 3 Minute Game Design YouTube series.

This concept is important because of one of my guidelines for strategy game design: as I talk about in the video, if the information horizon is too close, the line of causality to the final outcome quickly becomes dissociated from the player’s performance—which is what we’re trying to measure in a strategy game, after all. If the information horizon is too far away, we get a “look-ahead contest”, where winning largely comes down to who calculated (solved) more of the available game state. This is mostly a brief review of things I’ve talked about in the articles and videos linked above.

The new thing I want to suggest today is: Assuming a reasonable degree of goal feedback efficiency, we should strive for as little calculation as possible. To phrase it another way, in any game that’s good enough to be worth playing, you should try to minimize the amount of calculation that’s possible.

A reasonable degree of goal feedback efficiency

When we look at a game, “goal feedback efficiency” is a rough approximation of how accurately the end state of the game reflects player performance. A game with perfect goal feedback efficiency would give the win to the player who made stronger inputs 100% of the time. A game with good goal feedback efficiency would give the win to the player who made stronger inputs roughly 90% of the time.

Every game needs a pretty high degree of this, without exception. I’m not sure exactly what the number is, but I would say that if it gets much lower than, say, 85-ish percent, it starts becoming hard to “trust” a game. With less efficiency than that, it becomes hard to defend playing the game at all.

I would not play a 75% efficiency game. Why? Because a quarter of my matches are sending me false signals about my performance. That might not sound like too big a deal, but it becomes a very big deal when the player has no way of really knowing which matches are the false signals and which aren’t.
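As a rough sanity check on that arithmetic, here’s a toy simulation sketch: it treats “efficiency” as the probability that the better-performing player actually wins a given match, and estimates how often a short session of matches leaves that player with a mostly misleading picture of how they did. The nine-match session length is an arbitrary choice for illustration.

```python
import random

def misleading_session_rate(efficiency: float, session_len: int = 9,
                            trials: int = 100_000) -> float:
    """Fraction of sessions in which the majority of results point the
    wrong way, i.e. the better-performing player loses most matches."""
    misleading = 0
    for _ in range(trials):
        wins = sum(random.random() < efficiency for _ in range(session_len))
        if wins < session_len / 2:
            misleading += 1
    return misleading / trials

for eff in (0.95, 0.85, 0.75):
    rate = misleading_session_rate(eff)
    print(f"{eff:.0%} efficiency: ~{rate:.1%} of 9-match sessions mostly mislead")
```

At numbers in that range, a 75% efficiency game isn’t just lying in a quarter of individual matches; it will also, every so often, turn a whole evening of play into one long false signal.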

The classic answer to this problem is that figuring out which matches to believe and which to chalk up to randomness is part of the skill of the game. First, this strikes me as an attempt to make excuses for what exists, rather than an actual suggestion about what makes for good game design.

But beyond that, I don’t think this is even possible in an unsolved game. It’s hard for me to believe that a person could play a complex game, barely cling to enough understanding to pull off a win, and then, on top of that, have enough additional systemic understanding to determine whether that win came from random effects or from their own agency. In other words: in a balanced game, players will understand the system just well enough to win or lose; they will be playing at their maximum capacity. So it’s unreasonable to expect them to also be able to judge whether a given win or loss was just based on randomness.

Ideally, games would have a 95+% efficiency rating, and I actually don’t think that’s too hard to pull off. It doesn’t mean you can’t have some random variance; it just means that the random variance should mostly be input variance, so that players can account for it, and to the extent that there is output variance, each instance should be small enough in impact, and frequent enough, that it mostly averages out. Hundreds of small (±10%) random damage rolls over the course of a match are probably OK, but a couple of critical card-draw failures throughout a match probably aren’t.
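The “averages out” part is easy to check with a quick sketch (the specific numbers here are invented for illustration): the total of many small ±10% damage rolls barely wanders from its expected value, while a handful of much bigger swings wanders a lot.

```python
import random
import statistics

def noisy_total(n_events: int, swing: float, base: float = 10.0) -> float:
    """Sum of n_events hits, each dealing base damage scaled by a
    uniform factor in [1 - swing, 1 + swing]."""
    return sum(base * random.uniform(1 - swing, 1 + swing) for _ in range(n_events))

def relative_spread(n_events: int, swing: float, trials: int = 20_000) -> float:
    """Standard deviation of the match total, as a fraction of its mean."""
    totals = [noisy_total(n_events, swing) for _ in range(trials)]
    return statistics.stdev(totals) / statistics.mean(totals)

# Hundreds of small +/-10% rolls: the match total is very stable.
print(f"300 rolls at +/-10%: spread is about {relative_spread(300, 0.10):.1%} of the total")

# A few huge swings (think critical card draws hitting or whiffing): it is not.
print(f"3 rolls at +/-90%:  spread is about {relative_spread(3, 0.90):.1%} of the total")
```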

This is actually a pretty practical concern. A low efficiency rating means it will simply take the player too long to explore your system. In the finite number of hours they’re going to give your game, the amount of “effective depth” (depth the player can actually access) drops quickly once you dip below 95% and then plummets as you go much lower.

Of course, today’s game players, who are used to playing stuff like Hearthstone, will probably put up with it anyway. But for game designers who want to someday make something better than Hearthstone, it is critical that we understand and internalize this idea.


…As little calculation as possible

So, if you’ve got this roughly 95% efficiency rating (which you should!), we can ask the question: how much calculation should your game allow for? Or, put another way: where should your information horizon be?

A couple of quick definitions. Calculation, for the purposes of this article, means solving: literally following logical courses of action to their deterministically guaranteed outcomes. When players do “look-ahead” in games, they are typically calculating. Sifting through the public, deterministic game states of Connect Four is a great example of calculation.

Analysis, on the other hand, is the word I use for the kind of “thinking” in games that doesn’t fall into that category. When you can’t calculate, you use a looser, heuristic estimation process, and I call that analysis.
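To see the two side by side in code, here’s a sketch of depth-limited look-ahead on a deliberately tiny, made-up counting game (not Connect Four): exhaustively searching the public, deterministic game tree is calculation, and the rough score used when the search runs out of depth is analysis standing in at the frontier.

```python
# Toy deterministic game: players alternate adding 1, 2 or 3 to a running
# total, and whoever brings the total to exactly 21 wins. Fully public.
TARGET = 21

def moves(total):
    return [m for m in (1, 2, 3) if total + m <= TARGET]

def search(total, depth):
    """Negamax with a depth cap; scores are from the point of view of the
    player about to move. The exhaustive part is 'calculation'; the
    hand-wavy estimate at depth 0 is 'analysis'."""
    if total == TARGET:
        return -1, None                      # the previous player just won
    if depth == 0:
        return total / TARGET - 0.5, None    # rough guess: closer to 21 looks better
    best_score, best = -float("inf"), None
    for m in moves(total):
        score, _ = search(total + m, depth - 1)
        score = -score                       # what's good for the opponent is bad for us
        if score > best_score:
            best_score, best = score, m
    return best_score, best

print(search(total=0, depth=30))   # deep enough to solve the game outright: pure calculation
print(search(total=0, depth=2))    # too shallow: the answer rests on the heuristic guess
```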

So back to my claim:

Assuming a reasonable degree of goal feedback efficiency, we should strive for as little calculation as possible.

Obviously we can create a game with zero calculation – perhaps something like the card game War, or maybe (a single match of) Rock, Paper, Scissors. In these, we’ve brought the information horizon to “right up in your face” – once the other player has played Rock, you can’t do anything about that. But we’ve also destroyed our goal feedback efficiency. Wins have nothing to do with player performance.

The point is, you do need some degree of determinism in games: some “causal line” that runs from the player’s input and stretches out into the system. But by using input randomness smartly and carefully choosing the position of the information horizon, you can (and should) reduce the calculate-able (solvable) parts of your game to a reasonable level.

This isn’t just a matter of “balancing” goal feedback efficiency against calculation. It’s more that goal feedback efficiency has a floor it really just can’t go below (95%, mayyyybe 90%) no matter what, whereas calculation is much more flexible.

This is because the downside to too much calculation is that the game is a little too solvable, but still totally skill based. In short, it’s a little bit too much like Chess. It’s kind of OK for games to lean into being a little bit Chess-like.


The downside to too little goal feedback efficiency is that the game becomes indistinguishable from noise, and totally unplayable for anyone who’s alert to this kind of problem. Granted, there are a lot of people who will happily play this kind of game anyway, since so many popular games fall into this category these days, but my writing has never been about “game design guidelines that help you make games people won’t know better than to play”. My game design guidelines are about helping you make good games.

If you build a strong system with a well-placed information horizon, this new guideline will be met somewhat naturally. But it’s another way to test a system you’re already working on, and another way to understand the information horizon concept.

Enjoyed this article? Consider supporting my work on Patreon.com!

CGD Podcast Episode 20: Options in Games

In this episode – our 20th! – I talk about the idea of “optional game rules” and why they are to be avoided. I also go into detail on some experiences designing abilities for Auro. Enjoy!


We Should Patch Our Games

I’ve been hearing more and more voices crying out against patching recently, and I wanted to unpack some of what people have said. I think this is one of the many designer-to-player communication issues that crop up in the games conversation, and so here is a designer trying to improve on (“patch”) that aspect, so that hopefully we can have better conversations in the future. Continue reading

Randomness and Game Design

For thousands of years, we’ve relied on randomness of various kinds to help our interactive systems work. While there will always be a place for randomness of all sorts in some kinds of interactive systems, I believe the current assumptions with regard to randomness in strategy games are largely wrong.

The major point I’d like to make is that noise injected between a player’s choice and the result (here referred to as output randomness) does not belong in a strategy game.
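Here’s a minimal sketch of that distinction, using an invented die-roll-to-hit example: the same roll can be sampled after the player commits to an action (the output randomness described above), or revealed before the decision (input randomness), in which case the player can plan around it. The “hit on 3+” rule and the cautious_player policy are made up for illustration.

```python
import random

def attack_with_output_randomness(choose_action):
    """The die is rolled AFTER the player commits: noise sits between
    the choice and its result."""
    action = choose_action(known_roll=None)      # player decides blind
    roll = random.randint(1, 6)                  # ...then the dice decide
    return action, action == "swing" and roll >= 3

def attack_with_input_randomness(choose_action):
    """The same roll is revealed BEFORE the decision: the randomness
    becomes information the player can react to."""
    roll = random.randint(1, 6)                  # revealed up front
    action = choose_action(known_roll=roll)      # player decides knowing the roll
    return action, action == "swing" and roll >= 3

def cautious_player(known_roll):
    # Plays it safe when it can see that the roll is bad.
    if known_roll is not None and known_roll < 3:
        return "defend"
    return "swing"

print(attack_with_output_randomness(cautious_player))  # outcome decided by the roll
print(attack_with_input_randomness(cautious_player))   # player adapts to the roll
```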


What is “randomness”?

For the purposes of this article, randomness refers to “information that enters the game state which is not supposed to ever be predictable.” The process by which random information is generated is designed to be something that humans can never figure out. Classic examples of random systems are rolling dice, shuffling cards, or random number generators. Continue reading

Are Games a Storytelling Medium? (Guest Article by Fabian Fischer)

Editor’s Note: One of our most active Auro beta testers, Fabian Fischer (aka “Nachtfischer”), wrote this great piece for his German-language site. We’ve been talking a lot about story in games on our forum, and I decided it would be great if Fabian could translate and update the article for my site, and that’s just what he did. I think it pretty much nails why authored story and interactivity don’t go well together. Enjoy!


A while back, Mr. Burgun wrote about this issue. Nevertheless, since there is still frequent and passionate debate on the matter, I thought it would not hurt to approach it from a slightly different point of view and throw some new arguments into the mix.


Definitions

  • Story in the context of this article describes an authored, linear (not necessarily chronologically linear) sequence of fictional events.

  • Game specifically means a contest of ambiguous decision-making; most readers of this site should be familiar with this.

(Some games look like movies.)

Continue reading