For and against binary goals in strategy games (and against high score)

#42
Does the goal type contribute to interestingness of decisions in a game?
Yes. One disadvantage of win/loss games is that some matches will inevitably end in foregone conclusions: situations where the chance of winning the game reaches (or gets very close to) 0% or 100% before the game actually ends. So the player knows they are going to win/lose the match, but they have to keep playing for a while before they actually get the result they already know they're going to get. In a foregone conclusion there aren't any interesting decisions to make because, in the case of a foregone loss, nothing the player does will matter and, in the case of a foregone win, the player can just "coast" without having to come up with any interesting new plans.

Score games have no such foregone conclusions. No matter how poorly or how well the player has done so far, they will still be able to do better or do worse, and thus influence the score they get at the end of the match. If you are in a foregone conclusion in a win/loss game, it doesn't matter if you start to do worse or better, since the outcome is already determined. Even in the case of merely being very close to 0% or 100%, doing better or worse will have a very low chance of affecting the outcome of the match at all, which is clearly quite demotivating. Of course, in a score game, the player's ability to influence the score decreases as they near the end of the match, but it doesn't drop to zero as it can in a win/loss game.

But on a more fundamental level, I think that it's slightly mistaken to think in terms of interesting decisions. The important thing in a strategy game is that the player has something to learn about how to form plans in the game, and fun in strategy games comes from this learning, not from some property of "interestingness" that is inherent to some decisions. A decision can be said to be "interesting" when it is made conjecturally, i.e. when the player makes it in such a way that it tests some hypothesis about the proper way to form plans. But what is fundamentally important is the process of learning, and the presence of interesting decisions matters only to the degree that it indicates that there is an opportunity for learning.

One important thing for learning is feedback, and score is much more effective as end-of-match feedback than a win/loss, simply because it conveys more useful information. So even beyond the obvious impacts on the interestingness of decisions, score systems will lead to games that are more fun than win/loss systems, because better feedback means more learning.
 
#43
Score games have no such foregone conclusions. No matter how poorly or how well the player has done so far, they will still be able to do better or do worse, and thus influence the score they get at the end of the match.
Score games can still have foregone conclusions; they're just easier for the designer to avoid, since they have more outcome resolution. For example, BBB has short foregone-conclusion periods during which it's impossible to get any extra berries before game over.
 
#44
Hypothetically: If the difficulty were set perfectly so that the player had to play well all the way through, and the mechanics were paced so that good late-game play had as much of an effect on victory chance as good early game play, and the win/loss distinction only resolved a turn or two from the end, would that address the 'foregone conclusion' problem with win/loss?

Or maybe a problem that it's impossible to set things up like that? Perfect difficulty can't ever be achieved? Mechanics can't be paced like that without them getting too swingy and output randomnessy towards the end? I'm currently mulling over endgame pacing in my WIP game so that's a live issue for me! (Switching to a score goal is out of the question for this game btw.)
 
#45
Score games can still have foregone conclusions; they're just easier for the designer to avoid, since they have more outcome resolution.
This is true. For any finite resolution there will be some amount of time spent in a foregone conclusion, but at a sufficiently high resolution it's basically negligible. In BBB there are maybe 5 seconds on average at the end of the match where the conclusion is foregone; I don't know of any well-structured win/loss strategy game that is anywhere near that good at avoiding foregone conclusions.

I qualified that statement with "well-structured" because it's actually pretty easy to avoid foregone conclusions if you don't care about structure. As @richy alluded to, one way to avoid foregone conclusions is just to make the game very swingy, so that at any moment the player could randomly get screwed over or receive a large power boost. Many roguelikes and roguelites are like this.

As to your question, @richy , about whether it's possible to create a win/loss game that avoids foregone conclusions while also having good structure, I'm highly doubtful, mostly because the foregone conclusion problem is actually a special case of a more general problem that follows directly from the properties of win/loss games. This more general problem is that as the chance of victory gets further from 50% (in either direction), the chance that new actions will change the outcome of the match gets lower. A foregone conclusion is merely the most extreme example of this problem, but the same problem persists (to a lesser degree, of course) when the chance isn't quite 100% or 0%, but instead only 99% or 1%, and it even persists to a small degree at 75% or 25%. The only way to avoid the problem entirely would be to always have the winrate stay at 50% until right near the end of the match, but if the winchance stays at 50% then clearly nothing you are doing actually matters to the outcome of the match, so it wouldn't really be what we would call a "strategy game".
 
#46
I'm still not sure of the truth on this but @Hopenager your last couple of posts have been really useful! I agree with bits but disagree on other bits.

E.g. it seems more likely to me that score-based goals could lead to foregone conclusions than binary goals. In a win/loss game we can make it so players have the ability to close out wins quickly. Similarly losses can be made to happen quickly once inevitable. Surely it's a score goal which runs the risk of forcing the player to chug all the way to the required score?

I've thought for several days about this next thing you said and finally think it's very insightful but draws a questionable conclusion:
"The only way to avoid the problem entirely would be to always have the winrate stay at 50% until right near the end of the match, but if the winchance stays at 50% then clearly nothing you are doing actually matters to the outcome of the match."
The way I look at it, a player who spends most of a match with win and loss evenly poised like that is definitely doing at least one thing that matters a lot, namely not losing. That is, successfully resisting the attempts of the game mechanics/opponent to draw their win chance down to zero.

This seems good, and I think viewing win/loss strategy matches as steady marches from an initial 50% chance through 60%, 70%, 80%, 90%, and finally 100% victory at the end, may be a mistake. IMO the meaning of gameplay (like life??? o_O) is at a maximum when fully ambiguous, and as you said, we should want players to always be thinking and learning and playing moves conjecturally. A game which keeps the player fully uncertain about whether they'll win or lose for as long as possible seems ideal. The opposite to "nothing you are doing actually matters" - arguably it matters the maximum amount!

You said a similar thing yourself in the same paragraph:
"This more general problems is that as the chance of victory gets further from 50% (in either direction), the chance that new actions will change the outcome of the match gets lower."
So I'm not sure whether you're contradicting yourself, or just saying that you feel that both evenly-poised, and uneven, positions lack meaning and it's because of the win/loss format in general.

For my part, I do agree the second point seems like something needing attention when designing mechanics. Clearly when randomness is involved we can expect players to find themselves at (say) mid-match in better-than-normal and worse-than-normal positions, so handling that is a practical design problem.

I don't have answers, but I'm looking for them atm by assuming it'll be a good thing (as per above) to try and make it so win chance remains fairly steady through the match until the point when, over the information horizon, we glimpse that a win or a loss is quite close and perhaps nearly inevitable. Players playing above their skill level will start with a low win chance, and should find their frequent losses coming quite quickly and their occasional wins taking a relatively long time. Players below their skill level will be starting with a high win chance and should find the reverse - many quickish wins and few lengthier losses. And we would 'tune' the mechanics to work best for players on an appropriate skill level where their win chance starts, and mostly remains, at 50%. (Perhaps a new perspective on the good old "what is the correct win %" discussion!!)

The problem is how to make all this happen! As I said, the discussion is really stimulating even if I'm not ready to get on board the score-goal bandwagon! I'm interested in any other thoughts you have on the subject, especially if what I said above is utter nonsense (please say why!) :D
 
#47
In a win/loss game we can make it so players have the ability to close out wins quickly. Similarly losses can be made to happen quickly once inevitable. Surely it's a score goal which runs the risk of forcing the player to chug all the way to the required score?
What do you mean by "required score"? My notion of score games has nothing like a "required score", so I'm not sure what you're talking about. Also, the idea that the player would have to "chug all the way to" a particular score implies that the game uses something like the classic accumulation-style score system where you can just survive for an unlimited amount of time and you get more points the longer you survive, but that is not the only way to do score systems, and it is not the way I advocate. You can have a score game that always lasts a constant amount of time, and the player gets better at the game by learning how to be more efficient in the time they are given, not by learning to increase the length of the game. There would be no "chugging along" in a game like that.

I agree that there are often ways to improve win/loss games so that they are less susceptible to foregone conclusions, but I'm skeptical about how far this process can be taken. I'd love to be proven wrong, though. I think that having structure makes it harder to avoid foregone conclusions. Let me give an example to demonstrate what I mean:

Imagine that there is a game that lasts 100 turns, and you win if you have 500 gold at the end, and lose otherwise. This game might have a foregone conclusion if the player does very poorly for the early game and still has 0 gold at the end of 50 turns, and so to eliminate that situation we might add a rule that says "At turn X, you lose if you don't have more than 5*(X-10) gold", so in other words you lose if you still have 0 gold at turn 10, or only 5 gold at turn 11, 10 gold at turn 12, 200 gold at turn 50, etc. This effectively eliminates a lot of foregone conclusion situations, BUT it only works under the assumption that how well the player is doing is roughly proportional to the amount of gold they have. If the game has structure and allows multiple strategies, one of the strategies might involve heavy investment in the early game in something other than gold, and then spending the late game cashing in on the investment to create a lot of gold. So the player could have 0 gold at turn 50 but actually be in a very good spot.
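To make the rule above concrete, here's a minimal sketch of the check (the numbers and function names are just illustrations of the made-up game above, nothing from a real design):

```python
def has_lost_early(turn, gold):
    # Hypothetical sliding loss rule from the example above: at turn X
    # you lose if you don't have more than 5 * (X - 10) gold.
    return gold <= 5 * (turn - 10)

def has_won(gold):
    # Original end-of-game condition: 500 gold at the end of turn 100 wins.
    return gold >= 500

# The cases described above:
assert has_lost_early(10, 0)        # 0 gold at turn 10 -> loss
assert has_lost_early(11, 5)        # only 5 gold at turn 11 -> loss
assert has_lost_early(50, 200)      # 200 gold at turn 50 -> loss
assert not has_lost_early(50, 201)  # 201 gold at turn 50 -> still in the game
```

(And of course, as just noted, this check bakes in the assumption that gold tracks how well the player is doing.)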

More generally, to the degree that a game has multiple viable long-term strategies (which is a feature of games with good structure), it will be hard to find a single variable/resource that describes how well the player is doing, and therefore it will be hard to avoid foregone conclusions by changing the win/loss conditions. You could maybe try to define a loss condition in terms of multiple variables in order to account for multiple strategies, but that would quickly become pretty hard for the player to comprehend.

With regard to your other point, I don't think I was contradicting myself. When I said "if the winchance stays at 50% then clearly nothing you are doing actually matters to the outcome of the match" I didn't mean that nothing mattered because it was at 50%, but because it stayed at 50%. The number 50 isn't important to my point here, it's just the staying at a constant value that matters; actions at 40% that kept you at 40% would also not matter. This is because the winchance is the ultimate metric as far as measuring how well the player is doing at the game goes: it captures everything that is important about the player's performance in a win/loss game (as EV of score does in a score game). The player's goal is to win, and to do that they try to increase their winchance, and so "doing well" is just another way of saying "increasing the winchance".

It's easy to get caught in the trap of thinking of winchance as a resource or something, where you can be in a better position even if you have less of it under some circumstances, but winchance is actually something fundamentally different. A resource (or other variable in the gamestate) can only ever be a heuristic measurement of performance, and thus it is fallible, but the winchance is never fallible. If you have a higher winchance you are doing better at the game, and if you are doing better at the game you have a higher winchance, by definition.

I bring this up because I think you were making this mistake, of thinking of winchance as being like a resource, when you made this point:
The way I look at it, a player who spends most of a match with win and loss evenly poised like that is definitely doing at least one thing that matters a lot, namely not losing.
It doesn't matter how long the player has managed to not lose if they still have the same winrate as before. It is no better to be at 50% with 5 turns to go than it is to be at 50% with 20 turns to go; either way you still have a 50% chance of losing. If surviving for longer actually helps you win, that will be reflected by an increase in your winchance.

And I think this is important because it means that this:
it'll be a good thing (as per above) to try and make it so win chance remains fairly steady through the match until the point when, over the information horizon, we glimpse that a win or a loss is quite close and perhaps nearly inevitable.
is mistaken. If the winrate remains constant until near the end of the game, then nothing the player is doing actually matters until near the end (and if what the player is doing does matter, then their winrate cannot be staying constant).
 
#48
Excuse me for barging in at this late stage; a very engaging discussion so far!

When I said "if the winchance stays at 50% then clearly nothing you are doing actually matters to the outcome of the match" I didn't mean that nothing mattered because it was at 50%, but because it stayed at 50%. The number 50 isn't important to my point here, it's just the staying at a constant value that matters; actions at 40% that kept you at 40% would also not matter. This is because the winchance is the ultimate metric as far measuring how well the player is doing at the game, it captures everything that is important about the player's performance in a win/loss game (as EV of score does in a score game). The player's goal is to win, and to do that they try to increase their winchance, and so "doing well" is just another way of saying "increasing the winchance".
Hopenager: a thoroughly argued point, and I absolutely agree on your distinction between win-chance and resources, in that the maximisation of the former is always good, whereas that is not necessarily true of the latter. However, the win-chance staying at 50% does not indicate a poor player performance.

What really matters for evaluating the player’s performance over Δt (a given change in time) is not the resulting win-chance, nor is it even Δ(win-chance). The true measure of player performance is the likelihood of doing at least as well - or conversely, at least as badly - as they actually did, with regard to Δ(win-chance).

To put that into English, if the player’s win-chance changes by +2% over a given turn, that in itself is meaningless - what matters is how likely it was to do at least that well. If the chance of achieving +2% or better was one in ten, then the player has performed very well. If the odds were even, they’ve done on par. If the odds were against them that they’d get +2% or less, then the player has performed poorly.

Therefore, that win-chance remained at 50% throughout much of the match does not necessarily indicate poor play. If the chance of maintaining win-chance had stayed around one in two throughout the match, then this would have been on-par play.
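To sketch what I mean in code (purely illustrative - the distribution is made up, and in a real game we would not have access to these numbers exactly):

```python
def performance(outcomes, actual_delta_wc):
    # Likelihood of doing at least as well as the player actually did,
    # given the possible Δ(win-chance) outcomes for the turn.
    # outcomes: list of (probability, delta_wc) pairs.
    # Low result -> strong play, around 0.5 -> on par, high -> poor play.
    return sum(p for p, d in outcomes if d >= actual_delta_wc)

# The +2% example: suppose +2% or better only happens one time in ten.
outcomes = [(0.1, +0.02), (0.2, -0.01), (0.7, 0.00)]   # averages to zero, as it must
print(round(performance(outcomes, +0.02), 2))          # 0.1 -> the player did very well
```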

The question for the designer is where they would like win-chance to sit and how they would like it to evolve. Having win-chance above or below 50% means the system is partially solved, to the extent that win-chance deviates from 50%. Therefore, given that strategy games should be maximally ambiguous, in a well-designed strategy game, win-chance should tend towards 50%.

P.S. This principle is most obviously true in multiplayer games. In a well-matched and well-played game of chess, for example, we would hope to see both players fighting tooth and nail almost to the last turn, each with as close to even chances of winning as the other for as long as possible. This principle may, however, seem counter-intuitive in single-player games, where we would naturally expect good play to result in a high win-chance. I still think this principle applies, though - single-player games, just as multi-player games, should make good play result in higher win-chances, but not high win-chances. The designer has to balance the imperative to provide feedback (i.e. reward and punish) with the imperative to keep win-chance close to 50%.
 
#49
if the winchance stays at 50% then clearly nothing you are doing actually matters to the outcome of the match
Not so.

win-rate-diagram.png

In this diagram orange rectangles represent player decision points, blue ovals represent random events, and the black triangles represent game-over states. The numbers in the middle of each node represent the win-rate (or, equivalently, the EV) of the node. The percentages on the edges between nodes represent the chance of that edge being taken.

Clearly the choice between the two blue ovals matters to the outcome of the match: the bottom one leads to a win 50% of the time rather than merely 25% of the time. But if the player is perfect and picks the bottom oval 100% of the time, then their win-rate at the orange rectangle is also 50%. So despite having made a meaningful decision at the orange rectangle, their win-rate remains unchanged.
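Here is the same calculation as a quick sketch, using the numbers from the diagram (the code is just my rendering of it):

```python
# Win-rates of the two blue ovals, as in the diagram:
TOP_OVAL = 0.25
BOTTOM_OVAL = 0.50

def rectangle_winrate(p_pick_bottom):
    # Win-rate at the orange decision point, given the probability that
    # the player picks the bottom oval.
    return p_pick_bottom * BOTTOM_OVAL + (1 - p_pick_bottom) * TOP_OVAL

print(rectangle_winrate(1.0))   # 0.5   -> the perfect player's win-rate is unchanged by the decision
print(rectangle_winrate(0.5))   # 0.375 -> a coin-flipping player would be measurably worse off
```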

Other examples could be constructed, including ones with imperfect players who do not make moves with 100% certainty.
 
#50
Clearly the choice between the two blue ovals matters to the outcome of the match: the bottom one leads to a win 50% of the time rather than merely 25% of the time. But if the player is perfect and picks the bottom oval 100% of the time, then their win-rate at the orange rectangle is also 50%. So despite having made a meaningful decision at the orange rectangle, their win-rate remains unchanged.
No, because the winchance at the orange rectangle is only 50% once the player has already committed to choosing the lower of the two circles. If they haven't yet decided which to pick then the winchance in the orange rectangle would be less than 50%, and thus the decision to go for the lower of the two circles would increase their winchance.
What really matters for evaluating the player’s performance over Δt (a given change in time) is not the resulting win-chance, nor is it even Δ(win-chance). The true measure of player performance is the likelihood of doing at least as well - or conversely, at least as badly - as they actually did, with regard to Δ(win-chance).
That's an interesting idea, but I'm not sure if this logic holds up. In particular, I think that this statement contains a logical contradiction:
If the odds were against them that they’d get +2% or less, then the player has performed poorly.
This is a contradiction because of another odd property of winchance: the value of winchance at any point in time is actually determined by the potential values of winchance in the future. For instance, if, at some point in the future, there is a 40% chance that you will have a 70% winchance, and a 60% chance that you will have a 50% winchance, then the current winchance can be calculated as 0.4*0.7+0.6*0.5 = 58%, and it can't possibly be anything else.

Therefore, there are constraints on the types of winrate-possibility-scenarios that can exist. For instance "Imagine that you have a 60% winchance, and there is a 90% chance that next turn you will have a 99% winchance, and a 10% chance that next turn you will have a 20% winchance" is actually a mathematically incoherent scenario, because the winrate can be calculated as 0.9*0.99+0.1*0.2 = 0.911, and therefore it can't be 60%.
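As a quick sketch of that constraint (the scenario format here is made up purely for illustration):

```python
def implied_winchance(scenarios):
    # Given (probability, future winchance) pairs, the current winchance
    # is forced to be their probability-weighted average.
    return sum(p * wc for p, wc in scenarios)

print(round(implied_winchance([(0.4, 0.70), (0.6, 0.50)]), 3))   # 0.58  -- the first example above
print(round(implied_winchance([(0.9, 0.99), (0.1, 0.20)]), 3))   # 0.911 -- so it cannot also be 60%
```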

For the same reason, it's impossible for the odds to be against getting +2% or less, and it's even impossible for the odds of getting +2% to be even. The average Δ(win-chance) over all possible scenarios at a particular point in the future must be 0% (this follows from the property that I described above), so if we assume that there was a 50% chance of getting +2%, then that can be "balanced out" by there being a 50% chance of getting -2%. BUT if there is a 50% chance of getting +2% or lower, that can't be balanced out, because any negative Δ(win-chance) possibilities we used to balance it out would be lower than +2%, and therefore it would increase the chance of getting +2% or lower above 50%, and this is a contradiction.
I still think this principle applies, though - single-player games, just as multi-player games, should make good play result in higher win-chances, but not high win-chances.
But the winchance must collapse to either 0% or 100% by the end of the match, and so it isn't possible to spend the whole match close to 50%. Even if the majority of the match takes place rather close to 50%, it will have to stray far from 50% near the end. And if a game naturally stays close to 50% for all but the late-game, the late-game would be much more consequential than the early-game. A player who cares about learning to play better would thus feel as though everything before the late-game was mostly a waste of time that they merely have to suffer through to get to the end, because the late-game is where stuff really starts to matter. If you don't want the player to have that feeling, if you want the whole game to be roughly consistently consequential, the player's ability to change the winchance should be about constant.

Of course, my recommendation would be to ditch win/loss design entirely and instead move to score-based games. In a score game, the player is maximizing the expected value of their score (or EV(score)) instead of their winchance. Increasing the EV does not inherently move the game away from ambiguity, but in a win/loss game, increasing the winchance above 50% does.
 
#51
...it's impossible for the odds to be against getting +2% or less, and it's even impossible for the odds of getting +2% to be even. The average Δ(win-chance) over all possible scenarios at a particular point in the future must be 0% (this follows from the property that I described above), so if we assume that there was a 50% chance of getting +2%, then that can be "balanced out" by there being a 50% chance of getting -2%. BUT if there is a 50% chance of getting +2% or lower, that can't be balanced out, because any negative Δ(win-chance) possibilities we used to balance it out would be lower than +2%, and therefore it would increase the chance of getting +2% or lower above 50%, and this is a contradiction.
(This is actually quite important, as I will soon argue.) I hadn’t noticed this fact: you are absolutely right to say that [average ΔWC = 0]. However, this does not imply that it is impossible for ΔWC > 0 to be against the odds. For example, imagine there are exactly three possible outcomes for ΔWC over the next turn: a 10% chance of +6%, a 30% chance of -26% and a 60% chance of +12%. The odds of getting +6% or less are 10% + 30% = 40%, so getting +6% or less is against the odds; meanwhile (10 * 6) + (30 * -26) + (60 * 12) = 0, so this is consistent with [average ΔWC = 0]. Therefore a positive ΔWC can reflect a poor performance. (For future note - it is therefore also possible for ΔWC = 0 to be against the odds.)
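For anyone who wants to check that arithmetic quickly, here it is as a throwaway sketch, in whole percentages exactly as in the example:

```python
# (chance %, ΔWC %) for the three possible outcomes of the turn:
outcomes = [(10, +6), (30, -26), (60, +12)]

average_delta = sum(chance * delta for chance, delta in outcomes) / 100
chance_of_plus6_or_less = sum(chance for chance, delta in outcomes if delta <= 6)

print(average_delta)             # 0.0 -> consistent with [average ΔWC = 0]
print(chance_of_plus6_or_less)   # 40  -> getting +6% or less is indeed against the odds
```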

But the winchance must collapse to either 0% or 100% by the end of the match, and so it isn't possible to spend the whole match close to 50%. Even if the majority of the match takes place rather close to 50%, it will have to stray far from 50% near the end. And if a game naturally stays close to 50% for all but the late-game, the late-game would be much more consequential than the early-game. A player who cares about learning to play better would thus feel as though everything before the late-game was mostly a waste of time that they merely have to suffer through to get to the end, because the late-game is where stuff really starts to matter. If you don't want the player to have that feeling, if you want the whole game to be roughly consistently consequential, the player's ability to change the winchance should be about constant.
I think this is possibly the most interesting point to discuss of the thread so far! Granted, it isn't possible to spend the whole match at WC = 50%, and you have rightly identified that either the game tends to remain close to 50% until the late-game, or it doesn’t. The premise I disagree with is the following:

...if a game naturally stays close to 50% for all but the late-game, the late-game would be much more consequential than the early-game.
For a part of the game to be consequential, I assume we mean that decisions have a considerable likelihood of significantly impacting WC, which is to say that on any given turn, ΔWC has a considerable likelihood of becoming one of a wide range of values. If ΔWC has a 1% chance of being -99% and a 99% chance of being +1%, that turn is inconsequential even though the impact on WC could be extremely significant: since it is exceedingly unlikely that anything other than +1% will be achieved, the player’s decisions have little impact on WC. If ΔWC has a 50% chance of being -5%, a 40% chance of being 0 and a 10% chance of being +25%, this turn is consequential, since player decisions have a big impact on WC. This is so even if the player lands on the 40%-likely outcome of ΔWC = 0, thereby not actually affecting WC.

This latter example is a case where good play at a very consequential part of the game keeps WC constant. Therefore, the fact that WC stays close to 50% for all but the late-game does not imply that most, or any, of the game is inconsequential. Therefore such a game can be full of feedback.
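If we wanted to put a rough number on "consequential" in this sense, one crude option - purely a sketch of the idea, not a claim that this is the right measure - is the chance that ΔWC ends up being anything other than its single most likely value:

```python
def consequentiality(outcomes):
    # Crude score: the chance that ΔWC turns out to be something other
    # than the single most likely value for the turn.
    # outcomes: list of (chance %, ΔWC %) pairs.
    most_likely_chance = max(chance for chance, delta in outcomes)
    return (100 - most_likely_chance) / 100

print(consequentiality([(1, -99), (99, +1)]))            # 0.01 -> the inconsequential turn above
print(consequentiality([(50, -5), (40, 0), (10, +25)]))  # 0.5  -> the consequential one
```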

Having discussed Hopenager’s point I would like to add something to a point of my own from my last post.

Having win-chance above or below 50% means the system is partially solved, to the extent that win-chance deviates from 50%. Therefore, given that strategy games should be maximally ambiguous, in a well-designed strategy game, win-chance should tend towards 50%.
This point is inaccurate. Having WC deviate from 50% does not inherently result in a more solved system: solution is independent of WC, except in the cases where WC = 100% or WC = 0%. How "solved" a system is, on any given turn, is equivalent to how consequential that turn is, as already discussed. Moving WC from 80% to 81% is just as consequential as moving it from 50% to 51%: it means, in either case, that for every 100 games, you will win one extra on average. However, moving away from 50% does mean there is less room to move in that direction, and therefore the rest of the game, considered as a whole, tends to be less consequential. When WC = 50%, you have room to move up to 50% in either direction, and let’s say a certain move could result in a 50% swing upwards. But if WC rises to 60%, then that same move is now worth at most a 40% swing in WC - therefore, the player’s decisions relating to that move are less consequential. For every 100 games, you now only win 40 more on average instead of 50 more. (This thought needs more development - it seems unlikely that a single move could swing WC by 50%...)
 
#52
No, because the winchance at the orange rectangle is only 50% once the player has already committed to choosing the lower of the two circles. If they haven't yet decided which to pick then the winchance in the orange rectangle would be less than 50%, and thus the decision to go for the lower of the two circles would increase their winchance.
You can only sensibly calculate win-rate if you have a probability distribution for the player's decisions. In this case it was distributed such that the player picked one option 100% of the time, but, as I mentioned, you can also construct examples where the player has a non-zero chance of picking each available option but where at least one of the options still leaves their win-rate unchanged and at least two of the options differ from each other in win-rate.
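One concrete instance of that, with numbers invented on the spot: suppose the decision point offers three options with win-rates 0.3, 0.5 and 0.7, and the player picks them 25%, 50% and 25% of the time respectively.

```python
# (probability of picking the option, win-rate of the option)
options = [(0.25, 0.3), (0.50, 0.5), (0.25, 0.7)]

winrate_at_decision = sum(p * wr for p, wr in options)
print(round(winrate_at_decision, 3))   # 0.5 -- the middle option leaves the win-rate unchanged,
                                       #        even though the options clearly differ from each other
```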
 