The problem of win-chance debalancing in binary systems

#1
This follows on from a discussion in another thread, in which Hopenager claimed that systems with binary goals are bad because they require win-chance to remain at ½, yet keeping it there would make most of the game inconsequential. We had much more discussion following on from that; but regardless, after some thought I believe there is one serious concern with binary goals that we touched on in that thread, which I am calling win-chance debalancing - and this problem has several solutions. I don't believe there's anything revolutionary here, but neither have I heard the problem formally described or addressed. I'd welcome critique of the following arguments.


SOMETIMES, CHOICES ARE REBALANCED WHEN WIN-CHANCE CHANGES.

Let’s analyse what it means for choices to be rebalanced when win-chance changes. It means that the rate of change, with respect to win-chance (WC), of the expected change in win-chance - or, to misuse a term, the expected value (EV) - of a given option in a choice is non-zero. To put it symbolically:
d(EV)/d(WC) ≠ 0
Given that EV is equal to expected benefit, EB, minus expected cost, EC, and that d(a-b)/dx = da/dx - db/dx, our formula becomes the suddenly much more helpful
d(EB)/d(WC) - d(EC)/d(WC) ≠ 0, i.e. d(EB)/d(WC) ≠ d(EC)/d(WC)
This is clearly the case if, to take a simple example, there is some action which, when taken, has a fixed cost with respect to win-chance but a declining benefit. Imagine a racing game with a special ability to fire booster rockets, giving you a temporary speed boost but carrying a 1 in 10 chance of setting your car on fire and ending your race. If you are already in the lead, there’s no point taking the risk for the relatively small benefit of the speed boost; while if you’re far behind, there’s no point not taking it, since you have nothing to lose. Only somewhere in the middle is the choice balanced to be maximally ambiguous. This is win-chance rebalancing.
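The racing example can be sketched numerically. This is a minimal illustration with made-up numbers (the 1-in-10 fire chance is from the example above; the +0.15 boost benefit is an assumption for illustration), showing the boost's EV falling as win-chance rises:

```python
FIRE_CHANCE = 0.1  # 1 in 10 chance the boosters end your race

def boost_ev(win_chance, boost_benefit=0.15):
    """Expected change in win-chance from firing the boosters.

    boost_benefit is an assumed fixed gain in win-chance from the speed
    boost (capped so win-chance cannot exceed 1). Catching fire drops
    win-chance to 0, so the expected cost grows with the win-chance you
    stand to lose - a fixed-probability cost, a declining benefit.
    """
    expected_benefit = (1 - FIRE_CHANCE) * min(boost_benefit, 1 - win_chance)
    expected_cost = FIRE_CHance * win_chance if False else FIRE_CHANCE * win_chance
    return expected_benefit - expected_cost

for wc in (0.1, 0.5, 0.9):
    print(f"win-chance {wc:.1f}: boost EV = {boost_ev(wc):+.3f}")
```

Far behind (win-chance 0.1), the boost has a clearly positive EV; in the lead (0.9), the EV has collapsed to roughly zero - the choice is only ambiguous somewhere in between.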

It seems obvious to me that this is a highly pervasive problem which designers should bear in mind when designing binary strategy games. Anywhere an option’s EV can be affected by WC - which is pretty well everywhere - this problem will loom. Now, three solutions I can offer (besides scrapping the binary goal):

ENSURE THAT THE EVs OF THE OPTIONS IN ALL CHOICES REMAIN PROPORTIONAL WITH REGARD TO WIN-CHANCE, i.e. that, for every choice, even as win-chance changes, the relative EVs of its options remain in the same proportion. No option becomes relatively better or worse, compared to the alternatives, as win-chance changes. You might achieve this with proportional or exponential rewards: for example, earning more money in a city-builder allows you to build more new buildings, for the same proportion of your wealth, than if you had earned less. However, this tends to create the positive feedback loops which some designers dislike, and it would be a gargantuan effort to balance. I personally don't necessarily mind positive feedback loops, so I'd be on board with this method, provided it was practical.
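A toy sketch of this first solution, with invented numbers: if every option's payoff is a fixed fraction of the player's current position (wealth, in the city-builder example), the options' EVs stay in the same proportion no matter how far ahead or behind the player is.

```python
def option_ev(wealth):
    """Expected absolute payoff of two hypothetical options, each scaled
    to current wealth: a guaranteed +5%, versus a 50% chance of +20%."""
    return {
        "safe investment": 0.05 * wealth,          # +5% of wealth, guaranteed
        "risky venture":   0.5 * 0.20 * wealth,    # 50% chance of +20% of wealth
    }

for wealth in (100, 1_000, 10_000):
    evs = option_ev(wealth)
    ratio = evs["risky venture"] / evs["safe investment"]
    print(f"wealth {wealth}: {evs} (ratio = {ratio:.2f})")
```

The ratio between the options is the same at every wealth, so neither option drifts into being the obvious pick as the player pulls ahead - at the cost of the absolute stakes growing with the lead, i.e. a positive feedback loop.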

PRESENT DIFFERENT CHOICES AT DIFFERENT WIN-CHANCES, or have so many choices that ambiguous decision-making remains available at any given win-chance. One might complain that if qualitatively different decisions are being made at different win-chances, the player’s learning will be disrupted - a similar line of thought to the one that explains why systemic complexity is evil. And again, there is the practical concern of the amount of balancing work required to design such a system. I would personally steer away from this method for those reasons, but if those obstacles were overcome, it certainly solves the problem of win-chance rebalancing.
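The shape of this second solution can be sketched as a gating function. All of the choice names and band boundaries below are purely hypothetical - the point is only that each win-chance band gets a choice set balanced around it:

```python
def available_choices(win_chance):
    """Return the (hypothetical) choice set balanced for this win-chance band."""
    if win_chance < 0.35:
        return ["desperate gambit", "sacrifice play"]   # balanced for the underdog
    if win_chance > 0.65:
        return ["consolidate lead", "safe conversion"]  # balanced for the leader
    return ["standard attack", "standard defence"]      # balanced near even odds

print(available_choices(0.2))
print(available_choices(0.5))
```

The balancing burden is visible even in the sketch: every band needs its own internally balanced set, which is the gargantuan effort objected to above.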

ADOPT A "KNIFE-EDGE" MATCH STRUCTURE, whereby win-chance tends to remain around ½ (given that ½ is the ideal win-chance for a match to start at), and when win-chance deviates from ½, there tends to be either a swift return to even odds or a swift resolution. (This is the solution we spent a lot of time discussing in the other thread.) By this “solution”, the problem is dodged rather than solved: with most of the match spent at one win-chance, the fact that decisions will be unambiguous at other win-chances is of little concern. I like this solution despite it not truly being a solution at all - it gets rid of the problem, and does so without the extra balancing work required by the other two.
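A crude simulation of the knife-edge shape, with arbitrary, untuned numbers (band width and step sizes are assumptions, not recommendations): win-chance drifts in small steps near ½, but once it leaves that band the steps become large, so the match either snaps back toward even odds or resolves quickly.

```python
import random

def knife_edge_match(seed, band=0.1, small_step=0.02, big_step=0.15):
    """Simulate one match; return the fraction of its turns spent near even odds.

    Inside the band around 1/2, win-chance random-walks in small steps;
    outside it, steps are large, forcing a swift return or a swift end
    (win-chance reaching 0 or 1).
    """
    rng = random.Random(seed)
    p, turns, turns_in_band = 0.5, 0, 0
    while 0.0 < p < 1.0:
        turns += 1
        step = small_step if abs(p - 0.5) <= band else big_step
        p += rng.choice((-step, step))
        if abs(p - 0.5) <= band:
            turns_in_band += 1
    return turns_in_band / turns

print(f"fraction of match near even odds: {knife_edge_match(seed=0):.2f}")
```

Because most turns land inside the band, the rebalancing that happens at extreme win-chances simply isn't where the match is played - which is exactly the sense in which the problem is dodged rather than solved.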

Please let me know what you think of this, if you disagree in part or in entirety, or if you have anything to add.
 
#2
Why is it a problem for choices to be rebalanced? In other words, why should we want the rate of change of the expected change in win-chance to be 0?
 
#3
In order to be maximally ambiguous, each option in a choice must have an approximately equal expected value. (Of course, that’s not necessarily the only condition, but it is still a required condition.) If the expected value of an option is not constant - which follows if the rate of change of EV with regard to win-chance is non-zero - then at some point, options may not be balanced in this way so as to be maximally ambiguous. See the racing game example case above for a demonstration.
 
#4
In order to be maximally ambiguous, each option in a choice must have an approximately equal expected value.
I don't think that's desirable. The whole point of games is to learn how to make better choices, but if all options have the same expected value, no option is better than the others and there is nothing to learn. How are you defining "ambiguous", and why is it a good thing?

Also, the only way for all options to have approximately equal Δwinchance is for all of them to have approximately 0 Δwinchance. This is because of a fact that we discussed in the last thread: the average expected change in winchance over a period of time must be 0. It's not possible, for instance, for all options to provide a +2% change in winchance. So arguing for equal Δwinchances for all options implies that the choice is inconsequential, since none of the options will change the winchance by much.
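That zero-average fact is just consistency of probabilities: the current win-chance is, by definition, the probability-weighted average of the win-chances after each possible next event, so the probability-weighted Δwinchance sums to 0. A numerical sketch (the event probabilities are invented for illustration):

```python
def expected_delta(current_wc, outcomes):
    """Probability-weighted average change in win-chance.

    outcomes: list of (probability, win_chance_afterwards) pairs
    describing every possible result of the next event.
    """
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9  # probabilities must sum to 1
    return sum(p * (wc_after - current_wc) for p, wc_after in outcomes)

# Three outcomes consistent with a current win-chance of 0.50
# (0.2 * 0.80 + 0.5 * 0.50 + 0.3 * 0.30 = 0.50):
outcomes = [(0.2, 0.80), (0.5, 0.50), (0.3, 0.30)]
print(expected_delta(0.50, outcomes))  # consistency forces this to be ~0
```

Individual outcomes can swing win-chance a lot (+0.30 here, -0.20 there); it is only the probability-weighted average that must vanish.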
 
#5
Firstly, let me define “ambiguous”. It’s a term of Keith’s: the more or less uncertain the player is about which option in a given choice is best, the more or less ambiguous that choice is. Ambiguous choices are also called (interesting) decisions. Ambiguity is valuable beyond compare, since there are only two preceding conditions that can lead to learning: uncertainty and mistaken certainty.

Everything you say is true, but it rests on a misunderstanding of expected value as I define it: the expected change in win-chance of a given option. Options must, as you say, have diverse and significant actual values for the player to learn anything; but for a decision to be ambiguous, the expected values must be more or less homogeneous.
 
#6
Ah, so by "expected value" you just mean the value that the player actually expects? That's a confusing way to use the term, I assumed you were using "expected" in the normal statistical sense (i.e. synonymous with "average"), as in what would be "expected" by someone with perfect knowledge.

In the sense you are using the word, I don't think you can maintain a state where every option has approximately equal expected value for long. Perhaps you can have that the first time a player plays the game, but as soon as the player learns anything the expected values would change away from the equilibrium.
 
#7
Also, I have a more fundamental objection: I think you are wrong about the nature of the connection between ambiguity and equilibrium (I'm using "equilibrium" as a shorthand for "the state where the player thinks that all the options in a choice have the same value"). It's true, I think, that maximum ambiguity implies equilibrium, since if the player is entirely uncertain they will probably default to thinking that the options are about equal. However, equilibrium does not necessarily imply ambiguity. It could be possible, of course, for the player to believe that all the options were equally valuable, and to be correct in that belief. Such a choice would be in equilibrium, but there would be no ambiguity since the player has no uncertainty (or false certainty).

So you can't just enforce equilibrium and expect that to create ambiguity, because equilibrium does not lead to ambiguity; it's the other way around. Trying to create ambiguity by forcing equilibrium is like saying "healthy people have a lower heart rate; being in a coma causes a lower heart rate, and so we can make people healthier by putting them into comas".
 
#8
Ah, so by "expected value" you just mean the value that the player actually expects? That's a confusing way to use the term, I assumed you were using "expected" in the normal statistical sense (i.e. synonymous with "average"), as in what would be "expected" by someone with perfect knowledge.

In the sense you are using the word, I don't think you can maintain a state where every option has approximately equal expected value for long. Perhaps you can have that the first time a player plays the game, but as soon as the player learns anything the expected values would change away from the equilibrium.
Sorry for the poor use of the word "expected"; I confess I'm not a trained statistician.

I agree that as the player learns, choices necessarily tend away from equilibrium. This is a general problem with games - a problem because, as choices tend away from equilibrium, the amount the player learns deteriorates. Rookies learn much more per match than grandmasters do. I'm not yet convinced that this problem is solvable.


...equilibrium does not necessarily imply ambiguity.
I agree and always have agreed. Regard the second, bracketed, sentence here:

In order to be maximally ambiguous, each option in a choice must have an approximately equal expected value. (Of course, that’s not necessarily the only condition, but it is still a required condition.)
But this highlights an important point for further inquiry: all I am discussing here is how to put choices into equilibrium, and that concern needs to be weighed against any other (as yet unidentified) conditions of ambiguity - just as having a low heart-rate needs to be considered alongside the many other factors that make up a person's health.