Determinacy of Borel games

We therefore have an obvious definition of a limiting tree $T_\infty$: its first $k$ levels are given by the first $k$ levels of $T_k$. Then the first $k$ levels of every tree from $T_k$ onwards will agree with the first $k$ levels of $T_\infty$. How do we map $T_\infty$ to one of the trees $T_k$? Well, let us write $T|_n$ for the restriction of a tree $T$ to its first $n$ levels.
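
In symbols (using the restriction notation just introduced, and writing $T_\infty$ for the limiting tree; the formulas here are my own way of recording the property), the defining feature of $T_\infty$ is roughly

\[ T_\infty|_k \;=\; T_k|_k \;=\; T_j|_k \qquad\text{for every } j\ge k, \]

which makes sense precisely because each lifting after the $k$th is at least a $k$-lifting and so does not disturb the first $k$ levels.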

Then we need a way of mapping $T_\infty|_n$ to $T_k|_n$ for each $n$. Now suppose we are given a strategy $\sigma$ for $T_\infty$. We need to map it to a strategy $\sigma_k$ for $T_k$. Let us write $\sigma|_n$ for the part of the strategy $\sigma$ that says what $\sigma$ does for the first $n$ moves of the game, and thus roughly the first $n/2$ moves of the player whose strategy it is.

And what about when $n>k$? In that case we have a map that takes strategies for $T_n$ to strategies for $T_k$. Moreover, what the resulting strategy does for the first $n$ moves depends only on what the original strategy does for the first $n$ moves (at last we see this property coming in in an absolutely critical way, though it is a pretty natural property anyway), so it makes sense to apply this map to the partial strategy $\sigma|_n$. So the natural definition of a strategy $\sigma_k$ for $T_k$ built out of the strategy $\sigma$ for $T_\infty$ is as follows.

If you want to know what $\sigma_k$ does for the first $n$ moves of $T_k$, you take the partial strategy $\sigma|_n$, consider it as a strategy for the first $n$ moves of $T_n$, and then map it to a strategy for the first $n$ moves of $T_k$ using the map associated with the lifting of $T_k$ to $T_n$, which is a composition of the maps associated with the liftings of $T_j$ to $T_{j+1}$ for $k\le j<n$.
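
In symbols (the notation $\phi_j$ is mine: it denotes the map, supplied by the lifting of $T_{j-1}$ to $T_j$, that takes strategies for $T_j$ to strategies for $T_{j-1}$), the definition reads roughly

\[ \sigma_k|_n \;=\; \big(\phi_{k+1}\circ\phi_{k+2}\circ\cdots\circ\phi_n\big)\big(\sigma|_n\big) \qquad (n\ge k), \]

where $\sigma|_n$ is regarded as a partial strategy for $T_n$, which makes sense because $T_\infty$ and $T_n$ agree on their first $n$ levels.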

In a similar way we can define a strategy $\sigma_j$ for the game on each tree $T_j$, just by regarding $T_j$ as the beginning of the sequence and doing exactly what we did above (an $n$-lifting is in particular a $k$-lifting whenever $n\ge k$).

Of course, there is something to check if we define it this way: we need to know that what we claim the strategy does for the first $n$ moves is consistent with what we claim it does for the first $m$ moves when $m\le n$. In other words, we need to check that the restriction to the first $m$ moves of the strategy defined using $n$ is the strategy defined for the first $m$ moves of $T_k$. But this is indeed the case: to work out the latter we take the partial strategy $\sigma|_m$, consider it as a strategy for the first $m$ moves of $T_m$, and map that to a strategy for the first $m$ moves of $T_k$.

Instead of that first computation, we could take the partial strategy $\sigma|_n$, consider it as a strategy for the first $n$ moves of $T_n$, map that to a strategy for the first $n$ moves of $T_m$, restrict it to a strategy for the first $m$ moves, and then map it to a strategy for the first $m$ moves of $T_k$.
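
The facts doing the work here, in the notation introduced above, are that each strategy map commutes with restriction and that a $j$-lifting does not alter the first $j$ moves at all:

\[ \phi_j(\tau)|_m \;=\; \phi_j(\tau|_m) \quad\text{for all } m, \qquad\text{and}\qquad \phi_j(\tau)|_m \;=\; \tau|_m \quad\text{when } m\le j. \]

Applying these identities repeatedly shows that the two computations just described produce the same partial strategy.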

The result would be the same as both of the partial strategies that we want to show are equal. The same argument shows that all the strategies $\sigma_j$ are well-defined. How about the lifting property? Given a path in $T_k$ consistent with $\sigma_k$, we would now like to find a path in $T_\infty$ consistent with $\sigma$ that maps to it. Recall that the first $n$ levels of $T_\infty$ are the same as the first $n$ levels of $T_n$, and that for each $j$ the lifting of $T_j$ to $T_{j+1}$ is a $(j+1)$-lifting.

Therefore, to find the path we seek, we simply lift the given path to a path in $T_{k+1}$ that is consistent with the strategy $\sigma_{k+1}$, lift that to a path in $T_{k+2}$ that is consistent with the strategy $\sigma_{k+2}$, and so on. The first $j$ points in the path do not change after we have reached $T_j$, so we end up with a well-defined limiting sequence.

For every $n$, the first $n$ terms of this sequence are consistent with the partial strategy $\sigma|_n$, which tells us precisely that the sequence as a whole is consistent with $\sigma$. An additional remark we need to make is that because we need $k$-liftings in our inductive hypothesis, we need them in the conclusion as well.

This can be achieved as follows. What we have shown is that if each $T_j$ is $(j+1)$-lifted to $T_{j+1}$, then every $T_j$ is lifted to $T_\infty$.

A simple modification to the argument shows that if each $T_j$ is $(k+j)$-lifted to $T_{j+1}$, then every $T_j$ is $(k+j)$-lifted to $T_\infty$; in particular, the original tree is $k$-lifted to $T_\infty$. The rest of the argument is now quite easy. Suppose we have a tree $T$ and sets $A_1, A_2, A_3,\dots$, all of which give rise to games that can be lifted to clopen games.

Suppose also that we know that any inverse image under a continuous map of one of these sets has the same property. (This is true in particular if each $A_i$ belongs to some Borel class for which we have proved that all sets in the class can be lifted appropriately.) We then construct a sequence of games as follows. First we 1-lift $T$ to $T_1$ in such a way that $A_1$ becomes a clopen set. All the other $A_i$ are mapped to continuous inverse images of themselves in the new tree.

We then 2-lift $T_1$ to $T_2$ in a way that makes $A_2$ (or rather, its inverse image) clopen. Then we 3-lift $T_2$ to $T_3$ to make $A_3$ clopen, and so on. Finally, we let $T_\infty$ be the inverse limit discussed in the previous section. What we have now is a lifting of the original game to a game on $T_\infty$ such that all the inverse images of the sets $A_i$ are clopen. It follows that the union of these inverse images is at least open. We can therefore do one final lift to a game that makes the union clopen.
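
Schematically, and in the notation of the previous section, the construction is the chain

\[ T \;\xrightarrow{\ \text{1-lift}\ }\; T_1 \;\xrightarrow{\ \text{2-lift}\ }\; T_2 \;\xrightarrow{\ \text{3-lift}\ }\; T_3 \;\longrightarrow\;\cdots\;\longrightarrow\; T_\infty, \]

where the $k$th lifting is chosen so that the inverse image of $A_k$ becomes clopen in $T_k$. Since the maps associated with the later liftings, and with the passage to $T_\infty$, are continuous, and continuous inverse images of clopen sets are clopen, each $A_k$ pulls back to a clopen set of paths of $T_\infty$, and the union of these pullbacks is open.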

We have therefore lifted the original union $\bigcup_i A_i$ in $T$ to a clopen set, which is what we were trying to do. Again, we need something a bit stronger: for any $k$ we need to be able to $k$-lift the game to one in which the union becomes clopen. But that is straightforward, given the remarks above. We also need to handle complements, since the Borel hierarchy is built up using complements as well as countable unions; but that follows trivially from the fact that inverse images preserve complements and the fact that complements of clopen sets are clopen. I was planning to end this series of posts here, but have subsequently had some further thoughts about liftings that shed a little light (for me at least) on the proof.

I have collected those into a fifth post, which I will put up in a few days.

Nevertheless, I think one can go further in that direction and speculate about how Martin came up with his proof. Understanding a proof in that kind of detailed way takes some effort, and usually everybody has to make that kind of effort for themselves.

I have got far enough to be confident that I will actually manage to write that fourth post. One thing I believe quite strongly is that it is better to be in a position where you can remember the ideas in a way that makes it easy to reconstruct the details than to be in a position where you have all the details in front of you but are less certain of the underlying ideas. So I may skimp on some of the details, but only if I am confident that they really have been reduced to easy exercises.

This takes some time. I need to say what a Borel set is, what a game is, what it means for a game to be determined, and why not all games are determined. Let us write $\mathbb{N}^{\mathbb{N}}$ for the set of all infinite sequences of positive integers.

If $A$ is a subset of $\mathbb{N}^{\mathbb{N}}$, then we can define a two-player game in a simple way as follows. Player I chooses a positive integer $n_1$, then Player II chooses a positive integer $n_2$, then Player I chooses a positive integer $n_3$, and so on. At the end of the game, when they have between them chosen an infinite sequence $(n_1,n_2,n_3,\dots)$, Player I wins if the sequence belongs to $A$ and Player II wins if it does not. To make this definition slightly more intuitive, let me make the simple observation that if you choose a suitable set $A$, then you have, in effect, the game of chess.

Let me do it in a very crude way. Choose an enumeration of all the possible positions in chess. Call a pair of integers $(m,n)$ a legal move for white if $m$ and $n$ are both associated with chess positions in this enumeration, and there is a valid chess move for white that takes the board from position $m$ to position $n$.

Similarly, we can define a legal move for black. Now let $B$ be the set of all finite sequences with the following property: roughly speaking, the sequence either corresponds to a normal game of chess (one that white wins, say) or to a game of chess that continues as normal until black cheats and white refuses to play on. Since there are only finitely many possible positions, there is an upper bound on the number of possible moves in any game of chess.

Now define $A$ to be the set of infinite sequences that have an initial segment in $B$. If Player I wants the resulting sequence to be in $A$, then she must play integers corresponding to legal chess moves for white until either she wins the game of chess or Player II cheats, which we count as a win for Player I. Similarly, if Player II wants the resulting sequence not to be in $A$, then he must play integers corresponding to legal chess moves and must win the corresponding game of chess.

I realize after writing all that that there are of course draws in chess. However, in the discussion of abstract games, it is usual to have just a set and its complement, so that one player always wins.

A key definition is that of a strategy. Roughly speaking, this means a way of deciding what to do in any given situation. We can formalize that very easily: a strategy for Player I is a function $\sigma$ that maps finite sequences of even length to positive integers. We allow the length to be zero, so the sequences may be empty. Player I plays according to the strategy $\sigma$ if at the end of the game we have a sequence $(n_1,n_2,n_3,\dots)$ such that $n_{2k+1}=\sigma(n_1,\dots,n_{2k})$ for every $k\ge 0$. In other words, Player I uses the function $\sigma$ to decide what positive integer to play, given the sequence of moves so far.

A strategy for Player II is defined similarly, but now the function is defined on sequences of odd length. Note that if $\sigma$ is a strategy for Player I and $\tau$ is a strategy for Player II, then the two strategies define a unique infinite sequence: the sequence that results if Player I plays with strategy $\sigma$ and Player II plays with strategy $\tau$. This sequence is denoted by $\sigma*\tau$.
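
Purely as an illustration (the code and all the names in it are mine, and of course only a finite prefix of an infinite play can actually be computed), here is how one might represent strategies and the play $\sigma*\tau$ in Python:

```python
# A strategy is represented as a function from the tuple of moves made so far
# to the next positive integer.  Player I moves at positions 0, 2, 4, ...
# (even-length histories) and Player II at positions 1, 3, 5, ...

def play(sigma, tau, num_moves):
    """Return the first num_moves terms of the sequence sigma * tau."""
    moves = []
    for i in range(num_moves):
        strategy = sigma if i % 2 == 0 else tau
        moves.append(strategy(tuple(moves)))
    return moves

# Example: Player I always plays 1; Player II replies with the previous move plus 1.
sigma = lambda history: 1
tau = lambda history: history[-1] + 1

print(play(sigma, tau, 8))  # [1, 2, 1, 2, 1, 2, 1, 2]
```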

We say that $\sigma$ is a winning strategy for Player I if $\sigma*\tau$ belongs to $A$ for every strategy $\tau$ for Player II. Equivalently, $\sigma$ is a winning strategy for Player I if every sequence $(n_1,n_2,n_3,\dots)$ that satisfies the condition $n_{2k+1}=\sigma(n_1,\dots,n_{2k})$ for every $k$ belongs to $A$.

Similarly, we can define a winning strategy for Player II, and we say that the game is determined if one of the two players has a winning strategy. Most people, when they meet this definition for the first time, have a strong intuition that every game must be determined. If Player I does not have a winning strategy, then trivially, or so it seems, Player II has a way of defeating Player I, that is, a winning strategy.

And yet, if you believe in the axiom of choice, the conclusion of this argument is false. So where does the intuition go wrong? To answer that, we need to try to understand as precisely as possible what the argument actually is that underlies our feeling that one or other of the two players must have a winning strategy.

Here is one possibility. Suppose that Player I does not have a winning strategy. Then we can define a strategy for Player II as follows. Whatever Player I plays first, the resulting position is still one from which Player I has no winning strategy, so there must be some reply for Player II after which Player I continues to have no winning strategy; let Player II make such a reply. But then this observation can be repeated: whatever Player I does next, there must be a move for Player II that results in a position from which Player I does not have a winning strategy. But does that imply that Player I loses the game?

Not necessarily. Suppose, for example, that $A$ consists of all sequences in which only finitely many terms are different from 1, so that from any position Player II can keep the sequence out of $A$ by playing something other than 1 for ever. Then if Player II decides to cooperate and they both play nothing but 1s, there is no stage at which Player I has a winning strategy, and yet Player I ends up winning. What the argument does show is that a certain important class of games is determined: the class of open games.

Informally, an open game is one with the property that if Player I wins, then there must be a finite stage by which she has already won, in the sense that all extensions of the sequence reached so far are in $A$. A simple but not too simple example of an open game is the one where $A$ is the set of sequences $(n_1,n_2,n_3,\dots)$ that contain at least $n_2$ 1s.

Player I can win this game easily, by simply choosing 1 at every move. Note, however, that Player II can postpone the moment at which Player I has won for as long as he likes, by choosing $n_2$ sufficiently large. Open games are called open games because the winning sets are open in the product topology on $\mathbb{N}^{\mathbb{N}}$, where $\mathbb{N}$ itself has the discrete topology. The basic open sets in this topology can be described as follows.
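
To illustrate the finite-stage character of open games with this example (the example set is as I have described it above, and the function name is just for illustration), here is a small Python sketch of checking whether a finite prefix already guarantees that Player I has won:

```python
def player_one_has_already_won(prefix):
    """True if every infinite continuation of prefix lies in A, where A is the set
    of sequences containing at least n_2 ones and n_2 is Player II's first move."""
    if len(prefix) < 2:
        return False              # Player II has not yet named the target n_2
    target = prefix[1]            # Player II's first move
    return prefix.count(1) >= target

print(player_one_has_already_won((1, 3, 1)))        # False: two 1s so far, target is 3
print(player_one_has_already_won((1, 3, 1, 7, 1)))  # True: three 1s reach the target
```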

Let $s$ be a finite sequence of positive integers. Then we can define an open set $U_s$ by taking all infinite sequences with $s$ as an initial segment. These are the basic open sets, so an open set is a union of sets of the form $U_s$. If $A$ is an open set and $x$ is an infinite sequence that belongs to $A$, then there must be a finite sequence $s$ such that $x\in U_s$ and $U_s\subseteq A$.
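
In symbols (with $U_s$ the notation I am using for the basic open sets):

\[ U_s = \{x\in\mathbb{N}^{\mathbb{N}} : s \text{ is an initial segment of } x\}, \qquad x\in A \text{ with } A \text{ open} \;\Longrightarrow\; x\in U_s\subseteq A \text{ for some finite } s. \]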

That is, there must be some initial segment $s$ of $x$ such that every sequence that starts with $s$ belongs to $A$. Suppose now that $A$ is open, so that we have an open game, and that Player I does not have a winning strategy. Then consider what the earlier argument tells us.

In particular, Player II can play in such a way that it is never the case that every continuation of the sequence so far belongs to $A$, a situation in which all possible strategies for Player I would be winning strategies. But if $A$ is open, this means that the sequence reached at the end of the game does not belong to $A$.

The result that open games are determined is due to Gale and Stewart in 1953. The proof is easy, but the paper of Gale and Stewart was also the one where infinite games of this kind were first introduced and where the question of Borel determinacy was first posed. If you still feel funny about the claim that determinacy is not obvious, then a shorter argument against the obviousness may be helpful.

If it is false that Player I has a winning strategy, then we can conclude that for every strategy $\sigma$ for Player I there exists a strategy $\tau$ for Player II such that $\sigma*\tau\notin A$. But for Player II to have a winning strategy we need something potentially far stronger: that $\tau$ can be chosen independently of $\sigma$. It turns out that for some games, declaring your strategy in advance puts you at a huge disadvantage, just as it does if you want to play the who-can-name-the-larger-number game. If you give yourself the luxury of the axiom of choice, then showing that there are nondetermined games is straightforward.
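
To spell the distinction out with quantifiers (here $\sigma$ ranges over strategies for Player I and $\tau$ over strategies for Player II, as above): the statement that Player I has no winning strategy is

\[ \forall\sigma\ \exists\tau\ \ \sigma*\tau\notin A, \]

whereas the statement that Player II has a winning strategy is

\[ \exists\tau\ \forall\sigma\ \ \sigma*\tau\notin A, \]

and in general there is no reason why the $\exists$ can be pulled in front of the $\forall$.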

Note first that the set of possible strategies has the same cardinality as that of the continuum, since a strategy is a function from a countable set (the set of all finite sequences of positive integers of even length, or of odd length) to the countable set of positive integers.
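
For what it is worth, the counting here is the standard one: writing $D$ for the countable set of finite sequences of positive integers of even length, say, the number of strategies for Player I is

\[ \big|\mathbb{N}^{D}\big| \;=\; \aleph_0^{\aleph_0} \;=\; 2^{\aleph_0} \;=\; \mathfrak{c}. \]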

Therefore, we can well-order the set of all strategies in such a way that each strategy has fewer than continuum-many predecessors.

We now build a set $A$ in such a way that no strategy is a winning strategy for the player who uses it.
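
Here is a sketch of how that construction goes (the details of the bookkeeping are my own filling-in). Enumerate the strategies for Player I as $(\sigma_\alpha)_{\alpha<\mathfrak{c}}$ and the strategies for Player II as $(\tau_\alpha)_{\alpha<\mathfrak{c}}$, using the well-ordering above. At stage $\alpha$, each of $\sigma_\alpha$ and $\tau_\alpha$ is consistent with continuum many plays, while fewer than continuum many sequences have been committed so far, so we can choose

\[ x_\alpha = \sigma_\alpha*\tau \ \ (\text{for some } \tau), \qquad y_\alpha = \sigma*\tau_\alpha \ \ (\text{for some } \sigma), \]

with $x_\alpha$ different from every $y_\beta$ chosen so far and $y_\alpha$ different from every $x_\beta$ chosen so far. Setting $A=\{y_\alpha : \alpha<\mathfrak{c}\}$ then ensures that no $\sigma_\alpha$ is a winning strategy for Player I (it loses the play $x_\alpha$) and no $\tau_\alpha$ is a winning strategy for Player II (it loses the play $y_\alpha$).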


