(Part I of this post can be found here.)

Greetings, loyal blog readers! I'm afraid life took over for a bit after writing Part I, but we're now back in the peanut gallery watching Alice and Bob battle wits.

In the last post, we talked about a fun game which Matt Baker used to prove the uncountability of the reals. (We'll call this the Nested-Intervals Game.) In his post about this game, he asked a question which (still weeks later) is vexing me:

**Question 0:** Does there exist a target set \(T\) such that neither player has a winning strategy for the Nested-Intervals Game targeting \(T\)?

Some background:

**Old Theorem 1:** For every game which must always end after some fixed finite number of turns, one of the two players has a winning strategy.

**Old Theorem 2:** If the Axiom of Choice holds, then there exists a game (one which can go on for infinitely many turns) with no winning strategy.

Never mind exactly what the full, formal definition of a "game" is here. The important parts are that players take turns, each "does something", and eventually the game is over and, based on what has been played, someone is declared the winner. (For simplicity, we assume there are no draws.) The classical proof of Theorem 2 uses a game which is related to the Nested-Intervals Game, but a little easier to explain.
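Old Theorem 1 is essentially Zermelo's classical result about finite games, and at small scale you can actually compute who wins by backward induction. Here's a minimal Python sketch of that idea, using a toy game of my own invention (subtract 1 or 2 from a counter; a player with no legal move loses) -- an illustration, not part of the proof:

```python
def winner(position, to_move, moves):
    """Backward induction: in a finite game with no draws, report which
    player has a winning strategy from `position` with `to_move` to play.
    `moves(position)` lists the legal successor positions; a player with
    no legal move loses (normal play convention)."""
    other = "Bob" if to_move == "Alice" else "Alice"
    successors = moves(position)
    if not successors:
        return other  # to_move is stuck, so the other player wins
    # to_move wins iff some move leads to a position to_move still wins.
    if any(winner(p, other, moves) == to_move for p in successors):
        return to_move
    return other

# Toy game: subtract 1 or 2 from n; facing n = 0 means no moves, a loss.
subtract_game = lambda n: [n - k for k in (1, 2) if n - k >= 0]

print(winner(4, "Alice", subtract_game))  # Alice wins: take 1, leave Bob at 3
print(winner(3, "Alice", subtract_game))  # Bob wins: multiples of 3 are losing
```

The recursion terminates exactly because the game tree is finite -- which is the hypothesis of Old Theorem 1, and precisely what fails in the games below.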

**Proof of Old Theorem 2:** The game which we will show has no winning strategy is called the Digit-Choosing Game. Once again, it is played with some set \(T \subseteq [0,1]\) of targets in the background. Here are the rules: Alice and Bob take turns choosing the decimal digits of a number \(t\). This goes on for countably many turns, like this: \[ t = 0.a_1 b_2 a_3 b_4 \cdots \] After all the digits have been chosen, we check whether \(t \in T\); if so, Alice wins, and if not, Bob wins.

What we will do is construct \(T\) so that neither Alice nor Bob can guarantee a win. In particular, we have to guarantee that for any strategy that Alice might choose, Bob could play a counter-strategy and win the game. But if Alice knew what counter-strategy Bob was going to choose, she could change *her* strategy and win. Etc. It's a bit of a mindfuck, I know. Stay with me.

So what is "a strategy" in the context of this game? Basically, a strategy for Alice is a function which takes the state of the game so far and decides her next move: that is, a function \[0.a_1 b_2 a_3 \cdots b_{2k} \mapsto a_{2k + 1}.\] Likewise, a strategy for Bob is a function \[0.a_1 b_2 a_3 \cdots a_{2k - 1} \mapsto b_{2k}. \]
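To make that concrete, here's a small Python sketch (my own illustration, not part of the proof) that treats a strategy as exactly such a function -- digits played so far in, next digit out -- and plays the Digit-Choosing Game to a finite prefix. The real game runs for countably many turns, so this only approximates \(t\):

```python
from typing import Callable, Sequence

# A strategy maps the digits played so far to the next digit (0-9).
Strategy = Callable[[Sequence[int]], int]

def play(alice: Strategy, bob: Strategy, turns: int = 20) -> float:
    """Play the Digit-Choosing Game for finitely many turns and return
    the truncated number t = 0.a1 b2 a3 b4 ...  (Only a finite prefix
    of the real, infinitely long play -- enough to show the mechanics.)"""
    digits = []
    for i in range(turns):
        mover = alice if i % 2 == 0 else bob  # Alice picks a1, a3, a5, ...
        digits.append(mover(digits))
    return sum(d * 10 ** -(k + 1) for k, d in enumerate(digits))

# Two toy strategies (my own examples, not from the post):
always_3: Strategy = lambda history: 3
copycat: Strategy = lambda history: history[-1] if history else 0

t = play(always_3, copycat)  # Bob copies Alice, so t = 0.3333..., about 1/3
```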

So what we'll try to do is run through the set of all strategies, and for each one, choose a number which witnesses that that strategy loses, and put it into \(T\) (if the strategy was Bob's) or not (if the strategy was Alice's).

But this is easier said than done! In particular, how do we know that there's a coherent way of choosing which points are in or out of \(T\)? That's where the Axiom of Choice comes in (and in fact set theorists have shown that, if we allow the Axiom of Choice to fail in an artificial set-theoretic universe, then we can rig it so this construction cannot succeed).

We'll use the version of AC which (outside of logic) is least popular: the Well-Ordering Principle. (We'll do this so that we can define \(T\) via recursion, in a step-by-step way.) The WOP says that we can take the set \( \mathcal{S} \) of all strategies and write it as \[ \mathcal{S} = \{ s_\alpha \colon \alpha < \mathfrak{c} \} \] where \( \mathfrak{c} \) is the cardinality of the real line, which is also the cardinality of \( \mathcal{S} \). (This is easy to prove if you know a bit more than I've told you in this post. If not, just take it on faith.) One important thing about this ordering is that, for any strategy \(s_\alpha\), there are *strictly fewer than* \( \mathfrak{c} \) strategies which come before it in the ordering.

So let's see how the construction goes. At each stage in the recursion, we'll have one pile of points (fewer than \(\mathfrak{c}\)) which are definitely in \(T\), another pile of points (also fewer than \(\mathfrak{c}\)) which are definitely out of \(T\), and the rest in a giant not-yet-decided pile. Let's say we're at stage \(\alpha\), and \(s_\alpha\) is an Alice-strategy. There are continuum-many possible \(t\) which could result from Bob's play, depending on what strategy he uses. Pick some strategy \(r\) such that, if Alice plays \( s_\alpha \) and Bob plays \(r\), then the resulting point \(t\) is currently undecided. (Self-check exercise: why is this possible?) Put \(t\) into the "definitely not in \(T\)" pile. (If \(s_\alpha\) had been a Bob-strategy, we would have put \(t\) into the "definitely in \(T\)" pile.)
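Here's a deliberately tiny Python model of that bookkeeping (my own toy, with one digit per player and digits restricted to \(\{0,1\}\)). It even shows why the counting matters: at finite scale the supply of undecided outcomes can run dry, whereas in the real recursion each stage has decided strictly fewer than \(\mathfrak{c}\) points while every strategy has continuum-many reachable outcomes -- which is the answer to the self-check exercise.

```python
from itertools import product

# Outcomes are pairs (a1, b2): Alice picks one digit, then Bob picks one.
def outcome(alice, bob):
    a1 = alice(())
    return (a1, bob((a1,)))

# All possible strategies at this tiny scale: Alice's is a constant,
# Bob's is a table f indexed by Alice's digit.
alice_strategies = [lambda h, d=d: d for d in (0, 1)]
bob_strategies = [lambda h, f=f: f[h[0]] for f in product((0, 1), repeat=2)]

in_T, out_T, stuck = set(), set(), []

# Against each Alice-strategy, find a Bob reply whose outcome is still
# undecided and rule that point OUT of T (witnessing that Alice loses)...
for s in alice_strategies:
    for r in bob_strategies:
        t = outcome(s, r)
        if t not in in_T and t not in out_T:
            out_T.add(t)
            break
    else:
        stuck.append(s)  # no undecided outcome left for this strategy

# ...and against each Bob-strategy, rule a fresh outcome INTO T.
for s in bob_strategies:
    for r in alice_strategies:
        t = outcome(r, s)
        if t not in in_T and t not in out_T:
            in_T.add(t)
            break
    else:
        stuck.append(s)

print(sorted(out_T), sorted(in_T), len(stuck))
```

Two of Bob's four strategies end up "stuck" with every reachable outcome already decided -- the toy's version of running out of target points. At continuum scale, the cardinality argument rules this out.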

What happens after we've recursed over all \(s_\alpha\)? Well, we've now got \(\mathfrak{c}\)-many points which are definitely in \(T\), another \(\mathfrak{c}\)-many which are definitely not in \(T\), and maybe some leftovers. We don't care about the leftovers, so we'll just leave them out of \(T\). What happens when Alice and Bob play? Well, imagine that Alice had to tell Bob what strategy she was using: then Bob looks up the index \(\alpha\) of that strategy, finds the \(r\) that was chosen in the recursion at stage \(\alpha\), and plays according to \(r\). By construction, the resulting point is not in \(T\), so Bob wins.

But what if we did it the other way around, so that Bob had to declare his strategy first, and Alice could modify hers based on what he said? Well then, by the same logic, Alice would win.

So no one has a winning strategy. Q.E.D.

What does this have to do with **Question 0** that we started with today?

**Exercise 3:** Prove that, if Alice and Bob play the Nested-Intervals Game but are restricted to choosing only rational endpoints, then the same construction as in the proof of Theorem 2 produces a \(T\) for which neither player has a winning strategy.

Why is the "rational endpoints" restriction important? Well, it isn't, in the sense that there are *lots and lots of potential modifications* that have the same conclusion. What all those modifications have in common, however, is that the number of strategies is the same as the size of the continuum. We used that in the proof above when we well-ordered the set of all strategies in order-type \( \mathfrak{c} \).

Question 0, on the other hand, doesn't have this feature. In fact, each player has \( \mathfrak{c}^\mathfrak{c} = 2^\mathfrak{c} > \mathfrak{c} \) strategies to choose from, so we have to worry about running out of target points before we run out of strategies.
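For the record, the cardinal arithmetic behind that count is a standard computation (with Cantor's theorem supplying the final inequality): \[ \mathfrak{c}^{\mathfrak{c}} = \left( 2^{\aleph_0} \right)^{\mathfrak{c}} = 2^{\aleph_0 \cdot \mathfrak{c}} = 2^{\mathfrak{c}} > \mathfrak{c}. \] Compare the Digit-Choosing Game, where a strategy is a function from the countably many finite digit strings to \(\{0, 1, \ldots, 9\}\), so there are only \( 10^{\aleph_0} = 2^{\aleph_0} = \mathfrak{c} \) of them -- which is exactly why the well-ordering trick worked there.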

My instinct is that the answer to Question 0 is still yes -- but we'll need a new method of proof to get there.
