Matrix Games

Definition of Two-person Zero-Sum Game in Normal Form

Definition: The system \[\Gamma=(X,Y,K),\] where \(X,Y\) are non-empty sets and \(K:X\times Y\rightarrow \mathbb{R}^1\) is a function, is called a two-person zero-sum game in normal form.

Definition: Two-person zero-sum games in which both players have finite sets of strategies are called matrix games.

Max-min and Min-max strategies

The lower value of the game \(\Gamma\):

\[\underline{v}=\sup_{x\in X} \inf_{y\in Y}K(x,y)\] The principle of constructing strategy \(x\) based on the maximization of the minimal payoff is called the maximin principle, and the strategy \(x\) selected by this principle is called the maximin strategy of Player \(1\).

The upper value of the Game \(\Gamma\):

\[\overline{v}=\inf_{y\in Y}\sup_{x\in X}K(x,y)\] The principle of constructing strategy \(y\) based on the minimization of the maximum losses is called the minimax principle, and the strategy \(y\) selected by this principle is called the minimax strategy of Player \(2\).

Consider the \((m\times n)\) matrix game \(\Gamma_A\). In this case the extrema are attained, and the lower and upper values of the game are respectively equal to:

\[ \begin{aligned} \underline{v}=\max_{1\leq i\leq m}\min_{1\leq j\leq n} a_{ij}\\ \overline{v}=\min_{1\leq j\leq n}\max_{1\leq i\leq m} a_{ij} \end{aligned} \]
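(A quick numerical illustration, not from the original text: the lower and upper values of a matrix game can be computed directly from the payoff matrix. The sketch assumes NumPy; the matrix is an arbitrary example.)

```python
import numpy as np

# Assumed example payoff matrix (Player 1 chooses a row, Player 2 a column).
A = np.array([[4.0, 1.0, 3.0],
              [2.0, 3.0, 1.0],
              [1.0, 2.0, 5.0]])

# Lower value: maximum over rows of the row minima (maximin).
lower_value = A.min(axis=1).max()
# Upper value: minimum over columns of the column maxima (minimax).
upper_value = A.max(axis=0).min()

print(lower_value, upper_value)  # Lemma 1-2-1 guarantees lower_value <= upper_value
```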

The following assertion holds for any game \(\Gamma=(X,Y,K)\).

Lemma 1-2-1: In two-person zero-sum game \(\Gamma\): \[\underline{v}\leq\overline{v}\] or \[\sup_{x\in X} \inf_{y\in Y}K(x,y)\leq \inf_{y\in Y}\sup_{x\in X}K(x,y) \tag{1.2.1}\]

Proof: Let \(x\in X\) be an arbitrary strategy of Player \(1\). Then we have:

\[K(x,y)\leq \sup_{x\in X}K(x,y)\]

Hence we have:

\[\inf_{y\in Y}K(x,y)\leq \inf_{y\in Y}\sup_{x\in X}K(x,y)\]

Since the right-hand side of the latter inequality is a constant and \(x\in X\) was chosen arbitrarily, the following inequality holds:

\[\sup_{x\in X}\inf_{y\in Y}K(x,y)\leq \inf_{y\in Y}\sup_{x\in X}K(x,y)\] End Proof

Saddle Points

Definition: In the two-person zero-sum game \(\Gamma=(X,Y,K)\), the point \((x^*,y^*)\) is called an equilibrium point or saddle point if:

\[ \begin{aligned} K(x,y^*)\leq K(x^*,y^*)\\ K(x^*,y)\geq K(x^*,y^*) \end{aligned} \]

For all \(x\in X\) and \(y\in Y\).

The set of all equilibrium points in the game \(\Gamma\) will be denoted as:

\[Z(\Gamma),Z(\Gamma)\subset X\times Y\]

In the matrix game \(\Gamma_A\), the equilibrium points are the saddle points of the payoff matrix \(A\), i.e. the points \((i^*,j^*)\) for which, for all \(i\in M\) and \(j\in N\), the following inequalities are satisfied:

\[a_{ij^*}\leq a_{i^*j^*}\leq a_{i^*j}\]

Thus the element \(a_{i^*j^*}\) of the matrix at the saddle point is simultaneously the minimum of its row and the maximum of its column.
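The row-minimum/column-maximum characterization gives a direct way to search a matrix for pure-strategy saddle points. A minimal sketch (the example matrix is an assumption for illustration; by Theorem 1-3-1 below, all saddle points found this way share the same value):

```python
import numpy as np

def pure_saddle_points(A):
    """Return all (i, j) such that A[i, j] is minimal in row i and maximal in column j."""
    A = np.asarray(A, dtype=float)
    row_min = A.min(axis=1, keepdims=True)   # minimum of each row
    col_max = A.max(axis=0, keepdims=True)   # maximum of each column
    mask = (A == row_min) & (A == col_max)
    return [(int(i), int(j)) for i, j in zip(*np.nonzero(mask))]

A = np.array([[3.0, 2.0, 4.0],
              [5.0, 2.0, 10.0],
              [1.0, 0.0, 6.0]])
print(pure_saddle_points(A))   # [(0, 1), (1, 1)]; the value of this game is 2
```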

Theorem 1-3-1: Let \((x_1^*,y_1^*),(x_2^*,y_2^*)\) be two arbitrary saddle points in the two-person zero-sum game \(\Gamma\). Then:

  1. \(K(x_1^*,y_1^*)=K(x_2^*,y_2^*)\)
  2. \((x_1^*,y_2^*)\in Z(\Gamma),(x_2^*,y_1^*)\in Z(\Gamma)\)

Proof:

From the definition of a saddle point for all \(x\in X\) and \(y\in Y\), we have:

\[ \begin{aligned} K(x,y_1^*)\leq K(x_1^*,y_1^*)\leq K(x_1^*,y)\\ K(x,y_2^*)\leq K(x_2^*,y_2^*)\leq K(x_2^*,y) \end{aligned} \]

Substituting \(x=x_2^*\) and \(y=y_2^*\) into the first chain of inequalities, and \(x=x_1^*\) and \(y=y_1^*\) into the second, we obtain:

\[K(x_2^*,y_1^*)\leq K(x_1^*,y_1^*)\leq K(x_1^*,y_2^*)\leq K(x_2^*,y_2^*)\leq K(x_2^*,y_1^*)\]

Hence we can get:

\[K(x_2^*,y_1^*)= K(x_1^*,y_1^*)= K(x_1^*,y_2^*)= K(x_2^*,y_2^*)\]

We now show the validity of the second statement. Consider the point \((x_2^*,y_1^*)\):

\[K(x,y_1^*)\leq K(x_1^*,y_1^*)=K(x_2^*,y_1^*)=K(x_2^*,y_2^*)\leq K(x_2^*,y)\]

For all \(x\in X, y\in Y\), i.e. \((x_2^*,y_1^*)\in Z(\Gamma)\). In the same way, we can prove \((x_1^*,y_2^*)\in Z(\Gamma)\).

End Proof

Definition: Let \((x^*,y^*)\) be a saddle point in the game \(\Gamma\), Then the number \(v=K(x^*,y^*)\) is called the value of the game \(\Gamma\).

Corollary 1-3-1-1: Denote, \[ \begin{aligned} X^*=\{x^*|x^*\in X,\exists y^*\in Y,(x^*,y^*)\in Z(\Gamma)\}\\ Y^*=\{y^*|y^*\in Y,\exists x^*\in X,(x^*,y^*)\in Z(\Gamma)\} \end{aligned} \] Then the set \(Z(\Gamma)\) can be represented as the Cartesian product

\[Z(\Gamma)=X^*\times Y^*\]

Hint for the proof: proof by contradiction.

Definition: The set \(X^*\) (\(Y^*\)) is called the set of optimal strategies of Player \(1\) (\(2\)) in the game \(\Gamma\), and its elements are called optimal strategies of Player \(1\) (\(2\)).

Lemma 1-3-1(on Scale): Let \(\Gamma=(X,Y,K)\ \text{and}\ \Gamma'=(X,Y,K')\) be two zero-sum games and:

\[K'=\beta K+\alpha,\quad \beta >0,\ \alpha=const,\ \beta=const\]

Then:

\[Z(\Gamma')=Z(\Gamma),v_{\Gamma'}=\beta v_{\Gamma}+\alpha\]

Proof:

Let \((x^*,y^*)\) be a saddle point in the game \(\Gamma\). Then we have:

\[ \begin{aligned} K'(x^*,y^*)=\beta K(x^*,y^*)+\alpha\leq \beta K(x^*,y)+\alpha= K'(x^*,y)\\ K'(x,y^*)=\beta K(x,y^*)+\alpha \leq \beta K(x^*,y^*)+\alpha=K'(x^*,y^*) \end{aligned} \]

For all \(x\in X,y\in Y\). Therefore \((x^*,y^*)\in Z(\Gamma')\), and hence \(Z(\Gamma)\subset Z(\Gamma')\).

Conversely, let \((x,y)\in Z(\Gamma')\). Then:

\[K(x,y)=\frac{1}{\beta}K'(x,y)-\frac{\alpha}{\beta}\]

By similar reasoning, \((x,y)\in Z(\Gamma)\). Therefore \(Z(\Gamma)=Z(\Gamma')\), and we have:

\[v_{\Gamma'}=K'(x^*,y^*)=\beta K(x^*,y^*)+\alpha=\beta v_{\Gamma}+\alpha\]

End Proof

Theorem 1-3-2: For the existence of the saddle point in the game \(\Gamma=(X,Y,K)\), it is necessary and sufficient that the quantities

\[ \min_{y}\sup_{x}K(x,y),\max_{x}\inf_{y}K(x,y) \tag{1.3.1} \]exist and the following equality holds:

\[ \underline{v}=\max_{x}\inf_{y}K(x,y)=\min_{y}\sup_{x}K(x,y)=\overline{v}\tag{1.3.2} \]

Proof: Necessity. Let \((x^*,y^*)\in Z(\Gamma)\). Then for all \(x\in X,y\in Y\) the following inequality holds:

\[ K(x,y^*)\leq K(x^*,y^*)\leq K(x^*,y) \tag{1.3.3} \]and hence

\[ \sup_{x}K(x,y^*)\leq K(x^*,y^*) \tag{1.3.4} \] At the same time, we have:

\[ \inf_{y}\sup_{x}K(x,y)\leq \sup_{x}K(x,y^*) \tag{1.3.5} \]

Comparing \((1.3.4)\) and \((1.3.5)\), we get:

\[ \inf_{y}\sup_{x}K(x,y)\leq \sup_{x}K(x,y^*)\leq K(x^*,y^*) \tag{1.3.6} \]

In the similar way we get the inequality:

\[ K(x^*,y^*)\leq \inf_{y} K(x^*,y)\leq \sup_{x} \inf_{y}K(x,y) \tag{1.3.7} \]From \((1.3.6)\) and \((1.3.7)\) we have:

\[ \inf_{y}\sup_{x}K(x,y)\leq \sup_{x} \inf_{y}K(x,y) \tag{1.3.8} \]

From \((1.2.1)\) and \((1.3.8)\), we get:

\[ \inf_{y}\sup_{x}K(x,y)= \sup_{x} \inf_{y}K(x,y) \tag{1.3.9} \] And finally we get:

\[ \begin{aligned} \min_{y} \sup_{x}K(x,y)=\sup_{x}K(x,y^*)=K(x^*,y^*)\\ \max_{x} \inf_{y}K(x,y)=\inf_{y}K(x^*,y)=K(x^*,y^*) \end{aligned} \]

i.e. the exterior extrema of the min-sup and max-inf are reached at the points \(y^*\) and \(x^*\) respectively.

Sufficiency. Suppose there exist the min-sup and max-inf: \[ \min_{y} \sup_{x}K(x,y)=\sup_{x}K(x,y^*) \tag{1.3.10} \]\[ \max_{x} \inf_{y}K(x,y)=\inf_{y}K(x^*,y) \tag{1.3.11} \] And equation \((1.3.2)\) holds. We need to show that \((x^*,y^*)\) is a saddle point. Indeed,

\[ K(x^*,y^*)\geq \inf_{y}K(x^*,y)=\max_{x} \inf_{y}K(x,y)\tag{1.3.12} \]\[ K(x^*,y^*)\leq \sup_{x}K(x,y^*)=\min_{y} \sup_{x}K(x,y) \tag{1.3.13} \]

By \((1.3.2),(1.3.12)\) and \((1.3.13)\) we have:

\[ K(x^*,y^*)=\inf_{y}K(x^*,y)\leq K(x^*,y) \tag{1.3.14} \]\[ K(x^*,y^*)=\sup_{x}K(x,y^*)\geq K(x,y^*) \tag{1.3.15} \]for all \(x\in X,y\in Y\). i.e. \((x^*,y^*)\in Z(\Gamma)\).

End Proof

The proof of Theorem 1-3-2 yields the following statement:

Corollary 1-3-2-1: If the min-sup and max-inf in \((1.3.1)\) exist, are reached at \(\overline{y}\) and \(\overline{x}\), respectively, and the equality \((1.3.2)\) holds, then \((\overline{x},\overline{y})\) is a saddle point and: \[\max_{x}\inf_{y}K(x,y)=K(\overline{x},\overline{y})=\min_{y}\sup_{x}K(x,y)\]

Games in which saddle points exist are called strictly determined. Therefore, this theorem establishes a criterion for strict determination of the game, which can be restated as follows: for the game to be strictly determined, it is necessary and sufficient that the min-sup and max-inf in \((1.3.1)\) exist and the equality \((1.3.2)\) is satisfied.

(Note that, in the game \(\Gamma_A\), the extrema in \((1.3.1)\) are always reached, so the theorem may be reformulated as follows:)

Corollary 1-3-2-2: For the \((m\times n)\) matrix game to be strictly determined it is necessary and sufficient that the following equality holds: \[\min _{j=1,2, \ldots, n} \max _{i=1,2, \ldots, m} a_{i j}=\max _{i=1,2, \ldots, m} \min _{j=1,2, \ldots, n} a_{i j}\]

Mixed Extension of a Game

Assume that the game \(\Gamma_A\) has no saddle point. Then, by Theorem 1-3-2 and Lemma 1-2-1, we have:

\[\min_{j}\max_{i}a_{ij}-\max_{i}\min_{j}a_{ij}> 0\] In this case, the max-min and min-max strategies are not optimal.

In this case, if a player guesses the opponent's choice of strategy and alters his own strategy accordingly, his payoff increases and the opponent's payoff decreases.

To keep the information about the choice of the strategy secret from the opponent, it may be wise to choose the strategy using some random device. In this case the opponent cannot learn the player's particular strategy in advance, since the player does not know it himself until the strategy is actually chosen at random.

Definition: A random variable whose values are strategies of a player is called a mixed strategy of that player.

Thus, for the matrix game \(\Gamma_{A},\) a mixed strategy of Player \(1\) is a random variable whose values are the row numbers \(i \in M, M=\) \(\{1,2, \ldots, m\} .\) A similar definition applies to Player \(2\)’s mixed strategy whose values are the column numbers \(j \in N\) of the matrix \(A\). Considering the above definition of mixed strategies, the former strategies will be referred to as pure strategies. Since the random variable is characterized by its distribution, the mixed strategy will be identified in what follows with the probability distribution over the set of pure strategies (Feller, 1971 ).

Thus, Player \(1\)’s mixed strategy \(x\) in the game is the \(m\) -dimensional vector:

\[ x=\left(\xi_{1}, \ldots, \xi_{m}\right), \sum_{i=1}^{m} \xi_{i}=1, \xi_{i} \geq 0, i=1, \ldots, m \]Similarly, Player \(2\)’s mixed strategy \(y\) is the \(n\) -dimensional vector: \[ y=\left(\eta_{1}, \ldots, \eta_{n}\right), \sum_{j=1}^{n} \eta_{j}=1, \eta_{j} \geq 0, j=1, \ldots, n \] In this case, \(\xi_i\geq 0\) and \(\eta_j\geq 0\) are the probabilities of choosing the pure strategies \(i\in M\) and \(j\in N\), respectively, when the players use mixed strategies \(x\) and \(y\).

Denote by \(X\) and \(Y\) the sets of mixed strategies for the first and second players, respectively. It can easily be seen that the set of mixed strategies of each player is a compact set in the corresponding finite Euclidean space (closed, bounded set).

Definition: Let \(x=\left(\xi_{1}, \ldots, \xi_{m}\right) \in X\) be a mixed strategy of Player \(1\). The set of indices \[ M_{x} \triangleq\left\{i | i \in M, \xi_{i}>0\right\} \] is called the spectrum of strategy \(x\), where \(M=\{1,2, \ldots, m\}\). Similarly, for the mixed strategy \(y=\left(\eta_{1}, \ldots, \eta_{n}\right) \in Y\) of Player \(2\), the spectrum \(N_{y}\) is determined as follows: \[ N_{y} \triangleq\left\{j | j \in N, \eta_{j}>0\right\} \]where \(N=\{1,2, \ldots, n\} .\) Thus, the spectrum of a mixed strategy consists of those pure strategies that are chosen with positive probability.

The player's set of mixed strategies is an extension of his set of pure strategies.

Definition: A pair \((x,y)\) of mixed strategies in the matrix game \(\Gamma_A\) is called a situation in mixed strategies.

For the \((m \times n)\) matrix game \(\Gamma_{A}\), define the payoff of Player 1 in the situation \((x, y)\) in mixed strategies as the mathematical expectation of his payoff provided that the players use mixed strategies \(x\) and \(y,\) respectively. Since the players choose their strategies independently, the mathematical expectation of the payoff \(K(x, y)\) in mixed strategies \(x=\left(\xi_{1}, \ldots, \xi_{m}\right), y=\left(\eta_{1}, \ldots, \eta_{n}\right)\) is equal to \[ K(x, y)=\sum_{i=1}^{m} \sum_{j=1}^{n} \xi_{i} a_{i j} \eta_{j}=(x A) y=x(A y) \]The function \(K(x,y)\) is continuous in \(x\in X\) and \(y\in Y\). When one player uses a pure strategy (\(i\) or \(j\), respectively) and the other uses a mixed strategy (\(y\) or \(x\)), the payoffs \(K(i,y),K(x,j)\) are computed by the formulas:

\[ \begin{aligned} K(i,y)\triangleq K(u_i,y)=\sum_{j=1}^{n} a_{i j}\eta_{j}=a_iy, i=1,\cdots,m\\ K(x,j)\triangleq K(x,w_j)=\sum_{i=1}^{m} \xi_{i} a_{i j}=xa^j, j=1,\cdots,n \end{aligned} \]where \(a_i,a^j\) are respectively the \(i\)th row and the \(j\)th column of the \((m\times n)\) matrix \(A\), and \(u_{i} =\left(\xi_{1}, \ldots, \xi_{m}\right) \in X,\) where \(\xi_{i}=1,\) \(\xi_{j}=0, j \neq i, i=1,2, \ldots, m .\) Indeed, the mixed strategy \(u_i\in X\) is equivalent to the pure strategy \(i\in M\). Similarly, \(w_{j} =\left(\eta_{1}, \ldots, \eta_{j}, \ldots, \eta_{n}\right) \in Y,\) where \(\eta_{j}=1, \eta_{i}=0, i \neq j, j=1, \ldots, n,\) is identified with the pure strategy \(j \in N\) of Player \(2\).
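In matrix form these payoffs are plain matrix-vector products, so they are easy to compute numerically. A minimal sketch, assuming NumPy; the matrix and strategies are examples chosen only for illustration:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 3.0]])        # assumed example payoff matrix

x = np.array([0.5, 0.5])          # mixed strategy of Player 1 (row probabilities)
y = np.array([0.75, 0.25])        # mixed strategy of Player 2 (column probabilities)

K_xy = x @ A @ y                  # K(x, y) = x A y
K_iy = A @ y                      # vector of K(i, y) = a_i y,  i = 1, ..., m
K_xj = x @ A                      # vector of K(x, j) = x a^j,  j = 1, ..., n

print(K_xy, K_iy, K_xj)
```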

Thus, from the matrix game \(\Gamma_{A}=(M, N, A)\), we have arrived at a new game \(\overline{\Gamma}_{A}=(X, Y, K).\) The game \(\overline{\Gamma}_{A}\) will be called a mixed extension of the game \(\Gamma_{A} .\) The game \(\Gamma_{A}\) is a subgame for \(\overline{\Gamma}_{A},\) i.e. \(\Gamma_{A} \subset \overline{\Gamma}_{A}\).

Definition. The point \(\left(x^{*}, y^{*}\right)\) in the game \(\overline{\Gamma}_{A}\) forms a saddle point(equilibrium situation) and the number \(v=K\left(x^{*}, y^{*}\right)\) is the value of the game \(\overline{\Gamma}_{A}\) if for all \(x \in X\) and \(y \in Y\) \[ K\left(x, y^{*}\right) \leq K\left(x^{*}, y^{*}\right) \leq K\left(x^{*}, y\right)\tag{1.4.1} \]The strategies \(\left(x^{*}, y^{*}\right)\) appearing in the saddle point are called optimal.

Moreover, by Theorem (1-3-2) the strategies \(x^{*}\) and \(y^{*}\) are respectively the maximin and minimax strategies, since the exterior extrema in \((1.3.1)\) are reachable (Since the function \(K(x, y)\) is continuous on the compact sets \(X\) and \(Y\) ).

Lemma 1-4-1: Let \(\Gamma_A\) and \(\Gamma_{A'}\) be two \((m\times n)\) matrix games, where: \[ A'=\alpha A+B,\ \alpha >0,\ \alpha=const,\ B=\{b_{ij}\},\ b_{ij}=\beta\ \forall i,j \]Then \(Z(\overline{\Gamma}_{A'})=Z(\overline{\Gamma}_A)\) and \(\overline{v}_{A'}=\alpha \overline{v}_{A}+\beta\), where \(\overline{\Gamma}_{A'}\) and \(\overline{\Gamma}_A\) are the mixed extensions of the games \({\Gamma}_{A'}\) and \({\Gamma}_A\), respectively, and \(\overline{v}_{A'}, \overline{v}_{A}\) are the values of the games \(\overline{\Gamma}_{A'}\) and \(\overline{\Gamma}_{A}\).

Proof:

Both matrices \(A\) and \(A'\) are of dimension \(m\times n\); therefore the sets of mixed strategies in the games \(\Gamma_{A'}\) and \(\Gamma_A\) coincide.

We shall show that for any situation in mixed strategies \((x,y)\) the following equality holds:

\[ K'(x,y)=\alpha K(x,y)+\beta \]Where \(K'\) and \(K\) are Player \(1\)’s payoffs in the games \(\overline{\Gamma}_{A'}\) and \(\overline{\Gamma}_{A}\), respectively.

Indeed, \(\forall x\in X\text{ and } y\in Y\) we have:

\[ K'(x,y)=xA'y=\alpha(xAy)+xBy=\alpha K(x,y)+\beta, \]since \(xBy=\beta\big(\sum_{i=1}^m\xi_i\big)\big(\sum_{j=1}^n\eta_j\big)=\beta\).

From Lemma 1-3-1 (on scale) it then follows that \(Z(\overline{\Gamma}_{A'})=Z(\overline{\Gamma}_A)\) and \(\overline{v}_{A'}=\alpha \overline{v}_{A}+\beta\).

End Proof
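A quick numerical sanity check of the identity \(xBy=\beta\) used in the proof (the dimensions, the constant \(\beta\) and the random strategies are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, beta = 3, 4, 2.5
B = np.full((m, n), beta)          # matrix with all entries equal to beta

x = rng.random(m); x /= x.sum()    # a random mixed strategy of Player 1
y = rng.random(n); y /= y.sum()    # a random mixed strategy of Player 2

print(np.isclose(x @ B @ y, beta)) # True, since the xi's and eta's each sum to one
```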

T.B.A (Convex sets and Systems of Linear inequalities)

Min-Problem \[ \min cx,\ s.t.\ x A \geq b, x \geq 0 \tag{1.5.1} \]where \(A\) is an \((m \times n)\) matrix, \(c \in R^{m}, x \in R^{m}, b \in R^{n}\), is called a direct problem of linear programming in standard form (don't worry about the name). The problem \[ \max by,\ s.t.\ A y \leq c, y \geq 0 \tag{1.5.2} \]where \(y \in R^{n}\), is called the dual linear programming problem for \((1.5.1)\).

Theorem (Duality Theorem): If both problems \((1.5.1),(1.5.2)\) have feasible solutions, then both of them have optimal solutions \(\overline{x},\overline{y}\), respectively. And:

\[c\overline{x}=b\overline{y}\]

Proof: T.B.A. End Proof
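Although the proof is omitted here, the duality theorem is easy to observe numerically: solving a primal/dual pair with an off-the-shelf LP solver gives equal optimal values. A sketch with `scipy.optimize.linprog` on an assumed small instance of \((1.5.1)\) and \((1.5.2)\):

```python
import numpy as np
from scipy.optimize import linprog

# Assumed example data for the primal (1.5.1): min c.x  s.t.  xA >= b, x >= 0.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
c = np.array([2.0, 3.0])   # c in R^m
b = np.array([4.0, 5.0])   # b in R^n

# Primal: xA >= b  <=>  -A.T x <= -b; linprog minimizes and uses x >= 0 by default.
primal = linprog(c, A_ub=-A.T, b_ub=-b)

# Dual (1.5.2): max b.y  s.t.  Ay <= c, y >= 0  <=>  min -b.y.
dual = linprog(-b, A_ub=A, b_ub=c)

print(primal.fun, -dual.fun)   # the two optimal values coincide (here both are 6.2)
```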

Existence of a Solution of the Matrix Game in Mixed strategies

Theorem 1-6-1: Any matrix game has a saddle point in mixed strategies. [Proven by von Neumann (1928)]

Proof: Let \(\Gamma_A\) be an arbitrary \((m\times n)\) game with a strictly positive matrix \(A=\{a_{ij}\}\), \(a_{ij}>0\ \forall i,j\). We first show that in this case the theorem is true. Consider the auxiliary linear programming problem: \[ \min xu,\ s.t.\ xA\geq w,x\geq 0 \tag{1.6.1} \] and its dual problem: \[ \max yw,\ s.t.\ Ay\leq u,y\geq 0 \tag{1.6.2} \]where \(u=(1, \ldots, 1) \in R^{m}, w=(1, \ldots, 1) \in R^{n} .\) Since \(A\) is strictly positive, there exists a vector \(x>0\) s.t. \(x A\geq w\),

i.e. problem \((1.6.1)\) has a feasible solution. On the other hand, the vector \(y=0\) is a feasible solution of problem \((1.6.2)\).

Therefore, by the duality theorem of linear programming, both problems \((1.6.1)\) and \((1.6.2)\) have optimal solutions \(\overline{x}, \overline{y},\) respectively, and: \[\overline{x}u=\overline{y}w\triangleq \Theta >0 \tag{1.6.3}\] Consider the vectors \(x^*=\frac{\overline{x}}{\Theta},\ y^*=\frac{\overline{y}}{\Theta}\). By \((1.6.3)\) we have:

\[ x^*u=\frac{\overline{x}u}{\Theta}=\frac{\overline{y}w}{\Theta}=y^*w=1 \]Moreover, since \(\overline{x},\overline{y}\) are feasible solutions of problems \((1.6.1),(1.6.2)\), we have \(x^*=\frac{\overline{x}}{\Theta}\geq 0,\ y^*=\frac{\overline{y}}{\Theta}\geq 0\); i.e. \(x^*\) and \(y^*\) are mixed strategies of Players \(1\) and \(2\) in the game \(\Gamma_A\). We shall now show that \(x^*\) and \(y^*\) are optimal strategies of Players \(1\) and \(2\) in the game \(\overline{\Gamma}_{A}\), respectively, and that the value of the game is equal to \(\frac{1}{\Theta}\).

Let us compute a payoff to Player \(1\) at \((x^*,y^*)\):

\[ K(x^*,y^*)=x^*Ay^*=\frac{(\overline{x}A\overline{y})}{\Theta^2} \tag{1.6.4} \]From the feasibility of vectors \(\overline{x},\overline{y}\) for problems \((1.6.1),(1.6.2)\) and equality \((1.6.3)\), we have:

\[ \Theta=w \overline{y}\leq (\overline{x}A)\overline{y}=\overline{x}(A\overline{y})\leq \overline{x}u=\Theta \tag{1.6.5} \]Thus we have \(\overline{x}A\overline{y}=\Theta\), and \((1.6.4)\) implies that:

\[ K(x^*,y^*)=\frac{1}{\Theta} \tag{1.6.6} \]Let \(x \in X\) and \(y \in Y\) be arbitrary mixed strategies for Players \(1\) and \(2\). The following inequalities hold: \[ K\left(x^{*}, y\right)=\left(x^{*} A\right) y=\frac{(\overline{x} A) y}{\Theta} \geq\frac{w y}{\Theta}=\frac{1}{\Theta} \tag{1.6.7} \]\[K\left(x, y^{*}\right)=x\left(A y^{*}\right)=\frac{x(A \overline{y})}{\Theta} \leq \frac{x u}{\Theta}=\frac{1}{\Theta} \tag{1.6.8} \]Comparing \((1.6 .6)-(1.6 .8),\) we have that \(\left(x^{*}, y^{*}\right)\) is a saddle point and \(\frac{1}{\Theta}\) is the value of the game \(\Gamma_{A}\) in mixed strategies with a strictly positive matrix \(A\).

Now consider the \((m\times n)\) game \(\Gamma_{A'}\) with an arbitrary matrix \(A'=\{a'_{ij}\}\). Then there exists a constant \(\beta>0\) such that the matrix \(A=A'+B\) is strictly positive, where \(B=\{b_{ij}\}\), \(b_{ij}=\beta\) for all \(i,j\), as before. In the game \(\Gamma_A\) there exists a saddle point \((x^*,y^*)\) in mixed strategies, and the value of the game equals \(v_A=\frac{1}{\Theta}\), where \(\Theta\) is determined as in \((1.6.3)\).

From Lemma 1-4-1, it follows that \((x^*,y^*)\in Z(\overline{\Gamma}_{A'})\) is a saddle point in the game \(\Gamma_{A'}\) in mixed strategies and the value of the game is equal to \(v_{A'}=v_A-\beta=\frac{1}{\Theta}-\beta\).

End Proof
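The proof of Theorem 1-6-1 is constructive: shift the matrix so that it becomes strictly positive, solve the pair of linear programs \((1.6.1)\)-\((1.6.2)\), and rescale by \(\Theta\). The sketch below is one possible implementation with `scipy.optimize.linprog`; the function name and the example matrix are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(A):
    """Solve the matrix game with payoff matrix A in mixed strategies.

    Follows the construction in the proof of Theorem 1-6-1: shift A so that it is
    strictly positive, solve the dual LP pair (1.6.1)-(1.6.2), and rescale by Theta.
    Returns (x_star, y_star, value).
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    beta = 0.0
    if A.min() <= 0:
        beta = 1.0 - A.min()               # shift so that every entry is >= 1 > 0
    Ap = A + beta

    # (1.6.1): min x.u  s.t.  x Ap >= w, x >= 0   (u, w are all-ones vectors)
    res_x = linprog(np.ones(m), A_ub=-Ap.T, b_ub=-np.ones(n))
    # (1.6.2): max y.w  s.t.  Ap y <= u, y >= 0
    res_y = linprog(-np.ones(n), A_ub=Ap, b_ub=np.ones(m))

    theta = res_x.fun                      # Theta = min x.u = max y.w > 0
    x_star = res_x.x / theta
    y_star = res_y.x / theta
    value = 1.0 / theta - beta             # undo the shift (Lemma 1-4-1)
    return x_star, y_star, value

# Assumed example: matching pennies; optimal strategies are (1/2, 1/2), the value is 0.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
print(solve_matrix_game(A))
```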

In a sense, the linear programming problem is equivalent to the matrix game \(\Gamma_A\). Indeed, consider the following direct and dual problems of linear programming.

\[\tag{1.6.9} \begin{aligned} \min x u \\ x A \geq w \\ x \geq 0 \end{aligned} \]\[\tag{1.6.10} \begin{aligned} \max y w \\ A y \leq u \\ y \geq 0 \end{aligned} \]

Let \(\overline{X}\) and \(\overline{Y}\) be the sets of optimal solutions of the problems \((1.6.9)\) and \((1.6.10),\) respectively. Denote, \[\bigg(\frac{1}{\Theta}\bigg) \overline{X} \triangleq\bigg\{\frac{\overline{x}}{\Theta} | \overline{x} \in \overline{X}\bigg\},\quad \bigg(\frac{1}{\Theta}\bigg) \overline{Y} \triangleq\bigg\{\frac{\overline{y}}{\Theta} | \overline{y} \in \overline{Y}\bigg\},\quad \Theta>0\]

Theorem 1-6-2(*): Let \(\Gamma_{A}\) be the \((m \times n)\) game with the positive matrix \(A\) and let there be given two dual problems of linear programming \((1.6 .9)\) and \((1.6 .10) .\) Then the following statements hold.

  1. Both linear programming problems have a solution \((\overline{X} \neq \emptyset\) and \(\overline{Y} \neq \emptyset),\) in which case \[ \Theta=\min _{x} x u=\max _{y} y w \]
  2. The value \(v_{A}\) of the game \(\Gamma_{A}\) is \[ v_{A}=\frac{1}{\Theta} \]and the strategies \[ x^{*}=\frac{\overline{x}}{\Theta}, \quad y^{*}=\frac{\overline{y}}{\Theta} \]are optimal, where \(\overline{x} \in \overline{X}\) is an optimal solution of the direct problem \((1.6.9)\) and \(\overline{y} \in \overline{Y}\) is the solution of the dual problem \((1.6.10)\)
  3. Any optimal strategies \(x^{*} \in X^{*}\) and \(y^{*} \in Y^{*}\) of the players can be constructed as shown above, i.e. \[ X^{*}=(\frac{1}{\Theta}) \overline{X},\quad Y^{*}=(\frac{1}{\Theta}) \overline{Y} \]

Proof: Statements \(1\), \(2\) and the inclusions \((\frac{1}{\Theta}) \overline{X}\subset X^*\), \((\frac{1}{\Theta}) \overline{Y}\subset Y^*\) follow immediately from the proof of Theorem 1-6-1. To show statement \(3\), it remains to prove the reverse inclusions.

To do this, consider the vectors \(x^*=(\xi^*_1,\cdots,\xi^*_m)\in X^*\) and \(\overline{x}=(\overline{\xi}_1,\cdots,\overline{\xi}_m)\), where \(\overline{x}=\Theta x^*\). Then for all \(j\in N\), we have:

\[ \overline{x}a^j=\Theta x^*a^j\geq \Theta \bigg(\frac{1}{\Theta}\bigg)=1 \]Moreover, \(\overline{x}\geq 0\), since \(\Theta>0\) and \(x^*\geq 0\). Therefore, \(\overline{x}\) is a feasible solution to problem \((1.6.9)\) (the direct problem).

Let us compute the value of the objective function:

\[ \overline{x}u=\Theta x^*u=\Theta=\min_{x}xu \]i.e. \(\overline{x}=\Theta x^*\in \overline{X}\) is an optimal solution to problem \((1.6.9)\); hence \(X^*\subset (\frac{1}{\Theta})\overline{X}\).

The inclusion \(Y^*\subset (\frac{1}{\Theta})\overline{Y}\) can be proved in a similar manner.

End Proof

Properties of Optimal Strategies and Value of the Game

Let \((x^*,y^*)\in X\times Y\) be a saddle point in mixed strategies for the game \(\Gamma_A\). To test the point \((x^*,y^*)\) for a saddle point, we do not need to verify inequalities \((1.4.1)\) for all \(x\in X\) and \(y\in Y\); it suffices to test them only for pure strategies \(i\in M\) and \(j\in N\), since the following statement is true.

Theorem 1-7-1: For the situation \((x^*,y^*)\) to be an equilibrium (saddle point) in the game \(\Gamma_A\) in mixed strategies (i.e. \((x^*,y^*)\) is a saddle point in the game \(\overline{\Gamma}_A\)), and the number \(v=K(x^*,y^*)\) be the value, it is necessary and sufficient that the following inequalities hold for all \(i\in M\) and \(j\in N\):

\[ K\left(i, y^{*}\right) \leq K\left(x^{*}, y^{*}\right) \leq K\left(x^{*}, j\right) \tag{1.7.1} \]

Proof: Necessity. Let \((x^*,y^*)\) be a saddle point in the game \(\overline{\Gamma}_A\). Then: \[ K\left(x, y^{*}\right) \leq K\left(x^{*}, y^{*}\right) \leq K\left(x^{*}, y\right) \]for all \(x\in X,y\in Y\). Hence, in particular, for \(u_i\in X\) and \(w_j\in Y\) we have: \[ K\left(i, y^{*}\right) \triangleq K\left(u_{i}, y^{*}\right) \leq K\left(x^{*}, y^{*}\right) \leq K\left(x^{*}, w_{j}\right) \triangleq K\left(x^{*}, j\right) \]for all \(i\in M\) and \(j\in N\).

Sufficiency: Let \((x^*,y^*)\) be a pair of mixed strategies for which the inequalities \((1.7.1)\) hold, and let \(x=(\xi_1,\cdots,\xi_m)\in X\) and \(y=(\eta_1,\cdots,\eta_n)\in Y\) be arbitrary mixed strategies of Players \(1\) and \(2\), respectively. Multiplying the first and second inequalities in \((1.7.1)\) by \(\xi_i\) and \(\eta_j\), respectively, and summing, we get:

\[ \sum_{i=1}^{m} \xi_{i} K\left(i, y^{*}\right) \leq K\left(x^{*}, y^{*}\right) \sum_{i=1}^{m} \xi_{i}=K\left(x^{*}, y^{*}\right)\tag{1.7.2} \]\[\sum_{j=1}^{n} \eta_{j} K\left(x^{*}, j\right) \geq K\left(x^{*}, y^{*}\right) \sum_{j=1}^{n} \eta_{j}=K\left(x^{*}, y^{*}\right)\tag{1.7.3} \]

In this case, we have:

\[ \sum_{i=1}^{m} \xi_{i} K\left(i, y^{*}\right)=K\left(x, y^{*}\right)\tag{1.7.4} \]\[ \sum_{j=1}^{n} \eta_{j} K\left(x^{*}, j\right)=K\left(x^{*}, y\right)\tag{1.7.5} \] Substituting \((1.7.4),(1.7.5)\) into \((1.7.2)\) and \((1.7.3),\) respectively, and taking into account the arbitrariness of strategies \(x \in X\) and \(y \in Y\), we obtain saddle point conditions for the pair of mixed strategies \(\left(x^{*}, y^{*}\right)\). i.e.:

\[ K\left(x, y^{*}\right) \leq K\left(x^{*}, y^{*}\right) \leq K\left(x^{*}, y\right) \] End Proof
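Theorem 1-7-1 reduces the optimality check to finitely many inequalities, which makes it convenient to verify a candidate saddle point numerically. A minimal sketch (tolerance and example data are assumptions):

```python
import numpy as np

def is_saddle_point(A, x, y, tol=1e-9):
    """Check conditions (1.7.1): K(i, y) <= K(x, y) <= K(x, j) for all pure i, j."""
    A = np.asarray(A, dtype=float)
    v = x @ A @ y                                  # K(x, y)
    return bool(np.all(A @ y <= v + tol) and np.all(x @ A >= v - tol))

# Assumed example: matching pennies with its known optimal mixed strategies.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
x_star = np.array([0.5, 0.5])
y_star = np.array([0.5, 0.5])
print(is_saddle_point(A, x_star, y_star))                  # True
print(is_saddle_point(A, np.array([1.0, 0.0]), y_star))    # False: a pure row is not optimal here
```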

Corollary 1-7-1-1 Let \(\left(i^{*}, j^{*}\right)\) be a saddle point in the game \(\Gamma_{A} .\) Then the situation \(\left(i^{*}, j^{*}\right)\) is also a saddle point in the game \(\overline{\Gamma}_{A}\)

Proof: Let \(\left(i^{*}, j^{*}\right)\) be a saddle point in the game \(\Gamma_{A}\). Then we have:

\[ K\left(i, j^{*}\right) \triangleq K\left(i, w_{j^{*}}\right)\leq K\left(i^{*}, j^{*}\right)\triangleq K\left(u_{i^{*}}, w_{j^{*}}\right)\leq K\left(i^{*}, j\right)\triangleq K\left(u_{i^{*}}, j\right)\tag{1.7.6} \] Let \(x=(\xi_1,\cdots,\xi_m)\in X\) and \(y=(\eta_1,\cdots,\eta_n)\in Y\) be arbitrary mixed strategies of Players \(1\) and \(2\), respectively. Multiplying the first and second inequalities in \((1.7.6)\) by \(\xi_i\) and \(\eta_j\), respectively, and summing, we note that:

\[ \sum_{i=1}^{m} \xi_{i} K\left(i, w_{j^{*}}\right)=K\left(x, w_{j^{*}}\right) \]\[ \sum_{j=1}^{n} \eta_{j} K\left(u_{i^{*}}, j\right)=K\left(u_{i^{*}}, y\right) \]Then we have:

\[ K\left(x, w_{j^{*}}\right) \leq K\left(u_{i^{*}}, w_{j^{*}}\right) \leq K\left(u_{i^{*}}, y\right) \]i.e. the situation \(\left(i^{*}, j^{*}\right)\) is also a saddle point in the game \(\overline{\Gamma}_{A}\).

End Proof

Theorem 1-7-2(*): Let \(\Gamma_{A}\) be an \((m \times n)\) game. For the situation \(\left(x^{*}, y^{*}\right)\) in mixed strategies to be an equilibrium (saddle point) in the game \(\overline{\Gamma}_{A}\), it is necessary and sufficient that the following equality holds \[ \max _{1 \leq i \leq m} K\left(i, y^{*}\right)=\min _{1 \leq j \leq n} K\left(x^{*}, j\right)\tag{1.7.7} \]

Proof: Necessity. If \((x^*,y^*)\) is a saddle point, then, by Theorem 1-7-1, we have:

\[ K\left(i, y^{*}\right) \leq K\left(x^{*}, y^{*}\right) \leq K\left(x^{*}, j\right) \]For all \(i\in M,j\in N\). Therefore: \[ K\left(i, y^{*}\right) \leq K\left(x^{*}, j\right) \]for each \(i,j\).

Suppose \((1.7.7)\) is not true. Then: \[ \max _{1 \leq i \leq m} K\left(i, y^{*}\right)<\min _{1 \leq j \leq n} K\left(x^{*}, j\right) \]Consequently, the following inequalities hold:

\[ \begin{aligned} K\left(x^{*}, y^{*}\right)=\sum_{i=1}^{m} \xi_{i}^{*} K\left(i, y^{*}\right) \leq \max _{1 \leq i \leq m} K\left(i, y^{*}\right)\\ K\left(x^{*}, y^{*}\right)=\sum_{j=1}^{n} \eta_{j}^{*} K\left(x^{*}, j\right)\geq \min _{1 \leq j \leq n} K\left(x^{*}, j\right) \end{aligned} \]i.e. \(K\left(x^{*}, y^{*}\right)<K\left(x^{*}, y^{*}\right)\), which is impossible. The obtained contradiction proves the necessity part of the theorem.

Sufficiency: Let \((\tilde{x}, \tilde{y})\) be a pair of mixed strategies satisfying \(\max\limits_{i}K(i,\tilde{y})=\min\limits_{j}K(\tilde{x},j)\). We show that in this case \((\tilde{x}, \tilde{y})\) is a saddle point in the game \(\overline{\Gamma}_A\).

The following relations hold:

\[ \begin{aligned} \min _{1 \leq j \leq n} K(\tilde{x}, j) & \leq \sum_{j=1}^{n} \tilde{\eta}_{j} K(\tilde{x}, j)=K(\tilde{x}, \tilde{y}) \\ &=\sum_{i=1}^{m} \tilde{\xi}_{i} K(i, \tilde{y}) \leq \max _{1 \leq i \leq m} K(i, \tilde{y}) \end{aligned} \]Hence we have: \[ K(i, \tilde{y}) \leq \max _{1 \leq i \leq m} K(i, \tilde{y})=K(\tilde{x}, \tilde{y})=\min _{1 \leq j \leq n} K(\tilde{x}, j) \leq K(\tilde{x}, j) \]for all \(i\in M,j\in N\). Then by Theorem 1-7-1, \((\tilde{x}, \tilde{y})\) is the saddle point in the game \(\overline{\Gamma}_A\).

End Proof

Theorem 1-7-3(*): The following relation holds for the matrix game \(\Gamma_A\): \[ \max _{x} \min _{j} K(x, j)=v_{A}=\min _{y} \max _{i} K(i, y) \]in which case the extrema are achieved on the players’ optimal strategies.

Proof: Let \((x^*,y^*)\) be the optimal strategies in the game \(\overline{\Gamma}_A\), then we have:

\[ \max_x\min_jK(x,j)\geq \min_jK(x^*,j) \]\[ \min _{y} \max _{i} K(i, y)\leq \max_iK(i,y^*) \]Since \((x^*,y^*)\) is a pair of optimal strategies in the game \(\overline{\Gamma}_A\), by Theorem 1-7-2 the equality \((1.7.7)\) holds.

Then we have:

\[ \min _{y} \max _{i} K(i, y)\leq \max_iK(i,y^*)=\min_jK(x^*,j)\leq \max_x\min_jK(x,j) \]Hence \[ \min _{y} \max _{i} K(i, y)\leq\max_x\min_jK(x,j) \]On the other hand, since \(K(x,y)=\sum_{j=1}^{n}\eta_jK(x,j)\), for every \(x\in X\) we have: \[ \min_{j}K(x,j)\leq K(x,y),\forall y\in Y \] Therefore:

\[ \max_x\min_jK(x,j)\leq\max_x\min_yK(x,y)\leq \min_y\sup_xK(x,y)=\overline{v}_A \]

Similarly, we can get: \[ \underline{v}_A=\max_{x}\inf_{y}K(x,y)\leq \min_{y}\max_{x}K(x,y) \leq \min _{y} \max _{i} K(i, y) \]

Since the game \(\overline{\Gamma}_A\) has a saddle point in mixed strategies (Theorem 1-6-1), Theorem 1-3-2 gives \(\underline{v}_A=\overline{v}_A=v_A\). Combining the inequalities above, we obtain:

\[ \min _{y} \max _{i} K(i, y)= \max_iK(i,y^*)=\min_jK(x^*,j)= \max_x\min_jK(x,j)=v_A \]

End Proof

An application of Theorem 1-7-3 is the geometric (graphical) solution of games in which one of the players has only two strategies, i.e. \((2\times n)\) and \((m\times 2)\) games.
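For a \((2\times n)\) game, Player 1's mixed strategies are \(x=(p,1-p)\), so by Theorem 1-7-3 the value is the maximum over \(p\in[0,1]\) of the lower envelope of the \(n\) lines \(K(x,j)\). A coarse grid-search sketch of this graphical method (the example matrix and the grid size are assumptions):

```python
import numpy as np

# Assumed example 2x3 game; Player 1 mixes rows with x = (p, 1 - p).
A = np.array([[2.0, 3.0, 1.5],
              [4.0, 1.0, 2.0]])

p = np.linspace(0.0, 1.0, 10001)       # grid over Player 1's mixing probability
X = np.vstack([p, 1.0 - p])            # each column is a mixed strategy (p, 1 - p)
payoffs = A.T @ X                      # payoffs[j, k] = K(x_k, j): the lines of the plot
lower_envelope = payoffs.min(axis=0)   # min_j K(x, j) at each grid point

k = lower_envelope.argmax()            # Theorem 1-7-3: v_A = max_x min_j K(x, j)
print("approximate value:", lower_envelope[k], "optimal p:", p[k])   # value 1.8 at p = 0.4
```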

Theorem 1-7-4(*): In the matrix game \(\Gamma_{A}\) the players’ sets of optimal mixed strategies \(X^{*}\) and \(Y^{*}\) are convex polyhedra.

Proof: By Theorem 1-7-1 the set \(X^{*}\) is the set of all solutions of the system of inequalities \[ \begin{array}{c} x a^{j} \geq v_{A}, j \in N \\ x u=1 \\ x \geq 0 \end{array} \] where \(u=(1, \ldots, 1) \in R^{m}\) and \(v_{A}\) is the value of the game. Thus, \(X^{*}\) is a convex polyhedral set (by the definition of a convex polyhedral set). On the other hand, \(X^{*} \subset X\), where \(X\) is a convex polyhedron \((1.5.3)\). Therefore, \(X^{*}\) is bounded. Consequently, by the theorem on representation of a polyhedral set (the details of this theorem are not included in these notes), the set \(X^{*}\) is a convex polyhedron. In a similar manner, it may be proved that \(Y^{*}\) is a convex polyhedron. End Proof

Theorem 1-7-5: Let \(x^{*}=\left(\xi_{1}^{*}, \ldots, \xi_{m}^{*}\right)\) and \(y^{*}=\left(\eta_{1}^{*}, \ldots, \eta_{n}^{*}\right)\) be optimal strategies in the game \(\overline{\Gamma}_{A}\) and \(\overline{v}_{A}\) be the value of the game. Then for any \(i,\) for which \(K\left(i, y^{*}\right)<\overline{v}_{A},\) there must be \(\xi_{i}^{*}=0,\) and for any \(j\) such that \(\overline{v}_{A}<K\left(x^{*}, j\right)\) there must be \(\eta_{j}^{*}=0\).

Conversely, if \(\xi_{i}^{*}>0,\) then \(K\left(i, y^{*}\right)=\overline{v}_{A}\), and if \(\eta_{j}^{*}>0,\) then \(K\left(x^{*}, j\right)=\overline{v}_{A}\).

Proof: Suppose that for some \(i_0\in M\), \(K(i_0,y^*)< \overline{v}_A\) and \(\xi_{i_0}^*\neq 0\). Then we have: \[ K(i_0,y^*)\xi_{i_0}^*< \overline{v}_A\xi_{i_0}^* \tag{1.7.8} \]For all other \(i\in M\), \(K(i,y^*)\leq \overline{v}_A\), therefore: \[ K\left(i, y^{*}\right) \xi_{i}^{*} \leq \overline{v}_{A} \xi_{i}^{*} \tag{1.7.9} \]Consequently, summing \((1.7.8)\) and \((1.7.9)\) over all \(i\in M\), we obtain \(K(x^*,y^*)<\overline{v}_A\), which contradicts the fact that \(\overline{v}_A\) is the value of the game. Thus, for any \(i\) for which \(K\left(i, y^{*}\right)<\overline{v}_{A}\) there must be \(\xi_{i}^{*}=0\), and for any \(j\) such that \(\overline{v}_{A}<K\left(x^{*}, j\right)\) there must be \(\eta_{j}^{*}=0\).

Conversely, note that \(K(i,y^*)\leq \overline{v}_A\) for all \(i\in M\). To prove the second part of the theorem, suppose that \(\xi_{i_0}^*>0\) and \(K(i_0,y^*)<\overline{v}_A\) for some \(i_0\in M\). Then, by the first part of the theorem, there must be \(\xi_{i_0}^*=0\), which contradicts our assumption. Hence \(K(i_0,y^*)=\overline{v}_A\). The case \(\eta_j^*>0\) is treated similarly, and the second part of the theorem is proved.

End Proof

Definition. Player \(1\)'s (\(2\)'s) pure strategy \(i \in M\) (\(j \in N\)) is called an essential or active strategy if there exists an optimal strategy \(x^{*}=\left(\xi_{1}^{*}, \ldots, \xi_{m}^{*}\right)\) (\(y^{*}=\left(\eta_{1}^{*}, \ldots, \eta_{n}^{*}\right)\)) of that player for which \(\xi_{i}^{*}>0\) (\(\eta_{j}^{*}>0\)). From this definition and the latter theorem it follows that for each essential strategy \(i\) of Player 1 and any optimal strategy \(y^{*} \in Y^{*}\) of Player 2 in the game \(\Gamma_{A}\) the following equality holds: \[ K\left(i, y^{*}\right)=a_{i} y^{*}=v_{A} \] A similar equality holds for any essential strategy \(j \in N\) of Player 2 and any optimal strategy \(x^{*} \in X^{*}\) of Player 1: \[ K\left(x^{*}, j\right)=x^{*} a^{j}=v_{A} \]If the equality \(a_{i} y=v_{A}\) holds for the pure strategy \(i \in M\) and the mixed strategy \(y \in Y\), then the strategy \(i\) is called the best reply to the mixed strategy \(y\) in the game \(\Gamma_{A}\). Thus, using this terminology, the theorem can be restated as follows: if a pure strategy of the player is essential, then it is the best reply to any optimal strategy of the opponent. Knowledge of the optimal strategy spectrum simplifies finding a solution of the game. Indeed, let \(M_{x^{*}}\) be the spectrum of Player 1's optimal strategy \(x^{*}\). Then each optimal strategy \(y^{*}=\left(\eta_{1}^{*}, \ldots, \eta_{n}^{*}\right)\) of Player 2 and the value of the game \(v\) satisfy the system \[ \begin{aligned} a_{i} y^{*} &=v, i \in M_{x^{*}} \\ a_{i} y^{*} & \leq v, i \in M \backslash M_{x^{*}} \\ \sum_{j=1}^{n} \eta_{j}^{*} &=1, \eta_{j}^{*} \geq 0, j \in N \end{aligned} \]Thus, only essential strategies may appear in the spectrum \(M_{x^{*}}\) of any optimal strategy \(x^{*}\).
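If the spectra of the optimal strategies are known (or guessed), the system above becomes a set of linear equations. For a \(2\times 2\) game in which both pure strategies of each player are assumed essential, Player 2's optimal strategy and the value follow from three linear equations; a sketch under that assumption (the example matrix is assumed, and the non-negativity and inequality conditions must still be checked afterwards):

```python
import numpy as np

# Assumed example 2x2 game with no pure saddle point (both pure strategies essential).
A = np.array([[3.0, 1.0],
              [2.0, 4.0]])

# Unknowns (eta_1, eta_2, v):  a_1.y = v,  a_2.y = v,  eta_1 + eta_2 = 1.
M = np.array([[A[0, 0], A[0, 1], -1.0],
              [A[1, 0], A[1, 1], -1.0],
              [1.0,     1.0,      0.0]])
rhs = np.array([0.0, 0.0, 1.0])
eta1, eta2, v = np.linalg.solve(M, rhs)
print((eta1, eta2), v)   # Player 2's optimal strategy (0.75, 0.25) and the value 2.5
```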

Dominance of Strategies

The complexity of solving a matrix game increases as the dimensions of the matrix A increase. In some cases, however, the analysis of payoff matrices permits a conclusion that some pure strategies do not appear in the spectrum of optimal strategy. This can result in replacement of the original matrix by the payoff matrix of a smaller dimension.

Definition: Strategy \(x'\) of Player \(1\) is said to dominate strategy \(x''\) in the \((m\times n)\) game \(\Gamma_A\) if the following inequalities hold for all pure strategies \(j\in \{1,\cdots,n\}\) of Player \(2\):

\[ x'a^j\geq x''a^j \tag{1.8.1} \]Similarly, strategy \(y'\) of Player \(2\) dominates his strategy \(y''\) if for all pure strategies \(i\in \{1,\cdots,m\}\) of Player \(1\): \[ a_iy'\leq a_iy'' \tag{1.8.2} \] If inequalities \((1.8.1)\), \((1.8.2)\) are satisfied as strict inequalities, then we are dealing with a strict dominance. A special case of the dominance of strategies is their equivalence.

Definition: Strategies \(x'\) and \(x''\) of Player \(1\) are equivalent in the game \(\Gamma_{A}\) if for all \(j \in\{1, \ldots, n\}\) \[ x' a^{j}=x'' a^{j} \]We shall denote this fact by \(x' \sim x''\). For two equivalent strategies \(x'\) and \(x''\) the following equality holds (for every \(y \in Y\) ) \[ K\left(x', y\right)=K\left(x'', y\right) \]Similarly, strategies \(y'\) and \(y''\) of Player \(2\) are equivalent \(\left(y' \sim y''\right)\) in the game \(\Gamma_{A}\) if for all \(i \in\{1, \ldots, m\}\) \[ y' a_{i}=y'' a_{i} \]Hence we have that for any mixed strategy \(x \in X\) of Player \(1\) the following equality holds \[ K\left(x, y'\right)=K\left(x, y''\right) \]For pure strategies the above definitions are transformed as follows. If Player \(1\)’s pure strategy \(i'\) dominates strategy \(i''\) and Player \(2\)’s pure strategy \(j'\) dominates strategy \(j''\), then for all \(i=1, \ldots, m ; j=1, \ldots, n\) the following inequalities hold: \[ a_{i' j} \geq a_{i'' j}, a_{i j'} \leq a_{i j''} \]This can be written in vector form as follows: \[ a_{i'} \geq a_{i''}, a^{j'} \leq a^{j''} \]Equivalence of the pairs of strategies \(i', i''\left(i' \sim i''\right) \quad\) and \(j', j''\left(j' \sim j''\right)\) implies that the conditions \(a_{i'}=a_{i''}(a^{j'}=a^{j''})\) are satisfied.

Definition. The strategy \(x''\left(y''\right)\) of Player \(1\)(\(2\)) is dominated if there exists a strategy \(x' \neq x''\left(y' \neq y''\right)\) of this player which dominates \(x''\left(y''\right) ;\) otherwise strategy \(x''\left(y''\right)\) is an undominated strategy.

Similarly, strategy \(x''\left(y''\right)\) of Player \(1\)(\(2\)) is strictly dominated if there exists a strategy \(x'\left(y'\right)\) of this player which strictly dominates \(x''\left(y''\right)\).

i.e. for all \(j=\overline{1, n}(i=\overline{1, m})\) the following inequalities hold \[ x' a^{j}>x'' a^{j}, a_{i} y'<a_{i} y'' \]otherwise strategy \(x''\left(y''\right)\) of Player 1(2) is not strictly dominated.

Theorem 1-8-1: If in the game \(\overline{\Gamma}_A\), strategy \(x'\) of one of the players dominates an optimal strategy \(x^*\), then strategy \(x'\) is also optimal.

Proof: Let \(x'\) and \(x^*\) be strategies of Player \(1\). Then, by dominance: \[ x'a^j\geq x^*a^j, \forall j=\overline{1,n}. \]Hence, using the optimality of strategy \(x^*\) (Theorem 1-7-3), we get: \[ v_A=\min_{j}x^*a^j\leq \min_{j}x'a^j\leq \max_{x}\min_{j}xa^j=v_A. \]Therefore \(\min_{j}x'a^j=v_A\) and, by Theorem 1-7-3, strategy \(x'\) is also optimal.

End Proof

Thus, an optimal strategy can be dominated only by another optimal strategy. On the other hand, no optimal strategy is strictly dominated; hence the players when playing optimally must not use strictly dominated strategies.

Theorem 1-8-2: If, in the game \(\overline{\Gamma}_A\), strategy \(x^*\) of one of the players is optimal, then strategy \(x^*\) is not strictly dominated.

Proof: For definiteness, let \(x^*\) be an optimal strategy of Player \(1\). Assume that \(x^*\) is strictly dominated, i.e. there exists a strategy \(x'\in X\) such that:

\[ x'a^j> x^*a^j, \forall j\in \overline{1,n}. \]Hence: \[ \min_{j}x'a^j>\min_{j}x^*a^j. \]However, by the optimality of \(x^*\in X\) (Theorem 1-7-3), \(\min_{j}x^*a^j=v_A\). Therefore, the strict inequality: \[ \max_x\min_jxa^j>v_A \]holds, which contradicts the fact that \(v_A\) is the value of the game (Theorem 1-7-3). The contradiction proves the theorem.

End Proof

Theorem 1-8-3: Let \(\Gamma_{A}\) be an \((m \times n)\) game. We assume that the \(i\)th row of matrix \(A\) is dominated (i.e. Player \(1\)’s pure strategy is dominated) and let \(\Gamma_{A'}\) be the game with the matrix \(A'\) obtained from \(A\) by deleting the \(i\)th row. Then the following assertions hold.

  1. \(v_{A}=v_{A'}\)
  2. Any optimal strategy \(y^{*}\) of Player 2 in the game \(\Gamma_{A'}\) is also optimal in the game \(\Gamma_{A}\)
  3. If \(x^{*}\) is an arbitrary optimal strategy of Player \(1\) in the game \(\Gamma_{A'}\) and \(\overline{x}_{i}^{*}\) is the extension of strategy \(x^{*}\) at the \(i\)th place (i.e. the strategy obtained from \(x^*\) by inserting a zero component at the \(i\)th place), then \(\overline{x}_{i}^{*}\) is an optimal strategy of that player in the game \(\Gamma_{A}\).
  4. If the \(i\)th row of the matrix \(A\) is strictly dominated, then an arbitrary optimal strategy \(\overline{x}^{*}\) of Player \(1\) in the game \(\Gamma_{A}\) can be obtained from an optimal strategy \(x^{*}\) in the game \(\Gamma_{A'}\) by the extension at the \(i\)th place.

Proof: Without loss of generality, we assume that the last (\(m\)th) row is dominated. Let \(x=(\xi_1,\cdots,\xi_m)\) be a mixed strategy which dominates the row \(m\). If \(\xi_m=0\), then from the dominance condition, for all \(j=\overline{1,n}\) we get:

\[ \sum_{i=1}^m\xi_ia_{ij}=\sum_{i=1}^{m-1}\xi_ia_{ij}\geq a_{mj},\ \sum_{i=1}^{m-1}\xi_i=1,\ \xi_i\geq 0,\ i=1,\cdots,m-1 \tag{1.8.3} \] Otherwise \((\xi_m>0)\), consider the vector \(x'=(\xi_1',\cdots,\xi_m')\), where: \[ \xi_i'=\begin{cases} \frac{\xi_i}{1-\xi_m} & i\neq m\\ 0 & i=m \end{cases} \tag{1.8.4} \]The components of the vector \(x'\) are non-negative \((\xi_i'\geq 0,\forall i)\) and \(\sum_{i=1}^{m}\xi_i'=1\). Dividing the dominance condition \(\sum_{i=1}^{m}\xi_ia_{ij}\geq a_{mj}=a_{mj}\sum_{i=1}^{m}\xi_i\) by \(1-\xi_m>0\), for all \(j=1,2,\cdots,n\) we have: \[ \frac{1}{1-\xi_m}\sum_{i=1}^{m}\xi_ia_{ij}\geq a_{mj}\frac{1}{1-\xi_m}\sum_{i=1}^{m}\xi_i \]or, subtracting \(\frac{\xi_m a_{mj}}{1-\xi_m}\) from both sides: \[ \frac{1}{1-\xi_m}\sum_{i=1}^{m-1}\xi_ia_{ij}\geq a_{mj}\frac{1}{1-\xi_m}\sum_{i=1}^{m-1}\xi_i \]Considering \((1.8.4)\), we get: \[ \sum_{i=1}^{m-1}\xi_i'a_{ij}\geq a_{mj}\sum_{i=1}^{m-1}\xi_i', j=1,\cdots,n,\ \sum_{i=1}^{m-1}\xi_i'=1,\ \xi_i'\geq 0,\ i=1,\cdots,m-1 \tag{1.8.5} \]Thus, from the dominance of the \(m\)th row it always follows that this row does not exceed a convex combination of the remaining \(m-1\) rows, see \((1.8.3)\), \((1.8.5)\).

Let \((x^*,y^*)\in Z(\overline{\Gamma}_{A'})\) be a saddle point in mixed strategies in the game \(\Gamma_{A'}\), \(x^*=(\xi^*_1,\cdots,\xi^*_{m-1})\), \(y^*=(\eta_1^*,\cdots,\eta_n^*)\), and let \(\overline{x}^*_m\) be the extension of \(x^*\) at the \(m\)th place. To prove assertions \(1,2,3\) of the theorem, it suffices to show that \(K(\overline{x}^*_m,y^*)=v_{A'}\) and that: \[ \sum_{j=1}^{n} a_{i j} \eta_{j}^{*} \leq v_{A'} \leq \sum_{i=1}^{m-1} a_{i j} \xi_{i}^{*}+0 \cdot a_{m j} \tag{1.8.6} \]for all \(i=1, \ldots, m,\ j=1, \ldots, n\). The equality \(K(\overline{x}^*_m,y^*)=v_{A'}\) is straightforward (from the optimality of the strategies \((x^*,y^*)\) in the game \(\Gamma_{A'}\)). For \(i=1,\ldots,m-1\), the inequalities \((1.8.6)\) become:

\[ \sum_{j=1}^{n} a_{i j} \eta_{j}^{*} \leq v_{A'} \leq \sum_{i=1}^{m-1} a_{i j} \xi_{i}^{*},\quad i=\overline{1,m-1},\ j=\overline{1,n} \tag{1.8.7} \]which hold by the optimality of the strategies \((x^*,y^*)\) in the game \(\Gamma_{A'}\). It therefore remains to prove the first inequality of \((1.8.6)\) for \(i=m\), i.e. to show that: \[ \sum_{j=1}^{n}a_{mj}\eta^*_j\leq v_{A'} \]From inequalities \((1.8.3),(1.8.5)\) we obtain: \[ \sum_{j=1}^{n} a_{m j} \eta_{j}^{*} \leq \sum_{j=1}^{n} \sum_{i=1}^{m-1} a_{i j} \xi_{i}' \eta_{j}^{*} \leq \sum_{i=1}^{m-1} v_{A'} \xi_{i}'=v_{A'} \]which proves the first part of the theorem.

To prove the second part of the theorem (assertion 4), it suffices to note that in the case of strict dominance of the \(m\)th row the inequalities \((1.8.3),(1.8.5)\) are satisfied as strict inequalities for all \(j=\overline{1,n}\); hence:

\[ \sum_{j=1}^{n} a_{m j} \eta_{j}^{*} < \sum_{j=1}^{n} \sum_{i=1}^{m-1} a_{i j} \xi_{i}' \eta_{j}^{*} \leq \sum_{i=1}^{m-1} v_{A'} \xi_{i}'=v_{A'} \] From Theorem 1-7-5, it follows that the \(m\)th component of any optimal strategy of Player \(1\) in the game \(\Gamma_A\) is zero, i.e. \(\xi_m^*=0\).

End Proof

Theorem 1-8-4: Let \(\Gamma_{A}\) be an \((m \times n)\) game. Assume that the \(j\)th column of the matrix \(A\) is dominated and \(\Gamma_{A'}\) is the game having the matrix \(A'\) obtained from \(A\) by deleting the \(j\) th column. Then the following assertions are true.

  1. \(v_{A}=v_{A'}\)
  2. Any optimal strategy \(x^{*}\) of Player 1 in the game \(\Gamma_{A'}\) is also optimal in the game \(\Gamma_{A}\)
  3. If \(y^{*}\) is an arbitrary optimal strategy of Player \(2\) in the game \(\Gamma_{A'}\) and \(\overline{y}_{j}^{*}\) is the extension of strategy \(y^{*}\) at the \(j\)th place (i.e. the strategy obtained from \(y^*\) by inserting a zero component at the \(j\)th place), then \(\overline{y}_{j}^{*}\) is an optimal strategy of Player 2 in the game \(\Gamma_{A}\).
  4. Further, if the \(j\)th column of the matrix A is strictly dominated, then an arbitrary optimal strategy \(\overline{y}^{*}\) of Player 2 in the game \(\Gamma_{A}\) can be obtained from an optimal strategy \(y^{*}\) in the game \(\Gamma_{A'}\) by extension at the \(j\)th place.

The proof is similar to that of Theorem 1-8-3.
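Theorems 1-8-3 and 1-8-4 justify deleting dominated rows and columns before solving a game. The sketch below removes only rows and columns dominated by another pure strategy (dominance by mixed strategies, which the theorems also allow, is not attempted); the function name and the example matrix are assumptions.

```python
import numpy as np

def reduce_by_pure_dominance(A):
    """Iteratively delete pure strategies dominated by another pure strategy.

    Row i is dominated if some other row k satisfies A[k, j] >= A[i, j] for all j
    (Player 1 never loses by using k instead of i); column j is dominated if some
    other column l satisfies A[i, l] <= A[i, j] for all i (Player 2 prefers l).
    """
    A = np.asarray(A, dtype=float)
    rows = list(range(A.shape[0]))
    cols = list(range(A.shape[1]))
    changed = True
    while changed:
        changed = False
        B = A[np.ix_(rows, cols)]
        for i in range(len(rows)):
            if any(k != i and np.all(B[k] >= B[i]) for k in range(len(rows))):
                del rows[i]
                changed = True
                break
        B = A[np.ix_(rows, cols)]
        for j in range(len(cols)):
            if any(l != j and np.all(B[:, l] <= B[:, j]) for l in range(len(cols))):
                del cols[j]
                changed = True
                break
    return A[np.ix_(rows, cols)], rows, cols

# Assumed example: row 2 is dominated by row 0, and column 2 by column 0.
A = np.array([[3.0, 2.0, 4.0],
              [1.0, 4.0, 2.0],
              [0.0, 1.0, 1.0]])
print(reduce_by_pure_dominance(A))   # reduced matrix [[3, 2], [1, 4]], rows [0, 1], cols [0, 1]
```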

Summary

Corollary 1-7-1-1 and Theorem 1-7-3 are proved by myself and may not be rigorous enough. Section 1.5 is only briefly mentioned here, without proofs. The proofs of the other theorems are written in combination with the original text and my personal understanding.

Acknowledgement

Thanks to Professor Leon A. Petrosyan. (Petrosyan and Zenkevich 2016)

Reference

Petrosyan, L.A., and N.A. Zenkevich. 2016. Game Theory. World Scientific. https://books.google.ru/books?id=0dXACwAAQBAJ.