If the strategy sets are connected subsets of finite-dimensional Euclidean spaces and the payoff functions are continuous, then the game is called continuous.

It is intuitive and useful from an algorithmic point of view to reformulate the equilibrium concept as follows. For all players define the mapping:

which is the set of the best choices of player given the strategies of the other players. Note that does not depend on ; it depends only on the strategies , of the other players. There is no guarantee that the maximum exists for all . Let be the subset of such that exists for all and . A simultaneous strategy vector is an equilibrium if and only if and for all . By introducing the **best reply mapping**, we can further simplify the above reformulation:

**Theorem 10.2 **
*Vector is an equilibrium if and only if and .*

Hence we have shown that the equilibrium problem of -person games is equivalent to finding fixed points of certain point-to-set mappings.

The most frequently used existence theorems for equilibria are based on fixed point theorems such as those of Brouwer, Kakutani, Banach, and Tarski. Any algorithm for finding fixed points can be applied to compute equilibria.

The most popular existence result is a straightforward application of the Kakutani fixed point theorem.

**Theorem 10.3 **
*Assume that in an -person game *

*1. the strategy sets are nonempty, closed, bounded, convex subsets of finite dimensional Euclidean spaces;*

*for all , *

*2. the payoff functions are continuous on ; *

*3. is concave in with all fixed . *

*Then there is at least one equilibrium.*

**Example 10.4 **Consider a 2-person game, , with strategy sets , and payoff functions , and . We will first find the best responses of both players. Both payoff functions are concave parabolas in their variables with vertices

For all and these values are clearly feasible strategies, so

So is an equilibrium if and only if it satisfies the equations:

It is easy to see that the unique solution is

which is therefore the unique equilibrium of the game.

**Example 10.5 **Consider a portion of a sea-channel, and assume it is the unit interval . Player is a submarine hiding at location ; player is an airplane dropping a bomb at a certain location , resulting in damage to the submarine. Hence a special two-person game is defined in which , and . With fixed , is maximal if , therefore the best response of player is . Player wants to minimize , which occurs if is as large as possible, which implies that

Clearly, there is no such that and , consequently no equilibrium exists.
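The nonexistence argument can be checked numerically. The sketch below assumes a hypothetical damage function that decreases with the squared distance between bomb and submarine (the chapter's exact formula is not reproduced here) and searches a grid for a simultaneous fixed point of the two best-response maps:

```python
import numpy as np

# Hypothetical damage function: decreasing in the squared distance between
# the bomb location y and the submarine location x (an assumption of this sketch).
def damage(x, y):
    return 1.0 - (x - y) ** 2

grid = np.linspace(0.0, 1.0, 101)   # discretised unit interval

def best_response_airplane(x):
    # the airplane maximises the damage, so it bombs at y = x
    return grid[np.argmax(damage(x, grid))]

def best_response_submarine(y):
    # the submarine minimises the damage, i.e. hides as far from y as possible
    return grid[np.argmin(damage(grid, y))]

# search for a simultaneous fixed point of the two best-response maps
has_equilibrium = any(
    abs(best_response_airplane(x) - x) < 1e-9
    and abs(best_response_submarine(best_response_airplane(x)) - x) < 1e-9
    for x in grid
)
print(has_equilibrium)   # no pair (x, y) is a mutual best response
```

The airplane's best reply is always y = x, while the submarine's best reply to y is an endpoint of the interval, so the two maps have no common fixed point.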

Define the **aggregation function** as:

for all and from and some

**Theorem 10.4 **
*Vector is an equilibrium if and only if*

*for all .*

**Proof. **Assume first that is an equilibrium, then inequality (10.1) holds for all and . Adding the -multiples of these relations for we immediately have (10.5).

Assume next that (10.5) holds for all . Select any and , define , and apply inequality (10.5). All but the -th terms cancel and the remaining term shows that inequality (10.1) holds. Hence is an equilibrium.

Introduce function , then clearly is an equilibrium if and only if

Relation (10.6) is known as **Fan's inequality**. It can be rewritten as a variational inequality (see “Iterative computation of equilibrium” later) or as a fixed point problem. We show here the second approach. For all define

Since for all , relation (10.6) holds if and only if , that is, is a fixed point of mapping . Therefore any method for finding fixed points is applicable for computing equilibria.

The computation cost depends on the type and size of the fixed point problem and also on the selected method.

**Example 10.6 **Consider again the problem of Example 10.4. In this case

so the aggregate function has the form with :

Therefore

and

Notice that this function is strictly concave in and , and is separable in these variables. At the stationary points:

implying that at the optimum

since both right hand sides are feasible. At the fixed point

giving the unique solution:

Assume that for all

where is a vector-valued function of a vector variable which is continuously differentiable in an open set containing . Assume furthermore that for all , the payoff function is continuously differentiable in on with any fixed .

If is an equilibrium, then for all , is the optimal solution of problem:

Assuming that the Kuhn-Tucker regularity condition is satisfied at , the solution has to satisfy the Kuhn-Tucker necessary conditions:

where is an -element column vector, is its transpose, is the gradient of (as a row vector) with respect to and is the Jacobian of function .

**Theorem 10.5 **
*If is an equilibrium, then there are vectors such that relations (10.9) are satisfied.*

Relations (10.9) for give a (usually large) system of equations and inequalities for the unknowns and (). Any equilibrium (if one exists) has to be among the solutions. If in addition for all , all components of are concave and is concave in , then the Kuhn-Tucker conditions are also sufficient, and therefore all solutions of (10.9) are equilibria.

The computation cost in solving system (10.9) depends on its type and the chosen method.

**Example 10.7 **Consider again the two-person game of the previous example. Clearly,

so we have

Simple differentiation shows that

therefore the Kuhn-Tucker conditions can be written as follows:

Notice that is concave in , is concave in , and all constraints are linear, therefore all solutions of this equality-inequality system are indeed equilibria. By systematically examining the combinations of cases

and

it is easy to see that there is a unique solution

By introducing slack and surplus variables the Kuhn-Tucker conditions can be rewritten as a system of equations with some nonnegative variables. The nonnegativity conditions can be formally eliminated by considering them as squares of some new variables, so the result becomes a system of (usually) nonlinear equations without additional constraints. There is a large set of numerical methods for solving such systems.
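As a minimal sketch of this reformulation, consider a hypothetical one-player problem (not taken from the text): maximize f(x) = −(x − 1/2)² on [0, 1]. Writing the variable and the multipliers as squares of new variables removes all nonnegativity constraints, and the resulting square system of equations can be attacked with a standard nonlinear least-squares solver:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical problem:  max f(x) = -(x - 1/2)**2  s.t.  x >= 0 and 1 - x >= 0.
# Writing x = a**2, u1 = b**2, u2 = c**2 makes all nonnegativity conditions
# automatic, and the Kuhn-Tucker system becomes three equations in (a, b, c).
def residuals(v):
    a, b, c = v
    return [
        -2.0 * (a**2 - 0.5) + b**2 - c**2,   # stationarity: f'(x) + u1 - u2 = 0
        (b**2) * (a**2),                      # complementarity: u1 * x = 0
        (c**2) * (1.0 - a**2),                # complementarity: u2 * (1 - x) = 0
    ]

sol = least_squares(residuals, x0=[0.8, 0.2, 0.2])
x = sol.x[0] ** 2        # recovered solution, the interior maximizer x = 1/2
```

Here the unique zero-residual solution has x = 1/2 with both multipliers equal to zero, which the solver recovers from the given starting point.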

Assume that all conditions of the previous section hold. Consider the following optimization problem:

The first two constraints imply that the objective function is nonnegative, so is its minimal value. Therefore system (10.9) has a feasible solution if and only if the optimal value of the objective function of problem (10.10) is zero, and in this case any optimal solution satisfies relations (10.9).

**Theorem 10.6 **
*The -person game has an equilibrium only if the optimal value of the objective function is zero. In this case any equilibrium is an optimal solution of problem (10.10). If in addition all components of are concave and is concave in for all , then any optimal solution of problem (10.10) is an equilibrium.*

Hence the equilibrium problem of the -person game has been reduced to finding the optimal solutions of this (usually nonlinear) optimization problem. Any nonlinear programming method can be used to solve the problem.

The computation cost of solving the optimization problem (10.10) depends on its type and the chosen method. For example, if (10.10) is an LP solved by the simplex method, then the maximum number of operations is exponential. However, in particular cases the procedure terminates with far fewer operations.

**Example 10.8 **In the case of the previous problem the optimization problem has the following form:

Notice that the solution , and is feasible with zero objective function value, so it is also optimal. Hence it is a solution of system (10.9) and consequently an equilibrium.

We have seen earlier that a finite game does not necessarily have an equilibrium. Even if it does, when the game is repeated many times the players may wish to introduce some randomness into their actions in order to confuse the other players and to seek an equilibrium in the stochastic sense. This idea can be modeled by introducing probability distributions as the strategies of the players and the expected payoff values as their new payoff functions.

Keeping the notation of Section 10.1, assume that we have players and that the finite strategy set of player is . In the mixed extension of this finite game each player selects a discrete probability distribution on its strategy set, and in each realization of the game an element of is chosen according to the selected distribution. Hence the new strategy set of player is

which is the set of all -element probability vectors. The new payoff function of this player is the expected value:

Notice that the original “pure” strategies can be recovered by selecting as the -th basis vector. This is a continuous game and, as a consequence of Theorem 10.3, it has at least one equilibrium. Hence even if a finite game has no equilibrium, its mixed extension always has at least one, which can be obtained by using the methods outlined in the previous sections.
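The expected payoff of the mixed extension is just a bilinear form in the two mixed strategies. The sketch below uses matching pennies as an assumed example matrix pair (the choice of game is an assumption of this sketch):

```python
import numpy as np

# Expected payoffs in the mixed extension of a finite two-person game.
# A[i, j] is player 1's payoff at the pure strategy pair (i, j);
# B is player 2's payoff matrix (here: matching pennies, a zero-sum example).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A

def expected_payoffs(x, y):
    # x, y are probability vectors over the players' pure strategies
    return x @ A @ y, x @ B @ y

# pure strategies are recovered by choosing basis vectors
e0 = np.array([1.0, 0.0])
e1 = np.array([0.0, 1.0])
f1_pure, f2_pure = expected_payoffs(e0, e1)    # equals (A[0, 1], B[0, 1])

# the uniform mixed strategies give both players expected payoff 0
x = y = np.array([0.5, 0.5])
f1_mixed, f2_mixed = expected_payoffs(x, y)
```

Choosing basis vectors reproduces the matrix entries, illustrating how the pure strategies embed into the mixed extension.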

**Example 10.9 **Consider the two-person case in which , and as in Section 10.1 introduce matrices and with elements and . In this special case

The constraints of can be rewritten as:

so we may select

The optimization problem (10.10) now reduces to the following:

where , and , .

Notice that this is a quadratic optimization problem. The computation cost depends on the selected method. Observe that the problem is usually nonconvex, so there is the possibility of stopping at a local optimum.

Mixed extensions of two-person finite games are called **bimatrix games**. They were already examined in Example 10.9. For notational convenience introduce the notation:

We will show that problem (10.15) can be rewritten as a quadratic programming problem with linear constraints.

Consider the objective function first. Let

then the objective function can be rewritten as follows:

The last two constraints also simplify:

implying that

so we may rewrite the objective function again:

since

Hence we have the following quadratic programming problem:

where the last two conditions are obtained from (10.17) and the nonnegativity of vectors , .

**Theorem 10.7 **
*Vectors and are equilibria of the bimatrix game if and only if, with some and , is an optimal solution of problem (10.18). The optimal value of the objective function is zero.*

This is a quadratic programming problem. The computation cost depends on the selected method. Since the problem is usually nonconvex, the algorithm might terminate at a local optimum. We know that at the global optimum the objective function must be zero, which can be used as an optimality check.
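As a quick numerical illustration (a sketch assuming the standard Mangasarian–Stone form of the objective, with matching pennies as an assumed example game), the objective value is zero at the mixed equilibrium and strictly negative elsewhere:

```python
import numpy as np

# Quadratic-program optimality check: at an equilibrium (x*, y*) of the
# bimatrix game (A, B) the objective
#     x'(A + B)y - a - b,  with  a = max_i (Ay)_i,  b = max_j (B'x)_j,
# attains its maximal value, zero (Mangasarian-Stone formulation, an
# assumption of this sketch).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # matching pennies
B = -A
x_star = np.array([0.5, 0.5])              # unique mixed equilibrium
y_star = np.array([0.5, 0.5])

a = np.max(A @ y_star)         # best pure-strategy payoff of player 1 vs y*
b = np.max(B.T @ x_star)       # best pure-strategy payoff of player 2 vs x*
objective = x_star @ (A + B) @ y_star - a - b

# a non-equilibrium pair gives a strictly negative objective value
x, y = np.array([1.0, 0.0]), np.array([1.0, 0.0])
a2, b2 = np.max(A @ y), np.max(B.T @ x)
objective_bad = x @ (A + B) @ y - a2 - b2
```

Evaluating the objective at a candidate pair thus serves as the optimality check mentioned above.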

**Example 10.10 **Consider the bimatrix game with matrices

and

Then

so problem (10.18) has the form:

where and . We also know from Theorem 10.7 that the optimal objective function value is zero, therefore any feasible solution with zero objective function value is necessarily optimal. It is easy to see that the solutions

are all optimal, so they provide equilibria.

One might apply relations (10.9) to find equilibria by solving an equality-inequality system instead of an optimization problem. In the case of bimatrix games system (10.9) simplifies to

which can be proved along the lines of the derivation of the quadratic optimization problem.

The computation cost of the solution of system (10.19) depends on the particular method being selected.

**Example 10.11 **Consider again the bimatrix game of the previous example. Substitute the first and second constraints and into the third and fourth condition to have

It is easy to see that the solutions given in the previous example solve this system, so they are equilibria.

We can also rewrite the equilibrium problem of bimatrix games as an equality-inequality system with mixed variables. Assume first that all elements of and are between and . This condition is not really restrictive, since by using linear transformations

where , and is the matrix all elements of which equal , the equilibria remain the same and all matrix elements can be transformed into interval .

**Theorem 10.8 **
*Vectors , are an equilibrium if and only if there are real numbers and zero-one vectors , and such that*

*where denotes the vector with all unit elements.*

**Proof. **Assume first that , is an equilibrium, then with some and , (10.19) is satisfied. Define

Since all elements of and are between and , the values and are also between and . Notice that

which implies that (10.20) holds.

Assume next that (10.20) is satisfied. Then

If , then , (where is the -th basis vector), and if , then . Therefore

implying that . We can similarly show that . Thus (10.19) is satisfied, so is an equilibrium.

The computation cost of the solution of system (10.20) depends on the particular method being selected.

**Example 10.12 **In the case of the bimatrix game introduced earlier in Example 10.10 we have the following:

Notice that all three solutions given in Example 10.10 satisfy these relations with

and

respectively.

In the special case of , bimatrix games are called **matrix games** and they are represented by matrix . Sometimes we refer to the game as matrix game. Since , the quadratic optimization problem (10.18) becomes linear:

From this formulation we see that the set of the equilibrium strategies is a convex polyhedron. Notice that variables and can be separated, so we have the following result.

**Theorem 10.9 **
*Vectors and give an equilibrium of the matrix game if and only if with some and , and are optimal solutions of the linear programming problems:*

Notice that at the optimum, . The optimal value is called the **value of the matrix game**.

Solving problem (10.22) requires an exponential number of operations in the worst case if the simplex method is chosen. With a polynomial algorithm (such as an interior point method) the number of operations is only polynomial.
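A minimal sketch of this LP reduction in Python, assuming the standard shift-and-normalize construction (shift the matrix to be positive, solve for the normalized strategy, then undo the shift); the example matrix, rock–paper–scissors, is an assumption of this sketch:

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value and an optimal strategy of player 1 for matrix game A via LP."""
    shift = 1.0 - A.min()              # make all entries positive; shifts the value
    Ap = A + shift
    m, n = Ap.shape
    # min 1'u  s.t.  Ap' u >= 1,  u >= 0 ;  then v' = 1/(1'u) and x = v' u
    res = linprog(c=np.ones(m), A_ub=-Ap.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m, method="highs")
    u = res.x
    v_shifted = 1.0 / u.sum()
    x = v_shifted * u
    return v_shifted - shift, x

# rock-paper-scissors: value 0, optimal strategy (1/3, 1/3, 1/3)
A = np.array([[0.0, -1.0, 1.0], [1.0, 0.0, -1.0], [-1.0, 1.0, 0.0]])
v, x = matrix_game_value(A)
```

The shift only translates the value of the game, so subtracting it at the end recovers the value of the original matrix game.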

**Example 10.13 **Consider the matrix game with matrix:

In this case problems (10.22) have the form:

The application of the simplex method shows that the optimal solutions are , , , and .

We can also obtain the equilibrium by finding feasible solutions of a certain set of linear constraints. Since at the optimum of problem (10.21), , vectors , and scalars and are optimal solutions if and only if

The first phase of the simplex method has to be used to solve system (10.23), where the number of operations might be exponential. However, in most practical examples far fewer operations are needed.

**Example 10.14 **Consider again the matrix game of the previous example. In this case system (10.23) has the following form:

It is easy to see that , , satisfy these relations, so , is an equilibrium.

Consider now a matrix game with matrix . The main idea of the method of fictitious play is that at each step both players determine their best pure-strategy choices against the average of the other player's strategies over all previous steps. Formally the method can be described as follows.

Let be the initial (mixed) strategy of player . Select (the corresponding basis vector) such that

In any further step , let

and select so that

Let then

and select so that

By repeating the general step for two sequences are generated: , and . We have the following result:

**Theorem 10.10 **
*Any cluster point of these sequences is an equilibrium of the matrix game. Since all and are probability vectors, they are bounded. Therefore there is at least one cluster point.*

Assume that matrix is . In (10.24) we need multiplications, in (10.25) and (10.27) multiplications and divisions, and in (10.26) and (10.28) multiplications. If we make iteration steps, then the total number of multiplications and divisions is:

The formal algorithm is as follows:

*
Matrix-Equilibrium(A, x(1), ε)
*

 1  k ← 1
 2  define j(1) such that x(1)ᵀ A e(j(1)) = min over j of x(1)ᵀ A e(j)
 3  y(1) ← e(j(1))
 4  k ← k + 1
 5  define i(k) such that e(i(k))ᵀ A y(k − 1) = max over i of e(i)ᵀ A y(k − 1)
 6  x(k) ← ((k − 1) x(k − 1) + e(i(k))) / k
 7  define j(k) such that x(k)ᵀ A e(j(k)) = min over j of x(k)ᵀ A e(j)
 8  y(k) ← ((k − 1) y(k − 1) + e(j(k))) / k
 9  `IF` ‖x(k) − x(k − 1)‖ < ε and ‖y(k) − y(k − 1)‖ < ε
10    `THEN` (x(k), y(k)) is equilibrium; `RETURN`
11    `ELSE` go back to 4

Here is a user-selected error tolerance.
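The iteration can also be sketched in Python (ties in the argmax/argmin are broken by the smallest index; the example game, matching pennies, is an assumption of this sketch):

```python
import numpy as np

# Sketch of fictitious play for a matrix game: each player best-responds with
# a pure strategy against the opponent's running average strategy.
def fictitious_play(A, steps):
    m, n = A.shape
    x_avg = np.ones(m) / m                      # player 1's initial mixed strategy
    j = int(np.argmin(x_avg @ A))               # player 2's best pure reply
    y_avg = np.zeros(n); y_avg[j] = 1.0
    for k in range(2, steps + 1):
        i = int(np.argmax(A @ y_avg))           # player 1's best pure reply
        e_i = np.zeros(m); e_i[i] = 1.0
        x_avg = ((k - 1) * x_avg + e_i) / k     # running average of pure replies
        j = int(np.argmin(x_avg @ A))
        e_j = np.zeros(n); e_j[j] = 1.0
        y_avg = ((k - 1) * y_avg + e_j) / k
    return x_avg, y_avg

# matching pennies: the unique equilibrium is the uniform strategy (1/2, 1/2);
# the averages converge to it, though only slowly
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
x_avg, y_avg = fictitious_play(A, steps=5000)
```

Even after thousands of steps the averages are only within a few percent of the equilibrium, consistent with the slow convergence noted in Example 10.15.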

**Example 10.15 **We applied the above method for the matrix game of the previous example and started the procedure with . After 100 steps we obtained and . Comparing it to the true values of the equilibrium strategies we see that the error is below , showing the very slow convergence of the method.

A matrix game with skew-symmetric matrix is called symmetric. In this case and the two linear programming problems are identical. Therefore at the optimum , and the equilibrium strategies of the two players are the same. Hence we have the following result:

**Theorem 10.11 **
*A vector is equilibrium of the symmetric matrix game if and only if*

To solve system (10.29) the first phase of the simplex method is needed; the number of operations is exponential in the worst case, but in practice usually much smaller.

**Example 10.16 **Consider the symmetric matrix game with matrix . In this case relations (10.29) simplify as follows:

Clearly the only solution is and , that is the first pure strategy.
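The equilibrium condition of Theorem 10.11 is easy to test numerically. The skew-symmetric matrix below is a hypothetical stand-in, since the matrix of Example 10.16 is not reproduced above:

```python
import numpy as np

# Hypothetical skew-symmetric matrix.  By Theorem 10.11 a probability vector x
# is an equilibrium of the symmetric matrix game iff  A x <= 0.
A = np.array([[0.0, 1.0, 1.0],
              [-1.0, 0.0, 1.0],
              [-1.0, -1.0, 0.0]])
assert np.allclose(A, -A.T)            # skew-symmetry

def is_equilibrium(x, A, tol=1e-12):
    return np.all(A @ x <= tol) and abs(x.sum() - 1.0) < tol and np.all(x >= -tol)

e1, e2, e3 = np.eye(3)
# here the first pure strategy works: A e1 = (0, -1, -1) <= 0
print(is_equilibrium(e1, A), is_equilibrium(e2, A), is_equilibrium(e3, A))
```

For this particular matrix the first pure strategy is the unique equilibrium among the basis vectors, mirroring the structure of the worked example.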

We will see in the next subsection that linear programming problems are equivalent to symmetric matrix games, so any method for solving such games can be applied to solve linear programming problems; they thus serve as an alternative methodology to the simplex method. As we will see next, symmetry is not a strong assumption, since any matrix game is equivalent to a symmetric matrix game.

Consider therefore a matrix game with matrix , and construct the skew-symmetric matrix

where all components of vector equal 1. Matrix games and are equivalent in the following sense. Assume that , which is not a restriction, since by adding the same constant to all elements of they become positive without changing the equilibria.

**Theorem 10.12 **

*1. If is an equilibrium strategy of matrix game then with , and is an equilibrium of matrix game with value ; *

*2. If is an equilibrium of matrix game and is the value of the game, then*

*is equilibrium strategy of matrix game .*

**Proof. **Assume first that is an equilibrium strategy of game , then , , , so

First we show that , that is . If , then (since is a probability vector) and , contradicting the second inequality of (10.30). If , then , and by the third inequality of (10.30), must have at least one positive component which makes the first inequality impossible.

Next we show that . From (10.30) we have

and by adding these inequalities we see that

and combining this relation with the third inequality of (10.30) we see that .

Select , then , so both , and are probability vectors, furthermore from (10.30),

So by selecting and , and are feasible solutions of the pair (10.22) of linear programming problems with , therefore is an equilibrium of matrix game . Part 2 can be proved in a similar way; the details are not given here.
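The construction of the skew-symmetric matrix can be sketched as follows; the block layout shown is one standard arrangement and is an assumption of this sketch:

```python
import numpy as np

# Symmetrisation sketch: from a positive m x n matrix A build the
# (m + n + 1) x (m + n + 1) skew-symmetric block matrix
#         (   0    A   -1 )
#    P =  ( -A'    0    1 )
#         (  1'  -1'    0 )
# where 1 denotes the all-ones vector of the appropriate size.
def symmetrize(A):
    m, n = A.shape
    e_m, e_n = np.ones((m, 1)), np.ones((n, 1))
    top = np.hstack([np.zeros((m, m)), A, -e_m])
    mid = np.hstack([-A.T, np.zeros((n, n)), e_n])
    bot = np.hstack([e_m.T, -e_n.T, np.zeros((1, 1))])
    return np.vstack([top, mid, bot])

A = np.array([[2.0, 1.0], [1.0, 3.0]])     # positive matrix, as the theorem requires
P = symmetrize(A)
print(np.allclose(P, -P.T))                 # P is skew-symmetric
```

Any solver for symmetric matrix games can then be applied to P, and the original players' strategies are read off the first two blocks of the equilibrium vector.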

In this section we will show that linear programming problems can be solved by finding the equilibrium strategies of symmetric matrix games and hence, any method for finding the equilibria of symmetric matrix games can be applied instead of the simplex method.

Consider the primal-dual linear programming problem pair:

Construct the skew-symmetric matrix:

**Theorem 10.13 **
*Assume is an equilibrium strategy of the symmetric matrix game with . Then*

*are optimal solutions of the primal and dual problems, respectively.*

**Proof. **If is an equilibrium strategy, then , that is,

Since and , both vectors , and are nonnegative, and by dividing the first two relations of (10.32) by ,

showing that and are feasible for the primal and dual, respectively. From the last condition of (10.32) we have

However

consequently, , showing that the primal and dual objective functions are equal. The duality theorem implies the optimality of and .
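The reduction can be illustrated on a one-variable LP. The block layout of the skew-symmetric matrix below is one standard choice and is an assumption of this sketch (it may be ordered differently from (10.31)):

```python
import numpy as np

# Reduce a primal-dual LP pair to a symmetric matrix game.
# Primal: max c'x  s.t.  Ax <= b, x >= 0.  Tiny example:  max x  s.t.  x <= 1.
A = np.array([[1.0]])
b = np.array([1.0])
c = np.array([1.0])

# z = (y, x, t);  P z <= 0 encodes primal feasibility, dual feasibility
# (after dividing by t) and the inequality c'x >= b'y
P = np.block([
    [np.zeros((1, 1)), A,                 -b.reshape(-1, 1)],
    [-A.T,             np.zeros((1, 1)),   c.reshape(-1, 1)],
    [b.reshape(1, -1), -c.reshape(1, -1),  np.zeros((1, 1))],
])
assert np.allclose(P, -P.T)             # P is skew-symmetric

z = np.array([1.0, 1.0, 1.0]) / 3.0     # an equilibrium of the symmetric game
assert np.all(P @ z <= 1e-12)           # P z <= 0, so z is an equilibrium strategy

t = z[-1]
x_opt = z[1:2] / t                      # recovered primal optimum x* = 1
y_opt = z[0:1] / t                      # recovered dual optimum  y* = 1
```

Since the last component of z is positive, dividing the remaining blocks by it yields feasible primal and dual solutions with equal objective values, hence optimal ones.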

**Example 10.17 **Consider the linear programming problem:

First we have to rewrite the problem as a primal problem. Introduce the new variables:

and multiply the -type constraint by . Then the problem becomes the following:

Hence

and so matrix becomes:

The fictitious play method is an iterative algorithm in which at each step the players adjust their strategies based on the opponent's strategies. It can therefore be considered as the realization of a discrete system in which the strategy selections of the players are the state variables. For symmetric matrix games John von Neumann introduced a continuous-system approach in which the players adjust their strategies continuously. This method can be applied to general matrix games, since, as we have seen earlier, any matrix game is equivalent to a symmetric matrix game. The method can also be used to solve linear programming problems, since any primal-dual pair can be reduced to the solution of a symmetric matrix game.

Let now be a skew-symmetric matrix. The strategy of player , is considered as the function of time . Before formulating the dynamism of the system, introduce the following notation:

For arbitrary probability vector solve the following nonlinear initial-value problem:

Since the right-hand side is continuous, there is at least one solution. The right-hand side of the equation can be interpreted as follows. Assume that . If player selects strategy , then player is able to obtain a positive payoff by choosing the pure strategy , which results in a negative payoff for player . However, if player increases to one by choosing the same strategy, its payoff becomes zero, so it increases. Hence it is in the interest of player to increase . This is exactly what the first term represents. The second term is needed to ensure that remains a probability vector for all .

The computation of the right hand side of equations (10.34) for all requires multiplications. The total computation cost depends on the length of solution interval, on the selected step size, and on the choice of the differential equation solver.

**Theorem 10.14 **
*Assume that is a positive, strictly increasing sequence converging to ; then any cluster point of the sequence is an equilibrium strategy, and furthermore there is a constant such that*

**Proof. **First we have to show that is a probability vector for all . Assume that with some and , . Define

Since is continuous and , clearly , and for all , . Then for all ,

and the Lagrange mean-value theorem implies that with some ,

which is a contradiction. Hence is nonnegative. Next we show that for all . Let , then

so satisfies the homogeneous equation

with the initial condition . Hence for all , , showing that is a probability vector.

Assume that for some , . Then

By multiplying both sides by and adding the resulting equations for we have:

The first term is zero, since is skew-symmetric. Notice that this equation remains valid even as except the break-points (where the derivative of does not exist) since (10.36) remains true.

Assume next that with a positive , . Then for all , . Since equation (10.37) can be rewritten as

with

we see that satisfies a homogeneous equation with zero initial condition at , so the solution remains zero for all . Therefore , showing that , that is, is an equilibrium strategy.

If for all , then , and clearly

that is

Integrate both sides in interval to have

with , which implies that

By using the Cauchy–Schwarz inequality we get

which is valid even at the break points because of the continuity of the functions . Finally, take a sequence increasingly converging to . The sequence is bounded (its elements being probability vectors), so there is at least one cluster point . From (10.40), by letting , we have , showing that is an equilibrium strategy.

**Example 10.18 **Consider the matrix game with matrix

which was the subject of our earlier Example 10.13. In order to apply the method of von Neumann we first have to find an equivalent symmetric matrix game. The application of the method given in Theorem 10.12 requires that the matrix be positive. Without changing the equilibria we can add 2 to all matrix elements to obtain

and by using the method we get the skew-symmetric matrix

The differential equations (10.34) were solved by using the th order Runge–Kutta method in the interval with the step size and initial vector . From we get the approximations

of the equilibrium strategies of the original game. Comparing these values to the exact values:

we see that the maximum error is about .
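The dynamics (10.34), often called the Brown–von Neumann dynamics, can be integrated with a 4th-order Runge–Kutta scheme. The sketch below uses the rock–paper–scissors matrix, which is already skew-symmetric, instead of the example's matrix (an assumption of this sketch):

```python
import numpy as np

# Von Neumann's continuous method for a symmetric game with skew-symmetric P:
#     x_j' = phi_j(x) - x_j * sum_i phi_i(x),  where  phi_j(x) = max(0, (P x)_j).
# At an equilibrium all phi_j vanish, i.e. P x <= 0.
def rhs(x, P):
    phi = np.maximum(P @ x, 0.0)
    return phi - x * phi.sum()

def integrate(P, x0, t_end=500.0, dt=0.05):
    # classical 4th-order Runge-Kutta scheme; the simplex is invariant
    x = x0.copy()
    for _ in range(int(t_end / dt)):
        k1 = rhs(x, P)
        k2 = rhs(x + 0.5 * dt * k1, P)
        k3 = rhs(x + 0.5 * dt * k2, P)
        k4 = rhs(x + dt * k3, P)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# rock-paper-scissors: skew-symmetric, unique equilibrium (1/3, 1/3, 1/3)
P = np.array([[0.0, -1.0, 1.0], [1.0, 0.0, -1.0], [-1.0, 1.0, 0.0]])
x = integrate(P, np.array([1.0, 0.0, 0.0]))
```

The trajectory stays on the probability simplex (the components of the right-hand side sum to zero there) and drifts toward the uniform equilibrium.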

Consider an -person continuous game and assume that all conditions presented at the beginning of Subsection 10.2.3 are satisfied. In addition, assume that for all , is bounded, all components of are concave, and is concave in with any fixed . Under these conditions there is at least one equilibrium (Theorem 10.3). Uniqueness of the equilibrium does not hold in general, even if all are strictly concave in . Such an example is shown next.

**Example 10.19 **Consider a two-person game with and . Clearly both payoff functions are strictly concave and there are infinitely many equilibria: .
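Since the concrete payoffs of the example are not reproduced above, the sketch below uses a hypothetical pair with the same behavior: each payoff is strictly concave in the player's own variable, yet every diagonal point is an equilibrium:

```python
import numpy as np

# Hypothetical payoffs on [0, 1] x [0, 1] (an assumption of this sketch):
#   f1(x1, x2) = 2*x1*x2 - x1**2   (strictly concave in x1)
#   f2(x1, x2) = 2*x1*x2 - x2**2   (strictly concave in x2)
def d_f1_d_x1(x1, x2):
    return 2.0 * x2 - 2.0 * x1       # player 1's first-order condition

def d_f2_d_x2(x1, x2):
    return 2.0 * x1 - 2.0 * x2       # player 2's first-order condition

# every diagonal point (t, t) satisfies both stationarity conditions, and
# since each payoff is concave in the own variable, (t, t) is an equilibrium
for t in np.linspace(0.0, 1.0, 5):
    assert d_f1_d_x1(t, t) == 0.0
    assert d_f2_d_x2(t, t) == 0.0
```

Strict concavity in the own variable therefore does not rule out a continuum of equilibria; a stronger condition such as diagonally strict concavity is needed.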

Select an arbitrary nonnegative vector and define function

where , and is the gradient (as a row vector) of with respect to . The game is said to be **diagonally strictly concave** if for all , and for some ,

**Theorem 10.15 **
*Under the above conditions the game has exactly one equilibrium.*

**Proof. **The existence of the equilibrium follows from Theorem 10.3. In proving uniqueness assume that and are both equilibria, and both satisfy relations (10.9). Therefore for ,

and the second equation can be rewritten as

where and are the th components of and , respectively. Multiplying (10.43) by for and by for and adding the resulting equalities for we have

Notice that the sum of the first two terms is positive by the diagonally strict concavity of the game, the concavity of the components of implies that

and

Therefore from (10.44) we have

where we used the fact that for all and ,

This is an obvious contradiction, which completes the proof.

In practical cases the following result is very useful for checking the diagonally strict concavity of -person games.

**Theorem 10.16 **
*Assume is convex, is twice continuously differentiable for all , and is negative definite with some , where is the Jacobian of . Then the game is diagonally strictly concave.*

**Proof. **Let , . Then for all , and

Integrate both sides in to have

and by premultiplying both sides by we see that

completing the proof.

**Example 10.20 **Consider a simple two-person game with strategy sets , and payoff functions

and

Clearly all conditions, except diagonally strict concavity, are satisfied. We will use Theorem 10.16 to show this additional property. In this case

so

with Jacobian

We will show that

is negative definite with some . For example, select , then this matrix becomes

with characteristic polynomial

having negative eigenvalues , .
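The eigenvalue test of Theorem 10.16 is easy to carry out numerically. The payoffs below are hypothetical stand-ins, since the example's payoffs are not reproduced above:

```python
import numpy as np

# Numerical version of the Theorem 10.16 test, on hypothetical payoffs
# f1 = -x1**2 + x1*x2 and f2 = -x2**2 + x1*x2 over [0, 1]^2.  With r = (1, 1):
#     h(x, r) = (r1 * d f1/d x1, r2 * d f2/d x2) = (-2*x1 + x2, x1 - 2*x2),
# whose Jacobian J is constant, so it suffices to check that J + J' is
# negative definite.
r = np.array([1.0, 1.0])
J = np.array([[-2.0 * r[0], 1.0 * r[0]],
              [1.0 * r[1], -2.0 * r[1]]])
eigenvalues = np.linalg.eigvalsh(J + J.T)   # returned in ascending order
print(eigenvalues)   # both negative => the game is diagonally strictly concave
```

Here J + Jᵀ = [[−4, 2], [2, −4]] has eigenvalues −6 and −2, so the hypothetical game is diagonally strictly concave and its equilibrium is unique by Theorem 10.15.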

We have seen earlier in Theorem 10.4 that is an equilibrium if and only if

for all , where is the aggregation function (10.4). In the following analysis we assume that the -person game satisfies all conditions presented at the beginning of Subsection 10.2.9 and (10.42) holds with some positive .

We first show the equivalence of (10.45) and a variational inequality.

**Theorem 10.17 **
*A vector satisfies (10.45) if and only if*

*for all , where is defined in (10.41).*

**Proof. **Assume satisfies (10.45). Then , as a function of , attains its maximum at , therefore

for all , and since is , we proved that satisfies (10.46).

Assume next that satisfies (10.46). By the concavity of in and the diagonally strict concavity of the game we have

so satisfies (10.45).

Hence any method available for solving variational inequalities can be used to find equilibria.

Next we construct a special two-person game whose equilibrium problem is equivalent to the equilibrium problem of the original -person game.

**Theorem 10.18 **
*Vector satisfies (10.45) if and only if is an equilibrium of the two-person game where
*

**Proof. **Assume first that satisfies (10.45). Then it satisfies (10.46) as well, so

We need in addition to show that

On the contrary, assume that with some , . Then

where we used (10.42) and (10.46). This is a clear contradiction.

Assume next that is an equilibrium of game . Then for any ,

The first part can be rewritten as

showing that (10.46) is satisfied, so is (10.45).

Consider the following iteration procedure.

Let be arbitrary, and solve problem

Let denote an optimal solution and define . If , then for all ,

so by Theorem 10.17, is an equilibrium. Since , we assume that . In the general step we already have vectors , and scalars . Then the next vector and next scalar are the solutions of the following problem:

Notice that

and

so we know that .

The formal algorithm is as follows:

*
Continuous-Equilibrium
*

1  k ← 1
2  solve problem (10.47), let x(1) be an optimal solution and φ(1) the optimal value
3  `IF` φ(1) = 0
4    `THEN` x(1) is equilibrium; `RETURN`
5  k ← k + 1
6  solve problem (10.48), let (x(k), φ(k)) be an optimal solution
7  `IF` φ(k) = 0
8    `THEN` x(k) is equilibrium; `RETURN`
9    `ELSE` go back to 5

Before stating the convergence theorem for the algorithm, we note that in the special case when the strategy sets are defined by linear inequalities (that is, all functions are linear), all constraints of problem (10.48) are linear, so at each iteration step we have to solve a linear programming problem.

In this linear case the simplex method has to be used in each iteration step with worst-case exponential computational cost, so the overall cost is also exponential (with a prefixed number of steps).

**Theorem 10.19 **
*There is a subsequence of generated by the method that converges to the unique equilibrium of the -person game.*

**Proof. **The proof consists of several steps.

First we show that as . Since at each new iteration an additional constraint is added to (10.48), the sequence is nonincreasing. Since it is also nonnegative, it must be convergent. The sequence is bounded, since it lies in the bounded set , so it has a convergent subsequence . Notice that from (10.48) we have

where the right hand side tends to zero. Thus and since the entire sequence is monotonic, the entire sequence converges to zero.

Let next be an equilibrium of the -person game, and define

By (10.42), for all . Define the indices so that

then for all ,

which implies that

where we used again problem (10.48). From this relation we conclude that as . And finally, notice that function satisfies the following properties:

1. is continuous in ;

2. if (as it was shown just below relation (10.49));

3. if for a convergent sequence , , then necessarily .

By applying property with the sequence it is clear that , so . This completes the proof.

**Exercises**

10.2-1 Consider a 2-person game with strategy sets , and payoff functions and . Show the existence of a unique equilibrium point by computing it. Show that Theorem 10.3 cannot be applied to prove existence.

10.2-2 Consider the “price war” game in which two firms are price setting. Assume that and are the strategies of the players, and the payoff functions are:

by assuming that . Is there an equilibrium? How many equilibria are there?

10.2-3 A portion of the sea is modeled by the unit square in which a submarine is hiding. The strategy of the submarine is the hiding place . An airplane drops a bomb in a location , which is its strategy. The payoff of the airplane is the damage occurred by the bomb, and the payoff of the submarine is its negative. Does this 2-person game have an equilibrium?

10.2-4 In a second-price auction one unit of an item is sold to bidders. They value the item at . Each of them offers a price for the item simultaneously without knowing the offers of the others. The bidder with the highest offer gets the item, but pays only the second highest price. The strategy of bidder is , so , and the payoff function of this bidder is:

What is the best response function of bidder ? Does this game have an equilibrium?

10.2-5 Formulate Fan's inequality for Exercise 10.2-1.

10.2-6 Formulate and solve Fan's inequality for Exercise 10.2-2.

10.2-7 Formulate and solve Fan's inequality for Exercise 10.2-4.

10.2-8 Consider a 2-person game with strategy sets , and payoff functions

Formulate Fan's inequality.

10.2-9 Let , , . Formulate the Kuhn-Tucker conditions to find the equilibrium. Solve the resulted system of inequalities and equations.

10.2-10 Consider a 3-person game with , , and . Formulate the Kuhn-Tucker condition.

10.2-11 Formulate and solve system (10.9) for exercise 10.2-8.

10.2-12 Repeat the previous problem for the game given in Exercise 10.2-1.

10.2-13 Rewrite the Kuhn-Tucker conditions for exercise 10.2-8 into the optimization problem (10.10) and solve it.

10.2-14 Formulate the mixed extension of the finite game given in Exercise 10.1-1.

10.2-15 Formulate and solve optimization problem (10.10) for the game obtained in the previous problem.

10.2-16 Formulate the mixed extension of the game introduced in Exercise 10.2-3. Formulate and solve the corresponding linear optimization problems (10.22) with , , .

10.2-17 Use fictitious play method for solving the matrix game of exercise 10.2-16.

10.2-18 Generalize the fictitious play method for bimatrix games.

10.2-19 Generalize the fictitious play method for the mixed extensions of finite -person games.

10.2-20 Solve the bimatrix game with matrices and with the method you have developed in Exercise 10.2-18.

10.2-21 Solve the symmetric matrix game by linear programming.

10.2-22 Repeat exercise 10.2-21 with the method of fictitious play.

10.2-23 Develop the Kuhn-Tucker conditions (10.9) for the game given in Exercise 10.2-21 above.

10.2-24 Repeat Exercises 10.2-21, 10.2-22 and 10.2-23 for the matrix game (First find the equivalent symmetric matrix game!).

10.2-25 Formulate the linear programming problem to solve the matrix game with matrix .

10.2-26 Formulate a linear programming solver based on the method of fictitious play and solve the LP problem:

10.2-27 Solve the LP problem given in Example 8.17 by the method of fictitious play.

10.2-28 Solve Exercise 10.2-21 by the method of von Neumann.

10.2-29 Solve Exercise 10.2-24 by the method of von Neumann.

10.2-30 Solve Exercise 10.2-17 by the method of von Neumann.

10.2-31 Check the solution obtained in the previous exercises by verifying that all constraints of (10.21) are satisfied with zero objective function.

*Hint.* What and should be selected?

10.2-32 Solve exercise 10.2-26 by the method of von Neumann.

10.2-33 Let , , . Show that both payoff functions are strictly concave in and , respectively. Prove that there are infinitely many equilibria, that is, the strict concavity of the payoff functions does not imply the uniqueness of the equilibrium.

10.2-34 Can matrix games be strictly diagonally concave?

10.2-35 Consider a two-person game with strategy sets , and payoff functions , . Show that this game satisfies all conditions of Theorem 10.16.

10.2-36 Solve the problem of the previous exercise by algorithm (10.47)–(10.48).